Two players play a game with a polynomial with undetermined coefficients
\[
1 + c_1 x + c_2 x^2 + \dots + c_7 x^7 + x^8.
\]
Players, in turn, assign a real number to an undetermined coefficient until all coefficients are determined. The first player wins if the polynomial has no real zeros, and the second player wins if the polynomial has at least one real zero. Find who has the winning strategy.
The best solution was submitted by Ha, Seokmin (하석민, Dept. of Mathematical Sciences, class of 2017). Congratulations!
Here is his solution of problem 2018-23.
Alternative solutions were submitted by 채지석 (Dept. of Mathematical Sciences, class of 2016, +3), 권홍 (Dept. of Physics, Chung-Ang University, +2).
Let \(f_1(x)=x^2+a_1x+b_1\) and \(f_2(x)=x^2+a_2x+b_2\) be polynomials with real coefficients. Prove or disprove that the following are equivalent.
(i) There exist two positive reals \(c_1, c_2\) such that \[ c_1f_1(x)+ c_2 f_2(x) > 0\] for all reals \(x\).
(ii) There is no real \(x\) such that \( f_1(x)\le 0\) and \( f_2(x)\le 0\).
The best solution was submitted by Gil, Hyunjun (길현준, class of 2018). Congratulations!
Here is his solution of problem 2018-22.
Alternative solutions were submitted by 김태균 (Dept. of Mathematical Sciences, class of 2016, +3), 서준영 (graduate student, Dept. of Mathematical Sciences, +3), 이본우 (Dept. of Mathematical Sciences, class of 2017, +3), 채지석 (Dept. of Mathematical Sciences, class of 2016, +3), 하석민 (Dept. of Mathematical Sciences, class of 2017, +3), 최백규 (Dept. of Biological Sciences, class of 2016, +2). There was one incorrect submission.
Does there exist a (possibly \(n\)-dependent) constant \( C \) such that
\[
\frac{C}{a_n} \sum_{1 \leq i < j \leq n} (a_i-a_j)^2 \leq \frac{a_1+ \dots + a_n}{n} - \sqrt[n]{a_1 \dots a_n} \leq \frac{C}{a_1} \sum_{1 \leq i < j \leq n} (a_i-a_j)^2
\]
for any \( 0 < a_1 \leq a_2 \leq \dots \leq a_n \)?
The best solution was submitted by Jiseok Chae (채지석, Dept. of Mathematical Sciences, class of 2016). Congratulations!
Here is his solution of problem 2018-21.
Alternative solutions were submitted by 하석민 (Dept. of Mathematical Sciences, class of 2017, +3), 이본우 (Dept. of Mathematical Sciences, class of 2017, +2). One incorrect submission was received.
Let \(f:\mathbb R\to\mathbb R\) be a function such that \[ -1\le f(x+y)-f(x)-f(y)\le 1\] for all reals \(x\), \(y\). Does there exist a constant \(c\) such that \( \lvert f(x)-cx\rvert \le 1\) for all reals \(x\)?
The best solution was submitted by Ha, Seokmin (하석민, Dept. of Mathematical Sciences, class of 2017). Congratulations!
Here is his solution of problem 2018-20.
An alternative solution was submitted by 채지석 (Dept. of Mathematical Sciences, class of 2016, +3). There were two incorrect submissions.
|
C1.2 Gödel's Incompleteness Theorem - Material for the year 2019-2020
This course presupposes knowledge of first-order predicate logic up to and including soundness and completeness theorems for a formal system of first-order predicate logic (B1.1 Logic).
16 lectures.
The starting point is Gödel's mathematical sharpening of Hilbert's insight that manipulating symbols and expressions of a formal language has the same formal character as arithmetical operations on natural numbers. This allows the construction, for any consistent formal system containing basic arithmetic, of a 'diagonal' sentence in the language of that system which is true but not provable in the system. By further study we are able to establish the intrinsic meaning of such a sentence. These techniques lead to a mathematical theory of formal provability which generalizes the earlier results. We end with results that further sharpen understanding of formal provability.
Understanding of arithmetization of formal syntax and its use to establish incompleteness of formal systems; the meaning of undecidable diagonal sentences; a mathematical theory of formal provability; precise limits to formal provability and ways of knowing that an unprovable sentence is true.
Gödel numbering of a formal language; the diagonal lemma. Expressibility in a formal language. The arithmetical undefinability of truth in arithmetic. Formal systems of arithmetic; arithmetical proof predicates. $\Sigma_0$-completeness and $\Sigma_1$-completeness. The arithmetical hierarchy. $\omega$-consistency and 1-consistency; the first Gödel incompleteness theorem. Separability; the Rosser incompleteness theorem. Adequacy conditions for a provability predicate. The second Gödel incompleteness theorem; Löb's theorem. Provable $\Sigma_1$-completeness. The $\omega$-rule. The system GL for provability logic. The fixed point theorem for GL. The Bernays arithmetized completeness theorem; undecidable $\Delta_{2}$-sentences of arithmetic.
Lecture notes for the course.
Raymond M. Smullyan, Gödel's Incompleteness Theorems (Oxford University Press, 1992). George S. Boolos and Richard C. Jeffrey, Computability and Logic (3rd edition, Cambridge University Press, 1989), Chs 15, 16, 27 (pp. 170-190, 268-284). George Boolos, The Logic of Provability (Cambridge University Press, 1993).
|
Suppose we have the inhomogeneous advection equation $$\left(\frac{\partial}{\partial x}+\frac{1}{c}\frac{\partial}{\partial t}\right)u(t,x)=v(t,x)$$ for $u,v:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ (with boundary conditions not yet specified).
Assuming that we had no $v$, i.e. considering only the homogeneous part of the equation, the Crank-Nicolson method would yield
$$-c\frac{\mu}{4}u^{n+1}_{\ell-1}+u^{n+1}_{\ell}+c\frac{\mu}{4}u^{n+1}_{\ell+1}=c\frac{\mu}{4}u^n_{\ell-1}+u^n_{\ell}-c\frac{\mu}{4}u^n_{\ell+1},$$
where $\mu=\frac{\Delta t}{\Delta x}$ and $u^n_\ell=u(n\Delta t,\ell\Delta x)$.
I don't know how to deal with the inhomogeneity in these schemes though.
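For reference, each step of the scheme above amounts to a (cyclic) tridiagonal linear solve. The sketch below (Python) assumes periodic boundaries, since the post leaves the boundary conditions open, and includes one standard Crank-Nicolson treatment of the inhomogeneity: since the equation can be rewritten as $u_t = c\,v - c\,u_x$, the source is averaged over the two time levels, adding $\frac{c\,\Delta t}{2}(v^n_\ell + v^{n+1}_\ell)$ to the right-hand side.

```python
import numpy as np

def cn_step(u, v_n, v_np1, c, dt, dx):
    """One Crank-Nicolson step for (d/dx + (1/c) d/dt) u = v, i.e.
    u_t = c*v - c*u_x, with periodic boundaries (an assumption: the
    post does not specify boundary conditions)."""
    n = len(u)
    a = c * (dt / dx) / 4.0               # the factor c*mu/4 from the scheme
    # implicit side: -a*u[l-1] + u[l] + a*u[l+1], with periodic wrap-around
    A = np.eye(n)
    for l in range(n):
        A[l, (l - 1) % n] -= a
        A[l, (l + 1) % n] += a
    # explicit side: +a*u[l-1] + u[l] - a*u[l+1], plus trapezoidal source
    rhs = u + a * (np.roll(u, 1) - np.roll(u, -1)) + 0.5 * c * dt * (v_n + v_np1)
    return np.linalg.solve(A, rhs)
```

A dense solve keeps the sketch short; in practice one would use a banded or cyclic-tridiagonal solver for the implicit system.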
|
I have an algorithm that produces a set of real outputs given real inputs. For practical purposes, let's say I have two inputs and one output, and the algorithm can be represented by the function $\phi: \mathbb{R}^2 \rightarrow \mathbb{R}$.
I need to calculate $\frac{\partial^2 \phi(u_1,u_2)}{\partial u_1\partial u_2} \Bigr|_{\bar{u_1}\bar{u_2}}$ but I don't have an explicit formula for $\phi$. My only option is to run the algorithm to obtain the output given the input values.
My original idea was to run the program with a series of values around $\bar{u_1}$ and $\bar{u_2}$, calculate $\frac{\partial \phi}{\partial u_1}$ at the different fixed $u_2$ values, and finally take $\frac{\partial }{\partial u_2}$ of that (i.e. taking three values for $u_1$ and three for $u_2$, this means running the algorithm for all the combinations $\{\bar{u_1} - \delta, \bar{u_1} , \bar{u_1} + \delta\} \times \{\bar{u_2} - \delta, \bar{u_2} , \bar{u_2} + \delta\}$, computing the derivative of $\phi$ with respect to $u_1$ for each of the three fixed $u_2$ values, and finally taking the derivative of that with respect to $u_2$).
However, I notice that the calculated derivative depends heavily on the spacing $\delta$ between the values of the inputs $u_1$ and $u_2$. Moreover, I don't see it converging as I make the spacing smaller.
Is this a numerical problem or is it that the function is simply not differentiable? How can I tell?
[Figure omitted: the resulting plots for $\delta = 0.01$ (left) and $\delta = 0.02$ (right).] The derivative was calculated in $\texttt{R}$ using the $\texttt{splines}$ package.
EDIT: [Figures omitted: plots of $\phi$ with 21 values for each independent variable, for $\delta = 0.05$ and for $\delta = 0.001$.]
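The two nested first-derivative differences described above collapse into a single four-point stencil for the mixed partial. A minimal sketch (Python), with a smooth test function as a sanity check:

```python
import math

def mixed_partial(phi, u1, u2, delta):
    """Central-difference estimate of d^2 phi / (du1 du2) at (u1, u2):
    a single four-point stencil, equivalent to differencing in u1 and
    then in u2 with the same spacing delta."""
    return (phi(u1 + delta, u2 + delta) - phi(u1 + delta, u2 - delta)
          - phi(u1 - delta, u2 + delta) + phi(u1 - delta, u2 - delta)) / (4 * delta**2)

# sanity check on a smooth function: for phi = sin(u1)*u2^2,
# the exact mixed partial is 2*u2*cos(u1)
est = mixed_partial(lambda x, y: math.sin(x) * y * y, 0.3, 0.7, 1e-4)
```

One relevant point: if $\phi$ is only computed to accuracy $\varepsilon$ (solver tolerances, iterative stopping criteria, any stochastic element), the stencil's error behaves like $\varepsilon/\delta^2 + O(\delta^2)$, so below some spacing the noise term dominates and shrinking $\delta$ makes the estimate worse, which is consistent with the non-convergence described above.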
|
This is an MCMC algorithm for uniform sampling over singular $n$ by $n$ Bernoulli matrices.
Let $H$ (for "hypercube") be the set of all 0/1 vectors of length $n$.
One step of the MCMC algorithm is as follows:
Generate an $(n-1)$ by $n$ matrix $A$, filled with iid 0/1 Bernoulli samples. This will be the first $n-1$ rows of our proposal matrix.
Find $K$, the kernel of $A$. Let $r$ be the rank of $K$ (and note that $r>0$).
Consider extending $A$ to an $n$ by $n$ matrix by adding a final 0/1 row. Let $L$ be the number of possible completions.
If $r>1$, then $L=2^n$. Generate a proposal matrix $M$ with the same first $n-1$ rows as $A$, and the last row selected uniformly at random from $H$ (transposed).
If $r=1$, let $L=|K\cap H|$. Generate a proposal matrix $M$ with the same first $n-1$ rows as $A$, and the last row selected uniformly at random from $K\cap H$. (We will discuss how to enumerate $K\cap H$ below.)
Let $L'$ be the corresponding value of $L$ for the previously accepted proposal. Let $p=\min(1, L/L')$. Accept the new proposal matrix with probability $p$.
(To generate the initial proposal, we can just follow steps 1-3.)
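For small $n$, one step of the algorithm can be sketched directly, taking $K$ literally as the null space of $A$ and brute-forcing $K\cap H$ in place of the MILP discussed below (a toy version only; the function names are illustrative):

```python
import itertools
import numpy as np

def kernel_intersect_hypercube(A):
    """Brute-force K ∩ H: all 0/1 vectors x of length n with A x = 0."""
    n = A.shape[1]
    return [x for x in itertools.product((0, 1), repeat=n)
            if not np.any(A @ np.array(x))]

def propose(n, rng):
    """Steps 1-3: draw the first n-1 rows, then complete the matrix."""
    A = rng.integers(0, 2, size=(n - 1, n))
    r = n - np.linalg.matrix_rank(A)      # dimension ("rank") of ker A; >= 1
    if r > 1:
        L = 2 ** n                        # every completion is counted
        last = rng.integers(0, 2, size=n)
    else:
        KH = kernel_intersect_hypercube(A)
        L = len(KH)                       # |K ∩ H| >= 1 (the zero vector)
        last = np.array(KH[rng.integers(L)])
    return np.vstack([A, last]), L

# Metropolis-style acceptance against the previous proposal's L
rng = np.random.default_rng(0)
M, L = propose(6, rng)
M2, L2 = propose(6, rng)
p = min(1.0, L2 / L)                      # acceptance probability for M2
```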
This appears to provide a straightforward MCMC algorithm for uniform sampling from singular Bernoulli matrices. All of the steps are efficient and straightforward except for one: enumerating $K \cap H$. How can we compute that in a (somewhat) practical way?
Observe that the probability that $r>1$ is less than the probability that a uniform Bernoulli matrix is singular. My earlier post addressing this question suggests that this probability is on the order of $2^{-n}$ for large $n$ (e.g. $n=100$). Therefore, we will essentially always be in the case $r=1$.
We can find an element in $K\cap H$ by using a mixed (binary) integer linear program. Specifically, we try to find a binary integer vector that is in the kernel $K$ (which is a linear constraint). We only need a feasible solution (i.e. "phase 1" of the MILP); the objective function doesn't matter. This produces a single element of $K\cap H$ (or shows that no such element exists).
Note that, for any $z\in H$, we can construct a linear function $f_z$ on $R^n$ that coincides on $H$ with the Hamming distance to $z$, to wit: $f_z(x)=\sum_{i=1}^{n}\left(z_i+(-1)^{z_i} x_i\right)$. That is, for any $x,z\in H$, $f_z(x)$ is equal to the Hamming distance between the two vectors.
This provides us with a method to enumerate all elements of $K\cap H$. First, we use the MILP to find one solution, $z\in H$. Then we add a linear constraint to the MILP forcing the Hamming distance from $z$ to be $\geq 1$. This removes $z$ from the feasible set but keeps all other integer solutions. We keep repeating this until the MILP finds no feasible binary solutions. This procedure will enumerate $K\cap H$.
Note that the all-zero vector provides us with an initial feasible solution (so $|K\cap H|\geq 1$). In the typical case that the first $n-1$ rows are distinct and non-zero, there are at least $n$ solutions: the all-zeroes vector, and each of the other $n-1$ rows. Therefore, we can usually skip the first $n$ steps and just add those constraints to the MILP directly.
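The cut-generation loop can be sketched with SciPy's MILP interface (`scipy.optimize.milp`, available in SciPy ≥ 1.9), again reading the kernel constraint literally as $Ax=0$; this is an illustrative sketch rather than tuned code:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def enumerate_KH(A):
    """Enumerate K ∩ H by repeatedly solving a feasibility MILP and
    cutting off each solution z with the linear Hamming-distance
    constraint f_z(x) >= 1."""
    n = A.shape[1]
    cons = [LinearConstraint(A, 0, 0)]          # x lies in the kernel of A
    found = []
    while True:
        res = milp(c=np.zeros(n),               # feasibility only: zero objective
                   constraints=cons,
                   integrality=np.ones(n),      # all variables integer
                   bounds=Bounds(0, 1))         # ... and binary
        if res.x is None:
            return found                        # infeasible: enumeration done
        z = np.round(res.x).astype(int)
        found.append(tuple(z))
        # f_z(x) = sum_i (z_i + (-1)^{z_i} x_i) >= 1 cuts off exactly z
        a = np.where(z == 1, -1.0, 1.0)
        cons.append(LinearConstraint(a, 1 - z.sum(), np.inf))

A = np.array([[1, 0, 0], [0, 1, 0]])  # first two rows of the identity
sols = enumerate_KH(A)
```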
For a particularly bad $A$, this procedure may take exponential time. For example, if $A$ is the first $n-1$ rows of an identity matrix, then $r=1$ but $|K\cap H|=2^{n-1}$, so the outer loop may take exponentially many steps before terminating. Moreover, the MILP might take exponentially long to identify a single feasible solution.
In practice (i.e. for a random $A$), I imagine that the $n$ solutions mentioned above are almost certainly the only solutions, so we probably only need to solve a single MILP (which will prove that there are no more feasible solutions). The practical question then becomes how long this MILP takes. I think we would just need to try it to find out. In my experience, CPLEX tends to be much faster than GLPK at solving MILPs, but I don't have a license handy to try it out.
There are a few optimizations that might be worth mentioning.
We could modify the MCMC algorithm so that with probability 1/2 it proceeds as above, but with probability 1/2 it reuses the same $A$ and just resamples the last row from $K\cap H$. Since $K\cap H$ has already been enumerated, and since these states are all equally likely, this second possibility takes essentially no compute time and MCMC always accepts. The net effect is much faster "local" mixing but longer scale dependence on $A$. Whether that is a good idea or not depends on what the samples are being used for. Similarly, with a certain probability we can permute the rows, columns, or transpose the matrix (or some combination of all three).
Finally, it might be worth mentioning an alternate approach. We might call the previous suggestion the "kernel/MILP" approach, and the following suggestion the "cokernel/dynamic program" approach. (I'm not sure if I should break this off as a separate post, but the thought of providing 3 separate "answers" to this question made me feel sheepish.)
Note that if the matrix $M$ is singular, then there exists some row that is a linear combination of the other rows; therefore, there is some row vector $v\in R^n$ such that $vM=0$. I believe that in the preponderance of cases, $v$ is extremely sparse and has small entries. For example, if a row is all zeros, then $v$ has weight 1 and coefficient 1; if two rows are co-linear, then $v$ has weight 2 and coefficients $1$ and $-1$. (By "weight", I mean the number of non-zero entries of $v$.)
Suppose that we make a proposal for $v\in Z^n$, where we (strongly) bias our distribution to favor sparse vectors with small coefficients. For all the zero entries in $v$, we choose the entries of the corresponding rows of $M$ as uniformly Bernoulli samples. For the remaining rows, we can compute all valid sets of 0/1 entries such that the dot product with $v$ is zero. This involves solving the subset sum problem. This is NP-complete, but it is pseudo-polynomial-time solvable (with a dynamic program). We have "stacked the deck" in our favor by sampling from vectors with small entries, so this step will be efficient (that is, we can achieve polynomial expected work).
If $v$ has weight $k$, the solution to the dynamic program tells us the set $S$ of valid $k$-tuples for each column; the number of consistent solutions is then $|S|^n$. We can then perform an MCMC random walk with acceptance probability proportional to $|S|$ (eliding details). I imagine that the dynamic program would be vastly faster than the MILP in practice (in addition to being more theoretically tractable), so an approach along these lines would probably be much faster.
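For modest weight $k$, the per-column set $S$ can even be found by direct enumeration of the $2^k$ candidate 0/1 tuples, standing in here for the subset-sum dynamic program (a sketch; the names are illustrative):

```python
import itertools

def column_patterns(v):
    """All 0/1 tuples w (one entry per nonzero coefficient of v) with
    sum_i v_i * w_i == 0. Each column of M, restricted to the rows in
    the support of v, must be one of these patterns, so the number of
    consistent matrices is |S| ** n."""
    vk = [c for c in v if c != 0]
    return [w for w in itertools.product((0, 1), repeat=len(vk))
            if sum(c * wi for c, wi in zip(vk, w)) == 0]

S = column_patterns([1, -1])   # the "two co-linear rows" case
# the two rows must agree in every column: S = [(0, 0), (1, 1)]
```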
By the way, it is possible to bound the maximum possible value for an entry of $v$. This is essentially equivalent to Hadamard's maximum determinant problem, but unfortunately the bound is $(n+1)^{(n+1)/2}2^{-n}$ (so the corresponding dynamic program would be exponentially large.)
That said, I have gotten stuck with some technical issues when trying to construct the MCMC, so I presented the "kernel/MILP" approach above.
|
The concept of a "proliferating random walk" on a lattice is that at any time $t \in \Bbb N \cup \{0\}$, there is a set of at least one particle, each on its own lattice point. At each time step, every particle moves to a neighbouring lattice point, and may also randomly split into two particles, which move to two different lattice points. If two particles would reach the same lattice point at some step, they coalesce back into one particle.
To be precise for the problem of interest, define for any real constant $0 \leq \rho \leq 1$ a $\rho$-proliferating walk in $d$ dimensions as a Markov process evolving in discrete integer time steps, with the following properties:
The state $S_t$ at any time $t$ is described by a finite subset of $\Bbb Z^d$.
The "evolution component" attributed to any point $x \in \Bbb Z^d$ is the set $$ e_{t+1}(x) = \left\{ \begin{array}{cl} \emptyset & x \not\in S_t \\ \left\{\begin{array}{l} r_2(x) \mbox{ with probability }\rho \\r_1(x) \mbox{ with probability }1-\rho \end{array}\right.& x \in S_t \end{array}\right. $$ where $r_1(x)$ is a set containing a single point one lattice step from $x$, with all such sets chosen with equal probability, and $r_2(x)$ is a set containing two points, each one lattice step from $x$, with all such sets chosen with equal probability.
The state evolves as $$S_{t+1} = \bigcup_{x\in \Bbb Z^d} e_{t+1}(x) = \bigcup_{x\in S_t} e_{t+1}(x) $$
The initial state is $S_0 = \{ 0^d \}$, that is, one particle at the origin.
Further, for some $d$ and $\rho$, define the probability $RV(d,\rho)$ of revisiting the origin as the probability that at some time $t>0$ the origin will be an element of $S_t$. I don't use the term "return to origin" because that ought to be reserved to mean that $S_t = S_0$. The walk can revisit the origin even if $S_t$ has many elements. If the $d$-dimensional random walk with proliferation $\rho$ revisits the origin a.s., we say that $RV(d,\rho) = 1$.
For $\rho = 0$, the proliferating random walk is an ordinary symmetric random walk, which in three dimensions returns to the origin with probability given by Pólya's random walk constant, roughly 34%. But for $\rho > 0$ the number of particles in $S_t$ tends to increase with $t$ (and this increase is rapid for $d>2$), so it is plausible that for sufficiently large $\rho$ the Markov process for $S_t$ revisits the origin almost surely, at least in $3$ dimensions.
It is easy to prove that $RV(d,\rho) \geq RV(d,0)$, and not much harder to show that if $\xi > \rho$ then $RV(d,\xi) \geq RV(d,\rho)$. I strongly suspect that $RV(3,1) = 1$, that is, the 3-D random walk with particle doubling at every time step revisits the origin a.s. -- but I can't prove that.
The generic question that arises is to characterise, for any given $d$, the range of $\rho$ such that $RV(d,\rho) < 1$. In particular:
Is $RV(3,1) = 1$, and if so, what can be said about the minimum value of $\rho$ such that $RV(3,\rho) = 1$?
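The process is cheap to simulate, which at least gives numerical evidence. A minimal sketch (Python): a finite simulation can only ever lower-bound the revisit probability, and the truncation horizon and trial count below are arbitrary choices.

```python
import random

def revisits_origin(d, rho, t_max, rng):
    """Run one rho-proliferating walk on Z^d for t_max steps and report
    whether the origin is occupied at some time t > 0."""
    steps = [tuple(s if j == i else 0 for j in range(d))
             for i in range(d) for s in (1, -1)]     # the 2d lattice steps
    state = {(0,) * d}
    for _ in range(t_max):
        nxt = set()
        for x in state:
            k = 2 if rng.random() < rho else 1       # split with probability rho
            for s in rng.sample(steps, k):           # k distinct neighbours
                nxt.add(tuple(a + b for a, b in zip(x, s)))
        state = nxt                                  # set union = coalescence
        if (0,) * d in state:
            return True
    return False

rng = random.Random(1)
trials = 10
est = sum(revisits_origin(3, 1.0, 12, rng) for _ in range(trials)) / trials
```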
|
I am stuck with a basic understanding of the generalized (and even the ordinary version of) Gauss-Bonnet theorem. For a compact 2-dimensional Riemannian manifold $M$ with boundary $\partial M$, let $K$ be the Gaussian curvature of $M$ and $k_g$, the geodesic curvature of $\partial M$. Then
$$\int_M K\;dA+\int_{\partial M}k_g\;ds=2\pi\chi(M),$$
where $\chi(M)$ is the Euler characteristic of $M$. My questions are:
The Gaussian curvature and the geodesic curvature are functions of the connection that one puts on $M$; in the standard version of the theorem, we usually use the connection induced on the 2-manifold $M$ from its Euclidean embedding space $\mathbb{R}^3$. The right hand side of the above equation, however, is a topological invariant of $M$, and thus independent of any connection that we adorn $M$ with; so the left hand side should be independent of the connection as well. How is this invariance with respect to the connection on $M$ concealed in the left hand side integrals?
I have only seen the statement of the generalized Gauss-Bonnet theorem (from the book "From Calculus to Cohomology") and I guess it partially answers my question, but I don't have a clear understanding of its underpinning. The generalized version says that, for any $2n$-dimensional compact oriented smooth manifold $M$,
$$\int_M Pf\bigg(\frac{-F^{\nabla}}{2\pi}\bigg)=\chi(M)$$
holds, where $F^\nabla$ is the curvature associated with
any metric connection on the tangent bundle of $M$. Here, $Pf:\mathfrak{so}_{2n}\to\mathbb{R}$ is something called the Pfaffian and is defined on the space of skew-symmetric matrices.
The definition of the Pfaffian, then, must be the answer to my question. How does the Pfaffian make the left hand side invariant with respect to the connection? Why do we require the evenness of the dimension and the orientation of $M$? And finally, why do we require a metric-compatible (perhaps, torsion free) connection in the first place?
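For what it's worth, the case $n=1$ already shows how the Pfaffian reproduces the classical statement; modulo the sign convention chosen for $F^\nabla$ (which varies between references), in a local oriented orthonormal frame on a closed oriented surface one has

```latex
% n = 1, closed oriented surface M: in a local oriented orthonormal frame,
% the curvature of a metric connection is a 2x2 skew matrix of 2-forms
% whose single independent entry is (up to the chosen sign convention) K dA.
\operatorname{Pf}\begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix} = a
\quad\Longrightarrow\quad
\operatorname{Pf}\!\left(\frac{-F^{\nabla}}{2\pi}\right) = \frac{K}{2\pi}\,dA,
\qquad
\int_M \frac{K}{2\pi}\,dA = \chi(M).
```

Invariance under a change of metric connection is then the content of Chern-Weil theory: $\operatorname{Pf}$ is an $SO(2n)$-invariant polynomial on $\mathfrak{so}_{2n}$ (invariant under $SO$ but not under all of $O$, which is where orientation and even dimension enter).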
I have just started reading about the Gauss-Bonnet theorem, and I guess my query lies at the heart of the underlying philosophy of this gem of a theorem of differential topology.
Looking forward to a detailed explanation, or references on this particular point.
(I think a partial answer to my question is in Prof. Bryant's answer to the question "A question on Generalized Gauss-Bonnet Theorem".)
|
If tidal power plants are slowing down Earth's rotation then is it theoretically possible to build a power plant that would drain energy from Earth's angular momentum (thus slowing down it's rotation)?
What would such machine look like?
There are a few important points to make before I attempt an answer. The first is just a point about terminology: you have to be careful to distinguish between energy and (angular) momentum. They are not the same thing - they have different units - and so it doesn't strictly make sense to talk about "draining energy from Earth's angular momentum", since it's impossible to convert one of those quantities into the other. This doesn't mean it's a bad question, it's just that you have to be careful to understand the difference.
The second important point is that angular momentum is conserved. If you drain it from one thing, it has to go into something else. Energy is the same, but energy can come in useful forms that we mostly call "work" and not-so-useful forms that we mostly call "heat". When we say "generating energy" we really mean converting it into a useful form.
Although you can't convert angular momentum into energy, it is possible to generate useful energy by transferring angular momentum from one body to another. The rule is that you can do this whenever the angular velocities of the two bodies are different, as Peter Shor mentioned in his answer. This is exactly the same rule as the one that allows you to extract energy by transferring charge when two voltages are different, or by transferring linear momentum when the velocities are different, or by transferring heat when the temperatures are different.
The main point is that if you want to generate energy by decreasing Earth's angular momentum, you have to transfer that angular momentum somewhere else. One place to put it is the Moon, which is effectively what happens when you extract energy from the tides. (The moon orbits the Earth more slowly than the Earth rotates, so their angular velocities around the centre of the Earth are different.) However, it's difficult to imagine how the flow of angular momentum from the Earth to the Moon could be sped up.
However, there is another way you could do it, and that's to launch things into space. If you can get mass far enough away from Earth then it will have a lower angular velocity, and so in theory you could generate useful work by transferring matter from Earth's surface to deep space.
But the question is how you could do it. Launching stuff in rockets won't do the job, since it takes loads of energy to accelerate things to escape velocity, and there's no practical way to get it back. However, with a space elevator it would be different. Imagine a space elevator that's so long that the centrifugal force at the counterweight end is much larger than $1g$. You have to expend some energy lifting mass up to the zero-g point, but this will be more than outweighed by the energy you can gain by letting the mass power a generator as it moves from the zero-g point to the counterweight. Once it gets there its velocity will be enough to escape the Earth's gravity well and you can just let it go, sending the lift back to collect the next mass.
If you do this repeatedly you will have a net gain of energy, just from sending garbage into space, and it ultimately works because it transfers angular momentum from a region of high angular velocity (the Earth) to one of low angular velocity (objects moving in space far from Earth). However, there are enormous practical issues that would probably prevent it from ever being done. Aside from the strength of cable needed, the angular momentum will be transferred not directly from the surface of the Earth but from the whole length of the cable as the mass moves along it, which will make it swing around like crazy unless it can somehow be made to behave as if it's rigid. But nevertheless, it's possible in principle.
Come to think of it, if we're coming up with crazy schemes that would only work in principle, I guess you could build a rail track around the equator of the Earth, put a very heavy train on it, and attach the train via a cable to the moon so that it pulls the train along, powering a generator. This would be an effective way to generate work by increasing the transfer of angular momentum between the two bodies, but of course it's pure fantasy.
Yes, many things rob Earth's angular momentum and slow it down. For example, when a rocket lands on Earth, it gains or loses speed due to the Earth's rotation. Likewise, when a rocket leaves Earth, it can use the fact that the Earth rotates in order to gain kinetic energy from the Earth's rotation. In other words, the rocket robs the Earth's energy.
NASA uses this fact to their advantage. They 'rob' Earth's energy to speed up their spacecraft during launch.
This relates to angular momentum because the rocket acts as a mass being added to or removed from the side of a spinning 'sphere' (the Earth). Working through the calculation, you would find the usual conservation of energy and conservation of angular momentum.
However, we don't have power plants doing that because it is highly impractical.
It is impractical because we have an atmosphere, which moves along with the Earth's rotation.
Edit: Let's imagine that Earth's atmosphere doesn't exist. So we have a big rotating sphere in vacuum we could launch a spacecraft in the direction of rotation, have some turbines 'take' the spacecraft's kinetic energy, let the spacecraft land, and repeat. It's a simplified version of some system. We could even use magnets and Faraday's law (the spacecraft is a magnet) instead of some turbine.
$\Delta L = m v_{initial} r - (\Sigma m) v_{final} r$
$= m_{earth} v_{earth,initial} r_{earth} - v_{earth,final} (m_{earth} r_{earth} + m_{rocket} r_{rocket})$
Since $v_{initial} \approx v_{final}$, we have $\Delta L \approx -m_{rocket} v r$
To harness the energy of the Earth's rotation, you need something with different angular velocity (just like harnessing thermal energy requires two things at different temperature). Everything on Earth is rotating with the same speed, and so has the same angular velocity; the closest thing with a different angular velocity is the Moon.
This is in fact what is powering the tides: the Moon is gaining angular momentum and changing its orbit. However, we have no practical way to speed up the transfer of angular momentum to the Moon, so harnessing Earth's angular momentum in any way other than using the existing tides is completely impractical.
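As an aside, the size of the reservoir itself is not the obstacle. A back-of-the-envelope estimate (Python; the standard values used, and the round figure of $6\times10^{20}$ J per year for world energy consumption, are assumptions):

```python
# Earth's rotational kinetic energy, E = (1/2) I omega^2, using
# standard values: moment of inertia I ~ 8.0e37 kg m^2 and sidereal
# angular velocity omega ~ 7.292e-5 rad/s.
I = 8.0e37          # kg m^2
omega = 7.292e-5    # rad/s
E = 0.5 * I * omega**2          # ~2e29 J

# Compared with a rough figure of ~6e20 J/yr of world energy use,
# the reservoir would last on the order of hundreds of millions of
# years; the problem is access, not supply.
years = E / 6e20
```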
Technically speaking, every time you exert a force on planet Earth, you are accelerating it. Take this case for example:
Earth spins counter-clockwise, i.e. towards the East as observed on Earth. Thus, should you exert a force in that same direction (walking West, you apply a force on the planet to the East), you would be increasing the rate at which Earth spins (albeit a TINY increase). In the same way, walk to the East and you'll apply a tangential force towards the West that actually slows Earth's rotation slightly.
This behavior would obviously scale up for more massive objects exerting larger forces, resulting in the slight decrease apparently caused by the power plants you mentioned.
Back to your case for building a plant to harness this energy, though... There are currently no plans to try and harness this energy, as the facility required to do it would likely be ridiculously impractical.
|
Schedule of the Workshop "Picard-Fuchs Equations and Hypergeometric Motives"

Monday, March 26
10:15 - 10:50 Registration & Welcome coffee
10:50 - 11:00 Opening remarks
11:00 - 12:00 Frits Beukers: Some supercongruences of arbitrary length
12:00 - 13:45 Lunch break
13:45 - 14:45 Alexander Varchenko: Solutions of KZ differential equations modulo p
15:00 - 16:00 Bartosz Naskręcki: Elliptic and hyperelliptic realisations of low degree hypergeometric motives
16:00 - 16:30 Tea and cake
16:30 - 17:30 Roberto Villaflor Loyola: Periods of linear algebraic cycles in Fermat varieties
afterwards: Reception

Tuesday, March 27
09:30 - 10:30 Mark Watkins: Computing with hypergeometric motives in Magma
10:30 - 11:00 Group photo and coffee break
11:00 - 12:00 Madhav Nori: Semi-Abelian Motives
12:00 - 13:45 Lunch break
13:45 - 14:45 Wadim Zudilin: A q-microscope for hypergeometric congruences
15:00 - 16:00 Masha Vlasenko: Dwork Crystals and related congruences
16:00 - 16:30 Tea and cake

Wednesday, March 28
09:30 - 10:30 Jan Stienstra: Zhegalkin Zebra Motives, digital recordings of Mirror Symmetry
10:30 - 11:00 Coffee break
11:00 - 12:00 R. Paul Horja: Spherical Functors and GKZ D-modules
12:00 - 13:45 Lunch break
13:45 - 14:45 Duco van Straten: Frobenius structure for Calabi-Yau operators
15:00 - 16:00 Kiran S. Kedlaya: Frobenius structures on hypergeometric equations: computational methods
16:00 - 16:30 Tea and cake

Thursday, March 29
09:30 - 10:30 Danylo Radchenko: Goursat rigid local systems of rank 4
10:30 - 11:00 Coffee break
11:00 - 12:00 Damian Rössler: The arithmetic Riemann-Roch Theorem and Bernoulli numbers
12:00 - 13:45 Lunch break
13:45 - 14:45 Robert Kucharczyk: The geometry and arithmetic of triangular modular curves
15:00 - 16:00 John Voight: On the hypergeometric decomposition of symmetric K3 quartic pencils
16:00 - 16:30 Tea and cake

Friday, March 30: no talks (holiday)

Abstracts

Frits Beukers: Some supercongruences of arbitrary length
In joint work with Eric Delaygue it is shown that truncated hypergeometric sums with parameters ½,...,½ and 1,...,1, evaluated at the point 1, are equal to Dwork's unit-root eigenvalue modulo $p^2$. Congruences modulo $p$ follow directly from Dwork's work; the fact that the congruence holds modulo $p^2$ accounts for the name 'supercongruence'.
R. Paul Horja: Spherical Functors and GKZ D-modules
Some classical mirror symmetry results can be recast using the more recent language of spherical functors. In this context, I will explain a Riemann-Hilbert type conjectural connection with the GKZ D-modules naturally appearing in toric mirror symmetry.
Kiran S. Kedlaya: Frobenius structures on hypergeometric equations: computational methods
Current implementations of the computation of L-functions associated to hypergeometric motives in Magma and Sage rely on a p-adic trace formula. We describe and demonstrate (in Sage) an alternate approach based on computing the right Frobenius structure on the hypergeometric equation. This gives rise to a conjectural formula for the residue at 0 of this Frobenius structure in terms of p-adic Gamma functions, related to Dwork's work on generalized hypergeometric functions.
Robert Kucharczyk: The geometry and arithmetic of triangular modular curves
In this talk I will take a closer look at triangle groups acting on the upper half plane. Except for finitely many special cases, which are highly interesting in themselves, these are non-arithmetic groups. However, a notion of congruence subgroup is well-defined for these, and there are natural moduli problems that are classified by quotients of the upper half plane by such subgroups, giving rise to models over number fields. These curves have much to do with very classical mathematics, and they build a bridge between the hypergeometric world and the world of Shimura varieties. This is ongoing joint work with John Voight, who is also present at this conference.
Bartosz Naskręcki: Elliptic and hyperelliptic realisations of low degree hypergeometric motives
In this talk we will discuss the so-called hypergeometric motives and how one can approach the problem of their explicit construction as Chow motives in explicitly given algebraic varieties. The class of hypergeometric motives corresponds to Picard-Fuchs equations of hypergeometric type and forms a rich family of pure motives with nice L-functions. Following recent work of Beukers-Cohen-Mellit we will show how to realise certain hypergeometric motives of weights 0 and 2 as submotives in elliptic and hyperelliptic surfaces. An application of this work is the computation of minimal polynomials of hypergeometric series with finite monodromy groups and a proof of identities between certain hypergeometric finite sums, which mimic well-known identities for classical hypergeometric series. This is part of the larger program conducted by Villegas et al. to study hypergeometric differential equations (special cases of differential equations "coming from algebraic geometry") from the algebraic perspective.
Madhav Nori: Semi-Abelian Motives
joint work with Deepam Patel
Danylo Radchenko: Goursat rigid local systems of rank 4
I will talk about certain rigid local systems of rank 4 considered by Goursat, with emphasis on explicit constructions and examples. The talk is based on joint work with Fernando Rodriguez Villegas.
Damian Rössler: The arithmetic Riemann-Roch theorem and Bernoulli numbers
(with V. Maillot) We shall show that integrality properties of the zero part of the abelian polylogarithm can be investigated using the arithmetic Adams-Riemann-Roch theorem. This is a refinement of the arithmetic Riemann-Roch theorem of Bismut-Gillet-Soulé-Faltings, which gives more information on denominators of Chern classes than the original theorem. We apply this theorem to the Poincaré bundle on an abelian scheme, and the final calculation involves a variant of von Staudt's theorem.
On a canonical class of Green currents for the unit sections of abelian schemes. Documenta Math. 20 (2015), 631–668
Jan Stienstra: Zhegalkin zebra motives, digital recordings of Mirror Symmetry
I present a very simple construction of doubly-periodic tilings of the plane by convex black and white polygons. These tilings are the motives in the title. The vertices and edges in the tiling form a quiver (=directed graph) which comes with a so-called potential, provided by the polygons. Dual to this graph one has the bipartite graph formed by the black/white polygons and the edges in the tiling. We deform this structure by putting weights on the edges and connect this with representations of the Jacobi algebra of the quiver with potential and with the Kasteleyn matrix of the bi-partite graph.
Duco van Straten: Frobenius structure for Calabi-Yau operators
This is a report on joint work in progress with P. Candelas and X. de la Ossa on the (largely conjectural) computation of Euler factors from Calabi-Yau operators. The method uses Dwork's deformation method starting from a simple Frobenius matrix at the MUM-point that involves a p-adic version of ζ(3). We give some new applications, in particular to the determination of congruence levels.
Alexander Varchenko: Solutions of KZ differential equations modulo p
Polynomial solutions of the KZ differential equations over a finite field Fp will be constructed as analogs of multidimensional hypergeometric solutions.
Roberto Villaflor Loyola: Periods of linear algebraic cycles in Fermat varieties
In this talk we will show how a theorem of Carlson and Griffiths can be used to compute periods of linear algebraic cycles inside Fermat varieties of even dimension. As an application we prove that the locus of hypersurfaces containing two linear cycles whose intersection is of low dimension, is a reduced component of the Hodge locus in the underlying parameter space. Our method can be used to verify similar statements for other kind of algebraic cycles (for example complete intersection algebraic cycles) by means of computer assistance. This is joint work with Hossein Movasati.
Masha Vlasenko: Dwork crystals and related congruences
In the talk I will describe a realization of the p-adic cohomology of an affine toric hypersurface which originates in Dwork's work and give an explicit description of the unit-root subcrystal based on certain congruences for the coefficients of powers of a Laurent polynomial. This is joint work with Frits Beukers.
John Voight: On the hypergeometric decomposition of symmetric K3 quartic pencils
We study the hypergeometric functions associated to five one-parameter deformations of Delsarte K3 quartic hypersurfaces in projective space. We compute all of their Picard–Fuchs differential equations; we count points using Gauss sums and rewrite this in terms of finite field hypergeometric sums; then we match up each differential equation to a factor of the zeta function, and we write this in terms of global $L$-functions. This computation gives a complete, explicit description of the motives for these pencils in terms of hypergeometric motives.
This is joint work with Charles F. Doran, Tyler L. Kelly, Adriana Salerno, Steven Sperber, and Ursula Whitcher.
Mark Watkins: Computing with hypergeometric motives in Magma
We survey the computational vistas that are available for computing with hypergeometric motives in the computer algebra system Magma. Various examples that exemplify the theory will be highlighted.
Wadim Zudilin: A q-microscope for hypergeometric congruences
By examining asymptotic behavior of certain infinite basic ($q$-) hypergeometric sums at roots of unity (that is, at a "$q$-microscopic" level) we prove polynomial congruences for their truncations. The latter reduce to non-trivial (super)congruences for truncated ordinary hypergeometric sums, which have been observed numerically and proven rarely. A typical example includes derivation, from a q-analogue of Ramanujan's formula $$ \sum_{n=0}^\infty\frac{\binom{4n}{2n}{\binom{2n}{n}}^2}{2^{8n}3^{2n}}\,(8n+1)=\frac{2\sqrt{3}}{\pi}, $$ of the two supercongruences $$ S(p-1)\equiv p\biggl(\frac{-3}p\biggr)\pmod{p^3} \quad\text{and}\quad S\Bigl(\frac{p-1}2\Bigr) \equiv p\biggl(\frac{-3}p\biggr)\pmod{p^3}, $$ valid for all primes $p>3$, where $S(N)$ denotes the truncation of the infinite sum at the $N$-th place and $(\frac{-3}{\cdot})$ stands for the quadratic character modulo $3$.
|
Let $(X,d)$ be a metric space, show that if $A \subset X$ is connected, $B \subset X$ with $A \subset B \subset cl(A)$ then B is connected.
My approach:
Let's assume that B is not connected; then by definition there exist two open, disjoint subsets $U, V \subset X$ such that $ B \subset U \cup V, B \cap V \neq \emptyset, B \cap U \neq \emptyset$. Let $b_1 \in B \cap U, b_2 \in B \cap V$. Then since $B \subset cl(A)$, we have $\forall \epsilon \gt 0: K(b_1, \epsilon) \cap A \neq \emptyset$ and $\forall \epsilon \gt 0: K(b_2, \epsilon) \cap A \neq \emptyset$.
Now I'm not sure whether the next step is correct: to me it seems as if, since $K(b_1, \epsilon) \cap A \neq \emptyset$ is true for all $\epsilon \gt 0$, there has to be an element $a_1 \in A : a_1 = b_1$, and the same logic would apply to $b_2$.
Then there would exist $U, V \subset X : A \subset U \cup V, A \cap V \neq \emptyset, A \cap U \neq \emptyset$, therefore A is not connected.
Is that correct?
|
The least common multiple of two integers, $a$ and $b$, can be calculated by the following procedure:
Decompose $a$ and $b$ into prime factors;
For every prime factor that is in $a$ but is not in $b$, put it in the $lcm$ factorization with the exponent it has in $a$;
For every prime factor that is in $b$ but is not in $a$, put it in the $lcm$ factorization with the exponent it has in $b$;
For every prime factor that is both in $a$ and in $b$, put it in the $lcm$
but with the biggest exponent of the two.
Example:
$a = 2^2 \cdot 5 \cdot 101^3\\b = 2^3 \cdot 7^4 \cdot 11$
$5$ and $101$ appear in $a$ but not in $b$, so we get $5 \cdot 101^3$.
$7$ and $11$ appear in $b$ but not in $a$, so we get $5 \cdot 101^3 \cdot 7^4 \cdot 11$.
Finally, we have $2$ in both factorizations. Since the greatest exponent is $3$, we get $5 \cdot 101^3 \cdot 7^4 \cdot 11 \cdot 2^3 = 2^3 \cdot 5 \cdot 7^4 \cdot 11 \cdot 101^3$.
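In code, the procedure above amounts to factoring both numbers and taking the larger exponent of every prime. A minimal Python sketch (the function names are ours):

```python
from math import gcd

def factorize(n):
    """Prime factorization of n as a {prime: exponent} dict (trial division)."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def lcm_by_factorization(a, b):
    """lcm(a, b): for every prime, keep the larger of the two exponents."""
    fa, fb = factorize(a), factorize(b)
    result = 1
    for p in set(fa) | set(fb):
        result *= p ** max(fa.get(p, 0), fb.get(p, 0))
    return result

# The example from the text.
a = 2**2 * 5 * 101**3
b = 2**3 * 7**4 * 11
assert lcm_by_factorization(a, b) == 2**3 * 5 * 7**4 * 11 * 101**3
# Cross-check against the classical identity lcm(a, b) = a*b / gcd(a, b).
assert lcm_by_factorization(a, b) == a * b // gcd(a, b)
```

The cross-check against `a * b // gcd(a, b)` confirms the max-exponent rule on this example.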
Therefore, if we have $lcm(n, m) = 600$, then neither $n$ nor $m$ can have prime factors other than $2, 3$ or $5$. What is more, none of them can have $2$ to a greater power than $3$, $3$ to a greater power than $1$ and $5$ to a greater power than $2$.
We can see that we must have the following:
$n = 2^{a_1} \cdot 3^{a_2} \cdot 5^{a_3}\\m = 2^{b_1} \cdot 3^{b_2} \cdot 5^{b_3}$
But also, we must have the following:
$$\begin{cases}\max\{a_1, b_1\} = 3\\\max\{a_2, b_2\} = 1\\\max\{a_3, b_3\} = 2\\0 \leq a_1,b_1 \leq 3\\0 \leq a_2, b_2 \leq 1\\0 \leq a_3, b_3 \leq 2\end{cases}$$
Therefore it is a simple matter of counting. Also note that for every possible factorization of $n$, if $n$ has one of its prime factors to a power that is not the maximum needed, then $m$'s prime is determined. For example, if you are counting the possibilities for $m$ when $n = 2\cdot3\cdot5^2$, you know immediately that $m$ has $2^3$ in its factorization and thus $m = 2^3 \cdot 3^x \cdot 5^y$ with $x$ and $y$ varying.
EDIT: included final computation:
Note that if $n$ does not have any of $2^3, 3$ and $5^2$, then $m$ is automatically determined. For those such $n$, you can pick any of $2^0, 2^1, 2^2$, then you have to pick $3^0$ and you can pick $5^0$ or $5^1$. Thus there are $3\cdot1\cdot2 = 6$ such pairs where $m$
must be $600$.
The remaining cases are counted by fixing what powers $n$ already has complete:
$n$ only has $2^3$. There are $2$ ways to finish it (because it does not have $3$ nor $5^2$). $m$ has $4$ possibilities for the exponent of $2$ and the others are already determined. Thus there are $2\cdot4 = 8$ pairs.
$n$ only has $3$. There are $3\cdot2$ ways to finish it. $m$ can only have $3^0$ or $3^1$ thus there are $6\cdot2 = 12$ such pairs.
$n$ only has $5^2$. There are $3\cdot1$ ways to finish it. $m$ can have either $5^0$, $5^1$ or $5^2$ while its other exponents are already determined, thus there are $3\cdot3 = 9$ such pairs.
$n$ has $2^3$ and $3$: gives $2 \cdot 4 \cdot 2 = 16$ pairs.
$n$ has $2^3$ and $5^2$: gives $1 \cdot 4 \cdot 3 = 12$ pairs.
$n$ has $3$ and $5^2$: gives $3 \cdot 2 \cdot 3 = 18$ pairs.
$n$ has $2^3$, $3$ and $5^2$: gives $4 \cdot 2 \cdot 3 = 24$ pairs.
Summing up, we have $24 + 18 + 12 + 16 + 12 + 9 + 8 + 6 = 105$ pairs.
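Since $600$ is small, the whole case analysis can be cross-checked by brute force over pairs of divisors, using the identity $\operatorname{lcm}(n,m) = nm/\gcd(n,m)$. A short Python sketch:

```python
from math import gcd

def count_pairs(target):
    """Count ordered pairs (n, m) of positive integers with lcm(n, m) == target.

    Both n and m must divide the target, so it suffices to scan divisor pairs.
    """
    divisors = [d for d in range(1, target + 1) if target % d == 0]
    return sum(1 for n in divisors for m in divisors
               if n * m // gcd(n, m) == target)

# For target = p1^e1 * ... * pk^ek the count is prod(2*ei + 1):
# each max{ai, bi} = ei constraint admits 2*ei + 1 exponent pairs.
assert count_pairs(600) == (2*3 + 1) * (2*1 + 1) * (2*2 + 1)  # 7 * 3 * 5 = 105
```

The closed form $\prod_i (2e_i+1)$ counts, for each prime, the exponent pairs $(a_i,b_i)$ with $\max\{a_i,b_i\}=e_i$.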
|
Use small o notation with a special care on which variable goes to infinity. What you said is basically the following.
Claim(?): Let $f(n,x)$ be a function in $n$ and $x$. Suppose $\lim \frac{f(n,x)}{x/n} = 0$. Then we have $\lim \frac{\sum_{n\leq x} f(n,x)}{\sum_{n \leq x} x/n} = 0$.
Is this claim true? It depends on context (so I intentionally omit how the limit is taken). In your question, you interpreted the small o notation as $\lim_{n\leq x,\, x\rightarrow\infty} f(n,x) = 0 $ uniformly in $n\leq x$.
I haven't looked into his thesis, but based on what was written, I assume that what the author meant by the formula $o(x/n)$ is any function of the form $\phi(x/n)$ where $\phi$ is a function which satisfies $\lim_{u\rightarrow \infty} \phi(u) / u = 0$ (or, $o(x/n)$ uniformly as $x/n \rightarrow \infty$). What makes this more confusing is that, in the last formula $o(x\log x)$, the limit is taken as $x \rightarrow \infty$. So it is indeed the same notation with a different meaning from line to line. I'll write $o_{x/n}$ to denote the former and $o_x$ for the latter. Then your interpretation is$$ \sum_{n\leq x} o_x \left ( \frac{x}{n} \right ) = x\, o_x \left (\sum_{n\leq x} \frac{1}{n} \right ) = o_x (x \log x),$$which is obviously true. I believe the author's intention was$$ \sum_{n\leq x} o_{x/n} (x/n) = o_x ( x \log x),$$ which is not automatic (for instance, when $x/2\leq n\leq x$, $x/n$ is bounded, so we don't know that it is $o_x(1)$). Thus the author goes in two steps to handle this.
What reuns said I believe is that $\sum_{n\leq x} o_n (1/n^2) \neq o_x(1)$. So it all depends on the context after all.
|
I already asked a question regarding how to solve a nonlinear pde in mathematica which was answered nicely.
Actually this was a 1-dimensional form of a general 2D problem that I was trying to solve with MATLAB, but first I wanted to get a clue about the solution using Mathematica. Now I think I have to restrict myself to solving it in Mathematica. Here is the 2D form of the problem with its initial and boundary conditions:
$$\frac{\partial h}{\partial t} = -0.01 \bigg(\frac{\partial}{\partial x}\Big(h^3 \big(\frac{\partial^3h}{\partial x^3}+\frac{\partial ^3h}{\partial x\partial y^2}\big)\Big)+ \frac{\partial }{\partial y}\Big(h^3 \big(\frac{\partial ^3h}{\partial y^3}+\frac{\partial ^3h}{\partial y\partial x^2}\big)\Big)\bigg)$$ $$\frac{\partial h}{\partial x}=0,\ \ \frac{\partial^3h}{\partial x^3}+\frac{\partial ^3h}{\partial x\partial y^2}=0\ \ \text{when}\ \ x=\pm1$$ $$\frac{\partial h}{\partial y}=0,\ \ \frac{\partial ^3h}{\partial y^3}+\frac{\partial ^3h}{\partial y\partial x^2}=0\ \ \text{when}\ \ y=\pm1$$ $$h(0,x,y)=1+\cos(\pi x) \cos(\pi y)$$ I tried to do the same as the 1D case, however it gives me an error with the boundary conditions. I tried this:
t0 = 6;
BCLx1 = Derivative[0, 1, 0][u][t, -1, y];
BCRx1 = Derivative[0, 1, 0][u][t, 1, y];
BCLx3 = Derivative[0, 3, 0][u][t, -1, y] + Derivative[0, 1, 2][u][t, -1, y];
BCRx3 = Derivative[0, 3, 0][u][t, 1, y] + Derivative[0, 1, 2][u][t, 1, y];
BCLy1 = Derivative[0, 0, 1][u][t, x, -1];
BCRy1 = Derivative[0, 0, 1][u][t, x, 1];
BCLy3 = Derivative[0, 0, 3][u][t, x, -1] + Derivative[0, 2, 1][u][t, x, -1];
BCRy3 = Derivative[0, 0, 3][u][t, x, 1] + Derivative[0, 2, 1][u][t, x, 1];
sol = NDSolve[{D[u[t, x, y], t] ==
     -0.0192*(D[u[t, x, y]^3*(D[u[t, x, y], {x, 3}] + D[D[u[t, x, y], {y, 2}], x]), x] +
              D[u[t, x, y]^3*(D[u[t, x, y], {y, 3}] + D[D[u[t, x, y], {x, 2}], y]), y]),
    u[0, x, y] == Cos[π y] Cos[π x] + 1,
    BCLx1 == 0, BCRx1 == 0, BCLx3 == 0, BCRx3 == 0,
    BCLy1 == 0, BCRy1 == 0, BCLy3 == 0, BCRy3 == 0},
   u, {t, 0, t0}, {x, -1, 1}, {y, -1, 1},
   Method -> {"MethodOfLines",
     "SpatialDiscretization" -> {"TensorProductGrid", "MinPoints" -> 100}},
   PrecisionGoal -> 2];
Plot3D[{u[t0, x, y] /. sol}, {y, -1, 1}, {x, -1, 1}, PlotRange -> All]
It states that the boundary conditions include non-normal derivatives, yet they are all meaningful BCs based on the physics of the problem.
Any help in this regard is highly appreciated.
|
Let \(I, J\) be connected open intervals such that \(I \cap J\) is a nonempty proper sub-interval of both \(I\) and \(J\). For instance, \(I = (0, 2)\) and \(J = (1, 3)\) form an example.
Let \(f\) (\(g\), resp.) be an orientation-preserving homeomorphism of the real line \(\mathbb{R}\) such that the set of points of \(\mathbb{R}\) which are not fixed by \(f\) (\(g\), resp.) is precisely \(I\) (\(J\), resp.).
Show that for large enough integer \(n\), the group generated by \(f^n, g^n\) is isomorphic to the group with the following presentation
\[ \langle a, b \mid [ab^{-1}, a^{-1}ba] = [ab^{-1}, a^{-2}ba^2] = \mathrm{id} \rangle. \]
GD Star Rating loading...
Find the smallest prime number \( p \geq 5 \) such that there exist no integer coefficient polynomials \( f \) and \( g \) satisfying
\[
p | ( 2^{f(n)} + 3^{g(n)})
\]
for all positive integers \( n \).
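If some single pair of non-negative integers \( (a, b) \) satisfies \( p \mid 2^a + 3^b \), then the constant polynomials \( f = a, g = b \) already work; conversely, any valid \( f, g \) produce such a pair at \( n = 1 \). Under this reduction (which the solver should verify), the search is finite, since \( 2^a \) and \( 3^b \) mod \( p \) are periodic in \( a \) and \( b \). A brute-force Python sketch (the function name is ours; the answer is left to the solver):

```python
def smallest_bad_prime(limit=1000):
    """Smallest prime p >= 5 such that 2^a + 3^b is never divisible by p."""
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))
    for p in range(5, limit):
        if not is_prime(p):
            continue
        # Exponents only matter modulo the multiplicative orders of 2 and 3,
        # which divide p - 1, so ranging a, b over 0..p-2 covers every case.
        if not any((pow(2, a, p) + pow(3, b, p)) % p == 0
                   for a in range(p - 1) for b in range(p - 1)):
            return p
```

The search terminates quickly, since the first few primes all admit a solution (e.g. \( 5 \mid 2^1 + 3^1 \), \( 7 \mid 2^1 + 3^5 \), \( 17 \mid 2^4 + 3^0 \)).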
The best solution was submitted by 김태균 (수리과학과 2016학번). Congratulations!
Here is his solution of problem 2019-11.
Other solutions were submitted by 고성훈 (2018학번, +3), 조재형 (수리과학과 2016학번, +3), 채지석 (수리과학과 2016학번, +3), 최백규 (생명과학과 2016학번, +3).
GD Star Rating loading...
Let \(G\) be a group. A topology on \(G\) is said to be a group topology if the map \(\mu: G \times G \to G\) defined by \(\mu(g, h) = g^{-1}h\) is continuous with respect to this topology, where \(G \times G\) is equipped with the product topology. A group equipped with a group topology is called a topological group. When we have two topologies \(T_1, T_2\) on a set \(S\), we write \(T_1 \leq T_2\) if \(T_2\) is finer than \(T_1\), which gives a partial order on the set of topologies on a given set. Prove or disprove the following statement: for a given group \(G\), there exists a unique minimal group topology on \(G\) (minimal with respect to the partial order we described above) such that \(G\) is a Hausdorff space.
The best solution was submitted by 이정환 (수리과학과 2015학번). Congratulations!
Here is his solution of problem 2019-10.
An incomplete solution was submitted by 채지석 (수리과학과 2016학번, +2).
GD Star Rating loading...
Find the smallest prime number \( p \geq 5 \) such that there exist no integer coefficient polynomials \( f \) and \( g \) satisfying
\[ p | ( 2^{f(n)} + 3^{g(n)}) \] for all positive integers \( n \).
GD Star Rating loading...
For the 10th problem for POW this year, I added a condition that we only consider the group topologies which make the given group a Hausdorff space. Since the problem has been modified, I decided to extend the deadline for this problem. Please hand in your solution by 12pm on Friday (May 31st).
GD Star Rating loading...
Let \(G\) be a group. A topology on \(G\) is said to be a group topology if the map \(\mu: G \times G \to G\) defined by \(\mu(g, h) = g^{-1}h\) is continuous with respect to this topology, where \(G \times G\) is equipped with the product topology. A group equipped with a group topology is called a topological group. When we have two topologies \(T_1, T_2\) on a set \(S\), we write \(T_1 \leq T_2\) if \(T_2\) is finer than \(T_1\), which gives a partial order on the set of topologies on a given set. Prove or disprove the following statement: for a given group \(G\), there exists a unique minimal group topology on \(G\) (minimal with respect to the partial order we described above) such that \(G\) is a Hausdorff space.
GD Star Rating loading...
Suppose that \( X \) is a discrete random variable on the set \( \{ a_1, a_2, \dots \} \) with \( P(X=a_i) = p_i \). Define the discrete entropy
\[
H(X) = -\sum_{i=1}^{\infty} p_i \log p_i.
\]
Find constants \( C_1, C_2 \geq 0 \) such that
\[
e^{2H(X)} \leq C_1 Var(X) + C_2
\]
holds for any \( X \).
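As a sanity check, here is a quick numeric experiment with a geometric distribution on the positive integers. The constants used below, \( C_1 = 2\pi e \) and \( C_2 = \pi e/6 \), come from the classical bound \( e^{2H(X)} \leq 2\pi e(\mathrm{Var}(X) + 1/12) \) for integer-valued \( X \); they are illustrative assumptions here, not taken from the posted solution.

```python
import math

# Geometric distribution on {1, 2, 3, ...} with success probability 1/2,
# truncated at 59 terms for numerics (the tail is ~2^-59, negligible).
p = [2.0**-i for i in range(1, 60)]
mean = sum(i * pi for i, pi in enumerate(p, start=1))
var = sum((i - mean)**2 * pi for i, pi in enumerate(p, start=1))
H = -sum(pi * math.log(pi) for pi in p)   # entropy in nats: here 2*log(2)

lhs = math.exp(2 * H)                          # = 16 for this distribution
rhs = 2 * math.pi * math.e * (var + 1 / 12)    # assumed constants C1, C2
assert lhs <= rhs
```

For this distribution \( H = 2\log 2 \) and \( \mathrm{Var}(X) = 2 \), so the check compares roughly \( 16 \leq 35.6 \).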
The best solution was submitted by 길현준 (2018학번). Congratulations!
Here is his solution of problem 2019-09.
Alternative solutions were submitted by 최백규 (생명과학과 2016학번, +3). Incomplete solutions were submitted by 이정환 (수리과학과 2015학번, +2), 채지석 (수리과학과 2016학번, +2).
GD Star Rating loading...
Suppose that \( X \) is a discrete random variable on the set \( \{ a_1, a_2, \dots \} \) with \( P(X=a_i) = p_i \). Define the discrete entropy
\[ H(X) = -\sum_{i=1}^{\infty} p_i \log p_i. \] Find constants \( C_1, C_2 \geq 0 \) such that \[ e^{2H(X)} \leq C_1 Var(X) + C_2 \] holds for any \( X \).
GD Star Rating loading...
Let \(G\) be a group acting by isometries on a proper geodesic metric space \(X\). Here \(X\) being proper means that every closed bounded subset of \(X\) is compact. Suppose this action is proper and cocompact. Here, the action is said to be proper if for all compact subsets \(B \subset X\), the set \[\{g \in G \mid g(B) \cap B \neq \emptyset \}\] is finite. The quotient space \(X/G\) is obtained from \(X\) by identifying any two points \(x, y\) if and only if there exists \(g \in G\) such that \(gx = y\), and it is equipped with the quotient topology. The action of \(G\) on \(X\) is said to be cocompact if \(X/G\) is compact. Under these assumptions, show that \(G\) is finitely generated.
The best solution was submitted by 이정환 (수리과학과 2015학번). Congratulations!
Here is his solution of problem 2019-08.
Alternative solutions were submitted by 조재형 (수리과학과 2016학번, +3), 채지석 (수리과학과 2016학번, +3), 김태균 (수리과학과 2016학번, +2).
GD Star Rating loading...
Suppose that \( f: \mathbb{R} \to \mathbb{R} \) is differentiable and \( \max_{ x \in \mathbb{R}} |f(x)| = M < \infty \). Prove that
\[
\int_{-\infty}^{\infty} (|f'|^2 + |f|^2) \geq 2M^2.
\]
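The bound is tight: for \( f(x) = M e^{-|x|} \) (smoothed at the origin to restore differentiability) one gets \( \int |f'|^2 = \int |f|^2 = M^2 \), so the left side equals \( 2M^2 \) exactly. A numeric check of this equality case with \( M = 1 \), using a midpoint rule in pure Python (the kink at 0 sits on a grid boundary and is numerically invisible):

```python
import math

def f(x):
    """f(x) = exp(-|x|); maximum M = 1 attained at x = 0."""
    return math.exp(-abs(x))

def fprime(x):
    """Derivative away from the kink at 0; only its square is used."""
    return -math.copysign(1.0, x) * math.exp(-abs(x))

# Midpoint rule on [-20, 20]; the neglected tails are of size ~e^-40.
h, total = 1e-3, 0.0
x = -20.0
while x < 20.0:
    m = x + h / 2
    total += (fprime(m)**2 + f(m)**2) * h
    x += h

assert abs(total - 2.0) < 1e-3   # matches 2 M^2 with M = 1
```

Both \( \int e^{-2|x|}\,dx \) and \( \int |{-\operatorname{sign}(x)e^{-|x|}}|^2\,dx \) equal 1, so the sum is exactly 2.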
The best solution was submitted by 채지석 (수리과학과 2016학번). Congratulations!
Here is his solution of problem 2019-07.
Other solutions were submitted by 고성훈 (2018학번, +3), 길현준 (2018학번, +3), 김기택 (수리과학과 2015학번, +3), 김민서 (2019학번, +3), 김태균 (수리과학과 2016학번, +3), 박재원 (2019학번, +3), 오윤석 (2019학번, +3), 윤영환 (한양대학교, +3), 이본우 (수리과학과 2017학번, +3), 이원용 (2019학번, +3), 이정환 (수리과학과 2015학번, +3), 정의현 (수리과학과 대학원생, +3), 최백규 (생명과학과 2016학번, +3).
GD Star Rating loading...
|
Using the reflection formula $$\Gamma(\theta)\Gamma(1-\theta)=\frac{\pi}{\sin(\pi\theta)},\quad 0<\theta<1,$$ your population density is simply
$$f_{\theta}(x)=\frac{e^{-\theta x}x^{-\theta}\theta^{1-\theta}}{\Gamma(1-\theta)}\mathbf 1_{x>0}\quad,0<\theta<1$$
(This 'simplification' is obviously not needed for the given problem)
Let $f_{\theta}(x)$, with $\theta\in\Theta$ and $x\in\mathfrak X$, be the pdf of a random variable $X$. Assume that
The parameter space $\Theta$ is an open interval. The support $\mathfrak X=\{x:f_{\theta}(x)>0\}$ is independent of $\theta$.
We say that the joint density $f_{\theta}(\mathbf x)$ of $(X_1,\cdots,X_n)$ is a member of the one-parameter exponential family if it can be written as
\begin{align}f_{\theta}(\mathbf x)=\exp\left[A(\theta)T(\mathbf x)+B(\theta)+C(\mathbf x)\right]\end{align}
, where $A(\theta)$ and $B(\theta)$ are real valued functions of $\theta$ only, and $C(\mathbf x)$ and $T(\mathbf x)$ are real valued functions of $\mathbf x$ only.
All the functions mentioned above come from a simple analysis of the case of equality in the Cramer-Rao lower bound, from which the one-parameter exponential family can be derived.
Yes, the family of distributions $\{f_{\theta}:0<\theta<1\}$ is a member of the one-parameter exponential family (the first two assumptions are seen to be valid) since the joint density of the sample $\mathbf X=(X_1,X_2,\cdots,X_n)$ is of the form
\begin{align}f_{\theta}(\mathbf x)&=\prod_{i=1}^nf_{\theta}(x_i)\\&=\left(\frac{\theta^{1-\theta}}{\Gamma(1-\theta)}\right)^n\exp\left(-\theta\sum_{i=1}^nx_i\right)\left(\prod_{i=1}^nx_i\right)^{-\theta}\mathbf1_{x_1,\cdots,x_n>0}\\&=\exp\left[-\theta\sum_{i=1}^n(\ln x_i+x_i)+n\ln \left(\frac{\theta^{1-\theta}}{\Gamma(1-\theta)}\right)+\ln (\mathbf 1_{\min x_i>0})\right] \end{align}
Thus we have expressed the joint density in the general structure. That is, for some functions $A,B,C$ and $T$, we have expressed the joint density as
\begin{align}f_{\theta}(\mathbf x)&=\exp\left[A(\theta)T(\mathbf x)+B(\theta)+C(\mathbf x)\right]\\&=g(\theta, T(\mathbf x))h(\mathbf x)\end{align}
, where $g(\theta,T(\mathbf x))=\exp\left[A(\theta)T(\mathbf x)+B(\theta)\right]$ depends on $\theta$ and on $x_1,\cdots,x_n$ through $T$, and $h(\mathbf x)=\exp(C(\mathbf x))$ is independent of $\theta$.
So it is justified, by the Factorization theorem, that a sufficient statistic for $\theta$ is
$$T(\mathbf X)=\sum_{i=1}^n(\ln X_i+X_i)$$
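The factorization can be checked numerically: the product of the one-dimensional densities should agree exactly with the exponential-family form $\exp\left[A(\theta)T(\mathbf x)+B(\theta)+C(\mathbf x)\right]$ with $A(\theta)=-\theta$, $T(\mathbf x)=\sum_i(\ln x_i+x_i)$, $B(\theta)=n\ln\left(\theta^{1-\theta}/\Gamma(1-\theta)\right)$ and $C=0$. A Python sketch (the function names are ours):

```python
import math, random

def density(theta, x):
    """f_theta(x) = exp(-theta*x) * x^(-theta) * theta^(1-theta) / Gamma(1-theta), x > 0."""
    return (math.exp(-theta * x) * x**(-theta) * theta**(1 - theta)
            / math.gamma(1 - theta))

def exp_family_form(theta, xs):
    """exp[A(theta) T(x) + B(theta)] with A = -theta, T = sum(log x_i + x_i),
    B = n * log(theta^(1-theta) / Gamma(1-theta)), C = 0."""
    n = len(xs)
    A = -theta
    T = sum(math.log(x) + x for x in xs)
    B = n * ((1 - theta) * math.log(theta) - math.lgamma(1 - theta))
    return math.exp(A * T + B)

random.seed(0)
theta = 0.3
xs = [random.uniform(0.1, 5.0) for _ in range(6)]
joint = math.prod(density(theta, x) for x in xs)
assert math.isclose(joint, exp_family_form(theta, xs), rel_tol=1e-9)
```

The agreement holds for any $\theta \in (0,1)$ and any positive sample, which is exactly the factorization claim.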
|
Online auctions in which items are sold in an online fashion with little knowledge about future bids are common in the internet environment. We study here a problem in which an auctioneer would like to sell a single item, say a car. A bidder may make a bid for the item at any time but expects an immediate irrevocable decision. The goal of the auctioneer is to maximize her revenue in this uncertain environment. Under some reasonable assumptions, it has been observed that the online auction problem has strong connections to the classical secretary problem in which an employer would like to choose the best candidate among $n$ competing candidates [HKP04]. However, a direct application of the algorithms for the secretary problem to online auctions leads to undesirable consequences since these algorithms do not give a fair chance to every candidate and candidates arriving early in the process have an incentive to delay their arrival.
In this work we study the issue of incentives in the online auction problem where bidders are allowed to change their arrival time if it benefits them. We derive incentive compatible mechanisms where the best strategy for each bidder is to first truthfully arrive at their assigned time and then truthfully reveal their valuation. Using the linear programming technique introduced in Buchbinder et al [BJS10], we first develop new mechanisms for a variant of the secretary problem. We then show that the new mechanisms for the secretary problem can be used as a building block for a family of incentive compatible mechanisms for the online auction problem which perform well under different performance criteria. In particular, we design a mechanism for the online auction problem which is incentive compatible and is 3/16 ≈ 0.187-competitive for revenue, and a (different) mechanism that is $\frac{1}{2\sqrt{e}} \approx 0.303$-competitive for efficiency.
|
I have the following question when I was going through the proof of the following theorem.
Theorem. For XOR function $f \circ XOR$, $rank(M_{f \circ XOR}) = ||\hat f ||_0$ where $M_{f \circ XOR}$ is a matrix such that $M_{f \circ XOR}(x,y) = f(x + y)$.
The proof of this theorem essentially shows that the Fourier coefficients of $f$ are the eigenvalues of $M$: $\hat f(S)$ is an eigenvalue and the corresponding eigenvector is $\chi_S$. And now, using spectral decomposition, one can conclude that the rank is equal to the sparsity.
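The theorem is easy to check by machine for a small $f$: compute the Fourier coefficients $\hat f(S) = 2^{-n}\sum_x f(x)(-1)^{S\cdot x}$ directly, build $M_{f\circ XOR}$, and compare its rank over the rationals with the sparsity. A Python sketch using exact arithmetic (the particular $f$ below is an arbitrary choice):

```python
from fractions import Fraction
from itertools import product

n = 3
# An arbitrary Boolean function f: {0,1}^3 -> {0,1}.
f = {x: (x[0] & x[1]) ^ x[2] for x in product((0, 1), repeat=n)}

def chi(S, x):
    """Character (-1)^{S.x}."""
    return -1 if sum(s * t for s, t in zip(S, x)) % 2 else 1

# Fourier coefficients f_hat(S) = 2^-n * sum_x f(x) * chi(S, x).
coeffs = {S: Fraction(sum(f[x] * chi(S, x) for x in f), 2**n)
          for S in product((0, 1), repeat=n)}
sparsity = sum(1 for c in coeffs.values() if c != 0)

# M(x, y) = f(x XOR y), with rank computed exactly by Gaussian elimination.
xs = list(product((0, 1), repeat=n))
M = [[Fraction(f[tuple(a ^ b for a, b in zip(x, y))]) for y in xs] for x in xs]

def rank(rows):
    rows, r = [row[:] for row in rows], 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                factor = rows[i][col] / rows[r][col]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

assert rank(M) == sparsity   # both equal the number of nonzero f_hat(S)
```

For this $f$, writing $f = x_0x_1 + x_2 - 2x_0x_1x_2$ and converting to the parity basis gives five nonzero coefficients, and the $8\times 8$ matrix $M$ indeed has rank 5.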
My question is: Consider any function $F:\{0,1\}^N \rightarrow \{0,1\}$, and let $M_F \in \{0,1\}^{N/2} \times \{0,1\}^{N/2}$ be a $2^{N/2} \times 2^{N/2}$ dimensional matrix whose entries are as follows:$$M_F(x,y) = F(xy).$$Is there any relationship between $||\hat F||_0$ and $rank(M_F)$?
|
I saw this question in another post and I proved it differently than the others who answered. I was wondering if my proof works.
Problem
Let $f: \mathbb{R} \to \mathbb{R}$ be a function. Suppose that $f$ is differentiable, that $f(0)=1$, and that $|f'(x)| \leq 1$ for all $x \in \mathbb{R}$. Prove that $|f(x)| \leq |x|+1$ for all $x \in \mathbb{R}$.
My Proof
Assume the conclusion didn't follow from the hypotheses, namely that $\exists x\in \mathbb{R}$ such that $|f(x)|>|x|+1$. We can show this for $x=0$. Then we get $|f(0)|>1 \Longrightarrow 1>1$, contradiction.
And here's where I found the question Practice problem from Mean Value Theorem in Real Analysis
|
Strange baryon and in particular multi-strange baryon production is suggested to be a useful probe in the search for quark-gluon plasma formation in heavy-ion collisions. We have measured the (Ω− + Ω+)/(Ξ− + Ξ+) production ratio to be 0.8 ± 0.4 at central rapidity and p_T > 1.6 GeV/c.
We report on measurements of the inclusive production rate of Sigma+ and Sigma0 baryons in hadronic Z decays collected with the L3 detector at LEP. The Sigma+ baryons are detected through the decay Sigma+ -> p pi0, while the Sigma0 baryons are detected via the decay mode Sigma0 -> Lambda gamma. The average numbers of Sigma+ and Sigma0 per hadronic Z decay are measured to be: < N_Sigma+ > + < N_Sigma+~ > = 0.114 +/- 0.011 (stat) +/- 0.009 (syst), < N_Sigma0 > + < N_Sigma0~ > = 0.095 +/- 0.015 (stat) +/- 0.013 (syst). These rates are found to be higher than the predictions from Monte Carlo hadronization models and analytical parameterizations of strange baryon production.
Deep-inelastic scattering events with a leading baryon have been detected by the H1 experiment at HERA using a forward proton spectrometer and a forward neutron calorimeter. Semi-inclusive cross sections have been measured in the kinematic region 2 <= Q^2 <= 50 GeV^2, 6x10^-5 <= x <= 6x10^-3 and baryon p_T <= MeV, for events with a final state proton with energy 580 <= E' <= 740 GeV, or a neutron with energy E' >= 160 GeV. The measurements are used to test production models and factorization hypotheses. A Regge model of leading baryon production which consists of pion, pomeron and secondary reggeon exchanges gives an acceptable description of both semi-inclusive cross sections in the region 0.7 <= E'/E_p <= 0.9, where E_p is the proton beam energy. The leading neutron data are used to estimate for the first time the structure function of the pion at small Bjorken-x.
Production of Sigma- and Lambda(1520) in hadronic Z decays has been measured using the DELPHI detector at LEP. The Sigma- is directly reconstructed as a charged track in the DELPHI microvertex detector and is identified by its Sigma -> n pi decay leading to a kink between the Sigma- and pi-track. The reconstruction of the Lambda(1520) resonance relies strongly on the particle identification capabilities of the barrel Ring Imaging Cherenkov detector and on the ionisation loss measurement of the TPC. Inclusive production spectra are measured for both particles. The production rates are measured to be <N_{Sigma-}/N_{Z}^{had}> = 0.081 +/- 0.002 +/- 0.010, <N_{Lambda(1520)}/N_{Z}^{had}> = 0.029 +/- 0.005 +/- 0.005. The production rate of the Lambda(1520) suggests that a large fraction of the stable baryons descend from orbitally excited baryonic states. It is shown that the baryon production rates in Z decays follow a universal phenomenological law related to isospin, strangeness and mass of the particles.
Two-particle angular correlations were measured in pp collisions at $\sqrt{s} = 7$ TeV. The analysis was carried out for pions, kaons, protons, and lambdas, for all particle/anti-particle combinations in the pair. Data for mesons exhibit an expected peak dominated by effects associated with mini-jets and are well reproduced by general purpose Monte Carlo generators. However, for baryon--baryon and anti-baryon--anti-baryon pairs, where both particles have the same baryon number, a near-side anti-correlation structure is observed instead of a peak. This effect is interpreted in the context of baryon production mechanisms in the fragmentation process. It currently presents a challenge to Monte Carlo models and its origin remains an open question.
The production of charmed particles by Sigma- of 340 GeV/c momentum was studied in the hyperon beam experiment WA89 at the CERN-SPS, using the Omega-spectrometer. In two data-taking periods in 1993 and 1994 an integrated luminosity of 1600 microb^-1 on copper and carbon targets was recorded. From the reconstruction of 930 +- 90 charm particle decays in 10 decay channels production cross sections for D, antiD, Ds and Lambdac were determined in the region xF>0. Assuming an A^1 dependence of the cross section on the nucleon number, we calculate a total ccbar production cross section of sigma(x_F > 0) = 5.3 +- 0.4(stat) +- 1.0(syst) +1.0(Xi_c) microb per nucleon. The last term is an upper limit on the unknown contribution from charmed-strange baryon production.
A sample of 2.2 million hadronic Z decays, selected from the data recorded by the Delphi detector at LEP during 1994-1995 was used for an improved measurement of inclusive distributions of pi+, K+ and p and their antiparticles in gluon and quark jets. The production spectra of the individual identified particles were found to be softer in gluon jets compared to quark jets, with a higher multiplicity in gluon jets as observed for inclusive charged particles. A significant proton enhancement in gluon jets is observed indicating that baryon production proceeds directly from colour objects. The maxima, xi^*, of the xi-distributions for kaons in gluon and quark jets are observed to be different.
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The transverse momentum ($p_{\rm T}$) differential cross section multiplied by the branching ratio is presented in the interval 1 $<$ $p_{\rm T}$ $<$ 8 GeV/$c$ at mid-rapidity, $|y|$ $<$ 0.5. The transverse momentum dependence of the $\Xi_{\rm c}^0$ baryon production relative to the D$^0$ meson production is compared to predictions of event generators with various tunes of the hadronisation mechanism, which are found to underestimate the measured cross-section ratio.
The production of $K^{*+}(892)$, $K^{*0}(892)$, $\rho^{0}(770)$ and $\omega(783)$ vector mesons in $q\bar{q}$ events as well as in the gluonic $\Upsilon(1S)$ decays and $\Upsilon(4S) \to B\bar{B}$ decays has been studied using the ARGUS detector. Combining these results with data on pseudoscalar meson, $\phi$ meson and baryon production collected with the same detector allows comprehensive studies of quark and gluon fragmentation. Model-independent information on $s$ quark and vector meson suppression ($s/u = 0.37 \pm 0.04$, $V/(V+P)_{\pi} = 0.21 \pm 0.04$ and $V/(V+P)_K = 0.34 \pm 0.03$) is derived. The data are compared with predictions from the models Jetset 7.3 and UCLA 7.31.
The Large Hadron Collider forward (LHCf) experiment is designed to use the LHC to verify the hadronic-interaction models used in cosmic-ray physics. Forward baryon production is one of the crucial points to understand the development of cosmic-ray showers. We report the neutron-energy spectra for LHC $\sqrt{s}$=7 TeV proton–proton collisions with the pseudo-rapidity η ranging from 8.81 to 8.99, from 8.99 to 9.22, and from 10.76 to infinity. The measured energy spectra obtained from the two independent calorimeters of Arm1 and Arm2 show the same characteristic feature before unfolding the detector responses. We unfolded the measured spectra by using the multidimensional unfolding method based on Bayesian theory, and the unfolded spectra were compared with current hadronic-interaction models. The QGSJET II-03 model predicts a high neutron production rate at the highest pseudo-rapidity range similar to our results, and the DPMJET 3.04 model describes our results well at the lower pseudo-rapidity ranges. However, no model perfectly explains the experimental results over the entire pseudo-rapidity range. The experimental data indicate a more abundant neutron production rate relative to the photon production than any model predictions studied here.
|
Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence)/CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the Large Hadron Collider (LHC) at CERN. Some of RD50's most recent results on silicon detectors are reported in this paper, with special reference to: (i) progress in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52
Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.)/CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131
Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002-2003, 15 February 2002, CERN, Geneva). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{-3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors, and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221
Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.) Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062
Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders, because these new devices offer superior radiation hardness compared to present-day silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772
Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e} / cm^2$. The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices, have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116
Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.) Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345
First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.) Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342
Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE) New findings on the formation and annealing of interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n+−p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
|
Why is there 0.7 V instead of 1.2 V on the common emitter?
If you go around the Q1 mesh: Ve = 0.5 V + Vbe = 1.2 V, instead of Ve = 0 V + Vbe = 0.7 V from the Q2 mesh.
Your circuit is:
You know that the emitter voltage for both BJTs is the same (they are tied together.) It follows from the schematic that \$\mid\: V_{\text{BE}_2}\mid\:\:=\:\:\mid V_{\text{BE}_1}\mid+500\:\text{mV}\$. You should be able to easily see that fact, directly from the schematic: \$V_{\text{B}_1}=V_{\text{B}_2}+500\:\text{mV}\$ and \$V_{\text{E}_1}=V_{\text{E}_2}\$.
Since you also know that for every \$60\:\text{mV}\$ difference in \$V_\text{BE}\$ there will be about \$10\times\$ the collector current, it follows that \$\frac{I_{\text{C}_2}}{I_{\text{C}_1}}\approx e^{^\frac{500\:\text{mV}}{26\:\text{mV}}}\approx 225\times10^{6}\$. In other words, \$Q_2\$ hogs all available current in \$R_1\$. What remains for \$Q_1\$ is in the small "parts per billion." (In short, nothing at all.)
From here, you have to only decide the approximate collector current for \$Q_2\$. If you temporarily assume that \$V_{\text{BE}_2}=700\:\text{mV}\$ then you find that \$I_{\text{C}_2}=\frac{5\:\text{V}-700\:\text{mV}}{1\:\text{k}\Omega}\approx 4.3\:\text{mA}\$. Since this value is fairly consistent with \$V_{\text{BE}_2}\approx 700\:\text{mV}\$ for most small-signal BJTs, you can reasonably rest on this computed value.
This assumption confirmed, the result is easily worked out.
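The two numbers in this argument are quick to reproduce; the sketch below assumes the same round values used above (26 mV thermal voltage, 0.7 V for \$V_{\text{BE}_2}\$, 1 kΩ for R1 and a 5 V supply):

```python
import math

VT = 0.026                    # assumed thermal voltage, ~26 mV at room temperature
dVBE = 0.500                  # 500 mV base-voltage difference from the schematic

# Collector-current ratio from the exponential BJT law: I_C2/I_C1 = exp(dVBE/VT)
ratio = math.exp(dVBE / VT)   # ~2.25e8, so Q2 hogs essentially all the current

# Q2's collector current, assuming V_BE2 ~ 700 mV and R1 = 1 kOhm from 5 V
ic2 = (5.0 - 0.7) / 1000.0    # ~4.3 mA

print(f"ratio ~ {ratio:.3g}, I_C2 ~ {ic2 * 1e3:.2f} mA")
```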
Redraw the diagram with the Q1 and Q2 emitter-base paths replaced by diodes, D1 and D2. D1 has 0.5 V applied to its cathode, and D2 has 0 V applied to its cathode. A diode will drop 0.7 V when it's on.
Given the above, only one on/off configuration of diode states, out of four possibilities, can occur.
|
As we know from complex analysis, Cauchy's integral formula states:
$f(z_o)=\frac{1}{2\pi i}\int_\gamma{\frac{f(z)}{z-z_o}dz}$ for a closed contour $\gamma$ enclosing $z_o$.
However there is also the result from other areas of maths that states:
$f(x_o)=\int_R{f(x)\delta(x-x_o)dx}$ for some region $R$
Given the similarity in the results, is there any significance in comparing $\delta(x-x_o)$ to $\frac{1}{2 \pi i (x-x_o)}$, since integrating a function against either of them gives the function's value at $x_o$?
(Apologies in advance if this question may seem silly but I haven't found any satisfactory answers in my search so I thought I'd ask)
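As a quick numerical illustration of the first formula (a throwaway sketch; the choices $f(z) = e^z$, $z_o = 0$ and the unit circle as $\gamma$ are mine):

```python
import numpy as np

n = 1000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
z = np.exp(1j * t)                 # unit-circle contour gamma
dz = 1j * z * (2.0 * np.pi / n)    # dz = i e^{it} dt

# (1/(2*pi*i)) * contour integral of f(z)/(z - z_o), with f = exp and z_o = 0
val = np.sum(np.exp(z) / z * dz) / (2j * np.pi)

print(val)   # very close to f(0) = 1
```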
|
This is inspired by an old Putnam problem from 2005, and a solution given by Professor Greg Martin (a Professor of Mathematics at the University of British Columbia, also a user on MO). The question is
Question (Putnam 2005): For non-negative integers $m,n$, let $f(m,n)$ denote the number of $n$-tuples $(x_1, \cdots, x_n)$ of integers such that $|x_1| + \cdots + |x_n| \leq m$. Show that $f(m,n) = f(n,m)$.
Greg's proof essentially boiled down to showing that the generating function $$\displaystyle G(x,y) = \sum_{m,n \geq 0} f(m,n) x^m y^n$$ is symmetric in $x,y$.
Update: Since it seems that MAA took down the Putnam directory and the old solutions are no longer easily accessible, I shall give the proof here. The credit goes entirely to Professor Martin.
We write
$$\displaystyle G(x,y) = \sum_{n \geq 0} \sum_{m \geq 0} f(m,n)x^m y^n$$ $$\displaystyle = \sum_{n \geq 0} \sum_{m \geq 0} x^m y^n \sum_{\substack{k_1, \cdots, k_n \in \mathbb{Z} \\ |k_1| + \cdots + |k_n| \leq m}} 1$$ $$\displaystyle = \sum_{n \geq 0} y^n \sum_{k_1, \cdots, k_n \in \mathbb{Z}} \sum_{m \geq |k_1| + \cdots + |k_n|} x^m$$ $$\displaystyle = \sum_{n \geq 0} y^n \sum_{k_1, \cdots, k_n \in \mathbb{Z}} \frac{x^{|k_1| + \cdots + |k_n|}}{1 - x}$$ $$\displaystyle = \frac{1}{1-x}\sum_{n \geq 0} y^n \left(\sum_{k \in \mathbb{Z}} x^{|k|}\right)^n$$ $$\displaystyle = \frac{1}{1-x} \sum_{n \geq 0} y^n \left(\frac{1+x}{1-x}\right)^n$$ $$\displaystyle = \frac{1}{1-x} \frac{1}{1 - y(1+x)/(1-x)}$$ $$\displaystyle = \frac{1}{1-x-y-xy}.$$
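The closed form is easy to sanity-check by brute force: since $(1-x-y-xy)G(x,y)=1$, the coefficients must satisfy $f(m,n)=f(m-1,n)+f(m,n-1)+f(m-1,n-1)$ for $(m,n)\neq(0,0)$. A small sketch (the helper `f` below is a direct count, not from the original solution):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(m, n):
    """Number of integer n-tuples (x_1, ..., x_n) with |x_1| + ... + |x_n| <= m."""
    if m < 0:
        return 0
    if n == 0:
        return 1
    return sum(f(m - abs(k), n - 1) for k in range(-m, m + 1))

# Symmetry f(m, n) = f(n, m)
assert all(f(m, n) == f(n, m) for m in range(8) for n in range(8))

# Recurrence implied by G(x, y) = 1/(1 - x - y - xy)
assert all(f(m, n) == f(m - 1, n) + f(m, n - 1) + f(m - 1, n - 1)
           for m in range(1, 8) for n in range(1, 8))
```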
This seemed a fascinating approach to me back then (2005 was the first time I wrote the Putnam, and Greg was our Putnam coach at UBC), and even more so today when I looked back at it given that some of my work involves some clever generating function arguments (based on the answers given to me by Richard Stanley on a question I posted here). So the question I pose is:
Are there any other interesting quantities $f(n_1, \cdots, n_k)$ involving parameters $n_1, \cdots, n_k$ with $k \geq 2$ say that are symmetric in the parameters, and the proof comes from showing that the generating function
$$\displaystyle \sum_{n_1, \cdots, n_k} f(n_1, \cdots, n_k)x_1^{n_1} \cdots x_k^{n_k}$$ is symmetric?
|
I) There are already several good answers. OP is asking about the momentum of the non-relativistic string with only transverse displacements, whose Lagrangian density usually is given as
$$ {\cal L}_T ~:=~\frac{\rho}{2} \dot{\eta}^2 - \frac{\tau}{2} \eta^{\prime 2} \tag{1}$$
in textbooks.
II) Let us fix notation: $\rho$ is the 1D mass density; $\tau$ is the string tension; $Y$ is the 1D Young modulus; dot denotes a derivative wrt. $x^0\equiv t$; prime denotes a derivative wrt. $x^1\equiv x$; $\xi$ is the longitudinal displacement in the $x$-direction; and $\eta$ is the transversal displacement in the $y$-direction.
III) First of all, note that the canonical stress-energy-momentum (SEM) tensor $T^{\mu}{}_{\nu}$ (which contains the momentum density $T^0{}_1$) is a pull-back to the world sheet (WS), which we identify with the $(x,t)$-plane. Therefore the momentum direction is often identified with the longitudinal $x$-direction, even if the physical target space (TS) vibrations are in the transverse $y$-direction.
Secondly, note that already for the (conceptually simpler) longitudinal wave model
$$ {\cal L}_L ~:=~\frac{\rho}{2} \dot{\xi}^2 - \frac{Y}{2} \xi^{\prime 2}, \tag{2}$$
(minus) the canonical momentum density
$$T^0{}_1~=~\rho\dot{\xi}\xi^{\prime} \tag{3}$$
is different from the kinetic momentum density $\rho\dot{\xi}$. This is related to the fact that the model (2) is constructed to describe wave excitations of the string, not overall translations thereof. The take-away message is that it is not necessarily a useful thing to try to make the canonical momentum and the kinetic momentum equal. (And in particular, Ref. 1 does not achieve this. Moreover, Ref. 1 only discusses chiral excitations, i.e. a left-mover or a right-mover, but not a superposition thereof, which is incomplete for a non-linear theory.)
Suffice to say that the different momenta can be treated and understood separately, and that there are conservation laws associated with both types of momenta. Kinetic momentum conservation follows from Newton's laws, while canonical momentum conservation is a consequence of translation symmetry, cf. Noether's theorem. In this answer, we will focus on getting a more realistic physical model of the transversal wave than the Lagrangian density (1).
IV) Our starting point is the simple observation that for an unstretchable string $Y \gg \tau $, a small transversal displacement
$$\eta~=~{\cal O}(\varepsilon),\tag{4}$$
where $\varepsilon \ll 1$, must be accompanied with a longitudinal displacement
$$\xi~=~{\cal O}(\varepsilon^2),\tag{5}$$
cf. Fig. 1 below.
$\uparrow$ Fig. 1. An infinitesimal transversal sawtooth displacement $\varepsilon\ll 1$ of an unstretchable string must be accompanied with a longitudinal displacement $\frac{\varepsilon^2}{2}$.
V) We conclude that a realistic model for transversal excitations $\eta$ must include the possibility for longitudinal displacements $\xi$ as well. Let us therefore consider the Lagrangian density
$$ {\cal L}~:=~{\cal T}-{\cal V}, \qquad {\cal T}~:=~\frac{\rho}{2}\left(\dot{\xi}^2+\dot{\eta}^2\right),\tag{6}$$
where the potential density ${\cal V}$ should be given by Hooke's law. Let
$$ s^{\prime} ~=~ \sqrt{(1+\xi^{\prime})^2 +\eta^{\prime 2} }~=~1+\xi^{\prime} +\frac{\eta^{\prime 2}}{2} -\frac{\xi^{\prime}\eta^{\prime 2}}{2} -\frac{\eta^{\prime 4}}{8} +{\cal O}(\varepsilon^5) \tag{7}$$
be the derivative of the arc-length $s$ wrt. the $x$-coordinate. Modulo possible total derivative terms, the potential density ${\cal V}$ must be of the form
$${\cal V}~=~\frac{k}{2} \left( s^{\prime}-a\right)^2 ~=~ \frac{k}{2} (s^{\prime }-1)^2 + k(1-a) (s^{\prime}-1) + \frac{k}{2} (1-a)^2 \tag{8}$$
for suitable material constants $k$ and $a$, cf. Ref. 1. As will become apparent below, we should identify the two constants $k$ and $a$ as
$$ k ~=~Y+\tau \quad\text{and}\quad \tau~=~ k(1-a). \tag{9}$$
Therefore the potential density (8) becomes
$${\cal V}~\stackrel{(8)+(9)}{=}~ \frac{Y+\tau}{2} (s^{\prime }-1)^2 +\tau (s^{\prime}-1) +\frac{\tau^2}{2(Y+\tau)} $$$$~\stackrel{(7)}{=}~ \tau\left(\xi^{\prime} +\frac{\eta^{\prime 2}}{2} +\frac{\xi^{\prime 2}}{2} \right) + \frac{Y}{2}\left(\xi^{\prime} +\frac{\eta^{\prime 2}}{2} \right)^2 +{\cal O}(\varepsilon^5) +\frac{\tau^2}{2(Y+\tau)}.\tag{10}$$
Keeping only terms to quartic order, and discarding total derivative terms and constant terms, the potential density reads
$${\cal V}_4~:=~ \frac{\tau}{2}\left(\xi^{\prime 2} +\eta^{\prime 2}\right) +\frac{Y}{2}\chi^2 ,\tag{11}$$
where we have defined the shorthand notation
$$ \chi~:=~\xi^{\prime} +\frac{\eta^{\prime 2}}{2} .\tag{12}$$
The quartic potential (11) is surprisingly simple. For an unstretchable string $Y\gg\tau$, we recognize in eq. (11) the constraint
$$ \chi~\approx~0 ,\tag{13}$$
which is at the heart of Fig. 1. The constraint (13) implies that a transversal excitation (4) to the first order in $\varepsilon$ induces a longitudinal excitation (5) to the second order in $\varepsilon$. As we shall see below, even a stretchable string has an affinity for the constraint (13).
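Expansion (7) with the scalings (4)-(5) can be double-checked symbolically. In the sketch below $\varepsilon$ is a bookkeeping parameter and the symbols X, H stand for the O(1) parts of $\xi^{\prime}$ and $\eta^{\prime}$ (the names are mine):

```python
import sympy as sp

eps, X, H = sp.symbols('varepsilon X H')

# xi' = eps**2 * X per eq. (5), eta' = eps * H per eq. (4)
s_prime = sp.sqrt((1 + eps**2 * X)**2 + (eps * H)**2)

series = sp.series(s_prime, eps, 0, 5).removeO().expand()

# Right-hand side of eq. (7), written in the same scaling
target = (1 + eps**2 * X + eps**2 * H**2 / 2
          - eps**4 * X * H**2 / 2 - eps**4 * H**4 / 8)

assert sp.simplify(series - target) == 0
```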
VI) As an aside, we may rewrite the quartic potential (11) as a cubic potential
$$ {\cal V}_3~:=~ \frac{\tau}{2}\left(\xi^{\prime 2} +\eta^{\prime 2}\right) -\frac{B^2}{2Y} + B\chi, \tag{14}$$
where $B$ is an auxiliary field. The Euler-Lagrange (EL) equation for $B$ is
$$ B ~\approx~Y\chi.\tag{15} $$
The EL equations for $\xi$ and $\eta$ read
$$ \rho \ddot{\xi}~\stackrel{(14)}{\approx}~ \tau\xi^{\prime\prime} + B^{\prime}~\stackrel{(12)+(15)}{\approx}~ (\tau+Y)\xi^{\prime\prime} + Y \eta^{\prime}\eta^{\prime\prime},\tag{16}$$$$ \rho \ddot{\eta}~\stackrel{(14)}{\approx}~ \tau\eta^{\prime\prime} +\left(B\eta^{\prime}\right)^{\prime}~\stackrel{(12)+(15)}{\approx}~\tau\eta^{\prime\prime}+\frac{3Y}{2}\eta^{\prime 2}\eta^{\prime\prime} + Y(\xi^{\prime}\eta^{\prime})^{\prime},\tag{17} $$
respectively.
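Equations (15)-(17) can be re-derived mechanically from the Lagrangian density (6) with the cubic potential (14); the following sympy sketch is only a consistency check:

```python
import sympy as sp

x, t = sp.symbols('x t')
rho, tau, Y = sp.symbols('rho tau Y', positive=True)
xi = sp.Function('xi')(x, t)
eta = sp.Function('eta')(x, t)
B = sp.Function('B')(x, t)

chi = xi.diff(x) + eta.diff(x)**2 / 2                                  # eq. (12)
V3 = tau/2 * (xi.diff(x)**2 + eta.diff(x)**2) - B**2/(2*Y) + B*chi     # eq. (14)
L = rho/2 * (xi.diff(t)**2 + eta.diff(t)**2) - V3                      # eq. (6)

def euler_lagrange(L, f):
    """dL/df - d/dt (dL/df_t) - d/dx (dL/df_x)."""
    return (sp.diff(L, f)
            - sp.diff(L, f.diff(t)).diff(t)
            - sp.diff(L, f.diff(x)).diff(x))

# B appears without derivatives, so its equation is algebraic: B = Y*chi, eq. (15)
assert sp.simplify(euler_lagrange(L, B) - (B/Y - chi)) == 0

# eq. (16): rho*xi_tt = tau*xi_xx + B_x
assert sp.simplify(euler_lagrange(L, xi)
                   + rho*xi.diff(t, 2) - tau*xi.diff(x, 2) - B.diff(x)) == 0

# eq. (17): rho*eta_tt = tau*eta_xx + (B*eta_x)_x
assert sp.simplify(euler_lagrange(L, eta)
                   + rho*eta.diff(t, 2) - tau*eta.diff(x, 2)
                   - (B*eta.diff(x)).diff(x)) == 0
```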
VII) If we integrate out the $B$-field in the cubic potential (14),
$$ {\cal V}_3\quad\stackrel{B}{\longrightarrow}\quad{\cal V}_4,\tag{18}$$
we get back the quartic potential (11). The EL equations (16) & (17) become
$$\Box_L\xi~:=~ \ddot{\xi}- c_L^2 \xi^{\prime\prime} ~\approx~ \frac{Y}{\rho} \eta^{\prime}\eta^{\prime\prime}~=~ (c_L^2-c_M^2) \eta^{\prime}\eta^{\prime\prime},\tag{19} $$$$\Box_M\eta~:=~ \ddot{\eta}- c_M^2 \eta^{\prime\prime} ~\approx~ \frac{Y}{\rho}\left( \chi \eta^{\prime}\right)^{\prime}~=~ (c_L^2-c_M^2)\left( \chi \eta^{\prime}\right)^{\prime},\tag{20} $$
where we have defined two speeds
$$c_M^2~:=~\frac{\tau}{\rho}\quad\text{and}\quad c_L^2~:=~\frac{Y+\tau}{\rho}.\tag{21} $$
Let us consider right-moving waves only. A straightforward analysis shows that the EL equations (19) & (20) have two travelling modes:
A faster purely longitudinal $L$-mode $\xi_L(x\!-\!c_Lt) $ with $\eta_L(x\!-\!c_Lt)\approx 0$ (which formally violates the constraint (13), but recall eq. (5)).
A slower mixed $M$-mode $\xi_M(x\!-\!c_Mt) $ and $ \eta_M(x\!-\!c_Mt)$ that satisfies the constraint $\chi_M(x\!-\!c_Mt) \approx 0$ in eq. (13).
VIII) The two travelling modes $L$ and $M$ are independent in the sense that they can pass through each other. However the creation (and annihilation) of the $M$-mode are not independent of the $L$-mode. The constraint (13) has a lopsided effect: A transversal displacement is always associated with a longitudinal retraction. Recall that if we impose Dirichlet boundary conditions at the spatial ends of the string, then an overall longitudinal retraction is not possible. The creation (and annihilation) of an $M$-mode must therefore excite a compensating faster $L$-mode that counteracts the longitudinal component of the $M$-mode. See Ref. 1 for further details.
IX) Finally, it is interesting to try to integrate out the longitudinal field $\xi$ in the quartic model (11). We can solve eq. (19) for the longitudinal field
$$\xi~\approx~ \frac{Y}{2\rho}\int \! dt^{\prime}dx^{\prime}~G(x,t;x^{\prime},t^{\prime})\frac{d}{dx^{\prime}}\eta^{\prime}(x^{\prime},t^{\prime})^2$$$$~\stackrel{\text{int. by parts}}{=}~\frac{Y}{2\rho}\int \! dt^{\prime}dx^{\prime}\left\{-\frac{d}{dx^{\prime}}G(x,t;x^{\prime},t^{\prime})\right\}\eta^{\prime}(x^{\prime},t^{\prime})^2\tag{22} $$
by introducing a Green's function $G(x,t;x^{\prime},t^{\prime})$ and light-cone coordinates
$$ x^{\pm} ~:=~ t \pm \frac{x}{c_L}, \qquad \Delta x^{\pm} ~:=~ \Delta t \pm \frac{ \Delta x}{ c_L}, \qquad \Delta t ~:=~ t - t^{\prime}, \qquad \Delta x ~:=~ x - x^{\prime}.\tag{23}$$
Then the D'Alembertian in 1+1D becomes
$$\Box_L ~=~ 4\partial_+\partial_-.\tag{24} $$
The Green's function $G(x,t;x^{\prime},t^{\prime})$ satisfies by definition
$$\Box_L G(x,t;x^{\prime},t^{\prime}) ~=~ \delta(\Delta t)\delta(\Delta x) ~=~ \frac{2}{c_L} \delta(\Delta x^+)\delta(\Delta x^-).\tag{25}$$
The retarded Green's function is
$$ G_{\rm ret}(x,t;x^{\prime},t^{\prime}) ~=~ \frac{1}{2c_L}\theta(\Delta x^+)\theta(\Delta x^-).\tag{26}$$
However, to achieve a Lagrangian formulation (30) for the $\xi$-reduced quartic theory (11), we should use the symmetrized Green's function
$$ G(x,t;x^{\prime},t^{\prime})~=~\frac{1}{2} G_{\rm ret}(x,t;x^{\prime},t^{\prime})+\frac{1}{2} G_{\rm ret}(x^{\prime},t^{\prime};x,t).\tag{27}$$
It is convenient to introduce the notation
$$ K(x,t;x^{\prime},t^{\prime}) ~:=~ -\frac{d}{dx}\frac{d}{dx^{\prime}}G(x,t;x^{\prime},t^{\prime}) $$$$~=~ -\frac{1}{4c_L}\frac{d}{dx}\frac{d}{dx^{\prime}}\left[\theta(\Delta x^+)\theta(\Delta x^-) + \theta(-\Delta x^+)\theta(-\Delta x^-)\right]$$$$~=~ -\frac{1}{8c_L}\frac{d}{dx}\frac{d}{dx^{\prime}}\left[{\rm sgn}(\Delta x^+){\rm sgn}(\Delta x^-)\right].\tag{28}$$
Then the derivative $\xi^{\prime}$ of the longitudinal field is given simply by
$$ \xi^{\prime} (x,t) ~\approx~ \frac{Y}{2\rho} \int \! dt^{\prime}~dx^{\prime}~K(x,t;x^{\prime},t^{\prime}) ~\eta^{\prime}(x^{\prime},t^{\prime})^2.\tag{29}$$
Finally, we are able to write down an action
$$\begin{align} S_4 \quad\stackrel{\xi}{\longrightarrow}\quad &\int \! dt~dx \left(\frac{\rho}{2}\dot{\eta}^2-\frac{\tau}{2}\eta^{\prime 2} -\frac{Y}{8} \eta^{\prime 4}\right) \cr&-\frac{Y^2}{8\rho} \int dt~dx~dt^{\prime}dx^{\prime}~ \eta^{\prime}(x,t)^2 ~K(x,t;x^{\prime},t^{\prime})~ \eta^{\prime}(x^{\prime},t^{\prime})^2\end{align}\tag{30}$$
for the $\xi$-reduced quartic theory (11). It is easy to check that the corresponding EL equation for $\eta$ is eq. (17), where $\xi^{\prime}$ on the right-hand side of eq. (17) is given by eq. (29).
The action (30) is bi-local, which is expected. (On the bright side, at least the action (30) doesn't depend on higher spacetime derivatives!) However the non-local nature challenges the concept of a SEM tensor (and thereby the canonical momentum density, which was what OP originally asked about). It is still possible to derive Noether conservation laws associated with the WS translation symmetry, but we shall not pursue this here.
References:
D.R. Rowland & C. Pask, The missing wave momentum mystery, Am. J. Phys. 67 (1999) 378. (Hat tip: ACuriousMind.)
|
Stokes' Theorem
Latest revision as of 15:18, 29 July 2016

Stokes' Theorem states that the line integral of a vector field around a closed path is equal to the surface integral of the curl of that field over any capping surface for the path, provided that the surface normal vectors point in the same general direction as the right-hand direction for the contour:

:<math>\oint_E \vec w \cdot \vec{\mathrm{d}l} = \iint_S (\nabla \times \vec w) \cdot \vec{\mathrm{d}A}</math>
Intuitively, imagine a "capping surface" that is nearly flat with the contour. The curl is the microscopic circulation of the function on tiny loops within that surface, and their sum or integral cancels out all the internal circulation paths, leaving only the integration over the outer-most path.[1] This remains true no matter how the capping surface is expanded, provided that the contour remains its boundary.
Sometimes the circulation (the left side above) is easier to compute; other times the surface integral of the curl of the vector field is easier to compute (particularly when the curl is zero).[2]
Stated another way, Stokes' Theorem equates the line integral of a vector field around a closed contour to a surface integral of the curl of the same vector field over a capping surface. For this identity to be true, the direction of the normal vector n must obey the right-hand rule for the direction of the contour, i.e., when walking along the contour the surface must be on your left.
This is an extension of Green's Theorem to surface integrals, and is also the analog in two dimensions of the Divergence Theorem. The above formulation is also called the "Curl Theorem," to distinguish it from the more general form of Stokes' Theorem described below.
Stokes' Theorem is useful in calculating circulation in mechanical engineering. A conservative field has a circulation (the line integral around a simple, closed curve) of zero, and applying Stokes' Theorem to such a field shows that the surface integral of its curl over the enclosed surface must also be zero.
General Form
In its most general form, this theorem is the fundamental theorem of Exterior Calculus, and is a generalization of the Fundamental Theorem of Calculus. It states that if M is an oriented piecewise smooth manifold of dimension k and ω is a smooth (k−1)-form with compact support on M, and ∂M denotes the boundary of M with its induced orientation, then

:<math>\int_M \mathrm{d}\omega = \int_{\partial M} \omega ,</math>

where d is the exterior derivative.
There are a number of well-known special cases of Stokes' theorem, including one that is referred to simply as "Stokes' theorem" in less advanced treatments of mathematics, physics, and engineering:
When k=1, and the terms appearing in the theorem are translated into their simpler form, this is just the Fundamental Theorem of Calculus. When k=3, this is often called Gauss' Theorem or the Divergence Theorem and is useful in vector calculus:
$$ \iiint_R (\nabla\cdot\vec{F})\, dV = \iint_S \vec{F}\cdot d\vec{S} $$
Where R is some region of 3-space, S is the boundary surface of R, the triple integral denotes volume integration over R with dV as the volume element, and the double integral denotes surface integration over S with $d\vec{S}$ as the oriented normal of the surface element. The $\nabla\cdot$ on the left side is the divergence operator, and the $\cdot$ on the right side is the vector dot product.
When k=2, this is often just called Stokes' Theorem:
$$ \iint_S (\nabla\times\vec{F})\cdot d\vec{S} = \oint_E \vec{F}\cdot d\vec{l} $$
Here S is a surface, E is the boundary path of S, and the single integral denotes path integration around E with $d\vec{l}$ as the length element. The $\nabla\times$ on the left side is the curl operator.
These last two examples (and Stokes' theorem in general) are the subject of vector calculus. They play important roles in electrodynamics. The divergence and curl operations are cornerstones of Maxwell's Equations.
Stokes' Theorem is a lower-dimensional version of the Divergence Theorem, and a higher-dimensional version of Green's Theorem. Green's Theorem relates a line integral to a double integral over a plane region, while Stokes' Theorem relates a surface integral of the curl of a vector field to its line integral. Stokes' Theorem originated in 1850.
|
I'm reading the notes here and have a doubt on page 2 ("Least squares objective" section). The probability of a word $j$ occurring in the context of word $i$ is $$Q_{ij}=\frac{\exp(u_j^Tv_i)}{\sum_{w=1}^W\exp(u_w^Tv_i)}$$
The notes read:
Training proceeds in an on-line, stochastic fashion, but the implied global cross-entropy loss can be calculated as $$J=-\sum_{i\in corpus}\sum_{j\in context(i)}\log Q_{ij}$$ As the same words $i$ and $j$ can occur multiple times in the corpus, it is more efficient to first group together the same values for $i$ and $j$: $$J=-\sum_{i=1}^W\sum_{j=1}^WX_{ij}\log(Q_{ij})$$
where $X_{ij}$ is the total number of times $j$ occurs in the context of $i$, and these co-occurring frequencies are given by the co-occurrence matrix $X$. This much is clear. But then the author states that the denominator of $Q_{ij}$ is too expensive to compute, so the cross-entropy loss won't work.
Instead, we use a least square objective in which the normalization factors in $P$ and $Q$ are discarded: $$\hat J=\sum_{i=1}^W\sum_{j=1}^WX_i(\hat P_{ij}-\hat Q_{ij})^2$$ where $\hat P_{ij}=X_{ij}$ and $\hat Q_{ij}=\exp(u_j^Tv_i)$ are the unnormalized distributions.
$X_i=\sum_kX_{ik}$ is the number of times any word appears in the context of $i$. I don't understand this part.
Why have we introduced $X_i$ out of nowhere? How is $\hat P_{ij}$ "unnormalized"? Is there a tradeoff in switching from softmax to MSE?
(As far as I know, softmax made total sense in skip gram because we were calculating scores corresponding to different words (discrete possibilities) and matching the predicted output to the actual word - similar to a classification problem, so softmax makes sense.)
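To make the two objectives concrete, here is a small numpy sketch (toy sizes and random vectors of my own, not the course code) that computes both the grouped cross-entropy loss $J$ and the unnormalized least-squares surrogate $\hat J$ from a toy co-occurrence matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
W, d = 5, 3                                        # toy vocabulary and embedding sizes
U = rng.normal(scale=0.1, size=(W, d))             # context vectors u_j
V = rng.normal(scale=0.1, size=(W, d))             # center vectors v_i
X = rng.integers(1, 5, size=(W, W)).astype(float)  # co-occurrence counts X_ij

scores = V @ U.T                                   # scores[i, j] = u_j . v_i
Q = np.exp(scores)
Q /= Q.sum(axis=1, keepdims=True)                  # softmax over j: the costly normalization

J = -(X * np.log(Q)).sum()                         # grouped cross-entropy loss

X_i = X.sum(axis=1, keepdims=True)                 # X_i = sum_k X_ik
P_hat = X                                          # unnormalized "target" P_ij
Q_hat = np.exp(scores)                             # unnormalized model distribution
J_hat = (X_i * (P_hat - Q_hat) ** 2).sum()         # least-squares surrogate
```

Note that $\hat P_{ij}=X_{ij}$ is "unnormalized" in the sense that dividing it by $X_i$ would give the empirical distribution $P_{ij}=X_{ij}/X_i$; the surrogate drops both that division and the softmax denominator.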
|
The following apparently elementary question came out of a somewhat naive attempt to prove that every distribution $u\in \mathscr D'(\mathbb R^2)$ with $\partial_1 u=\partial_2 u =0$ is a constant function (this can be reduced to $\mathscr C^1$-functions by convolution with an approximate identity and for $\mathscr C^1$-functions it is completely elementary).
For which $\varphi \in \mathscr D(\mathbb R^2)$ does there exist $f,g \in \mathscr D(\mathbb R^2)$ such that $\varphi = \partial_1 f + \partial_2g$?
The only necessary condition I see is $\int \varphi(x,y) d(x,y)=0$, and the conjecture is that this is sufficient.
However, all my ad hoc attempts failed. On the side of Fourier transforms one would have to write $\hat{\varphi}(\xi,\eta) = \xi h(\xi,\eta) + \eta k(\xi,\eta)$ which is easy with smooth $h$ and $k$ (using $\hat{\varphi}(0)=0$) but I do not see how to do this with entire $h$ and $k$ satisfying the Paley-Wiener conditions for $\hat{\mathscr D}$.
(If the conjecture is true one gets a solution of the above mentioned problem since then the kernel of $v(\varphi)=\int \varphi(x,y) d(x,y)$ is contained in the kernel of $u$ and therefore $u$ is a multiple of $v$.)
|
Brauchart, Johann S. and Hesse, Kerstin (2007) Numerical integration over spheres of arbitrary dimension. Constructive Approximation, 25 (1). pp. 41-71. ISSN 0176-4276
Abstract
In this paper, we study the worst-case error (of numerical integration) on the unit sphere $\mathbb{S}^{d}$, $d\geq 2$, for all functions in the unit ball of the Sobolev space $\mathbb{H}^s(\mathbb{S}^d)$, where $s>d/2$. More precisely, we consider infinite sequences $(Q_{m(n)})_{n\in\mathbb{N}}$ of $m(n)$-point numerical integration rules $Q_{m(n)}$, where (i) $Q_{m(n)}$ is exact for all spherical polynomials of degree $\leq n$, and (ii) $Q_{m(n)}$ has positive weights or, alternatively to (ii), the sequence $(Q_{m(n)})_{n\in\mathbb{N}}$ satisfies a certain local regularity property. Then we show that the worst-case error (of numerical integration) $E(Q_{m(n)};\mathbb{H}^s(\mathbb{S}^d))$ in $\mathbb{H}^s(\mathbb{S}^d)$ has the upper bound $c\,n^{-s}$, where the constant $c$ depends on $s$ and $d$ (and possibly the sequence $(Q_{m(n)})_{n\in\mathbb{N}}$). This extends the recent results for the sphere $\mathbb{S}^2$ by K. Hesse and I. H. Sloan to spheres $\mathbb{S}^d$ of arbitrary dimension $d\geq 2$ by using an alternative representation of the worst-case error. If the sequence $(Q_{m(n)})_{n\in\mathbb{N}}$ of numerical integration rules satisfies $m(n)=\mathcal{O}(n^d)$, an order-optimal rate of convergence is achieved.
Item Type: Article Schools and Departments: School of Mathematical and Physical Sciences > Mathematics Depositing User: Kerstin Hesse Date Deposited: 06 Feb 2012 19:40 Last Modified: 04 Apr 2012 10:27 URI: http://sro.sussex.ac.uk/id/eprint/21705
|
Global existence for the Boltzmann equation in $ L^r_v L^\infty_t L^\infty_x $ spaces
Independent scholar
We study the Boltzmann equation near a global Maxwellian. We prove the global existence of a unique mild solution with initial data which belong to the $ L^r_v L^\infty_x $ spaces where $ r \in (1,\infty] $ by using the excess conservation laws and entropy inequality introduced in [
Mathematics Subject Classification:Primary: 35Q20, 35A01; Secondary: 82B40. Citation:Koya Nishimura. Global existence for the Boltzmann equation in $ L^r_v L^\infty_t L^\infty_x $ spaces. Communications on Pure & Applied Analysis, 2019, 18 (4) : 1769-1782. doi: 10.3934/cpaa.2019083
References:
[1]
R. Duan, F. Huang, Y. Wang and T. Yang,
Global well-posedness of the Boltzmann equation with large amplitude initial data,
[2] [3] [4] [5] [6]
Robert M. Strain and Keya Zhu,
Large-time decay of the soft potential relativistic Boltzmann equation in $\mathbb{R}^3_x$,
[7]
S. Ukai and T. Yang,
The Boltzmann equation in the space $L^2 \cap L^\infty_\beta$: Global and time-periodic solutions,
|
$\mathbf{Geometrical \ approach}:$ Each point on the surface of a sphere is at the same distance (the radius) from the center; this results in the minimum surface area for a given volume. This can be checked analytically by comparing the surface area of a sphere with that of other geometrical shapes of the same volume.
For example, let's compare the surface areas of a sphere & a cube for a given volume say $V$ of the drop,
For the sphere of radius $r$ $$\frac{4\pi}{3}r^3=V\implies r=\left(\frac{3V}{4\pi}\right)^{1/3}$$surface area of the sphere $\color{red}{S_1}=4\pi r^2=4\pi\left(\frac{3V}{4\pi}\right)^{2/3}=\color{red}{(36\pi)^{1/3}V^{2/3}\approx 4.84\, V^{2/3}}$
For a cube with edge length $a$$$a^3=V\implies a=V^{1/3}$$surface area of the cube $\color{blue}{S_2}=6 a^2=\color{blue}{6V^{2/3}}$
Comparing surface areas, $\color{red}{S_1}<\color{blue}{S_2}$ i.e. the surface area of a sphere is smaller than that of a cube for a given volume
Similarly, we can analytically compare surface area of a sphere with that of any other geometrical shape.
The minimum surface area of the sphere results in the minimum surface energy of the drop; that is why the drop takes a spherical shape, minimizing its potential energy.
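The sphere-versus-cube comparison above can be checked in a few lines (a sketch; the choice $V = 1$ is arbitrary, since both areas scale as $V^{2/3}$):

```python
import math

V = 1.0                                    # any fixed volume

# Sphere of volume V
r = (3.0 * V / (4.0 * math.pi)) ** (1 / 3)
S_sphere = 4.0 * math.pi * r ** 2          # = (36*pi)**(1/3) * V**(2/3) ~ 4.84 V**(2/3)

# Cube of the same volume
a = V ** (1 / 3)
S_cube = 6.0 * a ** 2                      # = 6 * V**(2/3)

assert S_sphere < S_cube                   # the sphere has less area for equal volume
```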
|
Prove the identity:
$$n(n-1)2^{n-2}=\sum_{k=1}^n {k(k-1) {n \choose k}}$$
I tried using the binomial coefficients identity $2^n = \sum_{k=0}^n {n \choose k}$ but got stuck along the way.
HINT:
For $k\ge2,$
$$k(k-1)\binom nk=k(k-1)n(n-1)\frac{(n-2)!}{k(k-1)\cdot (k-2)! \{(n-2)-(k-2)\}!}$$
$$=n(n-1)\binom{n-2}{k-2}$$
The $n(n-1)$ and $k(k-1)$ are a signal that differentiating twice should be at hand.
So start with $$ (1+X)^n=\sum_k\binom nkX^k, $$ differentiate twice with respect to $X$ to get$$ n(n-1)(1+X)^{n-2}=\sum_k\binom nk k(k-1)X^{k-2}, $$ and finally set $X:=1$.
HINT: Start with a pool of $n$ people. You want to pick a team of at least two people, designate one member of the team as captain, and designate a different member of the team as assistant captain. Both sides count the ways to do this. More details in the spoiler-protected block if you get stuck.
On the left you’re picking the captain and assistant captain and the rest of the team. On the right you’re choosing the team size, $k$, picking the team, and then choosing a captain and assistant captain from that team.
We can make use of the following binomial identity: $$\binom ab\binom bc=\binom ac \binom {a-c}{b-c}$$ Also, we can take the summation from $k=2$ instead of $k=1$, since the latter yields a zero term.
Hence $$\begin{align} \sum_{k=1}^n k(k-1)\binom nk&=\sum_{k=2}^n \binom nk k(k-1)\\ &=2\sum_{k=2}^n \binom nk\binom k2\\ &=2\sum_{k=2}^n \binom n2 {n-2\choose k-2}\\ &=2\binom n2 \sum_{k=2}^n {n-2\choose k-2}\\ &=n(n-1)\sum_{k=0}^{n-2} {n-2\choose k}\\ &=n(n-1)2^{n-2}\qquad \blacksquare \end{align}$$
An interesting point to note:
From the above result, by dividing both sides by $2$ and noting that $\frac{r(r-1)}2=\binom r2$, we can see that $$\sum_{k=2}^n \binom k2 \binom nk=\binom n2\, 2^{n-2}$$
This forms a nice pattern continuing from the commonly known results: $$\sum_{k=1}^n \binom k1 \binom nk=\binom n1 2^{n-1}$$ and $$\sum_{k=0}^n \binom k0 \binom nk=\binom n0 2^{n-0}$$
NB: in their more commonly known forms, the last two equations are
$\sum_{k=1}^n k\binom nk=n\cdot 2^{n-1}$ and $\sum_{k=0}^n \binom nk=2^n$ respectively.
From the pattern it appears that $$\sum_{k=m}^n \binom km \binom nk =\binom nm 2^{n-m}$$
This can easily be proven as follows: $$\begin{align} \sum_{k=m}^n \binom km \binom nk &=\sum_{k=m}^n \binom nk \binom km \\ &=\sum_{k=m}^n \binom nm \binom {n-m}{k-m}\\ &=\binom nm\sum_{k=m}^n \binom {n-m}{k-m}\\ &=\binom nm\sum_{k=0}^{n-m} \binom {n-m}{k}\\ &=\binom nm 2^{n-m} \end{align}$$
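Both the original identity and the general pattern are easy to spot-check numerically for small $n$ (an illustration added here, not part of the answers above):

```python
from math import comb

# n(n-1) * 2**(n-2) == sum_k k(k-1) * C(n, k)
for n in range(2, 12):
    lhs = n * (n - 1) * 2 ** (n - 2)
    rhs = sum(k * (k - 1) * comb(n, k) for k in range(1, n + 1))
    assert lhs == rhs

# General pattern: sum_{k=m}^n C(k, m) * C(n, k) == C(n, m) * 2**(n-m)
for n in range(0, 10):
    for m in range(0, n + 1):
        lhs = comb(n, m) * 2 ** (n - m)
        rhs = sum(comb(k, m) * comb(n, k) for k in range(m, n + 1))
        assert lhs == rhs
```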
|
I have a set of data $y_i (z_i)$ with errors $\Delta y_i$
data = {{0.015, 34.1114},{0.0277, 35.705},{0.048948, 36.7316},{0.0651, 37.3067},{0.100915, 38.4567},{0.159, 39.4164},{0.248508, 40.2722},{0.455, 42.3239},{0.655, 42.3151},{0.75, 43.243},{0.84, 43.5143},{0.961, 44.2642},{1.188, 44.6076},{1.34, 45.0675},{1.414, 44.8038}} sigmadata={0.215239,0.118114,0.175773,0.242628,0.121087,0.21481,0.152213,0.306006,0.188809,0.198402,0.471047,}
that I want to fit with a complicated formula
$$ Y(z) = 5\log \Bigl((1+z)\int_0^z \frac{dz'}{a+bz'+cz'^2}\Bigr)+25; $$
I used NonlinearModelFit in the following way
model[a_?NumericQ, b_?NumericQ, c_?NumericQ, z_] := 5*Log[10, (1 + z)* NIntegrate[1/(a + b*x + c*x^2), {x, 0, z}, PrecisionGoal -> 7, AccuracyGoal -> 7]] + 25 fit1 = NonlinearModelFit[data, model[a,b,c,z], {a, b, c}, z, Weights -> 1/(sigmadata)^2];
Mathematica was not able to make the fit, and the problem seems to be that the coefficients becomes complex. I decide then to use the analytical form of the Integral and then make the fit (to avoid the integration),
model2=5Log[10,(1+z)*(2ArcTan[(b+2*c*z)/(Sqrt[4*a*c-b^2])])/(Sqrt[4*a*c-b^2])]+25;
but the same problem appears. Is there any way to impose conditions on $\{a,b,c\}$? The full set of data is about 580 points; it is a cosmological fit for supernovae. I appreciate your help.
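One common workaround (sketched here in Python/SciPy rather than Mathematica, with made-up parameter values and synthetic data standing in for the supernova sample) is to give the optimizer bounds that keep $a + bz' + cz'^2$ positive over the data range, so the integrand and the logarithm stay real:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

def model(z, a, b, c):
    # Y(z) = 5 log10((1+z) * Integral_0^z dz'/(a + b z' + c z'^2)) + 25
    z = np.atleast_1d(z)
    integral = np.array([quad(lambda x: 1.0 / (a + b * x + c * x * x), 0.0, zi)[0]
                         for zi in z])
    return 5.0 * np.log10((1.0 + z) * integral) + 25.0

# Synthetic data from known (hypothetical) parameters, to show convergence
true = (1.0, 0.5, 0.25)
z_data = np.linspace(0.015, 1.4, 15)
y_data = model(z_data, *true)
sigma = np.full_like(y_data, 0.2)          # plays the role of Weights -> 1/sigma^2

# Bounds keep a, b, c positive, so a + b z + c z^2 > 0 on the data range
popt, pcov = curve_fit(model, z_data, y_data, p0=(0.5, 0.5, 0.5),
                       sigma=sigma, bounds=(1e-3, 10.0))
```

With positive lower bounds the quadratic cannot change sign, so neither the numerical integral nor the ArcTan form can go complex during the search.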
|
Hello,
Many years ago, I had the following problem.
We first give a definition. A non-negative definite real-valued $n^2\times n^2$ matrix $M$ is called
separable if it can be decomposed in the following way:
$$ M=\sum_{i=1}^{k} \:\: \lambda_i E_i\otimes F_i $$
where $k\le n^2$, $\lambda_i>0$, $\otimes$ denotes the tensor product in the usual sense, and $E_i$, $F_i$ are $n\times n$ non-negative matrices of rank one, having unit length and orthogonal to each other:
$$ Tr(E_i E_j^T) =Tr(F_i F_j^T) = \begin{cases} 1& i=j\cr 0& i\ne j \end{cases}\;. $$
Now the problem is:
how to determine whether a non-negative definite matrix is separable or not?
Recently, I met this problem again, but in a more general context. We say that $\mu \in \mathcal{S}'(R^{2d})$ ($\mathcal{S}'(R^{2d})$ is the space of Schwartz distributions) is a non-negative definite linear operator over the rapidly decreasing functions $\mathcal{S}(R^d)$ if
$$ \langle\mu,\psi\otimes\psi\rangle\ge 0,\quad\forall \psi\in\mathcal{S}(R^{d}), $$ where $\otimes$ denotes the tensor product.
We call a non-negative definite linear operator $\mu \in \mathcal{S}'(R^{2d})$ ($d$ is even),
separable, if there exists a sequence of pairs $(\lambda_i,\mu_i,\nu_i)$, with $\lambda_i>0$, and each $\mu_i$ and $\nu_i\in\mathcal{S}'(R^d)$ is non-negative definite, such that
$$ \mu=\sum_i \lambda_i \: \mu_i\otimes\nu_i. $$
Note that the above sum can be an integral when the operator has a continuous spectrum.
So the similar problem is
how to determine whether a non-negative definite linear operator $\mu \in \mathcal{S}'(R^{2d})$ over $\mathcal{S}(R^d)$ is separable or not?
I guess that these two problems are still open. Does anyone have any hints or references?
Thank you very much!
Anand
|
The canonical counterexample is when you throw a football, but you don't get a good spiral. The football is freely flying through the air so there is no torque, but the $\omega$ and $L$ are not colinear.
To explain this example and the general case I will go into the math now. The moment of inertia tensor can be written in the form $I=\left( \begin{array}{ccc} I_1 & 0 & 0 \\ 0 & I_2 & 0 \\ 0 & 0 & I_3 \\ \end{array}\right)$ in the appropriate coordinate system. Now let's suppose we throw the football so $\vec{\omega}$ is not aligned with the symmetry axis of our football, but instead has a component along the $y$ axis so the vector makes some finite angle $\theta$ with the $z$ axis. Then $\vec{L}$ also makes some non-zero angle with the $z$ axis, but since $I_2 \ne I_3$, this angle is different from $\theta$, and $\vec{L}$ and $\vec{\omega}$ are not parallel. So this is a counter example; there is no torque, but $\vec{\omega}$ and $\vec{L}$ are not colinear.
So what is the theorem? We know $\vec{\omega}(t) = I^{-1}(t) \vec{L}$. Taking the time derivative, we get $\dot{\vec{\omega}}(t) = \dot{I}^{-1}(t) \vec{L}$. But what is $\dot{I}^{-1}(t)$?
Well the rotation relating the object's initial orientation to its orientation at time $t$ can be given by an orthogonal matrix $R(t)$. Now at a time $t$ the object is rotating with angular velocity $\vec{\omega}$, so $R(t)$ satisfies the differential equation $\dot{R}(t) = \vec{\omega}^\times R$, where $\vec{\omega}^\times$ is the matrix defined by $\vec{\omega}^\times \vec{u} = \vec{\omega} \times \vec{u}$.
Now $I^{-1}(t) = R(t)I_0^{-1}R^{-1}(t) $ so $\dot{I}^{-1} = \vec{\omega}^\times R I_0^{-1}R^{-1} - R I_0^{-1}R^{-1}\vec{\omega}^\times = \vec{\omega}^\times I^{-1}(t) - I^{-1}(t) \vec{\omega}^\times$.
Then $\dot{\vec{\omega}} = \vec{\omega}^\times I^{-1}(t) \vec{L} - I^{-1}(t) \vec{\omega}^\times \vec{L} = \vec{\omega} \times \vec{\omega} - I^{-1}(t) \vec{\omega} \times \vec{L} = -I^{-1}(t) \vec{\omega} \times \vec{L}$. From this we conclude that $\dot{\vec{\omega}}$ is zero exactly when $\vec{\omega}$ is parallel to $\vec{L}$
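The football counterexample can be checked directly with a symmetric-top inertia tensor in the body frame (a small numerical sketch; the moments of inertia are made-up values with $I_1 = I_2 \ne I_3$):

```python
import numpy as np

# Football-like symmetric top in its principal-axis frame: I1 = I2 != I3
I = np.diag([2.0, 2.0, 1.0])

# Spin tilted away from the symmetry (z) axis: L = I w is NOT parallel to w
omega = np.array([0.0, 1.0, 1.0])
L = I @ omega                                   # (0, 2, 1)
assert np.linalg.norm(np.cross(L, omega)) > 0.1

# Spin along the symmetry axis: L and w are parallel, cross product vanishes
omega_aligned = np.array([0.0, 0.0, 3.0])
L_aligned = I @ omega_aligned
assert np.linalg.norm(np.cross(L_aligned, omega_aligned)) < 1e-12
```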
|
In geometry, the notion of a connection makes precise the idea of transporting data along a curve or family of curves in a parallel and consistent manner. There are a variety of kinds of connections in modern geometry, depending on what sort of data one wants to transport. For instance, an affine connection, the most elementary type of connection, gives a means for transporting tangent vectors to a manifold from one point to another along a curve. An affine connection is typically given in the form of a covariant derivative, which gives a means for taking directional derivatives of vector fields: the infinitesimal transport of a vector field in a given direction.
Connections are of central importance in modern geometry in large part because they allow a comparison between the local geometry at one point and the local geometry at another point. Differential geometry embraces several variations on the connection theme, which fall into two major groups: the infinitesimal and the local theory. The local theory concerns itself primarily with notions of parallel transport and holonomy. The infinitesimal theory concerns itself with the differentiation of geometric data. Thus a covariant derivative is a way of specifying a derivative of a vector field along another vector field on a manifold. A Cartan connection is a way of formulating some aspects of connection theory using differential forms and Lie groups. An Ehresmann connection is a connection in a fibre bundle or a principal bundle by specifying the allowed directions of motion of the field. A Koszul connection is a connection generalizing the derivative in a vector bundle.
Connections also lead to convenient formulations of geometric invariants, such as the curvature (see also curvature tensor and curvature form) and torsion tensor.
Motivation: the unsuitability of coordinates
Parallel transport (of black arrow) on a sphere. Blue, respectively red arrows represent parallel transports in different directions but ending at the same lower right point. The fact that they end up not pointing in the same direction is a function of the curvature of the sphere.
Consider the following problem. Suppose that a tangent vector to the sphere S is given at the north pole, and we are to define a manner of consistently moving this vector to other points of the sphere: a means for parallel transport. Naïvely, this could be done using a particular coordinate system. However, unless proper care is applied, the parallel transport defined in one system of coordinates will not agree with that of another coordinate system. A more appropriate parallel transportation system exploits the symmetry of the sphere under rotation. Given a vector at the north pole, one can transport this vector along a curve by rotating the sphere in such a way that the north pole moves along the curve without axial rolling. This latter means of parallel transport is the Levi-Civita connection on the sphere. If two different curves are given with the same initial and terminal point, and a vector v is rigidly moved along the first curve by a rotation, the resulting vector at the terminal point will be different from the vector resulting from rigidly moving v along the second curve. This phenomenon reflects the curvature of the sphere. A simple mechanical device that can be used to visualize parallel transport is the south-pointing chariot.
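The rotation-based parallel transport just described can be illustrated numerically: carrying a tangent vector around a circle of colatitude θ on the unit sphere (by repeatedly projecting it onto the new tangent plane) returns it rotated by the enclosed solid angle 2π(1 − cos θ), which is exactly the curvature effect mentioned above. This is a sketch added for illustration; the step count and colatitude are arbitrary choices.

```python
import numpy as np

theta = np.pi / 3                      # colatitude of the transport circle
N = 20000                              # discretization steps around the loop

def point(s):
    """Point on the unit sphere at colatitude theta, longitude s."""
    return np.array([np.sin(theta) * np.cos(s),
                     np.sin(theta) * np.sin(s),
                     np.cos(theta)])

v = np.array([0.0, 1.0, 0.0])          # initial tangent vector at longitude 0
for s in np.linspace(0.0, 2.0 * np.pi, N + 1)[1:]:
    n = point(s)                       # outward normal of the unit sphere
    v = v - np.dot(v, n) * n           # project onto the new tangent plane
    v /= np.linalg.norm(v)             # keep unit length

# Angle of the transported vector relative to the starting tangent frame
p0 = point(0.0)
e1 = np.array([0.0, 1.0, 0.0])
e2 = np.cross(p0, e1)
angle = np.arctan2(np.dot(v, e2), np.dot(v, e1))
# |angle| converges to the enclosed solid angle 2*pi*(1 - cos theta) = pi here
```

For θ = π/3 the enclosed cap has solid angle π, so the vector comes back rotated by a half turn even though it was never rotated "locally" at any step.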
For instance, suppose that S is given coordinates by the stereographic projection. Regard S as consisting of unit vectors in $R^3$. Then S carries a pair of coordinate patches: one covering a neighborhood of the north pole, and the other of the south pole. The mappings
$$\begin{align} \varphi_0(x,y) & =\left(\frac{2x}{1+x^2+y^2}, \frac{2y}{1+x^2+y^2}, \frac{1-x^2-y^2}{1+x^2+y^2}\right)\\[8pt] \varphi_1(x,y) & =\left(\frac{2x}{1+x^2+y^2}, \frac{2y}{1+x^2+y^2}, \frac{x^2+y^2-1}{1+x^2+y^2}\right) \end{align}$$
cover a neighborhood $U_0$ of the north pole and $U_1$ of the south pole, respectively. Let X, Y, Z be the ambient coordinates in $R^3$. Then $\varphi_0$ and $\varphi_1$ have inverses
$$\begin{align} \varphi_0^{-1}(X,Y,Z)&=\left(\frac{X}{Z+1}, \frac{Y}{Z+1}\right), \\[8pt] \varphi_1^{-1}(X,Y,Z)&=\left(\frac{-X}{Z-1}, \frac{-Y}{Z-1}\right), \end{align}$$
so that the coordinate transition function is inversion in the circle:
$$\varphi_{01}(x,y) = \varphi_0^{-1}\circ\varphi_1(x,y)=\left(\frac{x}{x^2+y^2},\frac{y}{x^2+y^2}\right)$$
Let us now represent a vector field in terms of its components relative to the coordinate derivatives. If $P$ is a point of $U_0 \subset S$, then a vector field may be represented by the pushforward
$$v(P) = J_{\varphi_0}(\varphi_0^{-1}(P))\cdot {\bold v}_0(\varphi_0^{-1}(P))\qquad(1)$$
where $J_{\varphi_0}$ denotes the Jacobian matrix of $\varphi_0$, and ${\bold v}_0 = {\bold v}_0(x, y)$ is a vector field on $R^2$ uniquely determined by $v$. Furthermore, on the overlap between the coordinate charts $U_0 \cap U_1$, it is possible to represent the same vector field with respect to the $\varphi_1$ coordinates:
$$v(P) = J_{\varphi_1}(\varphi_1^{-1}(P))\cdot {\bold v}_1(\varphi_1^{-1}(P)). \qquad (2)$$
To relate the components ${\bold v}_0$ and ${\bold v}_1$, apply the chain rule to the identity $\varphi_1 = \varphi_0 \circ \varphi_{01}$:
$$J_{\varphi_1}(\varphi_1^{-1}(P)) = J_{\varphi_0}(\varphi_0^{-1}(P))\cdot J_{\varphi_{01}}(\varphi_1^{-1}(P)).$$
Applying both sides of this matrix equation to the component vector ${\bold v}_1(\varphi_1^{-1}(P))$ and invoking (1) and (2) yields
$${\bold v}_0(\varphi_0^{-1}(P)) = J_{\varphi_{01}}(\varphi_1^{-1}(P))\cdot {\bold v}_1(\varphi_1^{-1}(P)). \qquad (3)$$
We come now to the main question of defining how to transport a vector field parallelly along a curve. Suppose that $P(t)$ is a curve in S. Naïvely, one may consider a vector field parallel if the coordinate components of the vector field are constant along the curve. However, an immediate ambiguity arises: in which coordinate system should these components be constant?
For instance, suppose that $v(P(t))$ has constant components in the $U_1$ coordinate system. That is, the functions ${\bold v}_1(\varphi_1^{-1}(P(t)))$ are constant. However, applying the product rule to (3) and using the fact that $d{\bold v}_1/dt = 0$ gives
$$\frac{d}{dt}{\bold v}_0(\varphi_0^{-1}(P(t)))=\left(\frac{d}{dt}J_{\varphi_{01}}(\varphi_1^{-1}(P(t)))\right)\cdot {\bold v}_1(\varphi_1^{-1}(P(t))).$$
But $\left(\frac{d}{dt}J_{\varphi_{01}}(\varphi_1^{-1}(P(t)))\right)$ is always a non-singular matrix (provided that the curve $P(t)$ is not stationary), so ${\bold v}_1$ and ${\bold v}_0$ cannot ever be simultaneously constant along the curve.
Resolution
The problem observed above is that the usual directional derivative of vector calculus does not behave well under changes in the coordinate system when applied to the components of vector fields. This makes it quite difficult to describe how to parallelly translate vector fields, if indeed such a notion makes any sense at all. There are two fundamentally different ways of resolving this problem.
The first approach is to examine what is required for a generalization of the directional derivative to "behave well" under coordinate transitions. This is the tactic taken by the covariant derivative approach to connections: good behavior is equated with covariance. Here one considers a modification of the directional derivative by a certain linear operator, whose components are called the Christoffel symbols, which involves no derivatives on the vector field itself. The directional derivative $D_{\bold u} {\bold v}$ of the components of a vector ${\bold v}$ in a coordinate system $\varphi$ in the direction ${\bold u}$ is replaced by a covariant derivative:
$$\nabla_{\bold u} {\bold v} = D_{\bold u} {\bold v} + \Gamma(\varphi)\{{\bold u},{\bold v}\}$$
where $\Gamma$ depends on the coordinate system $\varphi$ and is bilinear in ${\bold u}$ and ${\bold v}$. In particular, $\Gamma$ does not involve any derivatives on ${\bold u}$ or ${\bold v}$. In this approach, $\Gamma$ must transform in a prescribed manner when the coordinate system $\varphi$ is changed to a different coordinate system. This transformation is not tensorial, since it involves not only the first derivative of the coordinate transition, but also its second derivative. Specifying the transformation law of $\Gamma$ is not sufficient to determine $\Gamma$ uniquely. Some other normalization conditions must be imposed, usually depending on the type of geometry under consideration. In Riemannian geometry, the Levi-Civita connection requires compatibility of the Christoffel symbols with the metric (as well as a certain symmetry condition). With these normalizations, the connection is uniquely defined.
The second approach is to use Lie groups to attempt to capture some vestige of symmetry on the space. This is the approach of Cartan connections. The example above using rotations to specify the parallel transport of vectors on the sphere is very much in this vein.
Historical survey of connections
Historically, connections were studied from an infinitesimal perspective in Riemannian geometry. The infinitesimal study of connections began to some extent with Christoffel. This was later taken up more thoroughly by Gregorio Ricci-Curbastro and Tullio Levi-Civita (Levi-Civita & Ricci 1900) who observed in part that a connection in the infinitesimal sense of Christoffel also allowed for a notion of parallel transport.
The work of Levi-Civita focused exclusively on regarding connections as a kind of differential operator whose parallel displacements were then the solutions of differential equations. As the twentieth century progressed, Élie Cartan developed a new notion of connection. He sought to apply the techniques of Pfaffian systems to the geometries of Felix Klein's Erlangen program. In these investigations, he found that a certain infinitesimal notion of connection (a Cartan connection) could be applied to these geometries and more: his connection concept allowed for the presence of curvature which would otherwise be absent in a classical Klein geometry. (See, for example, (Cartan 1926) and (Cartan 1983).) Furthermore, using the dynamics of Gaston Darboux, Cartan was able to generalize the notion of parallel transport for his class of infinitesimal connections. This established another major thread in the theory of connections: that a connection is a certain kind of differential form.
The two threads in connection theory have persisted through the present day: a connection as a differential operator, and a connection as a differential form. In 1950, Jean-Louis Koszul (Koszul 1950) gave an algebraic framework for regarding a connection as a differential operator by means of the Koszul connection. The Koszul connection was both more general than that of Levi-Civita, and was easier to work with because it finally was able to eliminate (or at least to hide) the awkward Christoffel symbols from the connection formalism. The attendant parallel displacement operations also had natural algebraic interpretations in terms of the connection. Koszul's definition was subsequently adopted by most of the differential geometry community, since it effectively converted the analytic correspondence between covariant differentiation and parallel translation to an algebraic one.
In that same year, Charles Ehresmann (Ehresmann 1950), a student of Cartan's, presented a variation on the connection as a differential form view in the context of principal bundles and, more generally, fibre bundles. Ehresmann connections were, strictly speaking, not a generalization of Cartan connections. Cartan connections were quite rigidly tied to the underlying differential topology of the manifold because of their relationship with Cartan's equivalence method. Ehresmann connections were rather a solid framework for viewing the foundational work of other geometers of the time, such as Shiing-Shen Chern, who had already begun moving away from Cartan connections to study what might be called gauge connections. In Ehresmann's point of view, a connection in a principal bundle consists of a specification of horizontal and vertical vector fields on the total space of the bundle. A parallel translation is then a lifting of a curve from the base to a curve in the principal bundle which is horizontal. This viewpoint has proven especially valuable in the study of holonomy.
References
Levi-Civita, T.; Ricci, G. (1900), "Méthodes de calcul différentiel absolu et leurs applications", Math. Ann. 54: 125–201.
Cartan, Élie (1926), "Espaces à connexion affine, projective et conforme", Acta Math. 48: 1–42.
Ehresmann, C. (1950), "Les connexions infinitésimales dans un espace fibré différentiable", Colloque de Topologie, Bruxelles, pp. 29–55.
Koszul, J. L. (1950), "Homologie et cohomologie des algèbres de Lie", Bulletin de la Société Mathématique 78: 65–127.
Lumiste, Ü. (2001), "Connection", in Hazewinkel, Michiel.
Osserman, B. (2004), Connections, curvature, and p-curvature (PDF).
Morita, Shigeyuki (2001), Geometry of Differential Forms, AMS.
External links: Connections at the Manifold Atlas
|
I'm trying to solve the following congruence:
$71x-1 \equiv 0 \pmod{59367} $
Given that $59367=771 \times 77$, I have previously solved that:
$71x \equiv 1 \pmod{771}$ such that $x=-76$
$71x \equiv 1 \pmod{77}$ such that $x=-13$
I'm trying to use the Chinese Remainder Theorem, but I seem to be getting the wrong answer. Could anyone work this out so I can try to understand where I'm going wrong?
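For reference, the combination step can be checked mechanically. This is my own sketch (not part of the original question), and it uses Python 3.8+'s three-argument `pow` for the modular inverse:

```python
# Sanity-check of the CRT combination (my own sketch).
# 59367 = 771 * 77 and gcd(771, 77) = 1, so CRT applies.
m1, m2 = 771, 77
x1 = -76 % m1            # 695, since 71*(-76) = 1 (mod 771)
x2 = -13 % m2            # 64,  since 71*(-13) = 1 (mod 77)

# Find k with x1 + m1*k = x2 (mod m2); note 771 = 1 (mod 77).
k = ((x2 - x1) * pow(m1, -1, m2)) % m2
x = (x1 + m1 * k) % (m1 * m2)
print(x)  # 48497, and indeed 71*48497 = 1 (mod 59367)
```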
Thank you
|
Because you are looking only at the so-called global part, i.e. the part of the gauge transformation which resembles a group action.
Recall that the vector bosons transform as $$A_\mu \to g A_\mu g^{-1} - (\partial_\mu g) g^{-1},$$ where the first part is the global part of the gauge transformation, which tells you that $A_\mu$ transforms in the adjoint representation (for a non-abelian gauge symmetry), while the second part is the intrinsically local part. The second part, which is strictly speaking the gauge invariance, clearly forbids the mass term, as you would get non-homogeneous terms like $$ A^\mu (\partial_\mu g) g^{-1}. $$
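Concretely, here is a short check of this claim (my own sketch, using the simplified notation $A_\mu \simeq A^a_\mu T^a$ from the note below and the cyclicity of the trace): under $A_\mu \to g A_\mu g^{-1} - (\partial_\mu g)g^{-1}$ the candidate mass term transforms as$$\operatorname{tr} A_\mu A^\mu \;\to\; \operatorname{tr} A_\mu A^\mu - 2\operatorname{tr}\big(A^\mu\, g^{-1}\partial_\mu g\big) + \operatorname{tr}\big(g^{-1}\partial_\mu g\; g^{-1}\partial^\mu g\big),$$so it is invariant only under the global part (where $\partial_\mu g = 0$); the last two non-homogeneous terms are exactly the obstruction.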
Now we can ask why the term with the Higgs is allowed. First recall it comes from the covariant derivative$$(D_\mu H)^\dagger D^\mu H$$and when jointly transforming both fields you can show that the action is invariant, i.e. there are no surplus non-homogeneous terms.
It is for this same reason that one has to take $F_{\mu\nu}F^{\mu\nu}$ as the kinetic term for $A_\mu$ instead of something like $\partial_\mu A_\nu \partial^\mu A^\nu$, which is also contained in $F^2$, but in a combination where the non-homogeneous terms cancel (thanks to the antisymmetry in the indices $\mu, \nu$, to be exact).
Bottom line: it does not suffice to write down group invariants if the symmetry is local. There is a non-homogeneous part of the transformation when acting on the vector bosons. This intrinsic gauge invariance further constrains how one can write down Lagrangians.
NOTE: My notation is simplistic, I am assuming that $A_\mu \simeq A^a_\mu T^a$ where $T^a$ are the Lie algebra generators, up to some normalisation and conventions.
References:The vast majority of books on the Standard Model and Quantum Field Theory will use similar arguments and notation.
|
In The Feynman Lectures on Physics, Volume I 39-2 The pressure of a gas, the following is presented:
If $v$ is the velocity of an atom, and $v_{x}$ is the $x$-component of $v$, then $mv_{x}$ is the $x$-component of momentum
in; but we also have an equal component of momentum out, and so the total momentum delivered to the piston by the particle, in one collision, is $2mv_{x}$, because it is "reflected".
Now, we need the number of collisions made by the atoms in a second, or in a certain amount of time $dt;$ then we divide by $dt$. How many atoms are hitting? Let us suppose that there are $N$ atoms in the volume $V$, or $n=N/V$ in each unit volume. To find how many atoms hit the piston, we note that, given a certain amount of time $t$, if a particle has a certain velocity toward the piston it will hit during the time $t$, provided it is close enough. If it is too far away, it goes only part way toward the piston in the time $t$, but does not reach the piston. Therefore it is clear that only those molecules which are within a distance $v_{x}t$ from the piston are going to hit the piston in the time $t$. Thus the number of collisions in a time $t$ is equal to the number of atoms which are in the region within a distance $v_{x}t,$ and since the area of the piston is $A,$ the
volume occupied by the atoms which are going to hit the piston is $v_{x}tA$. But the number of atoms that are going to hit the piston is that volume times the number of atoms per unit volume, $nv_{x}tA$. Of course we do not want the number that hit in a time $t$, we want the number that hit per second, so we divide by the time $t$, to get $nv_{x}A$. (This time $t$ could be made very short; if we feel we want to be more elegant, we call it $dt$, then differentiate, but it is the same thing.)
So we find that the force is
$$ F=nv_{x}A\times2mv_{x}.\,\,\,(39.3) $$
See, the force is proportional to the area, if we keep the particle density fixed as we change the area! The pressure is then
$$ P=2nmv_{x}^{2}.\,\,\,(39.4) $$
Now we notice a little trouble with this analysis: First, all the molecules do not have the same velocity, and they do not move in the same direction. So, all the $v_{x}^{2}$'s are different! So what we must do, of course, is to take an average of the $v_{x}^{2}$'s, since each one makes its own contribution. What we want is the square of $v_{x}$, averaged over all the molecules:
$$ P=nm\left\langle v_{x}^{2}\right\rangle .\,\,\,(39.5) $$
Did we forget to include the factor 2? No; of all the atoms, only half are headed toward the piston. The other half are headed the other way, so the number of atoms per unit volume that are hitting the piston is only $n/2$.
While I accept the result, I do not understand his development. In particular, what is meant by the "volume" $v_{x}tA$? That volume is introduced with the tacit (and incorrect) assumption that all $v_{x}$'s are equal. An assumption which is subsequently rejected. But the meaning of $v_{x}tA$ in terms of the refined understanding of $v_{x}$ being specific to each atom is never made clear.
Introducing the notation $\Delta V_{x}=v_{x}tA;$ I have found no way to arrive at the advertised result $\left(39.5\right)$ using half the average $x$-component of speed to establish the correct value of $\Delta V_{x}$. For example:
$$ \frac{1}{2}n\left\langle \left|v_{x}\right|\right\rangle A\times2m\left\langle \left|v_{x}\right|\right\rangle =Anm\left\langle \left|v_{x}\right|\right\rangle ^{2}\ne Anm\left\langle v_{x}^{2}\right\rangle . $$
The result $\left(39.5\right)$ can be established by an alternative development which determines the number of collisions per unit time by considering the number of times any specific particle will traverse the $x$-dimension of a box of unit volume to the far side, and then back in a time $t$. But I am interested to know if Feynman's approach can be understood.
Under the assumption that the value $v_{x}$ is specific to each atom, what volume, corresponding to the above $\Delta V_{x}=v_{x}tA$, should be used to determine the number of collisions per unit time of gas atoms with the piston?
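One way to probe the question numerically: give each atom its own swept volume $v_x t A$ and sum over atoms, rather than using a single average volume. The factor $2mv_x$ per collision against counting only the $v_x>0$ half then reproduces $nm\langle v_x^2\rangle$. A quick Monte Carlo sketch of my own, with assumed units $m=n=A=1$:

```python
import random

random.seed(0)
m, n = 1.0, 1.0                                   # assumed units
vx = [random.gauss(0.0, 1.0) for _ in range(200_000)]

# Each atom moving toward the piston (vx > 0) sweeps its own volume vx*t*A,
# so it contributes a collision rate (n/N)*vx per unit area, with momentum
# transfer 2*m*vx per hit.
P_flux = (n / len(vx)) * sum(2 * m * v * v for v in vx if v > 0)

# Feynman's result: P = n m <vx^2>, averaged over ALL atoms.
P_avg = n * m * sum(v * v for v in vx) / len(vx)

print(P_flux, P_avg)   # statistically equal: the 2 cancels the n/2
```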
|
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and Red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? The sense I have is: you do the gravitational-waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
|
Learning Outcomes
Use Order of Operations in Statistics Formulas.
We have already encountered the order of operations: Parentheses, Exponents, Multiplication and Division, Addition and Subtraction. In this section we give some additional examples where the order of operations must be applied properly to evaluate statistical formulas.
Example \(\PageIndex{1}\)
The sample standard deviation asks us to add up the squared deviations, divide by one less than the sample size, and then take the square root. For example, suppose that there are three data values: 3, 5, 10. The mean of these values is 6. Then the standard deviation is:
\[s=\sqrt{\frac{\left(3-6\right)^2+\left(5-6\right)^2+\left(10-6\right)^2}{3-1}}\nonumber\]
Evaluate this number rounded to the nearest hundredth.
Solution
The first thing in the order of operations is to do what is in the parentheses. We must subtract:
\[3-6=-3,\:\:\:5-6\:=\:-1,\:\:\:10-6=4 \nonumber\]
We can substitute the numbers in to get:
\[=\sqrt{\frac{\left(-3\right)^2+\left(-1\right)^2+\left(4\right)^2}{3-1}}\nonumber\]
Next, we exponentiate:
\[\left(-3\right)^2=9,\:\:\:\left(-1\right)^2=1,\:\:\:4^2=16 \nonumber\]
Substitute these in to get:
\[\sqrt{\frac{9+1+16}{3-1}} \nonumber\]
We can now perform the addition inside the square root to get:
\[\sqrt{\frac{26}{3-1}} \nonumber\]
Next, perform the subtraction of the denominator to get:
\[\sqrt{\frac{26}{2}} \nonumber\]
We can divide to get:
\[\sqrt{13} \nonumber\]
We don't want to do this by hand, so in a calculator or computer type in:
\[13^{0.5} = 3.61 \nonumber\]
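The same arithmetic can be checked in a few lines of code (a sketch of my own; the variable names are not from the text):

```python
import math

data = [3, 5, 10]
mean = sum(data) / len(data)                        # 6.0
# parentheses first (the deviations), then exponents (the squares),
# then the sum, the division by n - 1, and finally the square root
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))
print(round(s, 2))  # 3.61
```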
Example \(\PageIndex{2}\)
When calculating the probability that a value will be less than 4.6 if the value is taken randomly from a uniform distribution between 3 and 7, we have to calculate:
\[\left(4.6-3\right)\times\frac{1}{7-3} \nonumber\]
Find this probability.
Solution
We can use a calculator or computer, but we must be very careful about the order of operations. Notice that there are implied parentheses due to the fraction bar, so the expression means:
\[\dfrac{(4.6 - 3) \times 1}{7-3} \nonumber\]
Using technology, we get:
\[\left(4.6-3\right)\times\frac{1}{7-3}\:=\:0.4 \nonumber\]
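In code the implied parentheses must be typed explicitly; dropping them silently changes the answer. A sketch of my own:

```python
# respecting the implied parentheses around the denominator
right = (4.6 - 3) * (1 / (7 - 3))
# typing the fraction naively as 1/7 - 3 gives a nonsense "probability"
wrong = (4.6 - 3) * 1 / 7 - 3
print(right, wrong)  # approximately 0.4 vs approximately -2.77
```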
Exercise
When finding the upper bound, \(U\), of a confidence interval given the lower bound, \(L\), and the margin of error, \(E\), we use the formula
\[U=\:L+2E \nonumber\]
Find the upper bound of the confidence interval for the proportion of babies that are born preterm if the lower bound is 0.085 and the margin of error is 0.03.
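As a check on the exercise (the answer is not given in the text, so treat this as my own computation):

```python
L, E = 0.085, 0.03       # lower bound and margin of error
U = L + 2 * E            # upper bound = lower bound plus twice the margin
print(round(U, 3))  # 0.145
```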
|
This question is concerned with the long-standing problem, confusing so many people, of how it is that we can view $z$ and $\bar{z}$ as independent variables. To be more precise, I understand all the formal manipulations using the chain rule and the trick with $x = \frac{z+\bar{z}}{2}$ and $y=\frac{z-\bar{z}}{2\mathrm{i}}$ leading to the usual definitions of $\frac{\partial}{\partial z}$ and $\frac{\partial}{\partial \bar{z}}$ and then those of $dz$ and $\mathrm{d}\bar{z}.$ But somehow this is quite unsatisfying. One possible way out is suggested in the book by John P. D'Angelo, where he says (p. 148) that smooth real functions $g_1, \ldots, g_m$ form a coordinate system on an open subset $\Omega \subset \mathbb{R}^n$ if the function $g = (g_1, \ldots, g_m)$ is injective on $\Omega$ and $\mathrm{d}g_1 \wedge \mathrm{d}g_2 \wedge \ldots \wedge \mathrm{d}g_m$ is nonzero. So far everything is in complete harmony with the intuition based on the implicit function theorem. But from here on, things seem to get a bit too clumsy when he says:
1) "This concept makes sense when these functions are either real or complex valued. For example, the functions $z$ and $\bar{z}$ define a coordinate system on $\mathbb{R}^2,$ because $\mathrm{d}x + \mathrm{i} \ \mathrm{d}y$ and $\mathrm{d}x - \mathrm{i} \ \mathrm{d}y$ are independent and the map $(x,y) \mapsto (x + \mathrm{i} y, x - \mathrm{i}y),$ embedding $\mathbb{R}^2$ into $\mathbb{C}^2,$ is injective." I am not sure if I got it right: does this really mean that, quite generally, the $g_i$'s are complex functions of real arguments with the differentials linearly independent over $\mathbb{R}$ (and not over $\mathbb{C}$)? I mean: I can see that this is precisely the case but: why should that be? To put it differently: where the linear independence over $\mathbb{C}$ comes into play and does it make sense at all? Related questions are:
2) Why is the last part of the formula (34) expressing $\mathrm{d}f,$ $$ \sum_{j=1}^{n} \frac{\partial f}{\partial x_j}\mathrm{d}x_j + \sum_{j=1}^{n} \frac{\partial f}{\partial y_j} \mathrm{d}y_j \stackrel{?}{=} \sum_{j=1}^{n} \frac{\partial f}{\partial z_j}\mathrm{d}z_j + \sum_{j=1}^{n} \frac{\partial f}{\partial \bar{z}_j} \mathrm{d}\bar{z}_j, $$ true and how do I get it?
3) And finally, are these considerations to suggest that, somehow, the complex analytic functions are precisely those $f(z, \bar{z})$ that do not depend on $\bar{z}$? If so, in exactly what sense is that statement claimed? Is there any baby complex manifolds theory involved?
Thanks
|
For a fractional Brownian motion $B_H$ consider the sequence for $p>0$ $$Y_{n,p}={1\over n}\sum\limits_{i=1}^n \left|B_H(i)-B_H(i-1)\right|^p.$$ By the Ergodic Theorem it is $$\lim\limits_{n\to\infty}Y_{n,p}=\mathbb{E}[|B_H(1)|^p] \ a.s.\text{ and in } L^1.$$ The Ergodic Theorem of Birkhoff says:
Let $(\Omega,\mathcal{F},\mathbb{P},\tau)$ be a measure-preserving dynamical system, $p>0$, $X_0\in\mathcal{L}^p$ and $X_n=X_0\circ \tau^n$. If $\tau$ is ergodic, then it holds $${1\over n}\sum\limits_{k=0}^{n-1}X_k\overset{n\to\infty}{\longrightarrow}\mathbb{E}[X_0]\ a.s.\text{ and in }L^1.$$
My problem is that I don't know how this theorem is used on the case described above, i.e. what is $\tau$ and what is $X_n$ in this case?
Another question is: Why do I have to use the ergodic theorem while by using the stationarity I have $$Y_{n,p}\sim {1\over n}\sum\limits_{i=1}^n |B_H(1)|^p=|B_H(1)|^p\ ?$$
I would be thankful for any help and explanation.
Maybe I should add that I am trying to understand how to prove that a fractional Brownian motion is not a semimartingale for $H\neq {1\over2}$, by applying these statements.
|
Let $$\phi_k(x)=\sum_{\substack{1\le n \le x \\ (n,x)=1}} n^k.$$ What's the asymptotic behavior of $$\sum_{n=1}^x\phi_k(n)?$$
The possible routes

Route 1 (for someone who wants some practice with Abel summation): There should be an approach which is an analog of the techniques shown here: sum of the divisor functions, and I think that $\sum_{n=1}^x \frac{\phi_k(n)}{n^{k+1}}$ is always on the order of a linear function. So that might be the place to start.
If no one takes this route I will almost certainly post my own answer in 2 or 3 weeks and ask this community for help verifying my proof. This is the most obvious route for me to take to make progress on this.
Route 2: Also it would be particularly interesting to see an argument which isn't an analog of the linked post and which exploits what we already know about the asymptotic behavior of $\sum \sigma_k(n)$ to make claims about $\sum \phi_k(n)$. I am not sure this is possible, but it may be a route forward.
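Before proving anything, a brute-force computation (my own sketch) is handy for testing the conjecture in Route 1; it also lets one check the classical identity $\phi_1(x) = x\varphi(x)/2$ for $x>1$ on small cases:

```python
from math import gcd

def phi_k(x, k):
    # direct implementation of the definition above
    return sum(n ** k for n in range(1, x + 1) if gcd(n, x) == 1)

# e.g. phi_1(10) sums 1 + 3 + 7 + 9 = 20
print(phi_k(10, 1))

# empirical look at Route 1: sum_{n<=x} phi_1(n)/n^2 against x
# (the values grow roughly linearly in x)
for x in (100, 200, 400):
    print(x, sum(phi_k(n, 1) / n ** 2 for n in range(1, x + 1)))
```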
|
So I got a gift for a friend based on
Mathematica code.
Rose[x_, theta_] := Module[
  {phi = (Pi/2) Exp[-theta/(8 Pi)],
   X = 1 - (1/2) ((5/4) (1 - Mod[3.6 theta, 2 Pi]/Pi)^2 - 1/4)^2},
  y = 1.95653 x^2 (1.27689 x - 1)^2 Sin[phi];
  r = X (x Sin[phi] + y Cos[phi]);
  {r Sin[theta], r Cos[theta], X (x Cos[phi] - y Sin[phi]), EdgeForm[]}];

ParametricPlot3D[Rose[x, theta], {x, 0, 1}, {theta, -2 Pi, 15 Pi},
  PlotPoints -> {25, 576}, LightSources -> {{{0, 0, 1}, RGBColor[1, 0, 0]}},
  Compiled -> False]
And I want to translate this code to LaTeX. I have never used
Mathematica before, but I've managed to find a computer with Mathematica, and I've managed to use the "copy as LaTeX" option. I couldn't figure out how to use TeXForm, unfortunately :/
ANYWHO, I've managed to get this in my attempt to translate Rose(x,theta):
$\text{Rose}(x,\theta):=\left[\begin{array}{c}\left\{ \phi =\frac{1}{2} \pi \exp \left(-\frac{\theta }{8 \pi}\right), X = 1-\frac{1}{2}\left(\frac{5}{4} \left(1-\frac{((3.6 \theta ) \bmod (2 \pi ))}{\pi }\right)^2-\frac{1}{4}\right)^2\right\},\\ y=1.95653 x^2 (1.27689 x-1)^2 \sin(\phi );\\ r=X (x \sin (\phi )+y \cos (\phi ));\{r \sin (\theta ),r \cos (\theta ),X (x \cos (\phi )-y \sin (\phi ))\} \end{array}\right]$
(note the copy to latex option wasn't super helpful in formatting)
This is the result of directly pasting the copy result and trying to clean up the function. I'm wondering if this cleanup is correct, and also...what exactly is going on in the function?
What I think is going on:
1) We use $\theta$ to calculate $\phi$ and big $X$
2) We then calculate $y$ using $\phi$ and little $x$
3) Then $r$ is calculated using big $X$, little $x$, $y$, and $\phi$
4) The euclidean coordinates of points on the graph are represented by: $\{r \sin(\theta ),r \cos(\theta),X (x \cos(\phi )-y \sin(\phi ))\}$ , and that depends on big $X$, little $x$, $y$, $\theta$, $\phi$, and $r$
Is this right? Am I missing something? Did I translate something wrong? What does mod mean? Is it modding $3.6\theta$ by $2\pi$?
Also, what does the plot mean? I'm guessing LightSources and RGBcolor refer to how the graph looks aesthetically. Does the second part of the code mean that x ranges from 0 to 1, and theta ranges from -2 Pi and 15 Pi? What does PlotPoints mean? The fineness of the plot/ number of points in each variable?
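To double-check the reading in 1)–4), here is my own (hypothetical) Python transcription of Rose. Note that Mathematica's Mod[3.6 theta, 2 Pi] corresponds to Python's % operator, which also returns a value in $[0, 2\pi)$:

```python
import math

def rose(x, theta):
    # 1) phi and X are computed from theta
    phi = (math.pi / 2) * math.exp(-theta / (8 * math.pi))
    X = 1 - 0.5 * ((5/4) * (1 - ((3.6 * theta) % (2 * math.pi)) / math.pi) ** 2 - 0.25) ** 2
    # 2) y from phi and little x
    y = 1.95653 * x**2 * (1.27689 * x - 1) ** 2 * math.sin(phi)
    # 3) r from X, x, y, and phi
    r = X * (x * math.sin(phi) + y * math.cos(phi))
    # 4) the Cartesian coordinates of the surface point
    return (r * math.sin(theta), r * math.cos(theta),
            X * (x * math.cos(phi) - y * math.sin(phi)))

print(rose(0.5, math.pi))
```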
I need to verify this specific equation with human eyes that know how to read
Mathematica code. :/
EDIT: Sorry for the lack of citation-- this is
Mathematica code from Paul Nylander -- a formula for the "Nylander Rose" -- I had no part in making this code at all.
|
MY ATTEMPT AT PROVING THE LIMIT
By the definition of a limit, we know that:
$\displaystyle \lim_{x \to a} g(x) = 0$
Means that for every:
$\displaystyle \epsilon_2 > 0$, there is a $\displaystyle \delta_1 > 0$ such that:
$\displaystyle 0 < |x - a| < \delta_1$
implies:
$\displaystyle |g(x) - 0| = |g(x)| < \epsilon_2$
The above inequality implies that:
$\displaystyle \frac{1}{\epsilon_2} < \frac{1}{|g(x)|}$
Also, since:
$\displaystyle 0 < \epsilon_1 < f(x)$
Then it follows (by multiplying both sides of the $g(x)$ inequality by $\displaystyle \epsilon_1$, and then using $\displaystyle \epsilon_1 < f(x)$) that:
$\displaystyle \frac{\epsilon_1}{\epsilon_2} < \frac{f(x)}{|g(x)|}$
[I don't know if the following is necessary for this proof, but better safe than sorry]
(We need not worry about $\displaystyle \frac{f(x)}{|g(x)|}$ being negative; since $\displaystyle 0 \leq |a|$ for all $\displaystyle a$, the denominator is always positive. And since, by the stipulations made by the problem, $\displaystyle 0 < \epsilon_1 < f(x)$ for all $\displaystyle x$ obviously means that $\displaystyle 0 < f(x)$ for all $\displaystyle x$, it follows that both the numerator and denominator are positive, therefore $\displaystyle 0 < \frac{f(x)}{|g(x)|}$ for all $\displaystyle x$.)
To prove that the limit approaches infinity, we must show that:
$\displaystyle 0 < |x - a| < \delta_2$
Implies that, for any $\displaystyle N$ we have:
$\displaystyle N < \frac{f(x)}{|g(x)|}$.
But we are already there! (At least I think so, as long as there are no errors.) From the given limit of g(x), we were able to show that:
$\displaystyle \frac{\epsilon_1}{\epsilon_2} < \frac{f(x)}{|g(x)|}$
So letting:
$\displaystyle N = \frac{\epsilon_1}{\epsilon_2}$
and
$\displaystyle \delta_1 = \delta_2$
and remembering the stipulations made by the problem on the value of f(x), the train of implications goes like this:
$\displaystyle 0 < |x - a| < \delta_2$
$\displaystyle \Rightarrow \;\;\; 0 < |x - a| < \delta_1$
$\displaystyle \Rightarrow \;\;\; |g(x) - 0| = |g(x)| < \epsilon_2$
$\displaystyle \Rightarrow \;\;\; \frac{1}{\epsilon_2} < \frac{1}{|g(x)|}$
Remember that:
$\displaystyle 0 < \epsilon_1 < f(x)$
So:
$\displaystyle [0 < \epsilon_1 < f(x)] \; \wedge \; [\frac{1}{\epsilon_2} < \frac{1}{|g(x)|} ]$
$\displaystyle \Rightarrow \;\;\; \frac{\epsilon_1}{\epsilon_2} < \frac{f(x)}{|g(x)|}$
Which shows that:
$\displaystyle 0 < |x-a| < \delta_2 \;\; \Rightarrow \;\; N < \frac{f(x)}{|g(x)|}$
For:
$\displaystyle N = \frac{\epsilon_1}{\epsilon_2}$
and
$\displaystyle \delta_1 = \delta_2$
FINALLY, this all means that:
$\displaystyle \lim_{x \to a} \frac{f(x)}{|g(x)|} = \infty$
(granted that the stipulations on f(x) are given)
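For intuition, here is a concrete numerical illustration (my own example, not part of the proof): take $a = 0$, $f(x) = 2 + x^2$ (so $\epsilon_1 = 2$ works) and $g(x) = x$, which tends to 0. The quotient blows up as $x \to a$, exactly as the proof predicts:

```python
def ratio(x):
    # f(x) = 2 + x**2 is bounded below by eps_1 = 2; g(x) = x tends to 0
    return (2 + x**2) / abs(x)

for x in (0.1, 0.01, 0.001):
    print(x, ratio(x))   # grows without bound as x -> a = 0
```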
|
"""Author: John VolkDate: 10/10/2016"""from __future__ import print_functionfrom sympy.parsing.sympy_parser import (parse_expr, standard_transformations, implicit_multiplication,\ implicit_application)import numpy as npimport sympyimport re
Python, regex, and SymPy to automate custom text conversions to LaTeX

This post includes examples on how to:
Convert a text equation that is in a bad format for Python and SymPy
Convert a normal Python mathematical expression into a suitable form for SymPy's LaTeX printer
Use sympy to produce LaTeX output
Create functions and data structures to make the process reusable and efficient to fit your needs
Let's start with the following string, which we assign to the variable text, that represents a mathematical model but in poor printing form:
text = """Ln(Y) = a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)+ a5 dtime + a6 dtime^2"""text
'\nLn(Y) = a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)\n+ a5 dtime + a6 dtime^2'

However, we want this expression to look like:
$ \log{\left (Y \right )} = a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi dtime \right )} + a_{4} \cos{\left (2 \pi dtime \right )} + a_{5} dtime + a_{6} dtime^{2} $
Observe the following differences between text and valid LaTeX:
Some variables and functions are concatenated, i.e.: LnQ, correct latex would be \log{Q}
Functions are not in proper latex form (e.g. Sin = \sin, Ln = \log, ...)
Missing subscripts: a0 = a_0
Newline characters need to be removed
Some symbols need to be replaced: dtime = t
Python's symbolic math package SymPy can automate some of the transformations that we need, and SymPy has built-in LaTeX printing capabilities.
If you are not familiar with SymPy you should take some time to familiarize yourself with it; it takes some time to get used to its syntax. Check out the well done documentation for Sympy here.
First we need to convert the string (text) into valid SymPy input
Valid sympy input includes valid python math expressions with added recognition of math operations. For example the following expression can be parsed by SymPy without error:
exp = "(x + 4) * (x + sin(x**3) + log(x + 5*x) + 3*x - sqrt(y))" sympy.expand(exp)
4*x**2 - x*sqrt(y) + x*log(x) + x*sin(x**3) + x*log(6) + 16*x - 4*sqrt(y) + 4*log(x) + 4*sin(x**3) + 4*log(6)
print(sympy.latex(sympy.expand(exp)))
4 x^{2} - x \sqrt{y} + x \log{\left (x \right )} + x \sin{\left (x^{3} \right )} + x \log{\left (6 \right )} + 16 x - 4 \sqrt{y} + 4 \log{\left (x \right )} + 4 \sin{\left (x^{3} \right )} + 4 \log{\left (6 \right )}

Now back to our original text that we want to convert; we need to make some simple adjustments to make the string a valid SymPy expression
You have several options here; in this case I choose to use regular expressions (regex) to do basic string pattern substitutions. You will likely need to modify these operations or create alternative regexes to prepare your text. If you do not know regex you can probably get by with basic Python string methods.
## Note, I removed the LHS and the equal sign from the equation - SymPy requires special syntax for equations
## further explanation below
text = """
a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)
+ a5 dtime + a6 dtime^2"""

## Make a dictionary to map our strings to standard python math or symbols as needed
symbol_map = {
    '^': '**',
    'Ln': 'log ',
    'Sin': 'sin ',
    'Cos': 'cos ',
    'dtime': 't'
}

## use the dictionary to compile a regex on the keys
## escape regex characters because ^ is one of the keys (^ is a regex special character)
to_symbols = re.compile('|'.join(re.escape(key) for key in symbol_map.keys()))

# run through the text looking for keys (regex) and replacing them with the values from the dict
text = to_symbols.sub(lambda x: symbol_map[x.group()], text)
text
'\na0 + a1 log Q + a2 log Q**2 + a3 sin (2 pi t) + a4 cos (2 pi t)\n+ a5 t + a6 t**2'
## remove new line characters from the text
text = re.sub('\n', ' ', text)
text
' a0 + a1 log Q + a2 log Q**2 + a3 sin (2 pi t) + a4 cos (2 pi t) + a5 t + a6 t**2'
## regex to replace coefficients a0, a1, ... with their equivalents with subscripts e.g. a0 = a_0
text = re.sub(r"\s+a(\d)", r"a_\1", text)
text
'a_0 +a_1 log Q +a_2 log Q**2 +a_3 sin (2 pi t) +a_4 cos (2 pi t) +a_5 t +a_6 t**2'

At this point text is almost ready for LaTeX... The remaining issues are sufficiently difficult string manipulations that SymPy's Parser is perfect for the remaining conversions:
Instead of trying to figure out how to place asterisks everywhere that multiplication is implied and parentheses where functions are implied (e.g. log Q**2 should be log(Q**2)), we can use SymPy's Parser, which is quite powerful.

We use implicit multiplication (self-explanatory) and implicit application for function applications that are missing parentheses; both of these are transformations provided by the SymPy Parser. Remember the parser will still follow the mathematical order of operations (PEMDAS) when doing implicit application. The parser can handle additional cases as well, such as function exponentiation. Check the handy examples at the documentation link above.
## get the transformations we need (imported above) and place into a tuple that is required for the parser
transformations = standard_transformations + (implicit_multiplication, implicit_application, )

## parse the text by applying implicit multiplication and implicit (math function) application
expr = parse_expr(text, transformations=transformations)
expr
a_0 + a_1*log(Q) + a_2*log(Q**2) + a_3*sin(2*pi*t) + a_4*cos(2*pi*t) + a_5*t + a_6*t**2
print(sympy.latex(expr))
a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2}

SymPly amazing!!
$a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2} $
## global variables for the function
symbol_map = {
    '^': '**',
    'Ln': 'log ',
    'Sin': 'sin ',
    'Cos': 'cos ',
    'dtime': 't'
}
transformations = standard_transformations + (implicit_multiplication, implicit_application, )

## the function
def translate(bad_text):
    """My custom string-to-LaTeX-ready SymPy expression translation function

    Arguments:
        bad_text (str): text that is in some bad format that requires string
            manipulation, including custom string modifications to math
            functions, symbols, and operators defined by the global symbol_map
            dictionary (for substitutions) and the regexs compiled herein.
            More advanced manipulations provided by SymPy are defined by the
            global variable `transformations`, which is an input to the
            SymPy parser.

    Returns:
        expr (sympy expression): A SymPy expression created by the SymPy
            expression parser after first doing custom string modifications
            to math functions, symbols, and operators
    """
    to_symbols = re.compile('|'.join(re.escape(key) for key in symbol_map.keys()))
    bad_text = to_symbols.sub(lambda x: symbol_map[x.group()], bad_text)
    bad_text = re.sub('\n', '', bad_text)
    text = re.sub(r"\s+a(\d)", r"a_\1", bad_text)
    expr = parse_expr(text, transformations=transformations)
    return expr
## very handy, now we just have to convert to TeX and print
print(sympy.latex(translate(text)))
a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2} What about the original text ? It was an equation with a left-hand-side:¶
Parse both the LHS and RHS separately and combine with SymPy's Eq method
text = """Ln(Y) = a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)+ a5 dtime + a6 dtime^2"""# split on the equal signt1 = text.split('=')[0] t2 = text.split('=')[1]
## Use sympy.Eq(LHS, RHS)
LHS = translate(t1)
RHS = translate(t2)
print(sympy.latex(sympy.Eq(LHS, RHS)))
\log{\left (Y \right )} = a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2}
$\log{\left (Y \right )} = a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2} $
SymPly fantastic!!!
## extract SymPy symbols from both sides of eqn
LHS_symbols = [str(x) for x in LHS.atoms(sympy.symbol.Symbol)]
RHS_symbols = [str(x) for x in RHS.atoms(sympy.symbol.Symbol)]
LHS_symbols
['Y']
RHS_symbols
['a_0', 'Q', 'a_5', 'a_6', 'a_2', 'a_3', 'a_1', 'a_4', 't']
## remove Q and t from the RHS list because we do not want to plug values in for them
RHS_symbols.pop(RHS_symbols.index('Q'))
RHS_symbols.pop(RHS_symbols.index('t'));
## create a dictionary assigning each symbol to random variables
plug_in_dict = {k: np.random.randint(10) for k in RHS_symbols}
print(plug_in_dict)
{'a_6': 4, 'a_5': 7, 'a_4': 5, 'a_3': 0, 'a_2': 1, 'a_1': 0, 'a_0': 6}
## now plug in our values and let sympy simplify! Note, the variables we changed only appear on the RHS
RHS.subs(plug_in_dict)
4*t**2 + 7*t + log(Q**2) + 5*cos(2*pi*t) + 6
print(sympy.latex(sympy.Eq(LHS, RHS.subs(plug_in_dict))))
\log{\left (Y \right )} = 4 t^{2} + 7 t + \log{\left (Q^{2} \right )} + 5 \cos{\left (2 \pi t \right )} + 6
$\log{\left (Y \right )} = 4 t^{2} + 7 t + \log{\left (Q^{2} \right )} + 5 \cos{\left (2 \pi t \right )} + 6 $
Remarks¶
I hope this was useful to anyone trying to use Python to batch process strings into mathematical expressions and LaTeX. In my case I needed to process many of these types of strings that were output from a computer code that fits regression models to input data. As you can see, if you work with mathematical expressions of any kind and already know basic Python, SymPy is undoubtedly useful. If you liked this or have experimented with your own implementations of Python, regex, and/or SymPy to do cool and useful things please share in the comments below.
|
I cannot plot this function as I am getting errors. I am not sure what I am doing wrong.
Seems to work when $n = 1$, but when $n = 2$, I get a whole bunch of errors and nothing is plotted.
Here is the code:
$$ \text{Plot}\left[\frac{n e^{-t (\lambda +\mu )} I_1\left(2 t \sqrt{\lambda \mu }\right) \left(\int_0^t \frac{e^{t (-\lambda -\mu )} I_1\left(2 t \sqrt{\lambda \mu }\right)}{\sqrt{\rho } t} \, dt\right){}^{n-1}}{\sqrt{\rho } t}\text{/.}\, \left\{\lambda \to 0.99,\mu \to 1,\rho \to \frac{1}{2},n\to \{2\}\right\},\{t,0,10\},\text{PlotRange}\to \text{All}\right]$$
Here is copyable code:
Plot[(n*BesselI[1, 2*t*Sqrt[λ*μ]]*
     Integrate[(E^(t*(-λ - μ))*BesselI[1, 2*t*Sqrt[λ*μ]])/(t*Sqrt[ρ]),
       {t, 0, t}]^(-1 + n))/E^(t*(λ + μ))/(t*Sqrt[ρ]) /.
   {λ -> 0.99, μ -> 1, ρ -> 1/2, n -> {2}}, {t, 0., 10}, PlotRange -> All]
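Two things worth checking on the Mathematica side: the integration dummy variable is the same `t` as the plot variable, and `n -> {2}` substitutes a list (not a number) for `n`; either can cause trouble. Independently of that, here is a cross-check (Python/SciPy, not the original code; the variable names are mine) that the function itself is finite and plottable on $(0, 10]$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv  # modified Bessel function I_nu

lam, mu, rho, n = 0.99, 1.0, 0.5, 2  # note: plain numbers, not n -> {2}

def g(t):
    # e^(-t(lam+mu)) * I_1(2 t sqrt(lam mu)) / (sqrt(rho) t), with the
    # removable singularity at t = 0 filled in using I_1(x) ~ x/2.
    if t == 0.0:
        return np.sqrt(lam * mu) / np.sqrt(rho)
    return np.exp(-t * (lam + mu)) * iv(1, 2 * t * np.sqrt(lam * mu)) / (np.sqrt(rho) * t)

def f(t):
    inner, _ = quad(g, 0.0, t)  # a genuinely separate integration variable
    return n * g(t) * inner ** (n - 1)

ts = np.linspace(0.01, 10.0, 200)
ys = [f(t) for t in ts]
print(min(ys), max(ys))  # all values finite, so the curve is plottable
```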
|
The path-groupoid $\mathcal{P}_1(X)$ of a (smooth) topological space $X$ is a refinement of the fundamental groupoid $\Pi_1(X)$ whose morphisms are given by (piecewise smooth) paths in $X$ up to thin-homotopy (rather than full homotopy). Thin-homotopies are basically homotopies sweeping zero area. (Note that there's a natural map $\mathcal{P}_1(X)\to \Pi_1(X)$ by sending thin-homotopies to full-homotopies.)
When we have a topological stratified space $(X,S)$, we may consider a different refinement of the fundamental groupoid $\Pi_1(X)$ given by the
exit-path category $\text{Exit}(X,S)$. In particular, instead of considering the full set of paths, we look at the set of exit-paths: a path $\gamma:[0,1]\to X$ is an exit-path with respect to the stratification $S$ if, for each $t_1\leq t_2\in [0,1]$, the dimension of the stratum containing $\gamma(t_1)$ is less than or equal to the dimension of the stratum containing $\gamma(t_2)$. That is, exit-paths go up the strata. The morphisms of the exit-path category $\text{Exit}(X,S)$ are then given by exit-paths modulo (full) homotopy. (Note that $\Pi_1(X)\hookrightarrow \text{Exit}(X,S)$ with respect to the trivial stratification.)
There are two sheaf-theoretic characterizations of the categories $\Pi_1(X)$, $\text{Exit}(X,S)$ given by corresponding equivalences of categories, namely:
the category of representations of the fundamental groupoid $\Pi_1(X)$ is equivalent to the category of local systems (say, for $X$ locally simply connected); see, for instance, section 2.6 of Szamuely;

the category of representations of the exit-path category $\text{Exit}(X,S)$ is equivalent to the category of constructible sheaves on $(X,S)$ (for suitably nice stratifications).
Question
Is there an analogous sheaf-theoretic characterization of the category of representations of the path-groupoid $\mathcal{P}_1(X)$?
|
Let $S$ and $M$ be two finite-dimensional smooth manifolds with $\dim S\le \dim M$. Then it is known (e.g. Kriegl-Michor's book) that the set $\mathrm{Emb}(S, M)$ of all smooth embeddings $S\to M$ is an infinite-dimensional manifold; moreover, it is the total space of a smooth principal fiber bundle with structure group $\mathrm{Diff}(S)$ which has a base manifold $B(S,M)$ consisting of all submanifolds of $M$ of type $S$: $\require{AMScd}$ \begin{CD} \mathrm{Diff}(S) @>>> \mathrm{Emb}(S,M) \\ @. @VV{\pi}V \\ @. B(S,M) \end{CD}
Now, let's consider the diffeomorphism group $\mathrm{Diff}(M)$ of
the ambient space $M$ rather than $S$. Then it seems that it has a natural action on $\mathrm{Emb}(S,M)$ (and even on $B(S,M)$) by $$\Big( \phi, (i:S\to M) \Big) \mapsto ( \phi\circ i :S\to M )$$ where $\phi\in \mathrm{Diff}(M)$.
Question: It would be interesting to study the properties of this $\mathrm{Diff}(M)$ action on $\mathrm{Emb}(S,M)$, like what are the orbits, the orbit space, etc. Moreover, it seems that this action preserves the fibers, so it should induce an action on the base $B(S,M)$. Is this true? Anyway, I fail to find an answer in the literature. Does anyone know any reference?
|
Discussion area to prepare for the Final Exam

On problem 2. I am breaking the curve $ \gamma $ up into two piecewise curves $ \gamma_1 $ and $ \gamma_2 $ that meet where the curve $ \gamma $ crosses the negative real axis at the point $ z_0 $. I am then taking the principal branch of log as an analytic function to evaluate along the two curves, with the $ \operatorname{Log} z_0 $ values dropping out. My worry is that since $ z_0 $ sits on the branch cut, the function won't be analytic at one of the endpoints of the curves. Am I getting myself into trouble with this?--Rgilhamw 21:13, 8 December 2009 (UTC)
Robert, you're not in as big a trouble as you think. When you do the top half, the Principal Branch of Log agrees with a branch where you take the cut pointing straight down the imaginary axis. When you do the bottom half, it agrees with a branch where the cut goes straight up the imaginary axis. --Steve Bell
for anyone who had the same question, Prof. Bell covered this in class today. Having the point where the curves break on the branch cut will not work, so it needs to be chopped up into more piece-wise curves.--Rgilhamw 18:29, 9 December 2009 (UTC)
I was wondering what anyone else did for problem 9, or even how they started it. I'm not sure what is meant by f is a rational function... A little help for a jump start would be nice.--Achurley 17:53, 11 December 2009 (UTC)
$ f $ is a rational function means that $ f $ can be expressed as the quotient of two polynomials. For problem 9, express $ \sin(\theta) $ in terms of $ e^{i\theta} $ and use the substitution $ e^{i\theta}=z $. This expresses the integral in the complex plane along the unit circle in the counterclockwise direction.--Phebda 22:31, 11 December 2009 (UTC)
I had an idea for number 6 for finding the radius of convergence, and I wanted to see if anyone else agrees. Using the trick from exam two, I want to draw the biggest circle with the center at z = 0, since that is the center of the power series, such that there are no singularities enclosed within the circle. $ \frac{\tan(z)}{z} = \frac{\sin(z)}{z\cos(z)} $.
Consider the origin, z=0. The problem notes that the function is equal to 1 at z=0, so this is fine. Then we simply have to worry about the cosine, since the sine doesn't blow up anywhere. I ended up eventually just getting that the closest singularity to the origin was at $ \frac{\pi}{2} $. Thoughts? --Adbohn 23:57, 11 December 2009 (UTC)
Yes, I believe what you stated works. Another way to think about it is that the tan function goes to infinity as theta goes to $ \frac{\pi}{2} $. Since a power series converges absolutely and uniformly within its RoC, and it converges for every value on the real line up to $ \frac{\pi}{2} $, it must converge at all points in that open disc away from the boundary.--Rgilhamw 17:14, 12 December 2009 (UTC)
For problem 2, are we allowed to assume what the points of intersection between the curves and the axis are? If not, then how do we determine the integrals of the (3?) piecewise curves that we need to solve the problem? --Ysuo 13:47, 14 December 2009 (UTC)
Yu, just give those real numbers a name. You'll see that everything about them cancels out when you add up the pieces. --Steve Bell
|
“Unperformed measurements have no results.” —Asher Peres
With two looming paper deadlines, two rambunctious kids, an undergrad class, program committee work, faculty recruiting, and an imminent trip to Capitol Hill to answer congressional staffers’ questions about quantum computing (and for good measure, to give talks at UMD and Johns Hopkins), the only sensible thing to do is to spend my time writing a blog post.
So: a bunch of people asked for my reaction to the new
Nature Communications paper by Daniela Frauchiger and Renato Renner, provocatively titled “Quantum theory cannot consistently describe the use of itself.” Here’s the abstract:
Quantum theory provides an extremely accurate description of fundamental processes in physics. It thus seems likely that the theory is applicable beyond the, mostly microscopic, domain in which it has been tested experimentally. Here, we propose a Gedankenexperiment to investigate the question whether quantum theory can, in principle, have universal validity. The idea is that, if the answer was yes, it must be possible to employ quantum theory to model complex systems that include agents who are themselves using quantum theory. Analysing the experiment under this presumption, we find that one agent, upon observing a particular measurement outcome, must conclude that another agent has predicted the opposite outcome with certainty. The agents’ conclusions, although all derived within quantum theory, are thus inconsistent. This indicates that quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner.
I first encountered Frauchiger and Renner’s argument back in July, when Renner (who I’ve known for years, and who has many beautiful results in quantum information) presented it at a summer school in Boulder, CO where I was also lecturing. I was sufficiently interested (or annoyed?) that I pulled an all-nighter working through the argument, then discussed it at lunch with Renner as well as John Preskill. I enjoyed figuring out exactly where I get off Frauchiger and Renner’s train—since I
do get off their train. While I found their paper thought-provoking, I reject the contention that there’s any new problem with QM’s logical consistency: for reasons I’ll explain, I think there’s only the same quantum weirdness that (to put it mildly) we’ve known about for quite some time.
In more detail, the paper makes a big deal about how the new argument rests on just three assumptions (briefly, QM works, measurements have definite outcomes, and the “transitivity of knowledge”); and how if you reject the argument, then you must reject at least one of the three assumptions; and how different interpretations (Copenhagen, Many-Worlds, Bohmian mechanics, etc.) make different choices about what to reject.
But I reject an assumption that Frauchiger and Renner never formalize. That assumption is, basically: “it makes sense to chain together statements that involve superposed agents measuring each other’s brains in different incompatible bases, as if the statements still referred to a world where these measurements weren’t being done.” I say: in QM, even statements that look “certain” in isolation might really mean something like “
if measurement X is performed, then Y will certainly be a property of the outcome.” The trouble arises when we have multiple such statements, involving different measurements X₁, X₂, …, and (let’s say) performing X₁ destroys the original situation in which we were talking about performing X₂.
But I’m getting ahead of myself. The first thing to understand about Frauchiger and Renner’s argument is that, as they acknowledge, it’s not entirely new. As Preskill helped me realize, the argument can be understood as simply the “Wigner’s-friendification” of Hardy’s Paradox. In other words, the new paradox is exactly what you get if you take Hardy’s paradox from 1992, and promote its entangled qubits to the status of conscious observers who are in superpositions over thinking different thoughts. Having talked to Renner about it, I don’t think he fully endorses the preceding statement. But since
I fully endorse it, let me explain the two ingredients that I think are getting combined here—starting with Hardy’s paradox, which I confess I didn’t know (despite knowing Lucien Hardy himself!) before the Frauchiger-Renner paper forced me to learn it.
Hardy’s paradox involves the two-qubit entangled state
$$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}}.$$
And it involves two agents, Alice and Bob, who measure the left and right qubits respectively, both in the {|+〉,|-〉} basis. Using the Born rule, we can straightforwardly calculate the probability that Alice and Bob both see the outcome |-〉 as 1/12.
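As a sanity check on that Born-rule number, here is a minimal computation (NumPy, not from the post) of the amplitude ⟨−−|ψ⟩:

```python
import numpy as np

# Computational basis vectors for one qubit, and the |-> state
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
minus = (ket0 - ket1) / np.sqrt(2)

# |psi> = (|00> + |01> + |10>) / sqrt(3)
psi = (np.kron(ket0, ket0) + np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(3)

# Born rule: Pr(both Alice and Bob see |->) = |<- -|psi>|^2
amp = np.kron(minus, minus) @ psi
print(abs(amp) ** 2)  # 0.0833... = 1/12
```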
So what’s the paradox? Well, let me now “prove” to you that Alice and Bob can
never both get |-〉. Looking at |ψ〉, we see that conditioned on Alice’s qubit being in the state |0〉, Bob’s qubit is in the state |+〉, so Bob can never see |-〉. And conversely, conditioned on Bob’s qubit being in the state |0〉, Alice’s qubit is in the state |+〉, so Alice can never see |-〉. OK, but since |ψ〉 has no |11〉 component, at least one of the two qubits must be in the state |0〉, so therefore at least one of Alice and Bob must see |+〉!
When it’s spelled out so plainly, the error is apparent. Namely, what do we even
mean by a phrase like “conditioned on Bob’s qubit being in the state |0〉,” unless Bob actually measured his qubit in the {|0〉,|1〉} basis? But if Bob measured his qubit in the {|0〉,|1〉} basis, then we’d be talking about a different, counterfactual experiment. In the actual experiment, Bob measures his qubit only in the {|+〉,|-〉} basis, and Alice does likewise. As Asher Peres put it, “unperformed measurements have no results.”
Anyway, as I said, if you strip away the words and look only at the actual setup, it seems to me that Frauchiger and Renner’s contribution is basically to combine Hardy’s paradox with the earlier Wigner’s friend paradox. They thereby create something that doesn’t involve counterfactuals quite as obviously as Hardy’s paradox does, and so requires a new discussion.
But to back up: what
is Wigner’s friend? Well, it’s basically just Schrödinger’s cat, except that now it’s no longer a cat being maintained in coherent superposition but a person, and we’re emphatic in demanding that this person be treated as a quantum-mechanical observer. Thus, suppose Wigner entangles his friend with a qubit, like so:
$$ \left|\psi\right\rangle = \frac{\left|0\right\rangle \left|FriendSeeing0\right\rangle + \left|1\right\rangle \left|FriendSeeing1\right\rangle}{\sqrt{2}}. $$
From the friend’s perspective, the qubit has been measured and has collapsed to either |0〉 or |1〉. From Wigner’s perspective, no such thing has happened—there’s only been unitary evolution—and in principle, Wigner could even confirm that by measuring |ψ〉 in a basis that included |ψ〉 as one of the basis vectors. But how can they both be right?
Many-Worlders will yawn at this question, since for them,
of course “the collapse of the wavefunction” is just an illusion created by the branching worlds, and with sufficiently advanced technology, one observer might experience the illusion even while a nearby observer doesn’t. Ironically, the neo-Copenhagenists / Quantum Bayesians / whatever they now call themselves, though they consider themselves diametrically opposed to the Many-Worlders (and vice versa), will also yawn at the question, since their whole philosophy is about how physics is observer-relative and it’s sinful even to think about an objective, God-given “quantum state of the universe.” If, on the other hand, you believed both that collapse is an objective physical event, and human mental states can be superposed just like anything else in the physical universe,
then Wigner’s thought experiment probably
should rock your world.
OK, but how do we Wigner’s-friendify Hardy’s paradox? Simple: in the state
$$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}},$$
we “promote” Alice’s and Bob’s entangled qubits to two conscious observers, call them Charlie and Diane respectively, who can think two different thoughts that we represent by the states |0〉 and |1〉. Using far-future technology, Charlie and Diane have been not merely placed into coherent superpositions over mental states but also entangled with each other.
Then, as before, Alice will measure Charlie’s brain in the {|+〉,|-〉} basis, and Bob will measure Diane’s brain in the {|+〉,|-〉} basis. Since the whole setup is mathematically identical to that of Hardy’s paradox, the probability that Alice and Bob both get the outcome |-〉 is again 1/12.
Ah, but now we can reason as follows:
Whenever Alice gets the outcome |-〉, she knows that Diane must be in the |1〉 state (since, if Diane were in the |0〉 state, then Alice would’ve certainly seen |+〉). Whenever Diane is in the |1〉 state, she knows that Charlie must be in the |0〉 state (since there’s no |11〉 component). Whenever Charlie is in the |0〉 state, she knows that Diane is in the |+〉 state, and hence Bob can’t possibly see the outcome |-〉 when he measures Diane’s brain in the {|+〉,|-〉} basis.
So to summarize, Alice knows that Diane knows that Charlie knows that Bob can’t possibly see the outcome |-〉. By the “transitivity of knowledge,” this implies that Alice herself knows that Bob can’t possibly see |-〉. And yet, as we pointed out before, quantum mechanics predicts that Bob
can see |-〉, even when Alice has also seen |-〉. And Alice and Bob could even do the experiment, and compare notes, and see that their “certain knowledge” was false. Ergo, “quantum theory can’t consistently describe its own use”!
You might wonder: compared to Hardy’s original paradox, what have we gained by waving a magic wand over our two entangled qubits, and calling them “conscious observers”? Frauchiger and Renner’s central claim is that, by this gambit, they’ve gotten rid of the illegal counterfactual reasoning that we needed to reach a contradiction in our analysis of Hardy’s paradox. After all, they say, none of the steps in
their argument involve any measurements that aren’t actually performed! But clearly, even if no one literally measures Charlie in the {|0〉,|1〉} basis, he’s still there, thinking either the thought corresponding to |0〉 or the thought corresponding to |1〉. And likewise Diane. Just as much as Alice and Bob, Charlie and Diane both exist even if no one measures them, and they can reason about what they know and what they know that others know. So then we’re free to chain together the “certainties” of Alice, Bob, Charlie, and Diane in order to produce our contradiction.
As I already indicated, I reject this line of reasoning. Specifically, I get off the train at what I called step 3 above. Why? Because the inference from Charlie being in the |0〉 state to Bob seeing the outcome |+〉 holds for the
original state |ψ〉, but in my view it ceases to hold once we know that Alice is going to measure Charlie in the {|+〉,|-〉} basis, which would involve a drastic unitary transformation (specifically, a “Hadamard”) on the quantum state of Charlie’s brain. I.e., I don’t accept that we can take knowledge inferences that would hold in a hypothetical world where |ψ〉 remained unmeasured, with a particular “branching structure” (as a Many-Worlder might put it), and extend them to the situation where Alice performs a rather violent measurement on |ψ〉 that changes the branching structure by scrambling Charlie’s brain.
In quantum mechanics, measure or measure not: there is no
if you hadn’t measured.

Unrelated Announcement: My awesome former PhD student Michael Forbes, who’s now on the faculty at the University of Illinois Urbana-Champaign, asked me to advertise that the UIUC CS department is hiring this year in all areas, emphatically including quantum computing. And, well, I guess my desire to do Michael a solid outweighed my fear of being tried for treason by my own department’s recruiting committee…

Another Unrelated Announcement: As of Sept. 25, 2018, it is the official editorial stance of Shtetl-Optimized that the Riemann Hypothesis and the abc conjecture both remain open problems.
|
I would like to know whether I have correctly proved the following statement and have correctly extrapolated out a general situation.
We're asked two things:
a) Prove there is no rational number solution to $x^2-3x+1=0$
b) The problem (a) suggests a more general problem. State and outline a proof of this.
a) Proof: We assume, to the contrary, that there is some rational number solution $x=\frac{a}{b}$, $(a, b\in \mathbb{Z}, \frac{a}{b}\in\mathbb{Q})$ to $x^2-3x+1=0$.
$$\text{Using the quadratic formula we solve for x.} \\D=\sqrt{(-3)^2-4\times1\times1}=\sqrt{9-4}=\sqrt{5}\\x=\frac{3\pm\sqrt{5}}{2}\\\text{Then, according to our assumption, } x=\frac{a}{b} \\\text{Since } \frac{3\pm\sqrt{5}}{2} \notin\mathbb{Q} \text{ and } x\in\mathbb{Q} \text{, it follows that } \frac{a}{b}=\frac{3\pm\sqrt{5}}{2} \text{ is a contradiction. } \blacksquare$$
b) If $D\in \mathbb{R}-\mathbb{Q}$ , then the quadratic equation $ax^2+bx+c=0$ does not have a rational solution. We proceed by a direct proof.
$$\text{Proof: We assume, } D \in \mathbb{R-Q}. \\ D=\sqrt{(b)^2-4ac} \text{ where } a, b, c \in \mathbb{Z}. \\\text{Then, } x=\frac{-b\pm D}{2a}.\\ \text{ Given that } D \text{ is irrational it follows that } x \text{ is irrational.} \\\therefore \text{ if } D \in \mathbb{R} - \mathbb{Q} \text{, then } ax^2+bx+c=0 \text{ does not have a rational solution. } \blacksquare$$
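As a quick machine check of part (a) (a SymPy sketch, not part of the book's proof):

```python
import sympy

x = sympy.symbols('x')
roots = sympy.solve(x**2 - 3*x + 1, x)   # the roots (3 ± sqrt(5)) / 2
print([r.is_rational for r in roots])    # [False, False]
```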
P.S.
This is not homework. Answers are in the back of my book. I am actually trying to improve on my proofs and become more logical.
|
I would like to know if the Hirzebruch-Riemann-Roch theorem exists for bundles over Riemann surfaces with a boundary. I am asking this because the Hirzebruch-Riemann-Roch theorem is used in the following paper (https://arxiv.org/pdf/0707.2786v2.pdf) on page 10 to compute the index of the following differential operators defined over fiber bundles on a Riemann surface $\Sigma$, \begin{align} (\nabla^A)^{0,1} &: \Omega^0 (\Sigma ; \mathfrak{g}_P) \longrightarrow \Omega^{0,1} (\Sigma ; \mathfrak{g}_P) \label{2.7} \\ (\phi^{\ast}\nabla^A)^{0,1} &: \Omega^0 (\Sigma ; \phi^\ast \ker \textrm{d} \pi_E) \longrightarrow \Omega^{0,1} (\Sigma ; \phi^\ast \ker \textrm{d} \pi_E) \ , \end{align} where the definitions of the fiber bundles $\mathfrak{g}_P$ and $\phi^\ast \ker \textrm{d} \pi_E$ are given on page 6 of the aforementioned paper.
The indices for these operators are said to be easily obtained from the Hirzebruch-Riemann-Roch theorem, and the result for a general compact $\Sigma$ with no boundary is \begin{align} \rm{index} (\nabla^A)^{0,1} &= c_1 (\mathfrak{g}_P \rightarrow \Sigma) + ({\rm dim}G)(1-g) \label{2.8} \\ \rm{index} (\phi^{\ast}\nabla^A)^{0,1} &= c_1 (\phi^\ast \ker \textrm{d} \pi_E \rightarrow \Sigma) + ({\rm dim}_\mathbb{C} X)(1-g) \ . \end{align}
I would like to know the generalizations of these formulae for general $\Sigma$
with boundary.
I suspect, based on the ordinary Riemann-Roch theorem, that the answer is replacing $(1-g)$ in the expressions above to $(1-g-\frac{b}{2})$, where $b$ is the number of boundaries. This is because the Euler characteristic for the Riemann surface is $\chi =2-2g-b$. Is this correct, and why? References would be highly appreciated.
|
Let $A$ be a real $n \times n$ matrix. Denote by $\operatorname{cof} A$ The cofactor matrix of $A$. By definition, $A (\operatorname{cof} A)^T=\det A \cdot I$.
Thus, it is immediate that $A \in \operatorname{SO}_n$ if and only if $$ (**) \operatorname{cof} A =A,\det A =1$$
However, if $n \neq 2$ the condition on the determinant is superfluous:
$ \operatorname{cof} A =A \Rightarrow AA^T=\det A \cdot I \Rightarrow (\det A)^2=(\det A)^n \Rightarrow \det A \in \{0,1,-1\}$. The case $\det A = 0$ forces $AA^T=0$, hence $A=0$; setting this trivial solution aside, since $\det A \cdot I=AA^T$ is positive semidefinite, $\det A \ge 0$, so $\det A = 1$ and $A \in \operatorname{SO}_n$.
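A quick numerical illustration of the $n=3$ case (NumPy; `cof` here is a helper I define via $\operatorname{cof} A = \det A \cdot A^{-T}$, valid for invertible $A$):

```python
import numpy as np

def cof(A):
    # For invertible A, A (cof A)^T = det(A) I  gives  cof(A) = det(A) * inv(A).T
    return np.linalg.det(A) * np.linalg.inv(A).T

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # a rotation in SO(3)
F = np.diag([1.0, 1.0, -1.0])                         # a reflection, det = -1

print(np.allclose(cof(R), R))   # True:  cof A = A holds on SO(3)
print(np.allclose(cof(F), -F))  # True:  for the reflection, cof F = -F != F
```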
For the case $n=2$, an easy calculation shows $\operatorname{cof} A=A$ if and only if $A$ is a scaled rotation.

Question:
While the above derivations are easy to do algebraically, I would like to find a more
geometric explanation of these results. I think this amounts to obtaining a better geometric interpretation of the cofactor matrix. (I know it measures, in some sense, the volumes of $(n-1)$-dimensional parallelepipeds, see here).
In particular,
Is there any geometric intuition behind the condition $A \in\operatorname{SO}_n \iff (**) \operatorname{cof} A =A,\det A =1$? Is there any explanation for why dimension $2$ is special?
Note: The condition $(**)$ for characterizing matrices in $\operatorname{SO}_n$ is not a mere game. In some contexts this is the only way to show some transformations are indeed isometries. (For instance in proofs of Reshetnyak’s rigidity theorem).
|
Central limit theorems are a set of weak-convergence results in probability theory. Intuitively, they all express the fact that any sum of many independent identically distributed random variables is approximately normally distributed. These results explain the ubiquity of the normal distribution.
The most important and famous result is simply called
The Central Limit Theorem; it is concerned with independent variables with identical distribution whose expected value and variance are finite.
Several generalizations exist which do not require identical distribution but incorporate some condition which guarantees that none of the variables exert a much larger influence than the others. Two such conditions are the Lindeberg condition and the Lyapunov condition. Other generalizations even allow some "weak" dependence of the random variables.
Let $X_1, X_2, X_3, \dots$ be a sequence of independent random variables defined on the same probability space, all sharing the same distribution with finite expected value $\mu$ and finite variance $\sigma^2$.

Consider the sum $S_n = X_1 + \dots + X_n$. Then the expected value of $S_n$ is $n\mu$ and its standard deviation is $\sigma\sqrt{n}$; informally speaking, the distribution of $S_n$ approaches the normal distribution $N(n\mu, n\sigma^2)$ as $n$ grows.

In order to clarify the word "approaches" in the last sentence, we normalize $S_n$ by setting

$$Z_n = \frac{S_n - n\mu}{\sigma\sqrt{n}}.$$

Then the distribution of $Z_n$ converges towards the standard normal distribution $N(0,1)$ as $n$ approaches $\infty$.

An equivalent formulation of this limit theorem starts with $A_n = (X_1 + \dots + X_n)/n$, which can be interpreted as the mean of a random sample of size $n$. The expected value of $A_n$ is $\mu$ and its standard deviation is $\sigma/\sqrt{n}$; normalizing $A_n$ in the same way yields the same variable $Z_n$ as above.
Note the following "paradox": by adding many independent identically distributed
positive variables, one gets approximately a normal distribution. But for every normally distributed variable, the probability that it is negative is non-zero! How is it possible to get negative numbers from adding only positives? The key lies in the word "approximately". The sum of positive variables is of course always positive, but it is very well approximated by a normal variable (which indeed has a very tiny probability of being negative).
More precisely: the fact that, in this case, for every $n$ there is a $z$ such that $\Pr(Z_n \le z) = 0$ (namely $z = -\sqrt{n}\,\mu/\sigma$, since $S_n \ge 0$) does not contradict the convergence, because this threshold tends to $-\infty$ as $n$ grows, while for each fixed $z$ the probability $\Pr(Z_n \le z)$ converges to the positive value $\Phi(z)$.
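This "approximately normal vs. truly positive" point is easy to see numerically. A minimal simulation (Python/NumPy, with Exponential(1) summands as an illustrative positive distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 5000
mu = sigma = 1.0  # Exponential(1) is strictly positive with mean 1 and std 1

S = rng.exponential(scale=1.0, size=(reps, n)).sum(axis=1)
Z = (S - n * mu) / (sigma * np.sqrt(n))

print(Z.mean(), Z.std())       # close to 0 and 1, as the CLT predicts
print((Z < 0).mean())          # close to 1/2, even though every S_n is positive
print(Z.min() >= -np.sqrt(n))  # True: Z_n can never go below -sqrt(n) mu/sigma
```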
Let $X_n$ be a sequence of independent random variables defined on the same probability space. Assume that $X_n$ has finite expected value $\mu_n$ and finite standard deviation $\sigma_n$, and define $s_n^2 = \sigma_1^2 + \dots + \sigma_n^2$.

Assume that the third central moments

$$r_n^3 = \mathrm{E}\left(\left|X_n - \mu_n\right|^3\right)$$

are finite for every $n$, and that

$$\lim_{n \to \infty} \frac{\left(\sum_{i=1}^{n} r_i^3\right)^{1/3}}{s_n} = 0.$$

(This is the Lyapunov condition.)

We again consider the sum $S_n = X_1 + \dots + X_n$; its expected value is $m_n = \sum_{i=1}^n \mu_i$ and its standard deviation is $s_n$. If we normalize $S_n$ by setting

$$Z_n = \frac{S_n - m_n}{s_n},$$

then the distribution of $Z_n$ converges towards the standard normal distribution $N(0,1)$ as above.
In the same setting and with the same notation as above, we can replace the Lyapunov condition with the following weaker one: for every $\epsilon > 0$

$$\lim_{n \to \infty} \sum_{i = 1}^{n} \mathrm{E}\left( \frac{(X_i - \mu_i)^2}{s_n^2} : \left| X_i - \mu_i \right| > \epsilon s_n \right) = 0$$

(where $\mathrm{E}(U : V > c)$ denotes the conditional expected value: the expected value of $U$ given that $V > c$). Then the distribution of the normalized sum $Z_n$ converges towards the standard normal distribution $N(0,1)$.
There are some theorems which treat the case of sums of non-independent variables, for instance the
m-dependent central limit theorem, the martingale central limit theorem and the central limit theorem for mixing processes.
|
Given $X_1,\ldots,X_n$, where $X_i\sim U(-\theta,\theta)$, what is the MLE for $\theta$? Apparently the answer is $\max\{|X_1|,\dots,|X_n|\}$ but I can't figure out why.
The density function is $$f(x,\theta) = \begin{cases} \frac{1}{2\theta}, & x\in[-\theta,\theta] \\ 0, & \text{else} \end{cases}$$
I get a likelihood function that may decrease or increase for $\theta<0$, depending on the parity of $n$. I'm not sure if that's the way to solve this.
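For intuition (not a substitute for the analytic argument): the likelihood is $(2\theta)^{-n}$ when $\theta \ge \max_i |X_i|$ and $0$ otherwise, and $(2\theta)^{-n}$ is decreasing in $\theta$, so the maximum sits at the smallest feasible $\theta$. A numerical sketch with simulated data (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 3.0
x = rng.uniform(-theta_true, theta_true, size=200)

def log_likelihood(theta, x):
    # density 1/(2 theta) on [-theta, theta]: zero likelihood unless theta >= max|x_i|
    if theta < np.max(np.abs(x)):
        return -np.inf
    return -len(x) * np.log(2 * theta)

thetas = np.linspace(0.1, 6.0, 2000)
ll = np.array([log_likelihood(t, x) for t in thetas])
theta_hat = thetas[np.argmax(ll)]
print(theta_hat, np.max(np.abs(x)))  # the grid maximizer sits right at max|x_i|
```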
|
Caesium has a larger size, and the effective nuclear charge that the valence electron experiences will be far less compared to that of lithium's, right? But lithium is still considered the strongest reducing agent among all the alkali metals, and this is evidenced by its large and negative reduction potential. Why is this so?
The trend in the reducing power of the alkali metals is not a simple linear trend, so it is a little disingenuous if I were to solely talk about $\ce{Li}$ and $\ce{Cs}$, implying that data for the metals in the middle can be interpolated.
$$\begin{array}{cc}\hline\ce{M} & E^\circ(\ce{M+}/\ce{M}) \\\hline \ce{Li} & -3.045 \\\ce{Na} & -2.714 \\\ce{K} & -2.925 \\\ce{Rb} & -2.925 \\\ce{Cs} & -2.923 \\\hline\end{array}$$
Source: Chemistry of the Elements 2nd ed., Greenwood & Earnshaw, p 75
However, a full description of the middle three metals is beyond the scope of this question. I just thought it was worth pointing out that the trend is not really straightforward.
The $\ce{M+}/\ce{M}$ standard reduction potential is related to $\Delta_\mathrm{r}G^\circ$ for the reaction
$$\ce{M(s) -> M+(aq) + e-}$$
by the equation
$$E^\circ = \frac{\Delta_\mathrm{r}G^\circ + K}{F}$$
where $K$ is the absolute standard Gibbs free energy for the reaction
$$\ce{H+ + e- -> 1/2 H2}$$
and is a constant (which means we do not need to care about its actual value). Assuming that $\Delta_\mathrm{r} S^\circ$ is approximately independent of the identity of the metal $\ce{M}$, then the variations in $\Delta_\mathrm{r}H^\circ$ will determine the variations in $\Delta_\mathrm{r}G^\circ$ and hence $E^\circ$. We can construct an energy cycle to assess how $\Delta_\mathrm{r}H^\circ$ will vary with the identity of $\ce{M}$.
The standard state symbol will be dropped from now on.
$$\require{AMScd} \begin{CD} \ce{M (s)} @>{\large \Delta_\mathrm{r}H}>> \ce{M+(aq) + e-} \\ @V{\large\Delta_\mathrm{atom}H(\ce{M})}VV @AA{\large\Delta_\mathrm{hyd}H(\ce{M+})}A \\ \ce{M (g)} @>>{\large IE_1(\ce{M})}> \ce{M+ (g) + e-} \end{CD}$$
We can see, as described in Prajjawal's answer, that there are three factors that contribute to $\Delta_\mathrm{r}H$:
$$\Delta_\mathrm{r}H = \Delta_\mathrm{atom}H + IE_1 + \Delta_\mathrm{hyd}H$$
(the atomisation enthalpy being the same as the sublimation enthalpy). You are right in saying that there is a decrease in $IE_1$ going from $\ce{Li}$ to $\ce{Cs}$.
If taken alone, this would mean that $E(\ce{M+}/\ce{M})$ would decrease going from $\ce{Li}$ to $\ce{Cs}$, which would mean that $\ce{Cs}$ is a better reducing agent than $\ce{Li}$.
However, looking at the very first table, this is clearly not true. So, some
numbers will be needed. All values are in $\mathrm{kJ~mol^{-1}}$.
$$\begin{array}{ccccc}\hline\ce{M} & \Delta_\mathrm{atom}H & IE_1 & \Delta_\mathrm{hyd}H & \text{Sum} \\\hline\ce{Li} & 161 & 520 & \mathbf{-520} & 161 \\\ce{Cs} & 79 & 376 & \mathbf{-264} & 191 \\\hline\end{array}$$
Source: Inorganic Chemistry 6th ed., Shriver et al., p 160
This is, in fact, an extremely crude analysis. However, it hopefully does serve to show in a more quantitative way why $E(\ce{Cs+}/\ce{Cs}) > E(\ce{Li+}/\ce{Li})$: it's because of the extremely exothermic hydration enthalpy of the small $\ce{Li+}$ ion.
Just as a comparison, the ionic radii of $\ce{Li+}$ and $\ce{Cs+}$ ($\mathrm{CN} = 6$) are $76$ and $167~\mathrm{pm}$ respectively (Greenwood & Earnshaw, p 75).
To decide which is the best reducing agent, we cannot consider only which metal has the lower ionisation energy; the overall process follows 3 steps:

1. Metal (solid) → Metal (gaseous state): sublimation energy
2. Metal (gaseous state) → M+: ionisation energy
3. M+ → M+ (aqueous state): hydration energy
Lithium, having more charge density, has higher sublimation and ionisation energies than caesium, but its hydration energy is released in such a big amount that it compensates for the S.E. and I.E., and caesium's hydration energy is less than lithium's. That's why lithium is a good reducing agent.
Well there might be more reasons than these two:
Lithium has a more negative reduction potential. If you also look at the electronegativities of just Lithium and Cesium, then you would notice that the shielding effect is more prevalent in Cesium, thereby reducing the electronegativity and affecting the reduction potential. So Lithium, compared just to Cesium, has a higher electronegativity.
I think these are the two main reasons, please correct me If I am wrong.
|
M4: Geometry - Material for the year 2019-2020
15 lectures
The course is an introduction to some elementary ideas in the geometry of Euclidean space through vectors. One focus of the course is the use of co-ordinates and an appreciation of the invariance of geometry under an orthogonal change of variable. This leads into a deeper study of orthogonal matrices, of rotating frames, and into related co-ordinate systems.
Students will learn how to encode a geometric scenario into vector equations and meet the vector algebra needed to manipulate such equations. Students will meet the benefits of choosing sensible co-ordinate systems and appreciate what geometry is invariant of such choices.
Euclidean geometry in two and three dimensions approached by vectors and coordinates. Vector addition and scalar multiplication. The scalar product, equations of planes, lines and circles. [3]
The vector product in three dimensions. Use of $\mathbf{a}, \mathbf{b}, \mathbf{a} \land \mathbf{b}$ as a basis. $\mathbf{r} \land \mathbf{a} = \mathbf{b}$ represents a line. Scalar triple products and vector triple products, vector algebra. [2]
Conics (normal form only), focus and directrix. Showing the locus $Ax^2 + Bxy + Cy^2 = 1$ can be put in normal form via a rotation matrix. Orthogonal matrices. $2\times 2$ orthogonal matrices and the maps they represent. Orthonormal
bases in $\mathbb{R}^3$. Orthogonal change of variable; $A\mathbf{u} \cdot A\mathbf{v} = \mathbf{u \cdot v}$ and $A(\mathbf{u} \land \mathbf{v}) = \pm A\mathbf{u} \land A \mathbf{v}$. Statement that a real symmetric matrix can be orthogonally diagonalized. Simple examples identifying conics not in normal form. [3]
$3 \times 3$ orthogonal matrices; $SO(3)$ and rotations; conditions for being a reflection. Isometries of $\mathbb{R}^3$. [2]
Rotating frames in $2$ and $3$ dimensions. Angular velocity. $\mathbf{v} = \omega \land \mathbf{r}$. [1]
Parametrised surfaces, including spheres, cones. Examples of coordinate systems including parabolic, spherical and cylindrical polars. Calculating normal as $\mathbf{r}_u \land \mathbf{r}_v$. Surface area. [4]
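As an illustration of the orthogonal-invariance results listed in the syllabus (not part of the official course materials), here is a quick numerical check with NumPy; the orthogonal matrix is built from a QR decomposition of a random matrix, an assumption made purely for demonstration:

```python
import numpy as np

# For an orthogonal A: dot products are preserved, and cross products
# pick up a sign det(A) = +1 (rotation) or -1 (reflection).
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix

u, v = rng.standard_normal(3), rng.standard_normal(3)

assert np.isclose((A @ u) @ (A @ v), u @ v)        # A u . A v = u . v
sign = np.linalg.det(A)                            # ±1
assert np.allclose(A @ np.cross(u, v), sign * np.cross(A @ u, A @ v))
print("orthogonal invariance verified, det(A) =", round(sign))
```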
1) J. Roe,
Elementary Geometry, Oxford Science Publications (1992), Chapters 1, 2.2, 3.4, 4, 5.3, 7.1--7.3, 8.1--8.3, 12.1.
2) R. Earl.
Towards Higher Mathematics: A Companion, Cambridge University Press (2017) Chapters 3.1, 3.2, 3.7, 3.10, 4.2, 4.3
|
Prove the following.
Let $\{A_n \}_{n \in \mathbb{N}}$ and $\{ B_n\}_{n \in \mathbb{N}}$ be sequences of sets with $$ A_1 \subset A_2 \subset A_3 \subset \dots \subset A_n \subset \dots $$ $$ B_1 \subset B_2 \subset B_3 \subset \dots \subset B_n \subset \dots $$ Then $\left( \bigcup_{n=1}^{\infty} A_n\right ) \cap \left ( \bigcup_{n=1}^{\infty} B_n\right ) = \bigcup_{n=1}^{\infty}\left(A_n \cap B_n\right)$.
Similarly, prove that if $$ A_1 \supset A_2 \supset A_3 \supset \dots \supset A_n \supset \dots $$ $$ B_1 \supset B_2 \supset B_3 \supset \dots \supset B_n \supset \dots $$ then $\left( \bigcap_{n=1}^{\infty} A_n\right ) \cup \left ( \bigcap_{n=1}^{\infty} B_n\right ) = \bigcap_{n=1}^{\infty}\left(A_n \cup B_n\right)$.
Can anyone help with these two exercises? I've been stuck on them for a while but haven't been able to make any significant progress in my notebook.
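Not a proof, of course, but a finite sanity check in Python can make the first identity concrete before attempting the double-inclusion argument (the particular nested sets here are my own illustrative choice):

```python
# Finite sanity check of (U A_n) ∩ (U B_n) = U (A_n ∩ B_n) for nested sets.
A = [set(range(n + 1)) for n in range(5)]             # {0} ⊂ {0,1} ⊂ ...
B = [set(range(0, 2 * n + 1, 2)) for n in range(5)]   # {0} ⊂ {0,2} ⊂ ... (evens)

union_A = set().union(*A)
union_B = set().union(*B)

lhs = union_A & union_B
rhs = set().union(*(a & b for a, b in zip(A, B)))
assert lhs == rhs
print(lhs)  # {0, 2, 4}
```

The increasing hypothesis is what makes the pairwise intersections $A_n \cap B_n$ enough: any element of $A_m \cap B_k$ already lies in $A_{\max(m,k)} \cap B_{\max(m,k)}$.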
|
I've come across this problem and managed to get the right answer, but there remains a mystery that I wasn't quite able to solve: the minus sign (or a lack thereof)! Here's the problem and my solution:
A man walks across a bridge and when he is $40 \%$ of the way across, he spots a train coming toward him at $40$ mph. He (through superhuman calculational ability or just massive pessimism) knows that if he started sprinting towards the train, he would be run over exactly at the end of the bridge. He also knows that if he were to turn around and start sprinting away, he would be run over exactly at the beginning of the bridge. The question is to calculate how fast he can sprint.
So I pick a 'coordinate system' such that the origin is the beginning of the bridge, the end of the bridge is at a distance $d$ from the origin. At time $t=0$, the man's position is $\frac{2}{5}d$ and the train is at $d+l$. Denote:
$$s^{\pm}_{1}(t) \equiv \frac{2}{5}d \pm vt$$ $$s_2 (t) = d+l -40t$$
These are the positions of the man and the train respectively as functions of time. The plus/minus in the man's position is because he can either run towards or away from the train. With this notation, the conditions we are given are:
$$s^+_1(t_0)=s_2(t_0)=d$$ $$s^-_1(t_1)=s_2(t_1)=0$$
From the first condition, we get a system of equation which after eliminating the time, gives $$v=24 \frac{d}{l}.$$
From the second condition, we get $$v=16 \frac{d}{d+l}.$$
This implies $$v=-8.$$
The correct answer is $v=8$mph, so the solution is more or less correct. But the way I've set up the notation, shouldn't I get a positive answer? Where's the mistake?
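For what it's worth, SymPy reproduces the same signed result when the four conditions are solved simultaneously (a sketch; the symbols mirror the setup above, and the nontrivial solution also forces $d = -l/3$, which flags the same inconsistency in the assumed train position/direction as the negative speed):

```python
import sympy as sp

# Solve the two "caught at the end" / "caught at the beginning" conditions.
v, d, l, t0, t1 = sp.symbols('v d l t0 t1', real=True)

eqs = [
    sp.Eq(sp.Rational(2, 5) * d + v * t0, d),   # s1^+(t0) = d
    sp.Eq(d + l - 40 * t0, d),                  # s2(t0) = d
    sp.Eq(sp.Rational(2, 5) * d - v * t1, 0),   # s1^-(t1) = 0
    sp.Eq(d + l - 40 * t1, 0),                  # s2(t1) = 0
]
sols = sp.solve(eqs, [v, d, t0, t1], dict=True)

# Discard the trivial solution d = 0 (which forces v = 0).
speeds = [s[v] for s in sols if s[d] != 0]
print(speeds)  # [-8] -- the magnitude, 8 mph, is the physical answer
```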
|
The answers currently posted are ignoring a few important details so I'm going to give my own. I may rehash some things already said. To make everything absolutely clear I write here a complete derivation of the forced damped oscillator with emphasis on the role of the $Q$ factor.
Basic equations
Consider the equation of motion of a forced, damped harmonic oscillator:
$$\ddot{\phi}(t)+2\beta\dot{\phi}(t) + \omega_0^2 \phi(t) = j(t) \,.$$
Here $\beta$ is a coefficient of friction (for the case where the friction force is proportional to the velocity $\dot{\phi}$), $j$ is an external forcing function, and $\omega_0$ is the un-damped frequency of the system.
We define the fourier transform $\tilde{\phi}(\omega)$ by the equation
$$\phi(t) = \frac{1}{2\pi}\int \tilde{\phi}(\omega)e^{-i \omega t}\,d\omega\,.$$
Plugging the Fourier transform into the equation of motion gives
$$\tilde{\phi}(\omega) = \frac{\tilde{j}(\omega)}{-\omega^2 - 2i\beta \omega + \omega_0^2}= \frac{-\tilde{j}(\omega)}{(\omega-\omega_+)(\omega - \omega_-)}$$
where $\tilde{j}$ is the Fourier transform of $j$ and
$$\omega_{\pm}\equiv -i\beta \pm \omega_0' \qquad\omega_0'\equiv \omega_0\sqrt{1-\left( \frac{\beta}{\omega_0} \right)^2}\,.$$
My $\omega_0'$ is what you called $\omega_r$ if we set $\gamma / \sqrt{2} = \beta$. Note that for light damping, i.e. the case $\beta \ll \omega_0$, we get $\omega_0' \approx \omega_0$.
In order to understand the meanings of the quality factor $Q$ we investigate these equations for two cases.
Free oscillation
First we consider the case where the forcing function is just an instantaneous whack at $t=0$. This should cause the system to oscillate but with a decreasing amplitude as energy is lost to friction. Mathematically we denote the instantaneous whack as $j(t) = A \delta(t)$. This gives $\tilde{j}(\omega) = A$. We find $\phi(t)$ by inverse Fourier transform
$$\begin{align}\phi(t)&= \frac{1}{2\pi} \int \tilde{\phi}(\omega)e^{-i\omega t} \, d\omega \\&= \frac{1}{2\pi} \int \frac{-A e^{-i \omega t}}{(\omega-\omega_-)(\omega-\omega_+)} \, d\omega \\&= \frac{A}{\omega_0'} e^{-\beta t} \sin(\omega_0' t) \, . \qquad(*)\end{align}$$
Let's understand this result. At $t=0$ the system is at $\phi=0$. This makes sense because we whack it at $t=0$ but it hasn't had time to go anywhere yet. As time goes on, it oscillates at frequency $\omega_0'$ but with decreasing amplitude. Note that the oscillation frequency in this case is not the un-damped frequency $\omega_0$; it is $\omega_0'$ which is slightly shifted because of the friction. Larger friction causes a bigger shift in this free oscillation frequency. Of course, as you can see, larger friction also causes the amplitude to decrease faster, which makes sense.
Now, what about $Q$? Suppose $\phi$ represents the position of a mass on a spring. In that case, the potential energy of the system is proportional to $\phi^2$. Similarly, if $\phi$ represents the current in an LRC circuit then the inductive energy is proportional to $\phi^2$. Twice per oscillation all of the system's energy goes into the potential (or inductive) energy. From Eq. $(*)$ that energy is
$$E(t) = E(0) e^{-2\beta t} \,.$$
Now we can easily find $Q$:
$$Q \equiv \frac{\text{energy stored}}{\text{energy lost per radian}} =\frac{E(t)}{-\frac{dE}{dt} \frac{dt}{d\,\text{radians}}}= \frac{E(0)e^{-2\beta t}}{2\beta E(0) e^{-2\beta t}/\omega_0'}= \frac{\omega_0'}{2\beta} \,.$$
This is almost exactly the same as your expression $\left( \omega_0^2 - \gamma^2 / 2 \right)^{1/2} / \gamma$ except that I think you have messed up a factor of $\sqrt{2}$ somewhere. Anyway, the point is that your "more exact" formula for the $Q$ value is really just the $Q$ you get if you consider the case of free oscillation of the damped system.

Steady state driven system
Now let's consider the case where the system is subjected to constant driving of the form
$$j(t) = A \cos(\Omega t) \, .$$
Then
$$\tilde{j}(\omega) = \frac{(2\pi)A}{2}(\delta(\omega - \Omega) + \delta(\omega + \Omega))\,.$$
Plugging that into the integral and cranking away gives us
$$\phi(t) = \text{Re} \left[ e^{-i\Omega t} \frac{-A}{\Omega^2 + 2i\beta \Omega - \omega_0^2}\right]\, .$$
Let's look at the case where we're driving at the natural resonance frequency, i.e. $\Omega = \omega_0$. In this case we get
$$\phi(t) = \frac{A}{2 \beta \omega_0}\sin(\omega_0 t)\,.$$
The power exerted by the driving force is $\text{force}\times\text{velocity}^{[a]}$:
$$P(t) = j(t)\dot{\phi}(t) = \frac{A^2}{2\beta}\cos(\omega_0 t)^2 = \frac{A^2}{4 \beta}[1 + \cos(2\omega_0 t)] \, .$$
Note that $P(t)$ is always positive. This is actually the definition of resonance: the resonance frequency is the one such that the work done by the drive is always positive. In an electrical circuit this is the same as saying that the resonance frequency is the one where the impedance of the damped oscillator is purely real. No other frequency has this property, so we've just shown that $\omega_0$ is the resonance frequency of the damped system$^{[b]}$. Since we're in steady state, this work must also be precisely the work lost by the system to the damping. We can therefore compute the average power loss in one cycle:
$$\langle P_{\text{loss}}\rangle =\frac{\omega_0}{2\pi} \int_{0}^{2\pi / \omega_0} P(t)\,dt = \frac{A^2}{4\beta}\,.$$
Great. Now let's compute the energy stored. By analogy to the case of a mass on a spring we know that the max potential energy is$^{[c]}$
$$U = \frac{1}{2} \phi_{\text{max}}^2 \omega_0^2 = (1/2)A^2 / 4\beta^2$$
and again since we're in steady state this is just the total stored energy. Therefore, the $Q$ value is
$$Q \equiv \frac{\text{energy stored}}{\text{energy loss per radian}}=\frac{U}{\langle P_{\text{loss}}\rangle / \omega_0}=\frac{(1/2)A^2 / 4\beta^2}{A^2 / 4 \beta \omega_0} =\frac{\omega_0}{2 \beta} \, .$$
This expression is almost exactly the same as the one we found for free oscillation, except that now we have $\omega_0$ instead of $\omega_0'$. Note that we have now answered your question #3, as we have shown that for steady state driving the $Q$ value involves $\omega_0$, not the more complex expression $\omega_0'$.
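A quick numerical comparison of the two expressions (with illustrative numbers, $\omega_0 = 1$, $\beta = 0.01$, chosen by me) shows how small the difference is for a lightly damped system:

```python
import math

# Compare Q from free oscillation (uses omega_0') with Q from steady-state
# driving (uses omega_0), for a lightly damped example.
omega_0, beta = 1.0, 0.01
omega_0p = omega_0 * math.sqrt(1 - (beta / omega_0) ** 2)  # shifted frequency

Q_free = omega_0p / (2 * beta)   # from the free-oscillation energy decay
Q_drive = omega_0 / (2 * beta)   # from steady-state driving at omega_0

print(Q_free, Q_drive)  # ≈ 49.9975 vs 50.0 -- indistinguishable when Q >> 1
```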
Answer to the original questions
We've seen that we can get two very slightly different expressions for $Q$ depending on whether we consider free oscillation or steady state driving. In fact, when people talk about $Q$ they're really talking about the steady state driving one; to keep from getting confused the other expression really shouldn't be called "$Q$". That said, for a system where $Q\gg1$ both expressions give extremely close numbers, so the distinction is mostly academic.
Does the energy dissipated per cycle assume that the amplitude is constant from one cycle to the next?
Yes, because when you talk about $Q$ you're implicitly talking about the steady state driving case in which everything is the same from cycle to cycle.
Is it always calculated at the resonance frequency?
By definition, yes. The $Q$ is
defined as the energy stored divided by the energy loss per radian in the steady state driving case with drive at the natural oscillation frequency $\omega_0$.
If the answer to 2 is yes can you explain why for a forced oscillator system with a damping coefficient of $\gamma$ and natural frequency $\omega_0$ the quality factor is $Q=\omega_0/\gamma$ and not some more complicated expression involving the actual resonant frequency
This was demonstrated in detail in the above discussion/calculation.
Notes:
$[a]$: In fact because of the way the quantities are set up here, if $\phi$ is a displacement of a mass on a spring, then what I'm calling "power" is actually "power divided by mass".
$[b]$: See this SO question which I posted specifically to help generate this answer.
$[c]$: Again, if you go through and compare to the case of a mass on a spring you'll see that I've left out a factor of the mass.
|
An investigator wishes to produce a combined analysis of several datasets. In some datasets there are paired observations for treatments A and B. In others there are unpaired A and/or B data. I am looking for a reference for an adaptation of the t-test, or for a likelihood ratio test, for such partially paired data. I am willing (for now) to assume normality with equal variance and that the population means for A are the same for each study (and likewise for B).
Guo and Yuan suggest an alternative method called the optimal pooled t-test stemming from Samawi and Vogel's pooled t-test.
Great read with multiple options for this situation.
New to commenting so please let me know if I need to add anything else.
Well, if you knew the variances in the unpaired and in the paired (which would generally be a good deal smaller), the optimal weights for the two estimates of difference in groups means would be to have weights inversely proportional to the variance of the individual estimates of the difference in means.
[Edit: turns out that when the variances are estimated, this is called the Graybill-Deal estimator. There's been quite a few papers on it. Here is one]
The need to estimate variance causes some difficulty (the resulting ratio of variance estimates is F, and I think the resulting weights have a beta distribution, and a resulting statistic is kind of complicated), but since you're considering bootstrapping, this may be less of a concern.
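Here is a rough Python sketch of that inverse-variance-weighted combination (the Graybill-Deal idea); the function name and data layout are mine, not from any package, and no attempt is made here to handle the distributional subtleties just described:

```python
import numpy as np

# Combine the paired-difference estimate and the unpaired difference-of-means
# estimate with weights inversely proportional to their estimated variances.
def combined_difference(paired_a, paired_b, unpaired_a, unpaired_b):
    d = np.asarray(paired_a) - np.asarray(paired_b)
    est1 = d.mean()
    var1 = d.var(ddof=1) / len(d)           # variance of the paired estimate

    a, b = np.asarray(unpaired_a), np.asarray(unpaired_b)
    est2 = a.mean() - b.mean()
    var2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)

    w1, w2 = 1 / var1, 1 / var2             # weights ∝ 1 / variance
    return (w1 * est1 + w2 * est2) / (w1 + w2)
```

Because the pairing usually removes the subject-level variance, `var1` tends to be much smaller than `var2`, so the paired estimate dominates, as one would hope.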
An alternative possibility which
might be nicer in some sense (or at least a little more robust to non-normality, since we're playing with variance ratios) with very little loss in efficiency at the normal is to base a combined estimate of shift off paired and unpaired rank tests - in each case a kind of Hodges-Lehmann estimate, in the unpaired case based on medians of pairwise cross-sample differences and in the paired case off medians of pairwise-averages-of-pair-differences. Again, the minimum variance weighted linear combination of the two would be with weights proportional to inverses of variances. In that case I'd probably lean toward a permutation (/randomization) rather than a bootstrap - but depending on how you implement your bootstrap they can end up in the same place.
In either case you might want to robustify your variances/shrink your variance ratio. Getting in the right ballpark for the weight is good, but you'll lose very little efficiency at the normal by making it slightly robust.

---
Some additional thoughts I didn't have clearly enough sorted out in my head before:
This problem has distinct similarities to the Behrens-Fisher problem, but is even harder.
If we fixed the weights, we
could just whack in a Welch-Satterthwaite type approximation; the structure of the problem is the same.
Our issue is that we want to optimize the weights, which effectively means weighting is not fixed - and indeed, tends to maximize the statistic (at least approximately and more nearly in large samples, since any set of weights is a random quantity estimating the same numerator, and we're trying to minimize the denominator; the two aren't independent).
This would, I expect, make the chi-square approximation worse, and would almost surely affect the d.f. of an approximation still further.
[If this problem is doable, there also just
might turn out be a good rule of thumb that would say 'you can do almost as well if you use only the paired data under these sets of circumstances, only the unpaired under these other sets of conditions and in the rest, this fixed weight-scheme is usually very close to optimal' -- but I won't hold my breath waiting on that chance. Such a decision rule would doubtless have some impact on true significance in each case, but if that effect wasn't so big, such a rule of thumb would give an easy way for people to use existing legacy software, so it could be desirable to try to identify a rule like that for users in such a situation.]
---
Edit: Note to self - Need to come back and fill in details of work on 'overlapping samples' tests, especially overlapping samples t-tests
---
It occurs to me that a randomization test should work okay -
where the data are paired you randomly permute the group labels within pairs
where data are unpaired but assumed to have common distribution (under the null), you permute the group assignments
you can now base weights to the two shift estimates off the relative variance estimates ($w_1 = 1/(1+\frac{v_1}{v_2})$), compute each randomized sample's weighted estimate of shift and see where the sample fits into the randomization distribution.
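A minimal implementation sketch of this randomization scheme (names are illustrative; it assumes equal-length paired arrays and uses the weight $w_1 = 1/(1+v_1/v_2)$ from above, which is the inverse-variance weight for the paired estimate):

```python
import numpy as np

# Randomization test for partially paired two-sample data:
# flip labels within pairs, permute labels across the unpaired groups,
# and compare the observed weighted shift to its randomization distribution.
def randomization_test(pa, pb, ua, ub, n_rand=2000, seed=0):
    rng = np.random.default_rng(seed)

    def weighted_shift(pa, pb, ua, ub):
        d = pa - pb
        v1 = d.var(ddof=1) / len(d)
        v2 = ua.var(ddof=1) / len(ua) + ub.var(ddof=1) / len(ub)
        w1 = 1 / (1 + v1 / v2)               # w1 = 1 / (1 + v1/v2)
        return w1 * d.mean() + (1 - w1) * (ua.mean() - ub.mean())

    observed = weighted_shift(pa, pb, ua, ub)
    pooled = np.concatenate([ua, ub])
    count = 0
    for _ in range(n_rand):
        flip = rng.integers(0, 2, len(pa)).astype(bool)   # swap within pairs
        ra, rb = np.where(flip, pb, pa), np.where(flip, pa, pb)
        perm = rng.permutation(pooled)                    # permute unpaired labels
        sa, sb = perm[:len(ua)], perm[len(ua):]
        if abs(weighted_shift(ra, rb, sa, sb)) >= abs(observed):
            count += 1
    return (count + 1) / (n_rand + 1)        # two-sided p-value
```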
(Added much later)
Possibly relevant paper:
Derrick, B., Russ B., Toher, D., and White, P. (2017),
"Test Statistics for the Comparison of Means for Two Samples That Include Both Paired and Independent Observations" Journal of Modern Applied Statistical Methods, May, Vol. 16, No. 1, 137-157. doi: 10.22237/jmasm/1493597280 http://digitalcommons.wayne.edu/cgi/viewcontent.cgi?article=2251&context=jmasm
Here are some thoughts. I basically just arrive at Greg Snow's conclusion that this problem has distinct similarities to the Behrens-Fisher problem. To avoid handwaving I first introduce some notation and formalize the hypotheses.

We have $n$ paired observations $x_i^{pA}$ and $x_i^{pB}$ ($i = 1, \dots, n$), and we have $n_A$ and $n_B$ unpaired observations $x_i^A$ ($i = 1, \dots, n_A$) and $x_i^B$ ($i = 1, \dots, n_B$).
each observation is the sum of a patient effect and a treatment effect. The corresponding random variables are
$X_i^{pA} = P_i + T_i^A$, $X_i^{pB} = P_i + T_i^B$, $X_i^A = Q_i + U_i^A$, $X_i^B = R_i + V_i^B$
with $P_i, Q_i, R_i \sim \mathcal N(0,\sigma_P^2)$, and $T_i^\tau, U_i^\tau, V_i^\tau \sim \mathcal N(\mu_\tau, \sigma^2)$ ($\tau = A, B$).
under the null hypothesis, $\mu_A = \mu_B$.
We form as usual a new variable $X_i = X_i^{pA} - X_i^{pB}$. We have $X_i \sim \mathcal N(\mu_A - \mu_B, 2\sigma^2)$.
Now we have three groups of observations, the $X_i$ (size $n$), the $X_i^A$ (size $n_A$) and the $X_i^B$ (size $n_B$). The means are
$$X_\bullet\sim \mathcal N\left(\mu_A - \mu_B, {2\over n} \sigma^2\right)$$
$$X^A_\bullet\sim \mathcal N\left(\mu_A , {1\over n_A} (\sigma_P^2 + \sigma^2)\right)$$
$$X^B_\bullet\sim \mathcal N\left(\mu_B , {1\over n_B} (\sigma_P^2 + \sigma^2)\right)$$
The next natural step is to consider
$Y = X_\bullet + X^A_\bullet - X^B_\bullet \sim \mathcal N\left( 2(\mu_A-\mu_B), {2\over n} \sigma^2 + \left({1\over n_A}+ {1\over n_B}\right) (\sigma_P^2 + \sigma^2)\right)$
Now basically we are stuck. The three sums of squares give estimations of $\sigma^2$ with $n-1$ df, $\sigma_P^2 + \sigma^2$ with $n_A-1$ df and $n_B-1$ df respectively. The last two can be combined to give an estimation of $\left({1\over n_A}+ {1\over n_B}\right) (\sigma_P^2 + \sigma^2)$ with $n_A+n_B-2$ df. The variance of $Y$ is the sum of two terms, each of which can be estimated, but the recombination is not doable, just as in Behrens Fisher problem.
At this point I think one may plug in any solution proposed to the Behrens-Fisher problem to get a solution to your problem.
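One concrete way to "plug in" a Behrens-Fisher-style solution is a Welch-Satterthwaite approximation for the variance of $Y$. The following Python sketch is my own illustration of that idea, not a validated procedure; it tests $H_0: \mu_A = \mu_B$ via $Y = X_\bullet + X^A_\bullet - X^B_\bullet$:

```python
import numpy as np
from scipy import stats

# Welch-Satterthwaite plug-in for the partially paired problem sketched above.
def partially_paired_test(pa, pb, ua, ub):
    x = pa - pb                                 # paired differences, var 2*sigma^2
    n = len(x)
    term1_var = x.var(ddof=1) / n               # estimates (2/n) sigma^2, n-1 df

    na, nb = len(ua), len(ub)
    sp2 = ((na - 1) * ua.var(ddof=1) + (nb - 1) * ub.var(ddof=1)) / (na + nb - 2)
    term2_var = sp2 * (1 / na + 1 / nb)         # n_A + n_B - 2 df

    y = x.mean() + ua.mean() - ub.mean()        # Y estimates 2(mu_A - mu_B)
    se = np.sqrt(term1_var + term2_var)

    # Satterthwaite degrees of freedom for the combined variance estimate
    df = (term1_var + term2_var) ** 2 / (
        term1_var ** 2 / (n - 1) + term2_var ** 2 / (na + nb - 2))
    t = y / se
    return t, 2 * stats.t.sf(abs(t), df)
```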
My first thought was a mixed effects model, but that has already been discussed so I won't say any more on that.
My other thought is that if it were theoretically possible that you could have measured paired data on all subjects but due to cost, errors, or another reason you don't have all the pairs, then you could treat the unmeasured effect for the unpaired subjects as missing data and use tools like the EM algorithm or Multiple Imputation (missing at random seems reasonable unless the reason a subject was only measured under 1 treatment was related to what their outcome would be under the other treatment).
It may be even simpler to just fit a bivariate normal to the data using maximum likelihood (with the likelihood factored based on the available data per subject), then do a likelihood ratio test comparing the distribution with the means equal vs. different means.
It has been a long time since my theory classes, so I don't know how these compare on optimality.
maybe mixed modelling with patient as random effect could be a way. With mixed modelling the correlation structure in the paired case and the partial missings in the unpaired case could be accounted for.
One of the methods proposed in Hani M. Samawi & Robert Vogel (Journal of Applied Statistics, 2013) consists of a weighted combination of T-scores from independent and dependent samples in such a way that the new T score is equal to

$$T_o = \sqrt\gamma \left( \frac {\bar Y - \bar X} {\sqrt{S_X^2/n_X + S_Y^2/n_Y}}\right) + \sqrt {1-\gamma}\; \frac {\bar D} {\sqrt{S_D^2/n_D}}$$
where $D$ represents the samples of paired differences taken from the correlated data. Basically the new T score is a weighted combination of the unpaired T-score with the new correction term. $\gamma$ represents the proportion of independent samples. When $\gamma$ is equal to 1 the test is equivalent to two sample t-test, whereas if equal to zero, it is a paired t-test.
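A direct transcription of that weighted score into Python might look like this (a sketch: I take the denominators to be standard errors with square roots, define $D = B - A$, and define $\gamma$ as the fraction of observations that are independent, all of which are my assumptions rather than details from the paper):

```python
import numpy as np

# Samawi-Vogel-style weighted combination of the unpaired and paired T scores.
def pooled_t(xa, xb, pa, pb):
    d = pb - pa                                 # paired differences D = B - A
    n_ind = len(xa) + len(xb)                   # independent observations
    gamma = n_ind / (n_ind + 2 * len(d))        # proportion of independent samples

    t_unpaired = (xb.mean() - xa.mean()) / np.sqrt(
        xa.var(ddof=1) / len(xa) + xb.var(ddof=1) / len(xb))
    t_paired = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    return np.sqrt(gamma) * t_unpaired + np.sqrt(1 - gamma) * t_paired
```

Formula-wise, $\gamma \to 1$ recovers the two-sample t statistic and $\gamma \to 0$ the paired t statistic, matching the limits quoted above.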
|
Image Dimensions

Latest revision as of 10:55, 20 May 2013

Disclaimer: This page's content is not official and not guaranteed to be free of mistakes. At the moment, it's even only a sum of personal thoughts to cast a bit of light onto Synfig's image dimensions handling.

[Image: Non_square_pixels.png — Note the different scales at the rulers. Although the image is clearly 400x300 pixels big on screen, the rulers say it is only 400x200, which is what the Image Area values say.]

[Image: Non_square.gif — Note how the rectangle becomes a square and an elongated rectangle again as it rotates. Source file: Non square.sifz]

Describing the fields of the Canvas Properties Dialog
The user accesses the image dimensions in the Canvas Properties Dialog.
The Other tab
Here some properties can simply be locked (such that they can't be changed) and linked (so that changes in one entry simultaneously change other entries as well).
The Image tab
Obviously here the image dimensions can be set. There seem to be basically three groups of fields to edit:
The on-screen size(?): The fields Width and Height tell synfigstudio how many pixels the image shall cover at a zoom level of 100%.

The physical size: The physical width and height should tell how big the image is on some physical media. That could be when printing out images on paper, or maybe even on transparencies or film. Not all file formats can save this on exporting/rendering images.

The mysterious Image Area: Given as two points (upper-left and lower-right corner) which also define the image span (Pythagoras: $\text{span}=\sqrt{\Delta x^2 + \Delta y^2}$). The unit seems to be not pixels but units, which are at 60 pixels each. If the ratio of the image size and image area dimensions are off, for example circles will appear as ellipses (see image). These settings seem to influence how large one Image Size pixel is being rendered. This might be useful when one has to deal with non-square output pixels.

Effects of the Image Area
Somehow the image area setting seems to be saved when copy&pasting between image, see also bug #2116947.
Possible intended effects of out-of-ratio image areas
As mentioned above, different ratios might be needed when then output needs to be specified in pixels, but those pixels are not squares. That might happen for several kinds of media, such as videos encoded in some PAL formats or for dvds. For further reading, look at Wikipedia.
Still, it is probably consensus that the image, as shown on screen while editing, should look as closely as possible like when viewed by the final audience. So, while specifying a different output resolution at rendering time may well be wanted, synfigstudio should (for the majority of monitors) show square pixels, i.e. circles should stay circles.

Feature wishlist to simplify working across documents

See also
Explanation by dooglus on the synfig-dev mailing list.
|
Congratulations on deriving the exponential law for yourself, one learns a great deal about science working like this. Now to your last question:
If I had a group of atoms that have an 'average lifetime' of say 5 seconds, after 5 seconds has elapsed, what is the 'average lifetime' of the remaining atoms? I don't think I can arbitrarily choose some reference time to begin ticking away at the atoms' remaining time, does that mean at any point of time that their 'average lifetime' or expected lifetime is always a constant, and never actually diminishes as time goes on?
Yes indeed the average lifetime is constant. And the exponential distribution you have derived is the
unique lifetime distribution with this property. Another way of saying this is that the decaying particle is memoryless: it does not encode its "age": there is nothing inside the particle that says "I've lived a long time, now it's time to die". Yet another take on this - as a discrete rather than continuous probability distribution - is the geometric distribution of the number of throws before a coin turns up heads, and the observation that a coin has no memory, which counters the famous gambler's fallacy.
To understand this uniqueness, we encode the memorylessness condition into the basic probability law
$$p(A\cap B) = p(A) \, p(B|A)$$
Suppose after time $\delta$ you observe that your particle has not decayed (event $A$). If $f(t)$ is the probability distribution of lifetimes, then the probability the particle has lasted at least this long,
i.e. the probability that it does not decay in time interval $[0,\,\delta]$ is:
$$p(A) = 1-\int_0^\delta f(u)du$$
The
a priori probability distribution function that the particle will last until time $t+\delta$ and then decay in the time interval $dt$ (event $B$) is
$$p(B\cap A) = f(t+\delta)\, dt\,.$$
This is events $B$ and $A$ observed together, which is the same as plain old $B$, since the particle cannot last until time $t + \delta$ without living to $\delta$ first! Therefore, the conditional probability density function is
$$p(B|A) = \frac{f(t+\delta)\,dt}{1-\int_0^\delta f(u)du}$$
But this must be the same as the unconditional probability density that the particle lasts a further time $t$ measured from any time, by assumption of memorylessness. Thus we must have:
$$\left(1 - \int_0^\delta f(u)du\right)\,f(t) = f(t+\delta),\;\forall \delta>0$$
Letting $\delta\rightarrow 0$, we get the differential equation $f^\prime(t) = - f(0) f(t)$, whose unique solution is $f(t) = \frac{1}{\tau}\exp\left(-\frac{t}{\tau}\right)$. You can readily check that this function fulfills the general functional equation $\left(1 - \int_0^\delta f(u)du\right)\,f(t) = f(t+\delta)$ for any $\delta > 0$ as well.
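A quick Monte Carlo run illustrates this memorylessness directly (illustration only; I use $\tau = 5$ s as in the quoted question): conditioned on surviving past $t = 5$, the remaining lifetime of an exponential variable still averages $\tau$.

```python
import random

# Among exponential lifetimes (mean tau = 5) that survive past t = 5,
# the *remaining* lifetime still has mean tau = 5.
random.seed(0)
tau = 5.0
lifetimes = [random.expovariate(1 / tau) for _ in range(200_000)]

survivors = [t - 5.0 for t in lifetimes if t > 5.0]
remaining_mean = sum(survivors) / len(survivors)
print(round(remaining_mean, 2))  # ≈ 5.0
```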
As Akhmeteli's answer says, true memorylessness is actually incompatible with simple quantum models. For example, one can derive the exponential lifetime for an excited fluorophore from a simple model of a lone excited two state fluorophore equally coupled to all the modes of the electromagnetic field. The catch is that the derivation rests on approximating an integral over positive energy field modes by an integral over all energies, both positive and negative. This of course is unphysical, but an excellent approximation since only modes near to the two state atom's energy gap will be excited: the fluorophore "tries" to excite all modes equally, but destructive interference prevents significant coupling to modes of greatly different energy than the difference between the energies of the states on either side of the transition.
I show how this analysis is done in this answer here and here.
Linewidths are mostly extremely narrow compared to the frequencies of the photons concerned, so I find it surprising and quite wonderful that Akhmeteli cites a paper giving experimental evidence of the nonconstant lifetime.
|
Definition:Antireflexive Relation

Definition

Let $\mathcal R \subseteq S \times S$ be a relation in $S$.

$\mathcal R$ is antireflexive if and only if:

$\forall x \in S: \tuple {x, x} \notin \mathcal R$

Also known as
Some sources use the term irreflexive. However, as irreflexive is also found in other sources to mean non-reflexive, it is better to use the clumsier, but less ambiguous, antireflexive. The term aliorelative can sometimes be found, but this is rare.

Also see

Results about reflexivity of relations can be found here.
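To make the definition concrete, here is a small Python check (the relation names are my own illustrative examples, not from the source):

```python
def is_antireflexive(relation, s):
    """A relation R on S is antireflexive iff (x, x) not in R for every x in S."""
    return all((x, x) not in relation for x in s)

s = {1, 2, 3}
strictly_less = {(x, y) for x in s for y in s if x < y}   # no (x, x) pairs
less_or_equal = {(x, y) for x in s for y in s if x <= y}  # contains every (x, x)

assert is_antireflexive(strictly_less, s)
assert not is_antireflexive(less_or_equal, s)
```

Note that antireflexive is strictly stronger than non-reflexive: a relation containing some but not all pairs $(x, x)$ is neither reflexive nor antireflexive.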
The word derives from the Latin
alius, meaning other, together with relative, hence meaning a relation whose terms are related only to other terms. Sources 1964: Steven A. Gaal: Point Set Topology... (previous) ... (next): Introduction to Set Theory: $1$. Elementary Operations on Sets 1965: E.J. Lemmon: Beginning Logic... (previous) ... (next): $\S 4.5$: Properties of Relations 1971: Robert H. Kasriel: Undergraduate Topology... (previous) ... (next): $\S 1.19$: Some Important Properties of Relations: Definition $19.2$ 1977: Gary Chartrand: Introductory Graph Theory... (previous) ... (next): Appendix $\text{A}.2$: Cartesian Products and Relations 1989: Ephraim J. Borowski and Jonathan M. Borwein: Dictionary of Mathematics... (previous) ... (next): Entry: aliorelative
|
The Annals of Applied Probability, Volume 23, Number 5 (2013), 1879-1912.

On the rate of convergence to stationarity of the M/M/N queue in the Halfin–Whitt regime

Abstract
We prove several results about the rate of convergence to stationarity, that is, the spectral gap, for the $M/M/n$ queue in the Halfin–Whitt regime. We identify the limiting rate of convergence to steady-state, and discover an asymptotic phase transition that occurs w.r.t. this rate. In particular, we demonstrate the existence of a constant $B^{\ast}\approx 1.85772$ such that when a certain excess parameter $B\in(0,B^{\ast}]$, the error in the steady-state approximation converges exponentially fast to zero at rate $\frac{B^{2}}{4}$. For $B>B^{\ast}$, the error in the steady-state approximation converges exponentially fast to zero at a different rate, which is the solution to an explicit equation given in terms of special functions. This result may be interpreted as an asymptotic version of a phase transition proven to occur for any fixed $n$ by van Doorn [
Stochastic Monotonicity and Queueing Applications of Birth-death Processes (1981) Springer].
We also prove explicit bounds on the distance to stationarity for the $M/M/n$ queue in the Halfin–Whitt regime, when $B<B^{\ast}$. Our bounds scale independently of $n$ in the Halfin–Whitt regime, and do not follow from the weak-convergence theory.
Article information Source Ann. Appl. Probab., Volume 23, Number 5 (2013), 1879-1912. Dates First available in Project Euclid: 28 August 2013 Permanent link to this document https://projecteuclid.org/euclid.aoap/1377696301 Digital Object Identifier doi:10.1214/12-AAP889 Mathematical Reviews number (MathSciNet) MR3134725 Zentralblatt MATH identifier 1287.60111 Subjects Primary: 60K25: Queueing theory [See also 68M20, 90B22] Citation
Gamarnik, David; Goldberg, David A. On the rate of convergence to stationarity of the M/M/N queue in the Halfin–Whitt regime. Ann. Appl. Probab. 23 (2013), no. 5, 1879--1912. doi:10.1214/12-AAP889. https://projecteuclid.org/euclid.aoap/1377696301
|
These are homework exercises to accompany Libl's "Differential Equations for Engineering" Textmap. This is a textbook targeted for a one semester first course on differential equations, aimed at engineering students. Prerequisite for the course is the basic calculus sequence.
Exercise 5.1.4: Find eigenvalues and eigenfunctions of
\[y''+ \lambda y=0,~~~y(0)-y'(0)=0,~~~y(1)=0.\]
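For this exercise, writing $\lambda = k^2 > 0$ and $y = A\cos kx + B\sin kx$, the boundary conditions reduce to the transcendental condition $\tan k = -k$. This is my own working, not the book's solution; a sketch of a numerical check by bisection:

```python
import math

def g(k):
    # Boundary-condition determinant for y'' + k^2 y = 0,
    # y(0) - y'(0) = 0, y(1) = 0: reduces to sin k + k cos k = 0.
    return math.sin(k) + k * math.cos(k)

def bisect(f, a, b, iters=100):
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# The first root lies in (pi/2, pi), where g changes sign.
k1 = bisect(g, math.pi / 2 + 1e-9, math.pi)
lambda1 = k1 ** 2

# The eigenfunction y(x) = k1*cos(k1 x) + sin(k1 x) satisfies both conditions:
y0, yp0 = k1, k1                      # y(0) = y'(0) = k1, so y(0) - y'(0) = 0
assert y0 - yp0 == 0
assert abs(k1 * math.cos(k1) + math.sin(k1)) < 1e-9   # y(1) = 0
```

Further eigenvalues come from roots of $\tan k = -k$ in each interval $\bigl((n-\tfrac12)\pi,\, n\pi\bigr)$.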
Exercise 5.1.5: Expand the function \(f(x)=x\) on \(0 \leq x \leq 1\) using the eigenfunctions of the system
\[y''+ \lambda y=0,~~~y'(0)=0,~~~y(1)=0.\]
Exercise 5.1.6: Suppose that you had a Sturm-Liouville problem on the interval \([0,1]\) and came up with \(y_n(x)=\sin(\gamma nx)\), where \(\gamma >0\) is some constant. Decompose \(f(x)=x, 0<x<1\), in terms of these eigenfunctions.
Exercise 5.1.7: Find eigenvalues and eigenfunctions of
\[y^{(4)}+ \lambda y=0,~~~y(0)=0,~~~y'(0)=0,~~~y(1)=0,~~~y'(1)=0.\]
This problem is not a Sturm-Liouville problem, but the idea is the same.
Exercise 5.1.8 (more challenging): Find eigenvalues and eigenfunctions for
\[\frac{d}{dx}(e^xy')+ \lambda e^xy=0,~~~y(0)=0,~~~y(1)=0.\]
Hint: First write the system as a constant coefficient system to find general solutions. Do note that Theorem 5.1.1 guarantees \(\lambda \geq 0\).
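Following the hint, multiplying out gives $y'' + y' + \lambda y = 0$, a constant coefficient equation, which suggests $\lambda_n = n^2\pi^2 + \tfrac{1}{4}$ with $y_n(x) = e^{-x/2}\sin(n\pi x)$. This is my own working, not the book's solution, so here is a finite-difference check of the ODE residual:

```python
import math

def residual(n, x, h=1e-5):
    """Residual of y'' + y' + lam*y at x for the candidate eigenpair
    lam = (n*pi)^2 + 1/4, y = exp(-x/2)*sin(n*pi*x)."""
    lam = (n * math.pi) ** 2 + 0.25
    y = lambda t: math.exp(-t / 2) * math.sin(n * math.pi * t)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2   # central 2nd difference
    yp = (y(x + h) - y(x - h)) / (2 * h)              # central 1st difference
    return ypp + yp + lam * y(x)

# The residual vanishes (up to finite-difference error) at interior points,
# and the boundary conditions y(0) = y(1) = 0 hold since sin(n*pi*x) does.
for n in (1, 2, 3):
    assert abs(residual(n, 0.37)) < 1e-3
```

Note every $\lambda_n \geq \tfrac14 > 0$, consistent with the theorem cited in the hint.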
Exercise 5.1.101: Find eigenvalues and eigenfunctions of
\[y''+ \lambda y=0,~~~y(-1)=0,~~~y(1)=0.\]
Exercise 5.1.102: Put the following problems into the standard form for Sturm-Liouville problems, that is, find \(p(x), q(x), r(x), \alpha_1, \alpha_2, \beta_1, \beta_2\), and decide if the problems are regular or not.
\(a) ~xy''+\lambda y=0\) for \(0<x<1,y(0)=0, y(1)=0,\)
\(b) ~ (1+x^2)y''+2xy'+(\lambda -x^2)y=0\) for \(-1<x<1,y(-1)=0,y(1)+y'(1)=0\)
Exercise 5.2.2: Suppose you have a beam of length \(5\) with free ends. Let \(y\) be the transverse deviation of the beam at position \(x\) on the beam \((0<x<5)\). You know that the constants are such that this satisfies the equation \(y_{tt}+4y_{xxxx}=0\). Suppose you know that the initial shape of the beam is the graph of \(x(5-x)\), and the initial velocity is uniformly equal to \(2\) (same for each \(x\)) in the positive \(y\) direction. Set up the equation together with the boundary and initial conditions. Just set up, do not solve.
Exercise 5.2.3: Suppose you have a beam of length \(5\) with one end free and one end fixed (the fixed end is at \(x=5\)). Let \(u\) be the longitudinal deviation of the beam at position \(x\) on the beam \((0<x<5)\). You know that the constants are such that this satisfies the equation \(u_{tt}=4u_{xx}\). Suppose you know that the initial displacement of the beam is \(\frac{x-5}{50}\), and the initial velocity is \(\frac{-(x-5)}{100}\) in the positive \(u\) direction. Set up the equation together with the boundary and initial conditions. Just set up, do not solve.
Exercise 5.2.4: Suppose the beam is \(L\) units long, everything else kept the same as in (5.2.2). What is the equation and the series solution?
Exercise 5.2.5: Suppose you have
\[ a^4y_{xxxx}+y_{tt}=0~~~~(0<x<1,t>0), \\ y(0,t)=y_{xx}(0,t)=0, \\ y(1,t)=y_{xx}(1,t)=0, \\ y(x,0)=f(x),~~~~y_t(x,0)=g(x). \]
That is, you have also an initial velocity. Find a series solution. Hint: Use the same idea as we did for the wave equation.
Exercise 5.2.101: Suppose you have a beam of length \(1\) with hinged ends. Let \(y\) be the transverse deviation of the beam at position \(x\) on the beam (\(0<x<1\)). You know that the constants are such that this satisfies the equation \(y_{tt}+4y_{xxxx}=0\). Suppose you know that the initial shape of the beam is the graph of \(\sin(\pi x)\), and the initial velocity is \(0\). Solve for \(y\).
Exercise 5.2.102: Suppose you have a beam of length \(10\) with two fixed ends. Let \(y\) be the transverse deviation of the beam at position \(x\) on the beam (\(0<x<10\)). You know that the constants are such that this satisfies the equation \(y_{tt}+9y_{xxxx}=0\). Suppose you know that the initial shape of the beam is the graph of \(\sin(\pi x)\), and the initial velocity is uniformly equal to \(x(10-x)\). Set up the equation together with the boundary and initial conditions. Just set up, do not solve.
Exercise 5.3.5: Suppose that the forcing function for the vibrating string is \(F_0 \sin(\omega t)\). Derive the particular solution \(y_p\).
Exercise 5.3.6: Take the forced vibrating string. Suppose that \(L=1,a=1\). Suppose that the forcing function is the square wave that is \(1\) on the interval \(0<x<1\) and \(-1 \)on the interval \(-1<x<0\). Find the particular solution. Hint: You may want to use result of Exercise 5.3.5.
Exercise 5.3.7: The units are cgs (centimeters-grams-seconds). For \(k=0.005, \omega =1.991 \times 10^{-7},A_0=20\). Find the depth at which the temperature variation is half (\(\pm 10\) degrees) of what it is on the surface.
Exercise 5.3.8: Derive the solution for underground temperature oscillation without assuming that \(T_0=0\).
Exercise 5.3.101: Take the forced vibrating string. Suppose that \(L=1,a=1\). Suppose that the forcing function is a sawtooth, that is \(|x|-\frac{1}{2}\) on \(-1<x<1\) extended periodically. Find the particular solution.
Exercise 5.3.102: The units are cgs (centimeters-grams-seconds). For \(k=0.01, \omega =1.991 \times 10^{-7},A_0=25\). Find the depth at which the summer is again the hottest point.
|
In Chapter 7.5 in Peskin and Schroeder, the authors define the physical charge in eq. (7.76), $$\text{(physical charge)}=\sqrt{Z_3}\cdot\text{(bare charge)}$$ where $\dfrac{1}{1-\Pi(0)}\equiv Z_3$. Here, $$\Pi^{\mu\nu}(q)=(q^2g^{\mu\nu}-q^\mu q^\nu)\Pi(q^2)$$ $i\Pi^{\mu\nu}(q)$ being the sum of all 1PI insertions into the photon propagator.
Now, just below eq. (7.76), the authors say that in a scattering process with non-zero $q^2$, we get a quantity, $$\frac{-ig_{\mu\nu}}{q^2}\left(\frac{e_0^2}{1-\Pi(q^2)}\right){\underset{\mathrm{\mathcal{O}(\alpha)}}{=}}\frac{-ig_{\mu\nu}}{q^2}\left(\frac{e^2}{1-[\Pi_2(q^2)-\Pi_2(0)]}\right)$$ where $\Pi_2(q^2)$ is the $\mathcal{O}(\alpha)$ value of $\Pi(q^2)$.
My specific question is, why do we subtract $\Pi_2(0)$ in the denominator of the RHS of the above expression? Shouldn't we just replace $\Pi(q^2)$ with $\Pi_2(q^2)$?
Thanks in advance.
EDIT: I think the answer lies a posteriori in the discussion leading up to eq. (7.90). They show that the $\mathcal{O}(\alpha)$ shift in the electric charge is given by,$$\Pi_2(0)\approx-\frac{2\alpha}{3\pi\epsilon}$$which blows up as $\epsilon\rightarrow0$ ($\epsilon=4-d$, $d$ being the number of spacetime dimensions). The argument is:
What can be observed is the $q^2$ dependence of the effective electric charge (7.77).
[(7.77) basically refers to the denominator I was talking about in my original question.]
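For what it's worth, the subtracted combination is finite and can be evaluated directly. Using what I believe is the standard one-loop expression, $\hat\Pi_2(q^2) = -\frac{2\alpha}{\pi}\int_0^1 dx\, x(1-x)\ln\frac{m^2}{m^2 - x(1-x)q^2}$ (the mass $m$ and grid size below are arbitrary choices of mine), a quick numerical sketch shows the effective coupling growing at spacelike momenta, as expected from charge screening:

```python
import math

ALPHA = 1 / 137.035999  # fine-structure constant at q^2 = 0

def pi2_hat(q2, m=1.0, n=2000):
    """Pi_2(q^2) - Pi_2(0) at one loop, by midpoint integration over the
    Feynman parameter x. Spacelike q^2 < 0 keeps the logarithm real."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += x * (1 - x) * math.log(m**2 / (m**2 - x * (1 - x) * q2))
    return -(2 * ALPHA / math.pi) * total * h

def alpha_eff(q2):
    """Effective coupling alpha / (1 - [Pi_2(q^2) - Pi_2(0)])."""
    return ALPHA / (1 - pi2_hat(q2))

# The effective coupling increases with -q^2: screening is penetrated.
assert pi2_hat(-1e4) > pi2_hat(-1e2) > 0
assert alpha_eff(-1e4) > ALPHA
```

This only illustrates the finiteness of the subtracted quantity; it does not by itself answer why the subtraction appears, which is the renormalization condition that $e$ is the measured charge at $q^2 = 0$.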
|
Genotype Refinement workflow for germline short variants

Contents: Overview; Summary of workflow steps; Output annotations; Example; More information about priors; Mathematical details

1. Overview
The core GATK Best Practices workflow has historically focused on variant discovery --that is, the existence of genomic variants in one or more samples in a cohort-- and consistently delivers high quality results when applied appropriately. However, we know that the quality of the individual genotype calls coming out of the variant callers can vary widely based on the quality of the BAM data for each sample. The goal of the Genotype Refinement workflow is to use additional data to improve the accuracy of genotype calls and to filter genotype calls that are not reliable enough for downstream analysis. In this sense it serves as an optional extension of the variant calling workflow, intended for researchers whose work requires high-quality identification of individual genotypes.
While every study can benefit from increased data accuracy, this workflow is especially useful for analyses that are concerned with how many copies of each variant an individual has (e.g. in the case of loss of function) or with the transmission (or de novo origin) of a variant in a family.
If a “gold standard” dataset for SNPs is available, that can be used as a very powerful set of priors on the genotype likelihoods in your data. For analyses involving families, a pedigree file describing the relatedness of the trios in your study will provide another source of supplemental information. If neither of these applies to your data, the samples in the dataset itself can provide some degree of genotype refinement (see section 5 below for details).
After running the Genotype Refinement workflow, several new annotations will be added to the INFO and FORMAT fields of your variants (see below).
Note that GQ fields will be updated, and genotype calls may be modified. However, the Phred-scaled genotype likelihoods (PLs) which indicate the original genotype call (the genotype candidate with PL=0) will remain untouched. Any analysis that made use of the PLs will produce the same results as before.

2. Summary of workflow steps

Input
Begin with recalibrated variants from VQSR at the end of the germline short variants pipeline. The filters applied by VQSR will be carried through the Genotype Refinement workflow.
Step 1: Derive posterior probabilities of genotypes

Tool used: CalculateGenotypePosteriors
Using the Phred-scaled genotype likelihoods (PLs) for each sample, prior probabilities for a sample taking on a HomRef, Het, or HomVar genotype are applied to derive the posterior probabilities of the sample taking on each of those genotypes. A sample’s PLs were calculated by HaplotypeCaller using only the reads for that sample. By introducing additional data like the allele counts from the 1000 Genomes project and the PLs for other individuals in the sample’s pedigree trio, those estimates of genotype likelihood can be improved based on what is known about the variation of other individuals.
SNP calls from the 1000 Genomes project capture the vast majority of variation across most human populations and can provide very strong priors in many cases. At sites where most of the 1000 Genomes samples are homozygous variant with respect to the reference genome, the probability of a sample being analyzed of also being homozygous variant is very high.
For a sample for which both parent genotypes are available, the child’s genotype can be supported or invalidated by the parents’ genotypes based on Mendel’s laws of allele transmission. Even the confidence of the parents’ genotypes can be recalibrated, such as in cases where the genotypes output by HaplotypeCaller are apparent Mendelian violations.
Step 2: Filter low quality genotypes

Tool used: VariantFiltration
After the posterior probabilities are calculated for each sample at each variant site, genotypes with GQ < 20 based on the posteriors are filtered out. GQ20 is widely accepted as a good threshold for genotype accuracy, indicating that there is a 99% chance that the genotype in question is correct. Tagging those low quality genotypes indicates to researchers that these genotypes may not be suitable for downstream analysis. However, as with the VQSR, a filter tag is applied, but the data is not removed from the VCF.
Step 3: Annotate possible de novo mutations

Tool used: VariantAnnotator
Using the posterior genotype probabilities, possible de novo mutations are tagged. Low confidence de novos have child GQ >= 10 and AC < 4 or AF < 0.1%, whichever is more stringent for the number of samples in the dataset. High confidence de novo sites have all trio sample GQs >= 20 with the same AC/AF criterion.
Step 4: Functional annotation of possible biological effects

Tool options: Funcotator (experimental)
Especially in the case of de novo mutation detection, analysis can benefit from the functional annotation of variants to restrict variants to exons and surrounding regulatory regions. Funcotator is a new tool that is currently still in development. If you would prefer to use a more mature tool, we recommend you look into SnpEff or Oncotator, but note that these are not GATK tools so we do not provide support for them.
3. Output annotations
The Genotype Refinement workflow adds several new info- and format-level annotations to each variant. GQ fields will be updated, and genotypes calculated to be highly likely to be incorrect will be changed. The Phred-scaled genotype likelihoods (PLs) carry through the pipeline without being changed. In this way, PLs can be used to derive the original genotypes in cases where sample genotypes were changed.
Population Priors
New INFO field annotation PG is a vector of the Phred-scaled prior probabilities of a sample at that site being HomRef, Het, and HomVar. These priors are based on the input samples themselves along with data from the supporting samples if the variant in question overlaps another in the supporting dataset.
Phred-Scaled Posterior Probability
New FORMAT field annotation PP is the Phred-scaled posterior probability of the sample taking on each genotype for the given variant context alleles. The PPs represent a better-calibrated estimate of genotype probabilities than the PLs and are recommended for use in further analyses instead of the PLs.
Genotype Quality
Current FORMAT field annotation GQ is updated based on the PPs. The calculation is the same as for GQ based on PLs.
Joint Trio Likelihood
New FORMAT field annotation JL is the Phred-scaled joint likelihood of the posterior genotypes for the trio being incorrect. This calculation is based on the PLs produced by HaplotypeCaller (before application of priors), but the genotypes used come from the posteriors. The goal of this annotation is to be used in combination with JP to evaluate the improvement in the overall confidence in the trio’s genotypes after applying CalculateGenotypePosteriors. The calculation of the joint likelihood is given as:
where the GLs are the genotype likelihoods in [0, 1] probability space.
Joint Trio Posterior
New FORMAT field annotation JP is the Phred-scaled posterior probability of the output posterior genotypes for the three samples being incorrect. The calculation of the joint posterior is given as:
where the GPs are the genotype posteriors in [0, 1] probability space.
Low Genotype Quality
New FORMAT field filter lowGQ indicates samples with posterior GQ less than 20. Filtered samples tagged with lowGQ are not recommended for use in downstream analyses.
High and Low Confidence De Novo
New INFO field annotation for sites at which at least one family has a possible de novo mutation. Following the annotation tag is a list of the children with de novo mutations. High and low confidence are output separately.
4. Example
Before:
1 1226231 rs13306638 G A 167563.16 PASS AC=2;AF=0.333;AN=6;… GT:AD:DP:GQ:PL 0/0:11,0:11:0:0,0,249 0/0:10,0:10:24:0,24,360 1/1:0,18:18:60:889,60,0
After:
1 1226231 rs13306638 G A 167563.16 PASS AC=3;AF=0.500;AN=6;…PG=0,8,22;… GT:AD:DP:GQ:JL:JP:PL:PP 0/1:11,0:11:49:2:24:0,0,249:49,0,287 0/0:10,0:10:32:2:24:0,24,360:0,32,439 1/1:0,18:18:43:2:24:889,60,0:867,43,0
The original call for the child (first sample) was HomRef with GQ0. However, given that, with high confidence, one parent is HomRef and one is HomVar, we expect the child to be heterozygous at this site. After family priors are applied, the child’s genotype is corrected and its GQ is increased from 0 to 49. Based on the allele frequency from 1000 Genomes for this site, the somewhat weaker population priors favor a HomRef call (PG=0,8,22). The combined effect of family and population priors still favors a Het call for the child.
The joint likelihood for this trio at this site is two, indicating that the genotype for one of the samples may have been changed. Specifically a low JL indicates that posterior genotype for at least one of the samples was not the most likely as predicted by the PLs. The joint posterior value for the trio is 24, which indicates that the GQ values based on the posteriors for all of the samples are at least 24. (See above for a more complete description of JL and JP.)
5. More information about priors
The Genotype Refinement Pipeline uses Bayes’s Rule to combine independent data with the genotype likelihoods derived from HaplotypeCaller, producing more accurate and confident genotype posterior probabilities. Different sites will have different combinations of priors applied based on the overlap of each site with external, supporting SNP calls and on the availability of genotype calls for the samples in each trio.
Input-derived Population Priors
If the input VCF contains at least 10 samples, then population priors will be calculated based on the discovered allele count for every called variant.
Supporting Population Priors
Priors derived from supporting SNP calls can only be applied at sites where the supporting calls overlap with called variants in the input VCF. The values of these priors vary based on the called reference and alternate allele counts in the supporting VCF. Higher allele counts (for ref or alt) yield stronger priors.
Family Priors
The strongest family priors occur at sites where the called trio genotype configuration is a Mendelian violation. In such a case, each Mendelian violation configuration is penalized by a de novo mutation probability (currently $10^{-6}$). Confidence also propagates through a trio. For example, two GQ60 HomRef parents can substantially boost a low GQ HomRef child, and a GQ60 HomRef child and parent can improve the GQ of the second parent. Application of family priors requires the child to be called at the site in question. If one parent has a no-call genotype, priors can still be applied, but the potential for confidence improvement is not as great as in the 3-sample case.
Caveats
Right now family priors can only be applied to biallelic variants and population priors can only be applied to SNPs. Family priors only work for trios.
6. Mathematical details
Note that family priors are calculated and applied before population priors. The opposite ordering would result in overly strong population priors because they are applied to the child and parents and then compounded when the trio likelihoods are multiplied together.
Review of Bayes’s Rule
HaplotypeCaller outputs the likelihoods of observing the read data given that the genotype is actually HomRef, Het, and HomVar. To convert these quantities to the probability of the genotype given the read data, we can use Bayes’s Rule. Bayes’s Rule dictates that the probability of a parameter given observed data is equal to the likelihood of the observations given the parameter multiplied by the prior probability that the parameter takes on the value of interest, normalized by the prior times likelihood for all parameter values:
$$ P(\theta|Obs) = \frac{P(Obs|\theta)P(\theta)}{\sum_{\theta} P(Obs|\theta)P(\theta)} $$
In the best practices pipeline, we interpret the genotype likelihoods as probabilities by implicitly converting the genotype likelihoods to genotype probabilities using non-informative or flat priors, for which each genotype has the same prior probability. However, in the Genotype Refinement Pipeline we use independent data such as the genotypes of the other samples in the dataset, the genotypes in a “gold standard” dataset, or the genotypes of the other samples in a family to construct more informative priors and derive better posterior probability estimates.
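As a toy illustration of this Bayes step (not GATK's actual implementation; the PL and prior values below are made up for the example), converting Phred-scaled likelihoods to posteriors looks like:

```python
def pl_to_lik(pls):
    """Convert Phred-scaled likelihoods PL = -10*log10(L) back to linear scale."""
    return [10 ** (-pl / 10) for pl in pls]

def posterior(pls, priors):
    """Bayes' rule: P(G|D) proportional to P(D|G) P(G), normalized over genotypes."""
    liks = pl_to_lik(pls)
    unnorm = [l * p for l, p in zip(liks, priors)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Genotype order: HomRef, Het, HomVar. PLs below marginally favor HomRef.
pls = [0, 3, 60]
flat = posterior(pls, [1 / 3, 1 / 3, 1 / 3])          # flat priors: call HomRef
informative = posterior(pls, [0.05, 0.90, 0.05])      # hypothetical family prior
assert flat.index(max(flat)) == 0
assert informative.index(max(informative)) == 1       # informative prior flips it
```

With flat priors the posterior ordering matches the likelihoods; an informative prior can flip a marginal call, which is exactly the mechanism by which family and population data refine genotypes.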
Calculation of Population Priors
Given a set of samples in addition to the sample of interest (ideally non-related, but from the same ethnic population), we can derive the prior probability of the genotype of the sample of interest by modeling the sample’s alleles as two independent draws from a pool consisting of the set of all the supplemental samples’ alleles. (This follows rather naturally from the Hardy-Weinberg assumptions.) Specifically, this prior probability will take the form of a multinomial Dirichlet distribution parameterized by the allele counts of each allele in the supplemental population. In the biallelic case the priors can be calculated as follows:
$$ P(GT = HomRef) = \dbinom{2}{0} \ln \frac{\Gamma(nSamples)\Gamma(RefCount + 2)}{\Gamma(nSamples + 2)\Gamma(RefCount)} $$
$$ P(GT = Het) = \dbinom{2}{1} \ln \frac{\Gamma(nSamples)\Gamma(RefCount + 1)\Gamma(AltCount + 1)}{\Gamma(nSamples + 2)\Gamma(RefCount)\Gamma(AltCount)} $$
$$ P(GT = HomVar) = \dbinom{2}{2} \ln \frac{\Gamma(nSamples)\Gamma(AltCount + 2)}{\Gamma(nSamples + 2)\Gamma(AltCount)} $$
where Γ is the Gamma function, an extension of the factorial function.
The prior genotype probabilities based on this distribution scale intuitively with number of samples. For example, a set of 10 samples, 9 of which are HomRef yield a prior probability of another sample being HomRef with about 90% probability whereas a set of 50 samples, 49 of which are HomRef yield a 97% probability of another sample being HomRef.
Calculation of Family Priors
Given a genotype configuration for a given mother, father, and child trio, we set the prior probability of that genotype configuration as follows:
$$ P(\vec{G}) = P(G_M, G_F, G_C) = \cases{ 1-10\mu-2\mu^2 & no MV \cr \mu & 1 MV \cr \mu^2 & 2 MVs} $$
where the 10 configurations with a single Mendelian violation are penalized by the de novo mutation probability μ and the two configurations with two Mendelian violations by μ^2. The remaining configurations are considered valid and are assigned the remaining probability to sum to one.
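The counts quoted above (10 single-violation and 2 double-violation configurations out of the 27 biallelic trio genotype combinations) can be verified by brute-force enumeration; a sketch:

```python
from itertools import product

GENOS = [(0, 0), (0, 1), (1, 1)]  # HomRef, Het, HomVar as sorted allele pairs

def mendelian_violations(mother, father, child):
    """Minimum number of child alleles unexplainable by transmission of one
    allele from each parent: 0 (valid), 1 (single MV), or 2 (double MV)."""
    best = 2
    for a in mother:
        for b in father:
            transmitted = sorted((a, b))
            mismatches = sum(x != y for x, y in zip(transmitted, sorted(child)))
            best = min(best, mismatches)
    return best

counts = {0: 0, 1: 0, 2: 0}
for m, f, c in product(GENOS, repeat=3):
    counts[mendelian_violations(m, f, c)] += 1

# Matches the text: 15 valid, 10 single-MV, 2 double-MV configurations.
assert counts == {0: 15, 1: 10, 2: 2}
```

The two double-violation cases are the ones where both parents are homozygous for the same allele and the child is homozygous for the other.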
This prior is applied to the joint genotype combination of the three samples in the trio. To find the posterior for any single sample, we marginalize over the remaining two samples as shown in the example below to find the posterior probability of the child having a HomRef genotype:
This quantity P(Gc|D) is calculated for each genotype, then the resulting vector is Phred-scaled and output as the Phred-scaled posterior probabilities (PPs).
|
Definition:Multiplication/Modulo Multiplication/Definition 2

Definition
Let $m \in \Z$ be an integer.
Let $\Z_m$ be the set of integers modulo $m$:
$\Z_m = \left\{{0, 1, \ldots, m-1}\right\}$

The operation of multiplication modulo $m$ is defined on $\Z_m$ as:

$\forall a, b \in \Z_m: a \times_m b = \left({a \times b}\right) \bmod m$

Also denoted as
Although the operation of multiplication modulo $m$ is denoted by the symbol $\times_m$, if there is no danger of confusion, the conventional multiplication symbols $\times, \cdot$ etc. are often used instead.
Also see

Results about modulo multiplication can be found here.
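A minimal illustration of the operation (the modulus and operand values are my own example choices):

```python
def mul_mod(a, b, m):
    """Multiplication modulo m on Z_m = {0, 1, ..., m-1}."""
    return (a * b) % m

m = 7
zm = range(m)
# Z_m is closed under x_m, and x_m is ordinary multiplication
# followed by reduction modulo m.
for a in zm:
    for b in zm:
        assert mul_mod(a, b, m) in zm
assert mul_mod(5, 6, 7) == 2   # 5 * 6 = 30, and 30 mod 7 = 2
```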
|
Shift the triangle to the origin by A -> A - A = 0; B -> B - A; C -> C - A.
Points in the plane of the shifted triangle can be expressed with {B - A, C - A} as a basis, in other words you have a linear expression for the translated point P in the form $\alpha$ (B - A) + $\beta$ (C - A).
For the given $x_4$ and $y_4$, translate them in the same way (P -> P - A), solve the two simultaneous equations for $\alpha, \beta$ then compute $z_4$ and translate back to the actual position by P -> P + A.
Further explanation.
The x, y, z axes are a basis for 3D space: they actually represent unit vectors $\hat x = (1, 0, 0)$; $\hat y = (0, 1, 0)$; $\hat z = (0, 0, 1)$. Any point in the 3D space can be represented as a linear combination of these unit vectors. The point A = $(x_1, y_1, z_1) $ for example is equivalent to $ x_1 \hat x + y_1 \hat y + z_1 \hat z$. The x, y, z axes are at right angles to each other, but you can in fact represent a point in 3D space by a combination of any three (non-zero) vectors so long as no two of them point in the same direction.
The triangle ABC lies in a plane, defined by the points A, B, and C. Shifting it to the origin (I moved the point A, but you could move any of the vertices) makes it a proper 2D space which includes the origin, (0, 0). You can represent any point in a 2D space by a combination of any two (non-zero) vectors so long as they don't point in the same direction. The translated points B - A and C - A are two such vectors (so long as the triangle is not degenerate) , so any point in the plane of the translated triangle can be represented as $\alpha$ (B - A) + $\beta$ (C - A).
Translate P (P -> P - A) in the same way so that it is in the plane of the translated triangle, and then for some $\alpha$ and $\beta$, P - A = $\alpha$ (B - A) + $\beta$ (C - A). Expand this out in co-ordinates:
(1) $x_4 - x_1 = \alpha (x_2 - x_1) + \beta(x_3 - x_1)$
(2) $y_4 - y_1 = \alpha (y_2 - y_1) + \beta(y_3 - y_1)$
(3) $z_4 - z_1 = \alpha (z_2 - z_1) + \beta(z_3 - z_1)$
Equations (1) and (2) are two equations in the two unknowns $\alpha$ and $\beta$, which you can solve. Then put $\alpha$ and $\beta$ into equation (3) to get $z_4$.
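A sketch of the whole procedure in code (the triangle coordinates below are made-up example values; the solve uses Cramer's rule on equations (1) and (2)):

```python
def interpolate_z(A, B, C, x4, y4):
    """Solve P - A = alpha*(B - A) + beta*(C - A) from the x and y
    components, then use the z component (equation (3)) to recover z4."""
    u = [B[i] - A[i] for i in range(3)]   # B - A
    v = [C[i] - A[i] for i in range(3)]   # C - A
    det = u[0] * v[1] - u[1] * v[0]
    if det == 0:
        raise ValueError("degenerate triangle: B - A and C - A are parallel")
    px, py = x4 - A[0], y4 - A[1]
    alpha = (px * v[1] - py * v[0]) / det
    beta = (u[0] * py - u[1] * px) / det
    return A[2] + alpha * u[2] + beta * v[2]

# Example triangle lying in the plane z = 2x + 3y, so the result must match.
A, B, C = (0.0, 0.0, 0.0), (1.0, 0.0, 2.0), (0.0, 1.0, 3.0)
assert abs(interpolate_z(A, B, C, 0.3, 0.4) - (2 * 0.3 + 3 * 0.4)) < 1e-12
```

As noted below, this works for any point in the plane of the triangle; checking that the point lies inside the triangle amounts to checking $\alpha \geq 0$, $\beta \geq 0$, $\alpha + \beta \leq 1$.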
Point to note
You say that you know that P is in the triangle. The process above works for any point in the plane of the triangle, but does nothing to check that P is actually inside the triangle.
|
“Unperformed measurements have no results.” —Asher Peres
With two looming paper deadlines, two rambunctious kids, an undergrad class, program committee work, faculty recruiting, and an imminent trip to Capitol Hill to answer congressional staffers’ questions about quantum computing (and for good measure, to give talks at UMD and Johns Hopkins), the only sensible thing to do is to spend my time writing a blog post.
So: a bunch of people asked for my reaction to the new
Nature Communications paper by Daniela Frauchiger and Renato Renner, provocatively titled “Quantum theory cannot consistently describe the use of itself.” Here’s the abstract:
Quantum theory provides an extremely accurate description of fundamental processes in physics. It thus seems likely that the theory is applicable beyond the, mostly microscopic, domain in which it has been tested experimentally. Here, we propose a Gedankenexperiment to investigate the question whether quantum theory can, in principle, have universal validity. The idea is that, if the answer was yes, it must be possible to employ quantum theory to model complex systems that include agents who are themselves using quantum theory. Analysing the experiment under this presumption, we find that one agent, upon observing a particular measurement outcome, must conclude that another agent has predicted the opposite outcome with certainty. The agents’ conclusions, although all derived within quantum theory, are thus inconsistent. This indicates that quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner.
I first encountered Frauchiger and Renner’s argument back in July, when Renner (who I’ve known for years, and who has many beautiful results in quantum information) presented it at a summer school in Boulder, CO where I was also lecturing. I was sufficiently interested (or annoyed?) that I pulled an all-nighter working through the argument, then discussed it at lunch with Renner as well as John Preskill. I enjoyed figuring out exactly where I get off Frauchiger and Renner’s train—since I
do get off their train. While I found their paper thought-provoking, I reject the contention that there’s any new problem with QM’s logical consistency: for reasons I’ll explain, I think there’s only the same quantum weirdness that (to put it mildly) we’ve known about for quite some time.
In more detail, the paper makes a big deal about how the new argument rests on just three assumptions (briefly, QM works, measurements have definite outcomes, and the “transitivity of knowledge”); and how if you reject the argument, then you must reject at least one of the three assumptions; and how different interpretations (Copenhagen, Many-Worlds, Bohmian mechanics, etc.) make different choices about what to reject.
But I reject an assumption that Frauchiger and Renner never formalize. That assumption is, basically: “it makes sense to chain together statements that involve superposed agents measuring each other’s brains in different incompatible bases, as if the statements still referred to a world where these measurements weren’t being done.” I say: in QM, even statements that look “certain” in isolation might really mean something like “
if measurement X is performed, then Y will certainly be a property of the outcome.” The trouble arises when we have multiple such statements, involving different measurements $X_1, X_2, \ldots$, and (let’s say) performing $X_1$ destroys the original situation in which we were talking about performing $X_2$.
But I’m getting ahead of myself. The first thing to understand about Frauchiger and Renner’s argument is that, as they acknowledge, it’s not entirely new. As Preskill helped me realize, the argument can be understood as simply the “Wigner’s-friendification” of Hardy’s Paradox. In other words, the new paradox is exactly what you get if you take Hardy’s paradox from 1992, and promote its entangled qubits to the status of conscious observers who are in superpositions over thinking different thoughts. Having talked to Renner about it, I don’t think he fully endorses the preceding statement. But since
I fully endorse it, let me explain the two ingredients that I think are getting combined here—starting with Hardy’s paradox, which I confess I didn’t know (despite knowing Lucien Hardy himself!) before the Frauchiger-Renner paper forced me to learn it.
Hardy’s paradox involves the two-qubit entangled state
$$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}}.$$
And it involves two agents, Alice and Bob, who measure the left and right qubits respectively, both in the {|+〉,|-〉} basis. Using the Born rule, we can straightforwardly calculate the probability that Alice and Bob both see the outcome |-〉 as 1/12.
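The 1/12 is easy to check directly from the Born rule. Here's a quick numerical sketch (the basis ordering and variable names are my own choices, not from the post):

```python
import numpy as np

# Hardy state |psi> = (|00> + |01> + |10>) / sqrt(3),
# with basis ordering |00>, |01>, |10>, |11>.
psi = np.array([1, 1, 1, 0]) / np.sqrt(3)

minus = np.array([1, -1]) / np.sqrt(2)   # the |-> state
bra_minus_minus = np.kron(minus, minus)  # <--| as a row vector

# Born rule: probability that Alice and Bob both see |->.
p_both_minus = abs(bra_minus_minus @ psi)**2
print(round(p_both_minus, 4))  # → 0.0833 (= 1/12)
```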
So what’s the paradox? Well, let me now “prove” to you that Alice and Bob can
never both get |-〉. Looking at |ψ〉, we see that conditioned on Alice’s qubit being in the state |0〉, Bob’s qubit is in the state |+〉, so Bob can never see |-〉. And conversely, conditioned on Bob’s qubit being in the state |0〉, Alice’s qubit is in the state |+〉, so Alice can never see |-〉. OK, but since |ψ〉 has no |11〉 component, at least one of the two qubits must be in the state |0〉, so therefore at least one of Alice and Bob must see |+〉!
When it’s spelled out so plainly, the error is apparent. Namely, what do we even
mean by a phrase like “conditioned on Bob’s qubit being in the state |0〉,” unless Bob actually measured his qubit in the {|0〉,|1〉} basis? But if Bob measured his qubit in the {|0〉,|1〉} basis, then we’d be talking about a different, counterfactual experiment. In the actual experiment, Bob measures his qubit only in the {|+〉,|-〉} basis, and Alice does likewise. As Asher Peres put it, “unperformed measurements have no results.”
Anyway, as I said, if you strip away the words and look only at the actual setup, it seems to me that Frauchiger and Renner’s contribution is basically to combine Hardy’s paradox with the earlier Wigner’s friend paradox. They thereby create something that doesn’t involve counterfactuals quite as obviously as Hardy’s paradox does, and so requires a new discussion.
But to back up: what
is Wigner’s friend? Well, it’s basically just Schrödinger’s cat, except that now it’s no longer a cat being maintained in coherent superposition but a person, and we’re emphatic in demanding that this person be treated as a quantum-mechanical observer. Thus, suppose Wigner entangles his friend with a qubit, like so:
$$ \left|\psi\right\rangle = \frac{\left|0\right\rangle \left|FriendSeeing0\right\rangle + \left|1\right\rangle \left|FriendSeeing1\right\rangle}{\sqrt{2}}. $$
From the friend’s perspective, the qubit has been measured and has collapsed to either |0〉 or |1〉. From Wigner’s perspective, no such thing has happened—there’s only been unitary evolution—and in principle, Wigner could even confirm that by measuring |ψ〉 in a basis that included |ψ〉 as one of the basis vectors. But how can they both be right?
Many-Worlders will yawn at this question, since for them,
of course “the collapse of the wavefunction” is just an illusion created by the branching worlds, and with sufficiently advanced technology, one observer might experience the illusion even while a nearby observer doesn’t. Ironically, the neo-Copenhagenists / Quantum Bayesians / whatever they now call themselves, though they consider themselves diametrically opposed to the Many-Worlders (and vice versa), will also yawn at the question, since their whole philosophy is about how physics is observer-relative and it’s sinful even to think about an objective, God-given “quantum state of the universe.” If, on the other hand, you believed both that collapse is an objective physical event, and human mental states can be superposed just like anything else in the physical universe,
then Wigner’s thought experiment probably
should rock your world.
OK, but how do we Wigner’s-friendify Hardy’s paradox? Simple: in the state
$$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}},$$
we “promote” Alice’s and Bob’s entangled qubits to two conscious observers, call them Charlie and Diane respectively, who can think two different thoughts that we represent by the states |0〉 and |1〉. Using far-future technology, Charlie and Diane have been not merely placed into coherent superpositions over mental states but also entangled with each other.
Then, as before, Alice will measure Charlie’s brain in the {|+〉,|-〉} basis, and Bob will measure Diane’s brain in the {|+〉,|-〉} basis. Since the whole setup is mathematically identical to that of Hardy’s paradox, the probability that Alice and Bob both get the outcome |-〉 is again 1/12.
Ah, but now we can reason as follows:
1. Whenever Alice gets the outcome |-〉, she knows that Diane must be in the |1〉 state (since, if Diane were in the |0〉 state, then Alice would’ve certainly seen |+〉).
2. Whenever Diane is in the |1〉 state, she knows that Charlie must be in the |0〉 state (since there’s no |11〉 component).
3. Whenever Charlie is in the |0〉 state, he knows that Diane is in the |+〉 state, and hence Bob can’t possibly see the outcome |-〉 when he measures Diane’s brain in the {|+〉,|-〉} basis.
So to summarize, Alice knows that Diane knows that Charlie knows that Bob can’t possibly see the outcome |-〉. By the “transitivity of knowledge,” this implies that Alice herself knows that Bob can’t possibly see |-〉. And yet, as we pointed out before, quantum mechanics predicts that Bob
can see |-〉, even when Alice has also seen |-〉. And Alice and Bob could even do the experiment, and compare notes, and see that their “certain knowledge” was false. Ergo, “quantum theory can’t consistently describe its own use”!
You might wonder: compared to Hardy’s original paradox, what have we gained by waving a magic wand over our two entangled qubits, and calling them “conscious observers”? Frauchiger and Renner’s central claim is that, by this gambit, they’ve gotten rid of the illegal counterfactual reasoning that we needed to reach a contradiction in our analysis of Hardy’s paradox. After all, they say, none of the steps in
their argument involve any measurements that aren’t actually performed! But clearly, even if no one literally measures Charlie in the {|0〉,|1〉} basis, he’s still there, thinking either the thought corresponding to |0〉 or the thought corresponding to |1〉. And likewise Diane. Just as much as Alice and Bob, Charlie and Diane both exist even if no one measures them, and they can reason about what they know and what they know that others know. So then we’re free to chain together the “certainties” of Alice, Bob, Charlie, and Diane in order to produce our contradiction.
As I already indicated, I reject this line of reasoning. Specifically, I get off the train at what I called step 3 above. Why? Because the inference from Charlie being in the |0〉 state to Bob seeing the outcome |+〉 holds for the
original state |ψ〉, but in my view it ceases to hold once we know that Alice is going to measure Charlie in the {|+〉,|-〉} basis, which would involve a drastic unitary transformation (specifically, a “Hadamard”) on the quantum state of Charlie’s brain. I.e., I don’t accept that we can take knowledge inferences that would hold in a hypothetical world where |ψ〉 remained unmeasured, with a particular “branching structure” (as a Many-Worlder might put it), and extend them to the situation where Alice performs a rather violent measurement on |ψ〉 that changes the branching structure by scrambling Charlie’s brain.
In quantum mechanics, measure or measure not: there is no
if you hadn’t measured.

Unrelated Announcement: My awesome former PhD student Michael Forbes, who’s now on the faculty at the University of Illinois Urbana-Champaign, asked me to advertise that the UIUC CS department is hiring this year in all areas, emphatically including quantum computing. And, well, I guess my desire to do Michael a solid outweighed my fear of being tried for treason by my own department’s recruiting committee…

Another Unrelated Announcement: As of Sept. 25, 2018, it is the official editorial stance of Shtetl-Optimized that the Riemann Hypothesis and the abc conjecture both remain open problems.
|
A friend of mine posed this brain teaser to me recently:
What's the length of shortest bit sequence that's never been sent over the Internet?
We can never know for sure because we don't have a comprehensive list of all the data. But what can we say probabilistically? Restating it like so:
At what value for X is there a 50% chance there's a sequence of X-bits in length that hasn't been transmitted yet?
What does your intuition say? Obviously every 8-bit sequence has been sent, since there are only 256 values. By downloading this HTML page over TLS you've probably used up every 8-bit value. Has every 100-byte message been sent?
This is how my intuition went: it's probably less than 128 bits because UUIDs are 128 bits, and they're universally unique. It's probably greater than 48 bits because of how common collisions are at that end for hashes and CRCs, and the Internet has generated a lot of traffic.
How would we determine the right value?
I decided to model data as each bit sent is like flipping a coin. This isn't strictly true, of course, but with encryption becoming more prevalent, it's getting to be close.
So how many flips of a coin does it take to expect to get n heads in a row?
I found this neat little paper deriving the following formula, where $n$ is the number of heads in a row and $E$ is the expected number of flips:
$$ E = 2^{n+1} - 2 $$
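A quick Monte Carlo sanity check of that formula (a sketch; the trial count, seed, and function names are arbitrary choices of mine):

```python
import random

random.seed(0)

def flips_until_n_heads(n):
    """Count coin flips until we first see n heads in a row."""
    count = run = 0
    while run < n:
        count += 1
        run = run + 1 if random.random() < 0.5 else 0
    return count

n, trials = 5, 20000
avg = sum(flips_until_n_heads(n) for _ in range(trials)) / trials
print(avg)  # should be close to E = 2**(n + 1) - 2 = 62
```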
We're looking for a specific sequence, though, not a specific number of heads in a row. We don't even know what the sequence is since it hasn't been sent yet. Is that a problem? Not at all! We're looking for some sequence of length $n$, and given that both 0 and 1 are equally likely, the sequence 00110 is equally likely as 11111.
(Of course, different sequences on the Internet are not all equally likely, but we're simplifying to make this calculable.)
We're looking for $n$, however, and not the number of flips. What should the number of flips be set to? We need to estimate the total amount of data ever sent over the Internet. I found a nice table estimating how many petabytes per month are sent for each year.
Adding them up gets you $3.4067 \cdot 10^{22}$ bits, which is in the same rough neighborhood as the number of grains of sand on Earth! Neat.
To solve for $n$:
\begin{equation} \begin{aligned} E &= 2^{n + 1} - 2 \\ 3.4067 \cdot 10^{22} &= 2^{n + 1} - 2 \\ \log_2 (3.4067 \cdot 10^{22} + 2) - 1 &= n \\ n &= 73.85 \end{aligned} \end{equation}
So there's a 50% chance a message of length $73.85$ bits has not been sent yet. This matched my intuition nicely!
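As a sanity check of the arithmetic (the traffic total below is the article's own estimate):

```python
import math

# Estimated total bits ever sent over the Internet, per the table above.
total_bits = 3.4067e22

# Treat the total traffic as the expected number of coin flips E,
# and solve E = 2**(n + 1) - 2 for n.
n = math.log2(total_bits + 2) - 1
print(round(n, 2))  # → 73.85
```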
Using some forecasting estimates from Cisco, here's how $n$ changes over the next few years:
2017 $n = 74.28$ 2018 $n = 74.67$ 2019 $n = 75.05$ 2020 $n = 75.41$ 2021 $n = 75.75$
What do you think? Am I right? Is there a different way to solve this problem? Let me know via email or Twitter.
|
Let $Z_i \sim \mathcal{N}(0,1)$ be independent normal distributions. Consider the following correlated variables, defined by $$ X_1 = \frac{Z_1 + Z_2}{\sqrt{2}},\;\;\;X_2= \frac{Z_2 + Z_3}{\sqrt{2}},\;\;\;X_3= \frac{Z_3 + Z_4}{\sqrt{2}},\ldots$$
Thus each $X_i$ by itself is also a standard normal distribution but is correlated to the immediate neighbours $X_{i-1}$ and $X_{i+1}$. Consider the joint distribution of $(X_1,X_2)$ which is a joint normal with mean = $(0,0)$ and covariance matrix $$\begin{pmatrix} 1 & 1/2 \\ 1/2 & 1 \end{pmatrix} $$
Now the thing is according to the rules of conditional probability the conditional variance for $X_1$ is $\left(1-\rho^2\right)\sigma_1^2 = \frac{3}{4}$ in this case. So far so good.
Suppose we then consider the joint normal $\left(X_1,X_2,X_3\right)$, which has the covariance matrix $$\begin{pmatrix} 1 & 1/2 & 0 \\ 1/2 & 1 & 1/2 \\ 0 & 1/2 & 1 \end{pmatrix}$$
In this case, the conditional variance of $X_1$ is given by
$$ 1 - \begin{pmatrix} 1/2 & 0 \end{pmatrix}\begin{pmatrix} 1 & 1/2 \\ 1/2 & 1 \end{pmatrix}^{-1}\begin{pmatrix} 1/2 \\ 0 \end{pmatrix} = 2/3$$
The questions I have now are:
1. Since $X_1$ does not depend on $X_3$ at all, why does the conditional variance of $X_1$ drop from $3/4$ to $2/3$ when $X_3$ is taken into account?
2. If I further include $X_4, X_5, \ldots$, the conditional variance seems to drop further and reaches a limit of $1/2$ when I include a very large number of $X_i$. Is there any intuitive explanation for this limit?
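The numbers in the question can be reproduced numerically. Here is a small sketch (the Schur complement below is the standard conditional-variance formula for a joint normal):

```python
import numpy as np

def cond_var(k):
    """Conditional variance of X_1 given X_2, ..., X_k for the
    tridiagonal covariance: 1 on the diagonal, 1/2 on the off-diagonals."""
    S = np.eye(k) + 0.5 * (np.eye(k, k=1) + np.eye(k, k=-1))
    # Schur complement: Var(X_1 | rest) = S_11 - S_1r S_rr^{-1} S_r1.
    return S[0, 0] - S[0, 1:] @ np.linalg.solve(S[1:, 1:], S[1:, 0])

print(round(cond_var(2), 4))   # → 0.75
print(round(cond_var(3), 4))   # → 0.6667
print(round(cond_var(50), 4))  # → 0.51 (tending to 1/2 as more X_i are added)
```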
|
We can find the arc length of a curve by cutting it up into tiny pieces and adding up the length of each of the pieces. If the pieces are small and the curve is differentiable then each piece will be approximately linear.
We can use the distance formula to find the length of each piece:
\[ L = \sqrt{ \left(\Delta{x}\right)^2+ \left(\Delta{y}\right)^2}. \]
Multiplying and dividing by \(\Delta{t} \) gives
\[ L = \sqrt{ \left(\dfrac{\Delta{x}}{\Delta{t}}\right)^2+ \left(\dfrac{\Delta{y}}{\Delta{t}}\right)^2} \, \Delta {t}.\]
Adding up all the lengths and taking the limit as \(\Delta{t} \) approaches 0 gives the formula
\[ L = \int _a^b {\sqrt{ \left(\frac{dx}{dt}\right)^2+ \left(\frac{dy}{dt}\right)^2} \, dt}.\]
Example \(\PageIndex{1}\)
Find the arc length of the curve defined parametrically by \( x(t) = t^2 + 4t \) and \( y(t) = 1 - t^2 \) for \( 0 \le t \le 2 \).
Solution
We calculate the derivatives:
\[ x' = 2t + 4 \;\;\; \text{and}\;\;\; y' = -2t. \nonumber\]
Hence, the integrand is
\[ \sqrt{ \left(\dfrac{dx}{dt}\right)^2+ \left(\dfrac{dy}{dt}\right)^2} = \sqrt{ 8\,t^2 + 16\, t + 16} \nonumber\]
and the full integral to solve, with limits,
\[ \int_0^2 \sqrt{ 8\,t^2 + 16\, t + 16} \, dt \nonumber\]
is difficult (but not impossible) to do by hand. Either by hand or by computer we get
\[ L \approx 12.74. \nonumber\]
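By way of a numerical check, here is a sketch using a stdlib-only composite Simpson's rule (the step count is an arbitrary choice):

```python
import math

# Arc-length integrand for x(t) = t**2 + 4*t, y(t) = 1 - t**2:
# sqrt(x'(t)**2 + y'(t)**2) = sqrt(8*t**2 + 16*t + 16).
def integrand(t):
    return math.sqrt(8 * t**2 + 16 * t + 16)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

print(round(simpson(integrand, 0, 2), 2))  # → 12.74
```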
Larry Green (Lake Tahoe Community College)
Integrated by Justin Marshall.
|
$\newcommand{\vec}[1]{\mathbf{#1}} \newcommand{\dd}{\mathrm{d}}$I'm reading Landau and Lifshitz' book on non-relativistic quantum mechanics and I have some doubts about a passage in the chapter about elastic scattering. I have the French edition of 1966 so I cannot quote precisely, but it should be in §125, from around equation (125.10).
While studying the rate of transitions in the continuous spectrum (dealing with free particles of given momenta) due to some potential $U$, it is written that «we normalise the outgoing wave function, with momentum $\vec p'$, as the Dirac delta in the momentum space \begin{equation*} \psi_{\vec{p}'}(\vec{x})=\frac1{(2\pi\hbar)^{3/2}}e^{\frac{i}{\hbar}\vec{p}'\cdot\vec{x}} \end{equation*} and the incoming wave function to unit current density \begin{equation*} \psi_{\vec{p}}(\vec{x})=\sqrt{\frac{m}{p}}e^{\frac{i}{\hbar}\vec{p}\cdot\vec{x}} \end{equation*} therefore the probability given by Fermi's golden rule \begin{equation*} \dd w_{\vec{p}\vec{p}'}=\frac{2\pi}{\hbar}\bigl\lvert \langle\vec{p}'|U|\vec{p}\rangle\bigr\rvert^2\delta\bigl(E(\vec{p})-E(\vec{p}')\bigr)\,\dd\nu \end{equation*} represents the differential cross section of the scattering process».
Here $m$ is the mass of the particle, $p=\lVert\vec{p}\rVert$ and $\dd\nu$ represents an "interval of states", in this case $\dd p_x\dd p_y\dd p_z$.
Now, my question: can the normalisation factor of a free particle be arbitrary? My feeling is that the authors did it "because it works" and because it gives the desired result, but probably I just don't know something that happens behind the curtains of this derivation. I get that free particle wave functions cannot be normalised anyway in $\mathbb{R}^3$, but does this mean that I can multiply them by whatever (constant scalar factor) I want?
When the equation for the golden rule for transitions between continuous spectrum states was introduced (§43 in my edition), the authors in fact wrote that $\dd w$ cannot be considered as a transition rate, since it doesn't even have the correct units (I guess that depends on how you "count the states": I could have used e.g. $\dd\nu=\dd p_x\dd p_y\dd p_z/\hbar^3$ as well).
How do I resolve all this arbitrariness?
|
Kyle Kanos's answer looks to be very full, but I thought I'd add my own experience. The split-step Fourier method (SSFM) is extremely easy to get running and fiddle with; you can prototype it in a few lines of Mathematica and it is extremely stable numerically. It involves imparting only unitary operators on your dataset, so it automatically conserves probability / power (the latter if you're solving Maxwell's equations with it, which is where my experience lies). For a one-dimensional Schrödinger equation (i.e. $x$ and $t$ variation only), it is extremely fast even as Mathematica code. And if you need to speed it up, you really only need a good FFT code in your target language (my experience lies with C++).
What you'd be doing is a disguised version of the Beam Propagation Method for optical propagation through a waveguide of varying cross section (analogous to time varying potentials), so it would be helpful to look this up too.
The way I look at the SSFM/BPM is as follows. Its grounding is the Trotter product formula of Lie theory:
$$\tag{1}\lim\limits_{m\to\infty}\left(\exp\left(\mathcal{D}\,\frac{t}{m}\right)\,\exp\left(\mathcal{V}\,\frac{t}{m}\right)\right)^m = \exp((\mathcal{D+V}) t)$$
which is sometimes called the operator splitting equation in this context. Your dataset is an $x-y$ or $x-y-z$ discretised grid of complex values representing $\psi(x,y,z)$ at a given time $t$. So you imagine this (you don't have to
do this; I'm still talking conceptually) whopping grid written as an $N$-element column vector $\Psi$ (for a $1024\times1024$ grid we have $N=1024^2=1\,048\,576$) and then your Schrödinger equation is of the form:
$$\tag{2}\mathrm{d}_t \Psi = K\Psi = (\mathcal{D+V}(t)) \Psi$$
where $K = \mathcal{D+V}$ is an $N\times N$ skew-Hermitian matrix, an element of $\mathfrak{u}(N)$, and $\Psi$ is going to be mapped with increasing time by an element of the one parameter group $\exp(K\,t)$. (I've sucked the $i\hbar$ factor into the $K = \mathcal{D+V}$ on the RHS so I can more readily talk in Lie theoretic terms). Given the size of $N$, the operators' natural habitat $\mathrm{U}(N)$ is a thoroughly colossal Lie group, so PHEW! yes, I am still talking in wholly theoretical terms! Now, what does $\mathcal{D+V}$ look like? Still imagining for now, it could be thought of as a finite difference version of $i\,\hbar\,\nabla^2/(2\,m) - i\hbar^{-1}V_0 + i\hbar^{-1}(V_0-V(x,y,z,t_0))$, where $V_0$ is some convenient "mean" potential for the problem at hand.
We let:
$$\tag{3}\begin{array}{lcl}\mathcal{D} &=& i\frac{\hbar}{2\,m} \nabla^2 - i\hbar^{-1}V_0\\\mathcal{V}&=&i\hbar^{-1}(V_0-V(x,y,z,t))\end{array}$$
Why I have split them up like this will become clear below.
The point about $\mathcal{D}$ is that it can be worked out analytically for a plane wave: it is a simple multiplication operator in momentum co-ordinates. So, to work out $\Psi\mapsto\exp(\Delta t\,\mathcal{D}) \Psi$, here are the first three steps of a SSFM/BPM cycle:
1. Impart an FFT to the dataset $\Psi$ to transform it into a set $\tilde{\Psi}$ of superposition weights of plane waves: now the grid co-ordinates have been changed from $x,\,y,\,z$ to $k_x,\,k_y,\,k_z$;
2. Impart $\tilde{\Psi}\mapsto\exp(\Delta t\,\mathcal{D}) \tilde{\Psi}$ by simply multiplying each point on the grid by $\exp\left(-i\,\Delta t \left(\frac{\hbar^2 (k_x^2+k_y^2+k_z^2)}{2\,m} + V_0\right)/\hbar\right)$;
3. Impart an inverse FFT to map our grid back to $\exp(\Delta t\,\mathcal{D}) \Psi$. Now we're back in the position domain, which is the better domain in which to impart the operator $\mathcal{V}$: here $\mathcal{V}$ is a simple multiplication operator. So here is the last step of your algorithmic cycle:
4. Impart the operator $\Psi\mapsto\exp(\Delta t\,\mathcal{V}) \Psi$ by simply multiplying each point on the grid by the phase factor $\exp(i\,\Delta t\,(V_0-V(x,y,z,t))/\hbar)$
....and then you begin your next $\Delta t$ step and cycle over and over. Clearly it is very easy to put time-varying potentials $V(x,y,z,t)$ into the code.
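The cycle just described can be sketched in a few lines of numpy. This is my own minimal 1-D illustration, with the assumptions $\hbar = m = 1$, $V_0 = 0$, and an arbitrary harmonic potential and grid; none of these choices come from the answer itself:

```python
import numpy as np

# Minimal 1-D split-step sketch with hbar = m = 1 and V_0 = 0.
# Grid size, time step, and the harmonic potential are illustrative choices.
N, box = 1024, 40.0
dx = box / N
x = np.linspace(-box / 2, box / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2
dt, steps = 0.001, 500

psi = np.exp(-(x - 1.0)**2).astype(complex)   # some initial wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalise

kinetic_phase = np.exp(-0.5j * k**2 * dt)     # exp(dt * D) in k-space
potential_phase = np.exp(-1j * V * dt)        # exp(dt * V) in x-space

for _ in range(steps):
    psi = np.fft.ifft(kinetic_phase * np.fft.fft(psi))  # steps 1-3
    psi *= potential_phase                              # step 4

# Only unitary operations were applied, so the norm is conserved.
print(round(np.sum(np.abs(psi)**2) * dx, 6))  # → 1.0
```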
So you see you simply choose $\Delta t$ small enough that the Trotter formula (1) kicks in: you're simply approximating the action of the operator $\exp(\mathcal{D+V}\,\Delta t)\approx\exp(\mathcal{D}\,\Delta t)\,\exp(\mathcal{V}\,\Delta t)$ and you flit back and forth with your FFT between position and momentum co-ordinates, i.e. the domains where $\mathcal{V}$ and $\mathcal{D}$ are simple multiplication operators.
Notice that you are only ever imparting, even in the discretised world, unitary operators: FFTs and pure phase factors.
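The Trotter formula (1) underpinning all of this is also easy to check numerically. Here's a toy check with $2\times 2$ matrices; the two non-commuting skew-Hermitian generators are arbitrary choices of mine, and the truncated-Taylor matrix exponential is just to keep the sketch dependency-free:

```python
import numpy as np

def expm(A, terms=40):
    # Matrix exponential via a truncated Taylor series (fine for small norms).
    out = np.eye(len(A), dtype=complex)
    term = np.eye(len(A), dtype=complex)
    for j in range(1, terms):
        term = term @ A / j
        out = out + term
    return out

# Two non-commuting skew-Hermitian generators (arbitrary toy choices).
D = np.array([[0, 1], [-1, 0]], dtype=complex)
V = np.array([[1j, 0], [0, -1j]])

# Trotter: (exp(D t/m) exp(V t/m))^m approaches exp((D + V) t) as m grows.
t, m = 1.0, 1000
step = expm(D * (t / m)) @ expm(V * (t / m))
trotter = np.linalg.matrix_power(step, m)
exact = expm((D + V) * t)
print(np.allclose(trotter, exact, atol=1e-2))  # → True
```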
One point you do need to be careful of is that as your $\Delta t$ becomes small, you must make sure that the spatial grid spacing shrinks as well. Otherwise, suppose the spatial grid spacing is $\Delta x$. Then the physical meaning of the one discrete step is that the diffraction effects are travelling at a velocity $\Delta x/\Delta t$; when simulating Maxwell's equations and waveguides, you need to make sure that this velocity is much smaller than $c$. I daresay like limits apply to the Schrödinger equation: I don't have direct experience here but it does sound fun and maybe you could post your results sometime!
A second "experience" point with this kind of thing - I'd be almost willing to bet this is how you'll wind up following your ideas. We often have ideas for simple, quick-and-dirty simulations, but it never quite works out that way! I'd begin with the SSFM as I've described above, as it is very easy to get running and you'll quickly see whether or not its results are physical. Later on you can use your, say, Mathematica SSFM code to check the results of more sophisticated code you might end up building, say a Crank-Nicolson code along the lines of Kyle Kanos's answer.
Error Bounds
The Dynkin formula realisation of the Baker-Campbell-Hausdorff Theorem:
$$\exp(\mathcal{D}\,\Delta t)\,\exp(\mathcal{V}\,\Delta t) = \exp\left((\mathcal{D}+\mathcal{V})\Delta t + \frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2 + \cdots\right)$$ converging for some $\Delta t>0$ shows that the method is accurate to second order, and one can show that:
$$\exp(\mathcal{D}\,\Delta t)\,\exp(\mathcal{V}\,\Delta t)\,\exp\left(-\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2\right) = \exp\left((\mathcal{D}+\mathcal{V})\Delta t + \mathcal{O}(\Delta t^3)\right)$$
You can, in theory, therefore use the term $\exp\left(-\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2\right)$ to estimate the error and set your $\Delta t$ accordingly. This is not as easy as it looks, and in practice the bounds end up being rough estimates of the error instead. The problem is that:
$$\frac{\Delta t^2}{2}[\mathcal{D},\,\mathcal{V}] = -\frac{i\,\Delta t^2}{2\,m}\,\left(\partial_x^2 V(x,\,t) + 2 \partial_x V(x,\,t)\,\partial_x\right)$$
and there are no readily available co-ordinates wherein $[\mathcal{D},\,\mathcal{V}]$ is a simple multiplication operator. So you have to be content with $\exp\left(-\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,\Delta t^2\right) \approx e^{-i\,\varphi\,\Delta t^2}\left(\mathrm{id} -\left(\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,-i\,\varphi(t)\right)\,\Delta t^2\right)$ and use this to estimate your error, by working out $\left(\mathrm{id} -\left(\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,-i\,\varphi(t)\right)\,\Delta t^2\right) \,\psi$ for your currently evolving solution $\psi(x,\,t)$ and using this to set your $\Delta t$ on-the-fly after each cycle of the algorithm. You can of course make these ideas the basis for an adaptive stepsize controller for your simulation. Here $\varphi$ is a global phase pulled out of the dataset to minimise the norm of $\left(\frac{1}{2} [\mathcal{D},\,\mathcal{V}]\,-i\,\varphi(t)\right)\,\Delta t^2$; you can of course often throw such a global phase out: depending on what you're doing with the simulation results, often we're not bothered by a constant global phase $\exp\left(\int \varphi\,\mathrm{d}t\right)$.
A relevant paper about errors in the SSFM/BPM is:
Lars Thylén. "The Beam Propagation Method: An Analysis of its Applicability",
Optical and Quantum Electronics 15 (1983) pp433-439.
Lars Thylén thinks about the errors in non-Lie theoretic terms (Lie groups are my bent, so I like to look for interpretations of them) but his ideas are essentially the same as the above.
|
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
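As a quick numerical illustration of the two-reflections fact in the plane (my own sketch, not from the chat), a rotation by $\phi$ is the composite of reflections across lines at angles $0$ and $\phi/2$:

```python
import numpy as np

def reflection(theta):
    # Reflection of the plane across the line through the origin at angle theta.
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

def rotation(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

# Composing reflections at angles 0 and phi/2 gives the rotation by phi,
# i.e. n = 2 reflections suffice for a proper rotation in dimension 2.
phi = 0.7
print(np.allclose(reflection(phi / 2) @ reflection(0.0), rotation(phi)))  # → True
```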
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor, but otherwise I don't see why it would be something natural to come up with. The only places I have used it are deriving the Poisson bracket of two one-forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking caring of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection.
Alright, so to verify that $d^2 \neq 0$ in general, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$; something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
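A quick stdlib Monte Carlo sketch for checking this (my assumption: $\Gamma(2, \frac{2}{\lambda})$ is the shape–scale convention, which matches Python's `random.gammavariate`). Under that reading each $X_i$ has mean $4/\lambda$ and variance $8/\lambda^2$, so $\mathbb{E}[\bar X^2/2] = \operatorname{Var}(\bar X)/2 + (\mathbb{E}\bar X)^2/2 = \frac{4}{n\lambda^2} + \frac{8}{\lambda^2}$ — worth comparing against the expression above:

```python
import random

random.seed(0)  # reproducible

def mc_half_mean_sq(n, lam, trials=100_000):
    """Monte Carlo estimate of E[ (mean of X_1..X_n)^2 / 2 ]
    with X_i ~ Gamma(shape=2, scale=2/lam)."""
    total = 0.0
    for _ in range(trials):
        xbar = sum(random.gammavariate(2, 2 / lam) for _ in range(n)) / n
        total += xbar * xbar / 2
    return total / trials

n, lam = 4, 1.0
est = mc_half_mean_sq(n, lam)
# Var(xbar)/2 + (E xbar)^2/2, with E X_i = 4/lam and Var X_i = 8/lam^2
analytic = 4 / (n * lam**2) + 8 / lam**2
```

If your parametrization is rate rather than scale, rescale accordingly; the Monte Carlo loop is the same.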
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $(D_\alpha)$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps, champ. Remember, girls are not as significant as you might think; design an attachment for a cordless drill and a fleshlight that oscillates perpendicular to the drill's rotation and you're done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute-force one, and it actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
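For what it's worth, the brute force is easy to mechanize — a small sketch (plain Python, my own illustration) that closes pairs of generators and records which subgroup orders show up:

```python
from itertools import product, permutations

def compose(p, q):
    # (p . q)(i) = p[q[i]], permutations stored as tuples
    return tuple(p[i] for i in q)

def generated(gens):
    # close a set of permutations under composition
    n = len(gens[0])
    group = {tuple(range(n))} | set(gens)
    while True:
        new = {compose(g, h) for g in group for h in group} - group
        if not new:
            return group
        group |= new

S4 = list(permutations(range(4)))
orders = {len(generated([g, h])) for g, h in product(S4, repeat=2)}
```

Running it, the realized orders are exactly the divisors of $24$, so every case on the list is already hit by a subgroup with at most two generators.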
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are the 180-degree rotations about the $x$-, $y$- and $z$-axes. But composing the 180-degree rotation about $x$ with the one about $y$ gives you the 180-degree rotation about $z$, indicative of the $ab = c$ relation in Klein's four-group
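That composition rule is a one-line check if you write the 180-degree rotations as diagonal sign matrices (my encoding, not from the chat above):

```python
# 180-degree rotations about the x, y, z axes, stored as the
# diagonals of their (diagonal) rotation matrices
Rx = (1, -1, -1)
Ry = (-1, 1, -1)
Rz = (-1, -1, 1)

def mul(a, b):
    # product of diagonal matrices = elementwise product of diagonals
    return tuple(p * q for p, q in zip(a, b))

identity = (1, 1, 1)
checks = (
    mul(Rx, Ry) == Rz,        # the ab = c relation
    mul(Ry, Rz) == Rx,
    mul(Rx, Rx) == identity,  # each element is an involution
)
```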
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say
It has "been known" since 1908 that for any such function, this holds for all but countably many real numbers $c$, even when we additionally require all the sequences to approach $c$ from the same side.
In May 1908 William Henry Young presented several results for general functions from $\mathbb R$ to ${\mathbb R},$ including a result implying that, given any such function, all but countably many real numbers $c$ have the property you're asking about. These results (for more about them, see my answer here) may have been joint work with his wife, Grace Chisholm Young, and the results were published in the 1908 paper cited below.
Young showed that for co-countably many real numbers $c$ we have
$$f(c) \in C^{-}(f,c) \;\; \text{and} \;\; f(c) \in C^{+}(f,c)$$
Definition: Given a function $f: {\mathbb R} \rightarrow {\mathbb R}$ and $c \in {\mathbb R}$, we let $C^{-}(f,c)$ be the set of all extended real numbers $y$ (i.e. $y$ can be $-\infty$ or $+\infty$) for which there exists a sequence $\left\{x_{k}\right\}$ such that for each $k$ we have $x_k < c,$ and we have $x_{k} \rightarrow c$ and $f(x_k) \rightarrow y.$ In other words, $C^{-}(f,c)$ is the set of all numbers (including $-\infty$ and $+\infty$) that can be obtained as a limit of $f$-values when using some sequence converging to $c$ from the left. The right version, $C^{+}(f,c),$ is defined analogously.
Incidentally, the requirement in this definition that each $x_k < c$ (and also each $x_k > c)$ allows you to find such sequences converging to $c$ that have infinitely many values.
William Henry Young,
Sulle due funzioni a più valori costituite dai limiti d'una variabile reale a destra e a sinistra di ciascun punto [On the two functions of multiple values that are determined by the left and right limits of a real variable at each point], Atti della Accademia Reale dei Lincei. Rendiconti. Classe di Scienze fisiche, Matematiche e Naturali (5) 17 #9 (1st semestre) (1908), 582-587. [Paper given at session dated 3 May 1908.]
LaTeX:Symbols
This article will provide a short list of commonly used LaTeX symbols.
Here are some external resources for finding less commonly used symbols:
Detexify is an app which allows you to draw the symbol you'd like and shows you the code for it! MathJax (what allows us to use LaTeX on the web) maintains a list of supported commands. The Comprehensive LaTeX Symbol List.

Operators

Relations
Symbol Command Symbol Command Symbol Command \le \ge \neq \sim \ll \gg \doteq \simeq \subset \supset \approx \asymp \subseteq \supseteq \cong \smile \sqsubset \sqsupset \equiv \frown \sqsubseteq \sqsupseteq \propto \bowtie \in \ni \prec \succ \vdash \dashv \preceq \succeq \models \perp \parallel \mid \bumpeq
Negations of many of these relations can be formed by just putting \not before the symbol, or by slipping an n between the \ and the word. Here are a few examples, plus a few other negations; it works for many of the others as well.
Symbol Command Symbol Command Symbol Command \nmid \nleq \ngeq \nsim \ncong \nparallel \not< \not> \not= \not\le \not\ge \not\sim \not\approx \not\cong \not\equiv \not\parallel \nless \ngtr \lneq \gneq \lnsim \lneqq \gneqq
To use other relations not listed here, such as =, >, and <, in LaTeX, you may just use the symbols on your keyboard.
Greek Letters
Symbol Command Symbol Command Symbol Command Symbol Command \alpha \beta \gamma \delta \epsilon \varepsilon \zeta \eta \theta \vartheta \iota \kappa \lambda \mu \nu \xi \pi \varpi \rho \varrho \sigma \varsigma \tau \upsilon \phi \varphi \chi \psi \omega
Symbol Command Symbol Command Symbol Command Symbol Command \Gamma \Delta \Theta \Lambda \Xi \Pi \Sigma \Upsilon \Phi \Psi \Omega

Arrows
Symbol Command Symbol Command \gets \to \leftarrow \Leftarrow \rightarrow \Rightarrow \leftrightarrow \Leftrightarrow \mapsto \hookleftarrow \leftharpoonup \leftharpoondown \rightleftharpoons \longleftarrow \Longleftarrow \longrightarrow \Longrightarrow \longleftrightarrow \Longleftrightarrow \longmapsto \hookrightarrow \rightharpoonup \rightharpoondown \leadsto \uparrow \Uparrow \downarrow \Downarrow \updownarrow \Updownarrow \nearrow \searrow \swarrow \nwarrow
(For those of you who hate typing long strings of letters, \iff and \implies can be used in place of \Longleftrightarrow and \Longrightarrow respectively.)
Dots
Symbol Command Symbol Command Symbol Command Symbol Command \cdot \vdots \dots \cdots \ddots \iddots

Accents
Symbol Command Symbol Command Symbol Command \hat{x} \check{x} \dot{x} \breve{x} \acute{x} \ddot{x} \grave{x} \tilde{x} \mathring{x} \bar{x} \vec{x}
When applying accents to i and j, you can use \imath and \jmath to keep the dots from interfering with the accents:
Symbol Command Symbol Command \vec{\jmath} \tilde{\imath}
\tilde and \hat have wide versions that allow you to accent an expression:
Symbol Command Symbol Command \widehat{7+x} \widetilde{abc}

Others Command Symbols
Some symbols are used in commands so they need to be treated in a special way.
Symbol Command Symbol Command Symbol Command Symbol Command \textdollar or $ \& \% \# \_ \{ \} \backslash
(Warning: Using $ for will result in . This is a bug as far as we know. Depending on the version of this is not always a problem.)
European Language Symbols
Symbol Command Symbol Command Symbol Command Symbol Command {\oe} {\ae} {\o} {\OE} {\AE} {\AA} {\O} {\l} {\ss} !` {\L} {\SS}

Bracketing Symbols
In mathematics, sometimes we need to enclose expressions in brackets or braces or parentheses. Some of these work just as you'd imagine in LaTeX; type ( and ) for parentheses, [ and ] for brackets, and | and | for absolute value. However, other symbols have special commands:
Symbol Command Symbol Command Symbol Command \{ \} \| \backslash \lfloor \rfloor \lceil \rceil \langle \rangle
You might notice that if you use any of these to typeset an expression that is vertically large, like
(\frac{a}{x} )^2
the parentheses don't come out the right size:
If we put \left and \right before the relevant parentheses, we get a prettier expression:
\left(\frac{a}{x} \right)^2
gives
\left and \right can also be used to resize the following symbols:
Symbol Command Symbol Command Symbol Command \uparrow \downarrow \updownarrow \Uparrow \Downarrow \Updownarrow

Multi-Size Symbols
Some symbols render differently in inline math mode and in display mode. Display mode occurs when you use \[...\] or $$...$$, or environments like \begin{equation}...\end{equation}, \begin{align}...\end{align}. Read more in the commands section of the guide about how symbols which take arguments above and below the symbols, such as a summation symbol, behave in the two modes.
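For example, the same summation typed both ways:

```latex
% inline: limits attach as sub/superscripts next to the operator
The sum $\sum_{k=1}^{n} k$ sits in the text line.

% display: limits go above and below the operator
\[
  \sum_{k=1}^{n} k = \frac{n(n+1)}{2}
\]
```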
In each of the following, the two images show the symbol in display mode, then in inline mode.
Symbol Command Symbol Command Symbol Command \sum \int \oint \prod \coprod \bigcap \bigcup \bigsqcup \bigvee \bigwedge \bigodot \bigotimes \bigoplus \biguplus
I am looking for a proof, a hint or an idea to the following problem:
Is the unique solution $x\in (0,2\pi)$ of
$$ x\sin(x) + \cos(x) = 1 $$
which is equivalent to
$$ 2\arctan(x) = x$$
a rational multiple of $\pi$? I.e., is $\frac{x}{\pi} \in \mathbb{Q}$?
I believe that it is not. This belief is based on the numerical solution, which does not look very rational:
One idea is to use Thomas Andrews answer:
$\arctan(x)$ is a rational multiple of $\pi$ if and only if the complex number $1+xi$ has the property that $(1+xi)^n$ is a real number for some positive integer $n$. This is not possible if $x$ is a rational, $|x|\neq 1$, because $(q+pi)^n$ cannot be real for any $n$ if $(q,p)=1$ and $|qp|>1$. So $\arctan(\frac{p}{q})$ cannot be a rational multiple of $\pi$. (His full answer and proof can be found here: ArcTan(2) a rational multiple of $\pi$?)
Now, one would need to show: if $x$ is a rational multiple of $\pi$, is there an $n$ such that $(q+p\pi i)^n$ is real? For $p,q,n \in \{1,\ldots,100\}$ Mathematica says no:
Do[Do[Do[If[(p + Pi*I*q)^n \[Element] Rationals, Print[n, p, q],], {n, 1, 100}], {p, 1, 100}], {q, 1, 100}]
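Not asked for, but here is a quick stdlib sanity check of the setup (my own addition): bisection on $x\sin x + \cos x - 1$ in $(0,2\pi)$ recovers the nonzero root and confirms numerically that it also satisfies $2\arctan x = x$ — which of course says nothing about rationality:

```python
import math

def f(x):
    return x * math.sin(x) + math.cos(x) - 1.0

# the nonzero root lies between pi/2 and pi (f changes sign there)
lo, hi = math.pi / 2, math.pi
for _ in range(100):  # bisection down to machine precision
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
x = (lo + hi) / 2
ratio = x / math.pi  # roughly 0.742, with no obviously rational look
```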
Thanks in advance.
Let $X$ be a compact metric space with its Borel $\sigma$-algebra. Suppose $\mu_{n}(A)\rightarrow\mu(A)$ for all $\mu$-continuity sets $A$ (sets with zero boundary measure), where $\mu_{n}$ is a sequence of probability measures. (Some people call this weak convergence, others weak* convergence.)
If $E$ is a measurable set such that $\mu(E)>0$ and the Cesàro averages of $\mu_{n}(E)$ converge, can we conclude that $\mu_{n}(E)$ converges?
Can we conclude this with extra hypotheses?
I am particularly interested in the case when $T:X\rightarrow X$ is a continuous transformation, $\mu_{n}=T^{n}\mu_{1}$, and $E=\bigcap T^{-i}A_{i}$, where $A_{i}$ is a sequence of $\mu$-continuity sets.
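At the level of bare sequences, Cesàro convergence of $\mu_n(E)$ is strictly weaker than convergence, as the alternating sequence shows; whether such a sequence can actually be realized by measures satisfying the stated weak-convergence hypothesis is the real content of the question. A minimal numeric illustration (my own, not from the post):

```python
# a_n = mu_n(E) alternating between 1 and 0: no limit, but the
# Cesaro averages converge to 1/2
a = [n % 2 for n in range(1, 10001)]
cesaro = [sum(a[:n]) / n for n in (10, 100, 1000, 10000)]
```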
This is essentially an addition to the list of @4tnemele
I'd like to add some earlier work to this list, namely Discrete Gauge Theory.
Discrete gauge theory in 2+1 dimensions arises by breaking a gauge symmetry with gauge group $G$ to some smaller discrete subgroup $H$, via a Higgs mechanism. The force carriers ('photons') become massive, which makes the gauge force ultra-short-ranged. However, as the gauge group is not completely broken, we still have the Aharonov-Bohm effect. If $H$ is Abelian this AB effect is essentially a 'topological force'. It gives rise to a phase change when one particle loops around another particle. This is the idea of fractional statistics of Abelian anyons.
The particle types that we can construct in such a theory (i.e. the ones that are "color neutral") are completely determined by the residual, discrete gauge group $H$. To be more precise: a particle is said to be charged if it carries a representation of the group $H$. The number of different particle types that carry a charge is then equal to the number of irreducible representations of the group $H$. This is similar to ordinary Yang-Mills theory, where charged particles (quarks) carry the fundamental representation of the gauge group ($SU(3)$). In a discrete gauge theory we can label all possible charged particle types using the representation theory of the discrete gauge group $H$.
But there are also other types of particles that can exist, namely those that carry flux. These flux-carrying particles are also known as magnetic monopoles. In a discrete gauge theory the flux-carrying particles are labeled by the conjugacy classes of the group $H$. Why conjugacy classes? Well, we can label flux-carrying particles by elements of the group $H$. A gauge transformation is performed through conjugation, where $ |g_i\rangle \rightarrow |hg_ih^{-1}\rangle $ for all particle states $|g_i\rangle$ (suppressing the coordinate label). Since states related by gauge transformations are physically indistinguishable, the only distinct flux-carrying particles we have are labeled by conjugacy classes.
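To make the counting concrete, here is a small sketch (my own illustration) with $H = S_3$, the smallest non-Abelian candidate: the flux sectors are its conjugacy classes, and since the number of irreps of a finite group equals its number of conjugacy classes, there are three charge sectors as well.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

H = list(permutations(range(3)))  # S3 as the residual gauge group
classes, seen = [], set()
for g in H:
    if g in seen:
        continue
    # gauge transformations act by conjugation: g -> h g h^-1
    cls = {compose(h, compose(g, inverse(h))) for h in H}
    classes.append(cls)
    seen |= cls
# three flux sectors: {e}, the transpositions, the 3-cycles
```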
Is that all then? Nope. We can also have particles which carry both charge and flux -- these are known as dyons. They are labeled by both an irrep and a conjugacy class of the group $H$. But, for reasons which I wont go into, we cannot take all possible combinations of possible charges and fluxes.
(It has to do with the distinguishability of the particle types. Essentially, a dyon is labeled by $|\alpha, \Pi\rangle$, where $\alpha$ is a conjugacy class and $\Pi$ is a representation of the associated normalizer $N(\alpha)$ of the conjugacy class $\alpha$.)
The downside of this approach is the rather unequal setting of flux carrying particles (which are labeled by conjugacy classes), charged particles (labeled by representations) and dyons (flux+compatible charge). A unifying picture is provided by making use of the (quasitriangular) Hopf algebra $D(H)$ also known as a quantum double of the group $H$.
In this language
all particles are (irreducible) representations of the Hopf algebra $D(H)$. A Hopf algebra is endowed with certain structures which have very physical counterparts. For instance, the existence of a tensor product allows for multiple-particle configurations (each particle labeled by its own representation of the Hopf algebra). The co-multiplication then defines how the algebra acts on this tensored space. The existence of an antipode (a certain mapping from the algebra to itself) ensures the existence of an anti-particle. The existence of a unit labels the vacuum (= trivial particle).
We can also go beyond the structure of a Hopf algebra and include the notion of an R-matrix. In fact, the quasitriangular Hopf algebra (i.e. the quantum double) does precisely this: it adds the R-matrix. This R-matrix describes what happens when one particle loops around another particle (braiding). For non-Abelian groups $H$ this leads to non-Abelian statistics. These quasitriangular Hopf algebras are also known as quantum groups.
Nowadays the language of discrete gauge theory has been replaced by more general structures, referred to as topological field theories, anyon models, or even modular tensor categories. The subject is huge, very rich, very physical and a lot of fun (if you're a bit nerdy ;)).
Sources:
http://arxiv.org/abs/hep-th/9511201 (discrete gauge theory)
http://www.theory.caltech.edu/people/preskill/ph229/ (lecture notes: check out chapter 9. Quite accessible!)
http://arxiv.org/abs/quant-ph/9707021 (a simple lattice model with anyons. There are definitely more accessible review articles of this model out there though.)
http://arxiv.org/abs/0707.1889 (review article, which includes potential physical realizations)
Exam-Style Question on Logarithms

A mathematics exam-style question with a worked solution that can be revealed gradually
Question id: 410. This question is similar to one that appeared on an IB AA Standard paper (specimen) for 2021. The use of a calculator is allowed.
(a) Show that \( \log_4 (\sin 2x +2) = \log_2 \sqrt{\sin 2x + 2 }\)
(b) Hence or otherwise solve \( \log_2 (2 \cos x) = \log_4 (\sin 2x + 2) \) to show that \(x = \frac12 \tan^{-1} 2 \).
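A sketch of one route (my own working, not the Transum solution):

```latex
% (a) change of base, since \log_2 4 = 2:
\log_4 (\sin 2x + 2)
  = \frac{\log_2 (\sin 2x + 2)}{\log_2 4}
  = \tfrac{1}{2}\log_2 (\sin 2x + 2)
  = \log_2 \sqrt{\sin 2x + 2}

% (b) equate the arguments of \log_2 and square (valid where \cos x > 0):
2\cos x = \sqrt{\sin 2x + 2}
\implies 4\cos^2 x = \sin 2x + 2
\implies 2(1 + \cos 2x) = \sin 2x + 2   % double angle: 2\cos^2 x = 1 + \cos 2x
\implies 2\cos 2x = \sin 2x
\implies \tan 2x = 2
\implies x = \tfrac{1}{2}\tan^{-1} 2
```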
The worked solutions to these exam-style questions are only available to those who have a Transum Subscription.
Subscribers can drag down the panel to reveal the solution line by line. This is a very helpful strategy for the student who does not know how to do the question but given a clue, a peep at the beginnings of a method, they may be able to make progress themselves.
This could be a great resource for a teacher using a projector or for a parent helping their child work through the solution to this question. The worked solutions also contain screen shots (where needed) of the step by step calculator procedures.
A subscription also opens up the answers to all of the other online exercises, puzzles and lesson starters on Transum Mathematics and provides an ad-free browsing experience.
If you need more practice try the self-checking interactive exercises called Logarithms.
©1997 - 2019 Transum Mathematics :: For more exam-style questions and worked solutions go to Transum.org/Maths/Exam/
And with system of equations:
\left\{\begin{array}{l}x+y=3\\2x+y=5\end{array}\right.
Gives
See that there's a dot after
\right. You must put that dot or the code won't work.
And, if you type this
\underbrace{a_0+a_1+a_2+\cdots+a_n}_{x}
Gives
Or
\overbrace{a_0+a_1+a_2+\cdots+a_n}^{x}
Gives
For the proof of Lemma 1 we need some auxiliary results on the eigenvalues of \({\mathbf {M}}_{22}-m_2^2{\mathbf {J}}_{C(K,2)}\) for a symmetric invariant design when \(K\ge 4\).
Lemma 3
Let
$$\begin{aligned} \lambda _{\varvec{1}}= 1+2(K-2)m_{2} +\textstyle {\frac{1}{2}}(K-2)(K-3)m_{4} - \textstyle {\frac{1}{2}}K(K-1)m_{2}^2 \end{aligned}$$
be the eigenvalue of \({\mathbf {M}}_{22}-m_2^2{\mathbf {J}}_{C(K,2)}\) associated with the eigenvector \(\varvec{1}_{C(K,2)}\). Then \(\lambda _{\varvec{1}}>0\) if and only if the support of \({\bar{\xi }}\) includes at least two distinct symmetric orbits.
Proof
First note that for each invariant design \({\bar{\xi }}_k\) on a single orbit \({\mathcal {O}}_k\) we get by inserting the moments that the corresponding eigenvalue \(\lambda _{\varvec{1}}({\bar{\xi }}_k)\) is zero. Let \({\tilde{\xi }}_{k}\) be the corresponding symmetric invariant design on the symmetric orbit \(\tilde{{\mathcal {O}}}_{k}\). Then \(m_{2}({\tilde{\xi }}_k) = m_{2}({\bar{\xi }}_{k})\), \(m_{4}({\tilde{\xi }}_k) = m_{4}({\bar{\xi }}_{k})\) and, hence, \(\lambda _{\varvec{1}}({\tilde{\xi }}_k)=\lambda _{\varvec{1}}({\bar{\xi }}_k)=0\). Consequently at least two distinct symmetric orbits are needed for \(\lambda _{\varvec{1}}>0\).
Now, let \({\bar{\xi }}={\bar{w}}_k{\tilde{\xi }}_k+{\bar{w}}_{\ell }{\tilde{\xi }}_{\ell }\) be a symmetric invariant design on the symmetric orbits \(\tilde{{\mathcal {O}}}_{k}\) and \(\tilde{{\mathcal {O}}}_{\ell }\), \(k<\ell \le K/2\), \({\bar{w}}_k,{\bar{w}}_{\ell }>0\). As \(m_2({\bar{\xi }}_k)\) is strictly decreasing in \(k\), we have \(m_2({\bar{\xi }}_k)\ne m_2({\bar{\xi }}_{\ell })\). Thus \(m_2({\bar{\xi }})^2<{\bar{w}}_k m_2({\bar{\xi }}_k)^2 + {\bar{w}}_{\ell } m_2({\bar{\xi }}_{\ell })^2\) by the strict convexity of the quadratic function, which implies \(\lambda _{\varvec{1}}>0\). \(\square \)

Lemma 4
Let \(\lambda _{{\mathbf {S}}}=1+(K-4)m_{2} - (K-3) m_{4}\) be the eigenvalue of \({\mathbf {M}}_{22}-m_2^2{\mathbf {J}}_{C(K,2)}\) associated with the remaining eigenvectors of \({\mathbf {S}}_{K}{\mathbf {S}}_{K}^{\top }\) orthogonal to \(\varvec{1}_{C(K,2)}\). Then \(\lambda _{{\mathbf {S}}}>0\) if and only if the support of \({\bar{\xi }}\) includes at least one symmetric orbit \(\tilde{{\mathcal {O}}}_{k}\) for which \(0<k<K/2\).
Proof
By inserting the moments we get for this eigenvalue
$$\begin{aligned} \lambda _{{\mathbf {S}}}({\bar{\xi }}_{k}) = \frac{(2k-K)^2(K^2 - (2k-K)^2)}{K(K-1)(K-2)}\,, \end{aligned}$$

which is equal to 0 for \(k=0\) or \(k=K/2\). Moreover, \(\lambda _{{\mathbf {S}}}({\bar{\xi }}_k)\) is a polynomial in \(k\) of degree four, symmetric around \(K/2\), and with negative leading term. Hence, there cannot be any other root, and \(\lambda _{{\mathbf {S}}}({\bar{\xi }}_k)>0\) for all \(0<k<K/2\). \(\square \)
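The eigenvalue formulas of Lemmas 3–5 are easy to check by brute force for small \(K\); here is a sketch (assuming, as my own reading of this excerpt, that \(m_2\) and \(m_4\) are the pure second- and fourth-order moments \(\mathrm{E}[x_ix_j]\) and \(\mathrm{E}[x_ix_jx_lx_m]\) averaged over the orbit \({\mathcal O}_k\) of \(\pm 1\)-vectors with \(k\) entries equal to \(+1\)):

```python
from itertools import combinations, product
from fractions import Fraction

def orbit_moments(K, k):
    # orbit O_k: all +-1 vectors with exactly k coordinates equal to +1
    pts = [x for x in product((1, -1), repeat=K) if x.count(1) == k]
    pairs = list(combinations(range(K), 2))
    quads = list(combinations(range(K), 4))
    m2 = Fraction(sum(x[i] * x[j] for x in pts for i, j in pairs),
                  len(pts) * len(pairs))
    m4 = Fraction(sum(x[i] * x[j] * x[l] * x[m]
                      for x in pts for i, j, l, m in quads),
                  len(pts) * len(quads))
    return m2, m4

def eigenvalues_match(K):
    # check lambda_S against its closed form and lambda_1 == 0 on one orbit
    for k in range(K + 1):
        m2, m4 = orbit_moments(K, k)
        lam_S = 1 + (K - 4) * m2 - (K - 3) * m4
        closed = Fraction((2 * k - K) ** 2 * (K ** 2 - (2 * k - K) ** 2),
                          K * (K - 1) * (K - 2))
        lam_1 = (1 + 2 * (K - 2) * m2 + Fraction((K - 2) * (K - 3), 2) * m4
                 - Fraction(K * (K - 1), 2) * m2 ** 2)
        if lam_S != closed or lam_1 != 0:
            return False
    return True
```

Under that reading, `eigenvalues_match(K)` returns `True` for small even and odd \(K\), consistent with the single-orbit statements in the proofs above.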
Lemma 5
Let \(\lambda _{{\mathbf {I}}} = 1-2m_{2}+m_{4}\) be the eigenvalue of \({\mathbf {M}}_{22}-m_2^2{\mathbf {J}}_{C(K,2)}\) associated with the remaining eigenvectors orthogonal to those of \({\mathbf {S}}_{K}{\mathbf {S}}_{K}^{\top }\). Then \(\lambda _{{\mathbf {I}}}>0\) if and only if the support of \({\bar{\xi }}\) includes at least one symmetric orbit \(\tilde{{\mathcal {O}}}_{k}\) for which \(k>1\).
Proof
By inserting the moments we see that \(\lambda _{{\mathbf {I}}}({\bar{\xi }}_k)\) is a polynomial in \(k\) of degree four, symmetric around \(K/2\), and with positive leading term, which is equal to zero for \(k=0\) and \(k=1\). Hence, there cannot be any other root, and \(\lambda _{{\mathbf {I}}}({\bar{\xi }}_k)>0\) for all \(1<k\le K/2\). \(\square \)

Proof (Proof of Lemma 1)

We note that according to Freise et al. (2018) the matrix \({\mathbf {M}}_{11}\) associated with the main effects is nonsingular when at least two distinct symmetric orbits are involved in the design \({\bar{\xi }}\). Hence, the requirement that \({\mathbf {M}}_{11}\) be nonsingular does not impose any additional condition besides those of Lemmas 3–5, and the assertion follows from these lemmas. \(\square \)
Proof (Proof of Theorem 1)
For \(K=2\) and \(K=3\) we have \(B_K<1\), such that the full factorial design can serve as the asserted symmetric invariant design.
Let \(K\ge 4\). If \(L=B_{K}\), then the design of Lemma 2 can be used. Let \(L<B_{K}\) and let \(K\) be even. Then choose \(\ell \) such that
$$\begin{aligned} B_{K} = \frac{K-\sqrt{3K-2}}{2} \le \ell \le \frac{K-\sqrt{K}}{2}\,. \end{aligned}$$
Such \(\ell \) always exists (for \(K\le 8\) see Table 1; note that \(\sqrt{3K-2}-\sqrt{K}\ge 2\) for \(K\ge 10\)).

Let designs \({\bar{\xi }}_{(L)}\) and \({\bar{\xi }}_{(\ell )}\) be defined as
$$\begin{aligned} {\bar{\xi }}_{(L)}={\bar{w}}_L{\bar{\xi }}_L+(1-2{\bar{w}}_L){\bar{\xi }}_{K/2} + {\bar{w}}_L{\bar{\xi }}_{K-L} \quad \text {with}\quad {\bar{w}}_{L} = \frac{K}{2(2L-K)^2} \end{aligned}$$
(9)
and
$$\begin{aligned} {\bar{\xi }}_{(\ell )}={\bar{w}}_{\ell }{\bar{\xi }}_{\ell }+(1-2{\bar{w}}_{\ell }){\bar{\xi }}_{K/2} + {\bar{w}}_{\ell }{\bar{\xi }}_{K-\ell } \quad \text {with}\quad {\bar{w}}_{\ell } = \frac{K}{2(2\ell -K)^2}\,. \end{aligned}$$
(10)
According to Freise et al. (2018) the moments \(m_{2}({\bar{\xi }}_{(L)})\) and \(m_{2}({\bar{\xi }}_{(\ell )})\) are equal to 0, because \(L,\ell <(K-\sqrt{K})/2\). Next we consider the convex combination
$$\begin{aligned} {\bar{\xi }}^{*} = \alpha {\bar{\xi }}_{(L)} + (1-\alpha ){\bar{\xi }}_{(\ell )} \quad \text {with}\quad \alpha = \frac{3K-2-(2\ell -K)^2}{4(\ell -L)(K - L - \ell )}\,. \end{aligned}$$
Also for the symmetric invariant design \({\bar{\xi }}^*\) we have \(m_{2}({\bar{\xi }}^*) = 0\). Further we obtain
$$\begin{aligned} m_{4}({\bar{\xi }}^*) = \frac{4 \alpha (\ell -L)(K - L - \ell )+ (2\ell -K)^2 - (3K-2)}{(K-1)(K-2)(K-3)} = 0\,, \end{aligned}$$
which establishes the result for even \(K\).
For \(K\) odd the proof is similar. For \(L<B_{K}\) choose \(\ell \) as above with the corresponding value for \(B_{K}\) and the designs
$$\begin{aligned} {\bar{\xi }}_{(L)}={\bar{w}}_L{\bar{\xi }}_L+\big (\textstyle {\frac{1}{2}}-{\bar{w}}_L\big ){\bar{\xi }}_{(K-1)/2} +\big (\textstyle {\frac{1}{2}}-{\bar{w}}_L\big ){\bar{\xi }}_{(K+1)/2}+ {\bar{w}}_L{\bar{\xi }}_{K-L} \end{aligned}$$
and
$$\begin{aligned} {\bar{\xi }}_{(\ell )}={\bar{w}}_{\ell }{\bar{\xi }}_{\ell }+\big (\textstyle {\frac{1}{2}}-{\bar{w}}_{\ell }\big ){\bar{\xi }}_{(K-1)/2} +\big (\textstyle {\frac{1}{2}}-{\bar{w}}_{\ell }\big ){\bar{\xi }}_{(K+1)/2}+ {\bar{w}}_{\ell }{\bar{\xi }}_{K-\ell } \end{aligned}$$
with corresponding weights
$$\begin{aligned} {\bar{w}}_{L} =\frac{K-1}{2((2L-K)^2-1)} \quad \text {and}\quad {\bar{w}}_{\ell } =\frac{K-1}{2((2\ell -K)^2-1)}\,. \end{aligned}$$
Again such \(\ell \) always exists (for \(K\le 7\) see Table 1; for \(K \ge 9\) note that \(\sqrt{3K}-\sqrt{K}\ge 2\)) and \(m_{2}({\bar{\xi }}_{(L)})=m_{2}({\bar{\xi }}_{(\ell )})=0\).
Then the convex combination
$$\begin{aligned} {\bar{\xi }}^{*} = \alpha {\bar{\xi }}_{(L)} + (1-\alpha ){\bar{\xi }}_{(\ell )} \quad \text {with}\quad \alpha = \frac{3K-(2\ell -K)^2}{4(\ell -L)(K-L-\ell )} \end{aligned}$$
yields \(m_{4}({\bar{\xi }}^*)=0\), and the result follows. \(\square \)
For the proof of Theorem 2 we will make use of the celebrated Kiefer-Wolfowitz equivalence theorem (Kiefer and Wolfowitz 1960). Therefore we first investigate the sensitivity function (functional derivative).
Lemma 6
Let \({\bar{\xi }}\) be a symmetric invariant design. Then the sensitivity function \(\psi (\varvec{x})=\varvec{f}(\varvec{x})^{\top }{\mathbf {M}}({\bar{\xi }})^{-1}\varvec{f}(\varvec{x})\) is constant on the orbits, \(\psi (\varvec{x})={\tilde{\psi }}(k)\) for \(\varvec{x}\in {\mathcal {O}}_k\), say, and the function \({\tilde{\psi }}\) is a polynomial of degree at most 4, which is symmetric with respect to \(K/2\) \(({\tilde{\psi }}(K-k)={\tilde{\psi }}(k))\).
Proof
Using the inverse of the information matrix in Eq. (8) we obtain for the sensitivity function
$$\begin{aligned} \psi (\varvec{x}) = c_{0} -2c_{2}\varvec{{\tilde{x}}}^\top \varvec{1}_{C(K,2)} +\varvec{x}^\top {\mathbf {M}}_{11}^{-1}\varvec{x} +\varvec{{\tilde{x}}}^\top {\mathbf {C}}_{22}\varvec{{\tilde{x}}}\,. \end{aligned}$$
(11)
Note that \(\varvec{x}^\top \varvec{x} = K\) and \(\varvec{{\tilde{x}}}^\top \varvec{{\tilde{x}}} = K(K-1)/2\). Further, for \(\varvec{x} \in {\mathcal {O}}_{k}\), we get
$$\begin{aligned} \varvec{x}^\top \varvec{1}_{K} = 2 k - K\,, \quad \varvec{{\tilde{x}}}^\top \varvec{1}_{C(K,2)} = ((2k-K)^2 - K)/2 \end{aligned}$$
and
$$\begin{aligned} \varvec{{\tilde{x}}}^\top {\mathbf {S}}_{K}{\mathbf {S}}_{K}^\top \varvec{{\tilde{x}}} = (K - 2)(2 k-K)^{2} + K\,. \end{aligned}$$
This yields
$$\begin{aligned} \psi (\varvec{x}) = a_{4}(2k-K)^4+ a_{2}(2k-K)^{2}+a_{0} \end{aligned}$$
(12)
with coefficients
$$\begin{aligned} a_{0}&=c_{0} + \frac{K}{1-m_{2}} +\frac{K(K-1)}{2(1-2m_{2}+m_{4})} -\frac{\delta _{{\mathbf {S}}}K}{1-2m_{2}+m_{4}} -\frac{\delta _{{\mathbf {J}}}K^2}{4(1-2m_{2}+m_{4})}\\ a_{2}&=-\left( c_{2} + \frac{m_{2}}{(1-m_{2})(1+(K-1)m_{2})} +\frac{\delta _{{\mathbf {S}}}(K-2)}{1-2m_{2}+m_{4}} -\frac{\delta _{{\mathbf {J}}}K}{2(1-2m_{2}+m_{4})}\right) \\ a_{4}&=-\frac{\delta _{{\mathbf {J}}}}{4(1-2m_{2}+m_{4})}\,. \end{aligned}$$
Hence, \({{\tilde{\psi }}}\) is a polynomial of degree four in \(k\). The symmetry around \(K/2\) follows, since only even powers of \((2k-K)\) occur. \(\square \)
Lemma 7
Let \(B_{K}<L<K/2\) and let \({\bar{\xi }}^*\) be an optimal symmetric invariant design on \({\mathcal {X}}_{L,K-L}\). Then the orbitwise sensitivity function \({\tilde{\psi }}\) has a positive leading term for the fourth order monomial \(k^4\).
Proof
We will use the same notation as in Lemma 6 and its proof.
Assume that the coefficient of the fourth order monomial in \({\tilde{\psi }}\) is nonpositive, or equivalently that \(a_{4}\le 0\). Then the function \({\tilde{\psi }}\), as a function in \(k\), has either a single maximum at \(k=K/2\), two (symmetric) maxima outside \((L,K-L)\) (attained at \(k=L\) and \(k=K-L\) for the admitted orbits), two (symmetric) maxima inside \((L,K-L)\), or is constant.
In the first two cases \({\bar{\xi }}^*\) is supported on one symmetric orbit only. It follows from Lemma 1 that the information matrix has to be singular. But in this case the function \({\tilde{\psi }}\) would not be defined, which is a contradiction.
In the last case, if the orbitwise sensitivity is constant, we have \({\tilde{\psi }}(k)\le p\) for all \(k\in \{0,\ldots ,K\}\) and, consequently, \({\bar{\xi }}^*\) is optimal on the unrestricted design region \({\mathcal {X}}_{0,K}\).
In the third case, i.e. two maxima inside of \((L,K-L)\), the optimal invariant design \({\bar{\xi }}^*\) has all its weight on either one or two symmetric orbits. If these orbits do not satisfy the conditions in Lemma 1, the information matrix would be singular, which leads to a contradiction. Otherwise the design is optimal on \({\mathcal {X}}_{0,K}\), with the same argument as for the constant case.
Now let \({\bar{\xi }}^*\) be optimal on \({\mathcal {X}}_{0,K}\). Then its information matrix is \({\mathbf {I}}_{p}\). It follows that \(m_{2}=m_{4} = 0\) and thus
$$\begin{aligned} \sum _{k=L}^{K-L}{\bar{w}}_{k}(2 k - K)^2 = K \quad \text {and}\quad \sum _{k=L}^{K-L}{\bar{w}}_{k}(2k-K)^4 - K^2 = 2K(K-1)\,. \end{aligned}$$
(13)
The left-hand sides of these two equations can be interpreted as expectation and variance, respectively, of a discrete random variable taking values \((2k-K)^2\), \(k=L,\ldots ,K-L\). An upper bound for the variance is given in Muilwijk (1966) (see also Bhatia and Davis 2000). This yields for the variance
$$\begin{aligned} \sum _{k=L}^{K-L}{\bar{w}}_{k}(2k-K)^4 - K^2 \le ((2L-K)^2 - K)(K - R_{K})\,, \end{aligned}$$
where \(R_{K}=0\) for \(K\) even and \(R_{K} = 1\) for \(K\) odd. Since \(B_{K}<L\) it follows that
$$\begin{aligned} ((2L-K)^2 - K)(K - R_{K}) < ((2B_{K}-K)^2 - K)(K - R_{K}) = 2K(K-1)\,, \end{aligned}$$
which is in contradiction to (13). Hence \(a_{4}\) and consequently the leading coefficient has to be positive. \(\square \)
Proof (Proof of Theorem 2)
Under the assumptions of Theorem 2, the orbitwise sensitivity function \({\tilde{\psi }}\) of the optimal design \({\bar{\xi }}^*\) is a polynomial of degree four with positive leading term by Lemmas 6 and 7. Then, in view of the fundamental theorem of algebra, the equation \({{\tilde{\psi }}}(k)=p\) can have at most four distinct roots. Because of the symmetry of the sensitivity function with respect to \(K/2\) (cf. Lemma 6) the optimal design thus has to be concentrated on at most two symmetric orbits. In order to fulfill the condition \({{\tilde{\psi }}}(k)\le p\) for all \(k=L,\ldots ,K-L\), imposed by the equivalence theorem on the optimal design \({\bar{\xi }}^*\), these symmetric orbits can only be the outermost orbit \({\mathcal {O}}_{L}\cup {\mathcal {O}}_{K-L}\) on the boundaries and the central orbit \({\mathcal {O}}_{K/2}\) for \(K\) even and \({\mathcal {O}}_{(K-1)/2}\cup {\mathcal {O}}_{(K+1)/2}\) for \(K\) odd, respectively.
On the other hand the nonsingularity condition of Lemma 1 requires that the optimal design \({\bar{\xi }}^*\) has to be supported by at least two symmetric orbits and, hence, \({\bar{\xi }}^*\) is of the form specified in the Theorem. Finally only the weights have to be optimized given the two symmetric orbits. \(\square \)
|
If the solution of $Ax=b$ is unstable, the matrix is very ill-conditioned (i.e., has a very large condition number), and (paraphrasing Lanczos) no amount of mathematical trickery can make it stable. The best you can hope for is to solve a
different problem that is a) stable and b) gives you a solution that is sufficiently close; this is called regularization.
For linear ill-conditioned problems, there are two classical approaches:
Tikhonov regularization replaces $Ax=b$ by the stabilized least-squares problem $$ \min_x \|Ax-b\|^2 + \alpha \|x\|^2$$ for some regularization parameter $\alpha>0$. The minimizer can of course be computed by solving the normal equations $$ (A^TA+\alpha I)x_\alpha = A^Tb.$$

Truncated singular value decomposition computes the singular value decomposition $$ U \Sigma V^T = A,$$ where the columns of $U$ and $V$ contain the left and right singular vectors, respectively, and $\Sigma$ is a diagonal matrix containing the singular values (usually in order of descending magnitude). Since $U$ and $V$ are unitary, the inverse of $A$ (if it exists) is given by $A^{-1}=V\Sigma^{-1} U^T$. Ill-conditioning of $A$ manifests itself in the existence of very small singular values, such that the corresponding entries in $\Sigma^{-1}$ would be very large, amplifying small perturbations (e.g., due to finite numerical precision). As the name indicates, stability is restored by ignoring these small entries, i.e., replacing $\Sigma^{-1}$ by $\Sigma_{\alpha}^{-1}$, where $$[\Sigma_{\alpha}^{-1}]_{ii} = \begin{cases} \Sigma_{ii}^{-1} & \text{if } |\Sigma_{ii}| > \alpha \\ 0 &\text{else}\end{cases}$$ and setting $$x_\alpha = V\Sigma_{\alpha}^{-1} U^T b.$$
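Both approaches can be sketched in a few lines of NumPy. This is an illustrative implementation, not from any particular library; the function names and the small diagonal test matrix are mine:

```python
import numpy as np

def tikhonov(A, b, alpha):
    # Solve the normal equations (A^T A + alpha*I) x = A^T b.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def tsvd(A, b, alpha):
    # Invert only singular values larger than the threshold alpha;
    # the remaining entries of Sigma^{-1} are set to zero.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > alpha, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Toy ill-conditioned system: one tiny singular value.
A = np.diag([1.0, 1e-6])
b = np.array([1.0, 1e-6])
x_tik = tikhonov(A, b, 1e-8)   # large component recovered, small one damped
x_tsvd = tsvd(A, b, 1e-3)      # small singular value truncated to zero
```

Note that both solutions deliberately give up on the component associated with the tiny singular value; that is the price of stability.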
In both cases, you need to choose $\alpha$ specifically for your problem to get good results. You could start by computing the SVD of a representative $A$ and looking at the singular values to see if there's a clear threshold.
|
Smoothing effects for some derivative nonlinear Schrödinger equations
1.
Department of Applied Mathematics, Science University of Tokyo, 1-3, Kagurazaka, Shinjuku-ku, Tokyo 162
2.
Instituto de Física y Matemáticas, Universidad Michoacana, AP 2-82, CP 58040, Morelia, Michoacana
3.
Department of Applied Mathematics, Science University of Tokyo,Tokyo 162-8601, Japan
$iu_t + u_{x x} = \mathcal N(u, \bar u, u_x, \bar u_x), \quad t \in \mathbf R,\ x\in \mathbf R;\quad u(0, x) = u_0(x),\ x\in \mathbf R,\qquad$ (A)
where $\mathcal N(u, \bar u, u_x, \bar u_x) = K_1|u|^2u+K_2|u|^2u_x +K_3u^2\bar u_x +K_4|u_x|^2u+K_5\bar u u_x^2 +K_6|u_x|^2u_x$, the functions $K_j = K_j (|u|^2)$, $K_j(z)\in C^\infty ([0, \infty))$. If the nonlinear term is $\mathcal N =\frac{\bar{u} u_x^2}{1+|u|^2}$, then equation (A) appears in the classical pseudospin magnet model [16]. Our purpose in this paper is to consider the case when the nonlinearity $\mathcal N$ depends both on $u_x$ and $\bar u_x$. We prove that if the initial data $u_0\in H^{3, \infty}$ and the norms $||u_0||_{3, l}$ are sufficiently small for any $l\in \mathbb N$ (when $\mathcal N$ depends on $\bar u_x$), then for some time $T > 0$ there exists a unique solution $u\in C^\infty (([-T, T]\setminus \{0\});\ C^\infty(\mathbb R))$ of the Cauchy problem (A). Here $H^{m, s} = \{\varphi \in \mathbf L^2;\ ||\varphi||_{m, s}<\infty \}$, $||\varphi||_{m, s}=||(1+x^2)^{s/2}(1-\partial_x^2)^{m/2}\varphi||_{\mathbf L^2}, \mathbf H^{m, \infty}=\cap_{s\geq 1} H^{m, s}.$
Mathematics Subject Classification: 35Q5. Citation: Nakao Hayashi, Pavel I. Naumkin, Patrick-Nicolas Pipolo. Smoothing effects for some derivative nonlinear Schrödinger equations. Discrete & Continuous Dynamical Systems - A, 1999, 5 (3) : 685-695. doi: 10.3934/dcds.1999.5.685
|
There is a "standard" way to consider normed spaces over arbitrary fields but these are not well-behaved in the case of scalars in finite fields. If you want to work with norms on vector spaces over fields in general, then you have to use the concept of valuation.
Valued field: Let $K$ be a field with valuation $|\cdot|:K\to\mathbb{R}$. That is, for all $x,y\in K$, $|\cdot|$ satisfies: $|x|\geq0$; $|x|=0$ iff $x=0$; $|x+y|\leq|x|+|y|$; $|xy|=|x||y|$.
The set $|K|:=\{|x|:x\in K-\{0\}\}$ is a multiplicative subgroup of $(0,+\infty)$ called the value group of $|\cdot|$. The valuation is called trivial, discrete or dense according as its value group is $\{1\}$, a discrete subset of $(0,+\infty)$ or a dense subset of $(0,+\infty)$. For example, the usual valuations in $\mathbb{R}$ and $\mathbb{C}$ are dense valuations. The valuation is said to be non-Archimedean when it satisfies the strong triangle inequality $|x+y|\leq\max\{|x|,|y|\}$ for all $x,y\in K$. In this case, $(K,|\cdot|)$ is called a non-Archimedean valued field and $|n1_K|\leq1$ for all $n\in\mathbb{Z}$. Common examples of non-Archimedean valuations are the $p$-adic valuations in $\mathbb{Q}$ or the valuations of a field that is not isomorphic to a subfield of $\mathbb{C}$.
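As a concrete sketch of a non-Archimedean valuation, the $p$-adic absolute value on $\mathbb{Q}$ can be computed directly from the standard definition $|x|_p = p^{-v_p(x)}$ (the helper function below is my own, not from any library):

```python
from fractions import Fraction

def p_adic_abs(x, p):
    # |x|_p = p**(-v), where v is the exponent of p in the factorization of x.
    x = Fraction(x)
    if x == 0:
        return 0.0
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return float(p) ** (-v)

a, b = Fraction(12), Fraction(1, 2)
print(p_adic_abs(a, 2))  # 12 = 2**2 * 3, so |12|_2 = 1/4
print(p_adic_abs(b, 2))  # |1/2|_2 = 2
# Strong triangle inequality |a+b|_2 <= max(|a|_2, |b|_2):
print(p_adic_abs(a + b, 2) <= max(p_adic_abs(a, 2), p_adic_abs(b, 2)))
```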
Norm: Let $(K,|\cdot|)$ be a valued field and $X$ be a vector space over $(K,|\cdot|)$. A function $p:X\to \mathbb{R}$ is a norm iff for each $a,b\in X$ and each $k\in K$, it satisfies: $p(a)\geq0$ and $p(a)=0$ iff $a=0_X$; $p(ka)=|k|p(a)$; $p(a+b)\leq p(a)+p(b)$.
In the case of a finite field, the valuation $|\cdot|$ must be the trivial one. In fact, if there were a nonzero scalar $x\in K$ with $|x|\neq1$, then $\{|x^n|:n\in\mathbb{Z}\}$ would be infinite, which is a contradiction.
Example of a normed space over a finite field: Let $K$ be any field with the trivial valuation (e.g. a finite field) and let $X$ be an infinite-dimensional vector space with Hamel basis $B$. We can define a norm $p$ by letting $p(e)$ be the number of nonzero coefficients when we write $e$ as a linear combination of elements of $B$.
But in this context, we have unexpected situations. For example, two norms may induce the same topology without being equivalent. In fact, consider the trivial norm $q$ on $X$ defined by $q(e)=1$ for all nonzero $e\in X$. Then both norms, $p$ and $q$, induce the discrete topology, but $p/q$ is unbounded. So there is no constant $C$ such that $p\leq Cq$.
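A small computational sketch of this phenomenon (vectors are represented as tuples of coordinates with respect to the basis $B$; the helper names are mine):

```python
def p(v):
    # Counting norm: the number of nonzero coordinates of v.
    return sum(1 for c in v if c != 0)

def q(v):
    # Trivial norm: 1 for every nonzero vector, 0 for the zero vector.
    return 0 if p(v) == 0 else 1

# Every point lies at distance >= 1 from every other point under either
# norm, so both induce the discrete topology -- yet p/q is unbounded:
ratios = [p((1,) * n) / q((1,) * n) for n in (1, 10, 100)]
print(ratios)  # grows like n, so no constant C can give p <= C*q
```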
For more information, I recommend the paper: Non-archimedean Banach spaces over trivially valued fields, Borrey, S., P-adic functional analysis, Editorial Universidad de Santiago, Chile, 17 - 31. (1994). There, the norm is assumed to satisfy the strong triangle inequality.
|
As with the sine, we do not know anything about derivatives that allows us to compute the derivatives of the exponential and logarithmic functions without going back to basics. Let's do a little work with the definition again:
\[\eqalign{ {d\over dx}a^x&=\lim_{\Delta x\to 0} {a^{x+\Delta x}-a^x\over \Delta x}\cr& =\lim_{\Delta x\to 0} {a^xa^{\Delta x}-a^x\over \Delta x}\cr& =\lim_{\Delta x\to 0} a^x{a^{\Delta x}-1\over \Delta x}\cr& =a^x\lim_{\Delta x\to 0} {a^{\Delta x}-1\over \Delta x}.\cr }\]
There are two interesting things to note here: As in the case of the sine function we are left with a limit that involves \(\Delta x\) but not \(x\), which means that whatever \( \lim_{\Delta x\to 0} (a^{\Delta x}-1)/\Delta x\) is, we know that it is a number, that is, a constant. This means that \( a^x\) has a remarkable property: its derivative is a constant times itself.
We earlier remarked that the hardest limit we would compute is \( \lim_{x\to0}\sin x/x=1\); we now have a limit that is just a bit too hard to include here. In fact the hard part is to see that \( \lim_{\Delta x\to 0} (a^{\Delta x}-1)/\Delta x\) even exists---does this fraction really get closer and closer to some fixed value? Yes it does, but we will not prove this fact.
We can look at some examples. Consider \( (2^x-1)/x\) for some small values of \(x\): 1, \(0.828427124\), \(0.756828460\), \(0.724061864\), \(0.70838051\), \(0.70070877\) when \(x\) is 1, \(1/2\), \(1/4\), \(1/8\), \(1/16\), \(1/32\), respectively. It looks like this is settling in around \(0.7\), which turns out to be true (but the limit is not exactly \(0.7\)). Consider next \( (3^x-1)/x\): \(2\), \(1.464101616\), \(1.264296052\), \(1.177621520\), \(1.13720773\), \(1.11768854\), at the same values of \(x\). It turns out to be true that in the limit this is about \(1.1\).
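These tables are easy to reproduce; the following short script (mine, not part of the text) evaluates the same difference quotients at the same values of $x$:

```python
# Difference quotients (a**x - 1)/x for shrinking x, approximating the
# limit discussed above; the x values match those used in the text.
steps = [1, 1/2, 1/4, 1/8, 1/16, 1/32]
for a in (2, 3):
    print(a, [(a**x - 1) / x for x in steps])
```

The two sequences head toward $\ln 2\approx 0.693$ and $\ln 3\approx 1.099$, consistent with the "around $0.7$" and "about $1.1$" observations (and with the derivative formula $(\ln a)a^x$ derived later in this section).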
Two examples don't establish a pattern, but if you do more examples you will find that the limit varies directly with the value of \(a\): bigger \(a\), bigger limit; smaller \(a\), smaller limit. As we can already see, some of these limits will be less than 1 and some larger than 1. Somewhere between \(a=2\) and \(a=3\) the limit will be exactly 1; the value at which this happens is called \(e\), so that
\[\lim_{\Delta x\to 0} {e^{\Delta x}-1\over \Delta x}=1.\]
As you might guess from our two examples, \(e\) is closer to 3 than to 2, and in fact \(e\approx 2.718\).
Now we see that the function \( e^x\) has a truly remarkable property:
\[\eqalign{ {d\over dx}e^x&=\lim_{\Delta x\to 0} {e^{x+\Delta x}-e^x\over \Delta x}\cr& =\lim_{\Delta x\to 0} {e^xe^{\Delta x}-e^x\over \Delta x}\cr& =\lim_{\Delta x\to 0} e^x{e^{\Delta x}-1\over \Delta x}\cr& =e^x\lim_{\Delta x\to 0} {e^{\Delta x}-1\over \Delta x}\cr& =e^x.\cr }\]
That is, \( e^x\) is its own derivative, or in other words the slope of \( e^x\) is the same as its height, or the same as its second coordinate: The function \( f(x)=e^x\) goes through the point \( (z,e^z)\) and has slope \( e^z\) there, no matter what \(z\) is. It is sometimes convenient to express the function \( e^x\) without an exponent, since complicated exponents can be hard to read. In such cases we use \(\exp(x)\), e.g., \( \exp(1+x^2)\) instead of \( e^{1+x^2}\).
What about the logarithm function? This too is hard, but as the cosine function was easier to do once the sine was done, so the logarithm is easier to do now that we know the derivative of the exponential function. Let's start with \( \log_e x\), which as you probably know is often abbreviated \(\ln x\) and called the "natural logarithm'' function.
Consider the relationship between the two functions, namely, that they are inverses, that one "undoes'' the other. Graphically this means that they have the same graph except that one is "flipped'' or "reflected'' through the line \(y=x\), as shown in Figure \(\PageIndex{1}\).
This means that the slopes of these two functions are closely related as well: For example, the slope of \( e^x\) is \(e\) at \(x=1\); at the corresponding point on the \(\ln(x)\) curve, the slope must be \(1/e\), because the "rise'' and the "run'' have been interchanged. Since the slope of \( e^x\) is \(e\) at the point \((1,e)\), the slope of \(\ln(x)\) is \(1/e\) at the point \((e,1)\).
More generally, we know that the slope of \( e^x\) is \( e^z\) at the point \( (z,e^z)\), so the slope of \(\ln(x)\) is \( 1/e^z\) at \( (e^z,z)\), as indicated in Figure \(\PageIndex{2}\). In other words, the slope of \(\ln x\) is the reciprocal of the first coordinate at any point; this means that the slope of \(\ln x\) at \((x,\ln x)\) is \(1/x\). The upshot is: \({d\over dx}\ln x = {1\over x}.\) We have discussed this from the point of view of the graphs, which is easy to understand but is not normally considered a rigorous proof---it is too easy to be led astray by pictures that seem reasonable but that miss some hard point. It is possible to do this derivation without resorting to pictures, and indeed we will see an alternate approach soon.
Note that \(\ln x\) is defined only for \(x>0\). It is sometimes useful to consider the function \(\ln |x|\), a function defined for \(x\not=0\). When \(x < 0\), \(\ln |x|=\ln(-x)\) and
\[{d\over dx}\ln |x|={d\over dx}\ln (-x)={1\over -x}(-1)={1\over x}.\]
Thus whether \(x\) is positive or negative, the derivative is the same.
What about the functions \( a^x\) and \( \log_a x\)? We know that the derivative of \( a^x\) is some constant times \( a^x\) itself, but what constant? Remember that "the logarithm is the exponent'' and you will see that \( a=e^{\ln a}\). Then \(a^x = (e^{\ln a})^x = e^{x\ln a},\) and we can compute the derivative using the chain rule:
\[{d\over dx} a^x = {d\over dx}(e^{\ln a})^x = {d\over dx}e^{x\ln a} = (\ln a)e^{x\ln a} =(\ln a)a^x.\]
The constant is simply \(\ln a\). Likewise we can compute the derivative of the logarithm function \( \log_a x\). Since \(x=e^{\ln x}\) we can take the logarithm base \(a\) of both sides to get \( \log_a(x)=\log_a(e^{\ln x})=\ln x \log_a e\). Then
\[{d\over dx}\log_a x = {1\over x}\log_a e.\]
This is a perfectly good answer, but we can improve it slightly. Since
\[\eqalign{ a&=e^{\ln a}\cr \log_a(a) &= \log_a(e^{\ln a}) = \ln a\log_a e\cr 1&=\ln a\log_a e\cr {1\over \ln a}&=\log_a e,\cr }\]
we can replace \( \log_a e\) to get \({d\over dx}\log_a x = {1\over x\ln a}\).
You may if you wish memorize the formulas
\[{d\over dx}a^x = (\ln a)a^x \quad \hbox{and}\quad {d\over dx}\log_a x = {1\over x\ln a}.\]
Because the "trick'' \( a=e^{\ln a}\) is often useful, and sometimes essential, it may be better to remember the trick, not the formula.
Example \(\PageIndex{1}\)
Compute the derivative of \( f(x)=2^x\).
Solution
\[\eqalign{ {d\over dx}2^{x} &= {d\over dx}(e^{\ln 2})^x\cr& = {d\over dx}e^{x\ln 2}\cr& = \left({d\over dx} x\ln 2\right) e^{x\ln 2}\cr& = (\ln 2) e^{x\ln 2}=2^x\ln2\cr }\]
Example \(\PageIndex{2}\)
Compute the derivative of \( f(x)=2^{x^2}=2^{(x^2)}\).
\[\eqalign{ {d\over dx}2^{x^2} &= {d\over dx}e^{x^2\ln 2}\cr& = \left({d\over dx} x^2\ln 2\right) e^{x^2\ln 2}\cr& = (2\ln 2) x e^{x^2\ln 2}\cr& = (2\ln 2) x 2^{x^2}\cr }\]
Example \(\PageIndex{3}\)
Compute the derivative of \( f(x)=x^x\). At first this appears to be a new kind of function: it is not a constant power of \(x\), and it does not seem to be an exponential function, since the base is not constant. But in fact it is no harder than the previous example.
\[\eqalign{ {d\over dx}x^x&={d\over dx}e^{x\ln x}\cr& =\left({d\over dx}x\ln x\right)e^{x\ln x}\cr& =(x{1\over x}+\ln x)x^x\cr& =(1+\ln x)x^x\cr }\]
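As a quick numerical sanity check of this result (my own sketch, not part of the text), the formula $(1+\ln x)x^x$ agrees with a centered difference quotient of $x^x$:

```python
import math

def f(x):
    return x ** x

def fprime(x):
    # Derivative from the example: (1 + ln x) * x**x.
    return (1 + math.log(x)) * x ** x

h = 1e-6
for x in (0.5, 1.0, 2.0, 3.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)  # centered difference
    print(x, numeric, fprime(x))
```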
Example \(\PageIndex{4}\)
Recall that we have not justified the power rule except when the exponent is a positive or negative integer. We can use the exponential function to take care of other exponents.
\[\eqalign{ {d\over dx}x^r&={d\over dx}e^{r\ln x}\cr& =\left({d\over dx}r\ln x\right)e^{r\ln x}\cr& =(r{1\over x})x^r\cr& =rx^{r-1}\cr }\]
|
Let $\Gamma = \langle S \mid R \rangle$ be a finitely generated group, with the neutral element $e \not \in S= S^{-1}$.
Let $\ell : \Gamma \to \mathbb{N}$ be the word length with respect to $S$.
For any $g \in \Gamma$ and for any $s \in S$, let $p_s(g)$ be the number of geodesic paths from $g$ to $e$ beginning with the edge $[g,gs]$ in the Cayley graph $\mathcal{G}(S,R)$. Next, let $p(g) = \sum_{s \in S}p_s(g)$.
The marked group $(\Gamma, S)$ belongs to the class $\mathcal{C}$ if:
$\forall g \in \Gamma$, $\exists \lambda_g > 0$ such that $\forall s \in S$, $\forall h \in \Gamma \setminus \{e,g^{-1} \}$ $$\left| \frac{p_s(g \cdot h)}{p(g \cdot h)}-\frac{p_s(h)}{p(h)}\right| \le \frac{\lambda_g}{\ell(h)} $$ Meaning: this property says that left multiplication by $g$ almost preserves the proportion of geodesics from a given element $h$ to $e$ in a given direction $[h,hs]$. Question: Do automatic groups belong to the class $\mathcal{C}$?
Note that the property of being automatic is independent of the choice of the set of generators.
Bonus question: Is the property of being in $\mathcal{C}$ also independent of the choice of the set of generators?

Motivation: the class $\mathcal{C}$ appears naturally when we try to define a noncommutative geometry from a finitely generated group. We will show that (for their usual presentations) the automatic groups $\mathbb{Z}^2$ and $\mathbb{F}_2$ belong to $\mathcal{C}$, whereas the non-automatic Baumslag-Solitar group $B(2,1)$ does not.

First, $\mathbb{Z}^2 = \langle a^{\pm 1},b^{\pm 1} \mid aba^{-1}b^{-1} \rangle$. Let $h = a^nb^m$ (without loss of generality, take $m>0$). Then $\ell(h) = m+n$, $p(h) = {n+m \choose m}$, $p_{b^{-1}}(h)={n+m-1 \choose m-1}$ and $\frac{p_{b^{-1}}(h)}{p(h)}\ell(h) = m$. We finally observe that $\lambda_g = \ell(g)^2$ should be enough to have the property.

Next, $\mathbb{F}_2 = \langle a^{\pm 1},b^{\pm 1} \mid \emptyset \rangle$. Let $h \in \mathbb{F}_2$; then $\exists! s \in S$ such that $p_s(h) \neq 0$, and $p(h) = 1$. We observe that $\lambda_g = \ell(g)$ is enough to have the property.

Finally, $B(2,1) = \langle a^{\pm 1},b^{\pm 1} \mid a^2ba^{-1}b^{-1} \rangle$. Then $a^{2^n} = b^{n-1}a^2b^{1-n}$ and $\ell(a^{2^n}) = 2n$. For $n>1$, we observe that $ | \frac{p_b(a \cdot a^{2^n})}{p(a \cdot a^{2^n})}-\frac{p_b(a^{2^n})}{p(a^{2^n})}| = | \frac{2}{4}-\frac{2}{2}| = \frac{1}{2}$. So the property fails.

Remark: The Baumslag-Solitar group $B(2,1)$ is amenable and solvable but not of polynomial growth. So the class $\mathcal{C}$ contains neither all the amenable marked groups, nor all the solvable marked groups.
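The binomial counts used for $\mathbb{Z}^2$ can be double-checked with a small dynamic program (my own sketch): for $n,m\ge 0$, a geodesic from $a^nb^m$ to $e$ must use exactly $n$ letters $a^{-1}$ and $m$ letters $b^{-1}$ in some order, so the geodesics are counted by monotone lattice paths.

```python
from math import comb

def geodesic_count(n, m):
    # Number of geodesics from a^n b^m to e in the Cayley graph of Z^2
    # (standard generators, n, m >= 0), counted by lattice paths:
    # paths[i][j] = number of geodesics from a^i b^j to the identity.
    paths = [[0] * (m + 1) for _ in range(n + 1)]
    paths[0][0] = 1
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0:
                paths[i][j] += paths[i - 1][j]
            if j > 0:
                paths[i][j] += paths[i][j - 1]
    return paths[n][m]

print(geodesic_count(3, 2), comb(5, 2))  # both equal binomial(n+m, m)
```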
|
How do I write down, with proof, the ground state of the toric code (by Kitaev) Hamiltonian $ H=-\sum_{v}A(v)-\sum_{p}B(p) $, where $A(v)=\sigma_{v,1}^{x}\sigma_{v,2}^{x}\sigma_{v,3}^{x}\sigma_{v,4}^{x}$ and the plaquette term is $B(p)=\sigma_{p,1}^{z}\sigma_{p,2}^{z}\sigma_{p,3}^{z}\sigma_{p,4}^{z}$? Here $v$ ranges over the vertices of a lattice with spin-1/2 particles on the edges, and $p$ over the plaquettes of the lattice.
The $A$ operators and the $B$s all commute with each other because they always share an even number of sites and therefore an even number of Pauli matrices. Therefore, these are all conserved quantities and can be replaced by their expectation value. The ground state is the state with eigenvalues $A = 1 = B$ in units where $\sigma$ is a Pauli matrix.
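This parity argument is easy to verify explicitly on a handful of spins. The six-site layout below is an artificial minimal example of mine, not a full lattice:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def kron_all(ops):
    # Tensor product of single-spin operators, one per site.
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Six spins; A acts with sigma^x on sites 0-3, B with sigma^z on sites 2-5.
A = kron_all([X, X, X, X, I2, I2])   # star-type operator
B = kron_all([I2, I2, Z, Z, Z, Z])   # plaquette-type operator
print(np.allclose(A @ B, B @ A))     # shared sites: 2 (even) -> commute

# For contrast, operators overlapping on an odd number of sites anticommute.
C = kron_all([I2, I2, I2, Z, Z, Z])  # shares only site 3 with A
print(np.allclose(A @ C, -(C @ A)))
```

Each shared site contributes a factor $-1$ from $\sigma^x\sigma^z=-\sigma^z\sigma^x$, so an even overlap gives commuting operators, an odd overlap anticommuting ones.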
In terms of the spins the situation is slightly less trivial. Consider the configurations that satisfy the constraint $B = 1$, working in the $z$-basis. This requires that each plaquette has an even number of spins up and an even number down (all up, all down, or two and two). However, if one tries to flip a pair of spins up, one will find that the neighboring stars (crosses) will need another spin flipped, and therefore the only possible configurations are ones where the down spins form closed loops in the background of up spins or (equivalently) vice-versa. Therefore the ground state has to be some superposition of these states with loops in them.
Now consider the constraint $A=1$. Writing $\sigma^x$ in the $z$ basis makes it clear that the operator flips the spin (this can also be understood as due to the anticommutation of $\sigma^z$ and $\sigma^x$). Therefore, $A$ flips the spins on a plaquette. As expected, this brings us to another state that obeys the $B=1$ constraint; the result can be seen as a small loop of up spins. Moreover, we can act with another $A$ nearby and make the loop bigger, and in this way create loops of any size and shape by acting with the $A$ operators.
The ground state $|\psi_0\rangle$ of the toric code is the equal weight (magnitude and phase) superposition of all loop configurations of spin downs on the background of spin ups. One might call it a quantum loop gas. It is easy to see that this obeys the first constraint because each loop configuration itself has $B=1$. On the other hand, when one acts with $A_p$ each loop configuration gets changed into another one. However, it is easy to prove that this is the ground state since for a given $A_p$ this simply swaps two loop configurations that are the same everywhere except on plaquette $p$ and have opposite spin configurations on $p$. These two configurations are both part of the sum, with equal weight, so that the state is left unchanged by this action. Therefore $A|\psi_0\rangle = |\psi_0\rangle$ and $A=1$.
Finally, note that with open boundary conditions one can even see that this state is unique. The state must be made of loop configurations, and is therefore some superposition of them (by the $B=1$ constraint). But the different loop configurations can be obtained from the state with no loops by acting with a product of $A$ operators that creates the loops, one plaquette at a time. Therefore the different configurations must all have the same coefficient as the no-loop configuration, since the action of the product of $A$s cannot change the state (it also has eigenvalue 1, since each of its factors does) and it swaps those two configurations. On a cylinder (or torus), however, there are loops that cannot be created by products of the local $A$ operators - they are the loops that wind around the circular dimension. Therefore, there exists a ground state degeneracy in these cases, corresponding to the number of configurations available to the unconstrained winding loops.
|
When evaluating the integral below in python using scipy.quad I get the following warning:
UserWarning: The maximum number of subdivisions (50) has been achieved. If increasing the limit yields no improvement it is advised to analyze the integrand in order to determine the difficulties. If the position of a local difficulty can be determined (singularity, discontinuity) one will probably gain from splitting up the interval and calling the integrator on the subranges. Perhaps a special-purpose integrator should be used.
Furthermore, the real integration bounds should be zero to infinity (see below), but when I change the bounds to this, my result looks very different from the one obtained with the finite bounds used in my example below. How should I go about calculating this integral?
I have the following complex integral to compute numerically:
\begin{align} \int_0^{\infty} Re \left( \frac{t_1(s)-it_2(s)}{\sqrt{\zeta^2(s)-1}} \right) \exp(i\omega s/V) \, \mathrm{d}s \end{align}
here $i=\sqrt{-1}$, $t_1$ and $t_2$ are real parametric functions of $s$, $V=\frac{\pi}{2(\pi+2)}$ is a constant, and $\zeta(s)$ is a complex function of $s$. The functions $t_1$, $t_2$, and $\zeta$ are not available in closed form; rather, I have computed them numerically. A numpy array containing the sampled function values at various values of $s$ can be downloaded at http://speedy.sh/prG9e/stackexchange.npy. The integrand is problematic because $\zeta(0) = 1$.
I have plotted the integrands for various values of omega:
The singularity in the real integrand is apparent. My work so far:
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate

T2 = np.load('stackexchange.npy')  # Data: [s, x1, x2, t1, t2]

def pyfunc(z):
    return np.sqrt(z**2 - 1)

def integrand(s, omega):
    x1 = np.interp(s, T2[:, 0], T2[:, 1])
    x2 = np.interp(s, T2[:, 0], T2[:, 2])
    t1 = np.interp(s, T2[:, 0], T2[:, 3])
    t2 = np.interp(s, T2[:, 0], T2[:, 4])
    zeta = x2 + x1*1j
    sigma = np.pi/(np.pi + 2)
    V = 1/(2*sigma)
    return (-t2*np.real(1j/pyfunc(zeta)) + t1*np.real(1/pyfunc(zeta)))*np.exp(1j*omega*s/V)

def integral(omega):
    def real_func(x, omega):
        return np.real(integrand(x, omega))
    def imag_func(x, omega):
        return np.imag(integrand(x, omega))
    a = 0.05  # Lower bound
    b = 20.0  # Upper bound
    real_integral = integrate.quad(real_func, a, b, args=(omega,))
    imag_integral = integrate.quad(imag_func, a, b, args=(omega,))
    return real_integral[0] + 1j*imag_integral[0]

vintegral = np.vectorize(integral)
omega = np.linspace(-30, 30, 601)
I = vintegral(omega)  # vectorized, since integral() expects a scalar omega
plt.plot(omega, I.real, omega, I.imag)
plt.show()
Edit
I found an analytical representation of the integrand in question, defined below. It still seems that there are some numerical difficulties with it though as NaNs are returned when I try to integrate.
def pyfunc(z):
    return np.sqrt(z**2 - 1)

def func(theta):
    t1 = 1/np.sqrt(1 + np.tan(theta)**2)
    t2 = -1/np.sqrt(1 + 1/np.tan(theta)**2)
    return t1, t2

def integrand(s, omega):
    sigma = np.pi/(np.pi + 2)
    xs = np.exp(-np.pi*s/(2*sigma))
    x1 = -2*sigma/np.pi*(np.log(xs/(1 + np.sqrt(1 - xs**2))) + np.sqrt(1 - xs**2))
    x2 = 1 - 2*sigma/np.pi*(1 - xs)
    zeta = x2 + x1*1j
    Vc = 1/(2*sigma)
    theta = -1*np.arcsin(np.exp(-np.pi/(2.0*sigma)*s))
    t1, t2 = func(theta)
    return np.real((t1 - 1j*t2)/pyfunc(zeta))*np.exp(1j*omega*s/Vc)

def integral(omega):
    def real_func(x, omega):
        return np.real(integrand(x, omega))
    def imag_func(x, omega):
        return np.imag(integrand(x, omega))
    a = 0
    b = np.inf
    real_integral = integrate.quad(real_func, a, b, args=(omega,))
    imag_integral = integrate.quad(imag_func, a, b, args=(omega,))
    return real_integral[0] + 1j*imag_integral[0]

vintegral = np.vectorize(integral)
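One generic way to tame the inverse-square-root endpoint singularity, sketched on a model integrand (not tied to the data above): `scipy.integrate.quad` accepts `weight='alg'`, which integrates $w(x)f(x)$ with $w(x)=(x-a)^\alpha(b-x)^\beta$ handled analytically, so only the smooth factor is passed in.

```python
import numpy as np
from scipy import integrate

# Model problem with the same endpoint behaviour: integrand ~ x**(-1/2) near 0.
# quad integrates w(x)*f(x) with w(x) = (x - a)**alpha * (b - x)**beta,
# so pass only the smooth factor and let wvar = (-0.5, 0) supply x**(-1/2).
val, err = integrate.quad(np.cos, 0.0, 1.0, weight='alg', wvar=(-0.5, 0.0))

# Sanity check against an exactly known case: integral of x**(-1/2) on [0, 1] is 2.
check, _ = integrate.quad(lambda x: 1.0, 0.0, 1.0, weight='alg', wvar=(-0.5, 0.0))
print(val, check)
```

For the actual integrand one would first factor the singular part out of $1/\sqrt{\zeta^2(s)-1}$ near $s=0$; alternatively, the substitution $s=u^2$ (so $\mathrm{d}s = 2u\,\mathrm{d}u$) removes a square-root singularity without any weight machinery.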
|
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition of algebraic closure, do we get a different result?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition, or is it obtained from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable
Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$ for all $n \ge 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$ for all $t \in [0,\frac{1}{2}]$.
Can you give some hint?
My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$.
If $\lim_{n\to \infty} \left|\frac{a_{n+1}}{a_n}\right|<1$, then $a_n$ converges to zero.
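A way to make this precise without the ratio test (a sketch; $M$ is my name for the sup bound):

```latex
Let $M = \sup_{t\in[0,1/2]} |g(t)| < \infty$ (continuity on a compact interval).
By induction,
\[
  |g_n(t)| \le \frac{M\,t^{n-1}}{(n-1)!},
\]
since $|g_{n+1}(t)| \le \int_0^t \frac{M\,s^{n-1}}{(n-1)!}\,ds = \frac{M\,t^{n}}{n!}$.
Therefore
\[
  n!\,|g_n(t)| \le M\,n\,t^{n-1} \le M\,n\,2^{-(n-1)} \longrightarrow 0
\]
for every $t \in [0,\tfrac12]$.
```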
I have a bilinear functional that is bounded from below.
I try to approximate the minimum by an ansatz function that is a linear combination
of $n$ independent functions from the proper function space.
I then obtain an expression that is bilinear in the coefficients.
Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients equal to zero),
I get a set of $n$ equations, with $n$ the number of coefficients:
a set of $n$ linear homogeneous equations in the $n$ coefficients.
Now, instead of "directly attempting to solve" the equations for the coefficients, I rather look at the secular determinant, which must be zero, since otherwise no non-trivial solution exists.
This "characteristic polynomial" directly yields all permissible approximation values of the functional from my linear ansatz,
avoiding the necessity of solving for the coefficients.
I have trouble formulating the question precisely, but it strikes me that a direct solution of the equations can be circumvented and the values of the functional obtained directly from the condition that the determinant is zero.
I wonder if there is something deeper in the background, or so to say a more general principle.
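There is indeed a general principle here: the linear (Rayleigh-Ritz) ansatz turns the stationarity conditions into a generalized eigenvalue problem $H\mathbf{c} = \lambda S\mathbf{c}$, whose eigenvalues, i.e. the roots of the secular determinant $\det(H-\lambda S)=0$, are exactly the stationary values of the functional. A small numerical sketch with made-up matrices:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Made-up 3-term ansatz: H plays the role of the bilinear functional,
# S the overlap matrix of the (non-orthogonal) basis functions.
A = rng.normal(size=(3, 3))
H = A + A.T                      # symmetric
B = rng.normal(size=(3, 3))
S = B @ B.T + 3 * np.eye(3)      # symmetric positive definite

# Stationarity of c^T H c / c^T S c  <=>  H c = lam * S c
lams, C = eigh(H, S)

# Every eigenvalue is a root of the secular determinant det(H - lam*S) = 0,
# so the determinant condition alone already yields the stationary values.
for lam in lams:
    assert abs(np.linalg.det(H - lam * S)) < 1e-8

# The lowest root is the minimum of the functional over the ansatz space:
c = rng.normal(size=3)
assert (c @ H @ c) / (c @ S @ c) >= lams[0] - 1e-12
```

So the determinant condition is not a trick that bypasses the linear system; it is the characteristic polynomial of the same eigenproblem, and the coefficient vectors are recovered as the eigenvectors.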
If $x$ is a prime number and there exists a number $y$ which is the digit reversal of $x$ and is also prime, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies digitsum(z) = digitsum(x).
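A brute-force sanity check of the claim, here only for two-digit reversed-prime pairs (the helper names are mine); widening the range lets one probe whether the claim survives for longer primes:

```python
def digitsum(n):
    return sum(int(d) for d in str(n))

def is_palindrome(n):
    return str(n) == str(n)[::-1]

def is_prime(n):
    # simple trial division, fine for small n
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for x in range(10, 100):
    y = int(str(x)[::-1])
    if x != y and is_prime(x) and is_prime(y):
        z = (x + y) // 2          # midpoint is an integer: x and y are both odd
        assert is_palindrome(z) and digitsum(z) == digitsum(x)
```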
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
|
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
> for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
|
Mertens' third theorem is just the exponentiated version of the second theorem (without the bounds that Mertens proved for his second theorem):
\begin{align}-\ln\Biggl(\ln n\prod_{p\leqslant n}\biggl(1 - \frac{1}{p}\biggr)\Biggr)&= -\ln \ln n - \sum_{p\leqslant n} \ln \biggl(1 - \frac{1}{p}\biggr)\\&= \Biggl(\sum_{p\leqslant n}\frac{1}{p} - \ln \ln n - M\Biggr) + \Biggl(M - \sum_{p\leqslant n} \biggl(\ln\biggl(1-\frac{1}{p}\biggr) + \frac{1}{p}\biggr)\Biggr),\end{align}
where the first term converges to $0$ by Mertens' second theorem, and the second term converges to $\gamma$ by definition of $M$.
Mertens' bounds in the second theorem and estimates for
$$\sum_{p > n}\biggl(\ln\biggl(1-\frac{1}{p}\biggr)+\frac{1}{p}\biggr)$$
give you bounds for
$$e^\gamma\ln n\prod_{p\leqslant n}\biggl(1-\frac{1}{p}\biggr),\tag{$\ast$}$$
and conversely bounds for that give you bounds for
$$\left\lvert\sum_{p\leqslant n}\frac{1}{p} - \ln \ln n - M\right\rvert,\tag{$\ast\!\ast$}$$
but it is doubtful whether one can directly prove bounds for $(\ast)$ that give you back Mertens' bounds for $(\ast\ast)$.
One can use Mertens' first theorem to derive the second via an integration by parts, Hardy and Wright for example do that, but don't give explicit bounds on $(\ast\ast)$.
For $x > 0$ we define
$$S(x) := \sum_{p\leqslant x} \frac{\ln p}{p}.$$
Mertens' first theorem tells us
$$\lvert S(x) - \ln x\rvert \leqslant 2 + O(x^{-1}),$$
and we can write
$$T(x) := \sum_{p\leqslant x} \frac{1}{p} = \int_{3/2}^x \frac{1}{\ln t}\,dS(t)$$
with a (Riemann/Lebesgue-) Stieltjes integral. Integration by parts yields
\begin{align}T(x) &= \int_{3/2}^x \frac{1}{\ln t}\,dS(t)\\&= \frac{S(x)}{\ln x} - \frac{S(3/2)}{\ln \frac{3}{2}} - \int_{3/2}^x S(t)\,d\biggl(\frac{1}{\ln t}\biggr)\\&= \frac{S(x)}{\ln x} + \int_{3/2}^x \frac{S(t)}{t(\ln t)^2}\,dt\\&= \frac{S(x)}{\ln x} + \int_{3/2}^x \frac{dt}{t\ln t} + \int_{3/2}^x \frac{S(t) - \ln t}{t(\ln t)^2}\,dt\\&= \ln \ln x + \underbrace{1 - \ln \ln \frac{3}{2} + \int_{3/2}^\infty \frac{S(t) - \ln t}{t(\ln t)^2}\,dt}_M + \underbrace{\frac{S(x)-\ln x}{\ln x} - \int_x^\infty \frac{S(t)-\ln t}{t(\ln t)^2}\,dt}_{O\bigl(\frac{1}{\ln x}\bigr)}.\end{align}
I'm not sure, however, whether one can get exactly Mertens' bounds on $(\ast\ast)$ easily from that.
So in a way, Mertens' first theorem is the most powerful, since it implies the others, at least if we don't need explicit bounds for the differences.
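For what it's worth, the third theorem is easy to check numerically with a small sieve (the value of $\gamma$ is hard-coded below):

```python
from math import exp, log

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b'\x00\x00'
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return [i for i in range(2, n + 1) if sieve[i]]

gamma = 0.5772156649015329   # Euler-Mascheroni constant

n = 100_000
prod = 1.0
for p in primes_upto(n):
    prod *= 1.0 - 1.0 / p

ratio = exp(gamma) * log(n) * prod   # tends to 1 by Mertens' third theorem
print(ratio)
```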
|
There are three types of neutrinos known today. When detecting them, how can we tell which type we are detecting?
Neutrino flavor is defined as agreeing with the flavor of the charged lepton participating in the interaction, so that the neutrino in the reaction $$ \nu + A \to \mu + X \,, $$ is defined to be a muon neutrino, and the one in $$ \nu + n \to e + p $$ is an electron neutrino by definition.
We have no way of knowing the alleged flavor of a neutrino participating in a neutral current interaction.
As a matter of experimental fact, electron and muon neutrinos (and anti-neutrinos) are easy, while tau neutrinos are much harder, because demonstrating that you have a tau lepton is hard; both OPERA and IceCube can do that (to choose currently running experiments).
|
1. The problem statement, all variables and given/known data
An electron is in a 1-D potential well of depth $20\,\mathrm{eV}$ and width $d=0.2\,\mathrm{nm}$, in its ground state $N=1$. What is the energy of the ground state? Write the normalized wavefunction of the ground state. What is the probability of finding the particle outside the well?
2. The attempt at a solution
First I drew an image of the well, so we can talk more easily - otherwise this makes no sense, as it looks like a complex homework problem. In the image $W_p$ marks the potential energy, but never mind, I'll use the $E_p$ notation.
OK, now that I have an image I can tell you what I already know and what is still unclear to me. This is my first finite-potential-well homework problem, so take it easy on me.
I know that in a standard finite potential well, which is symmetric, we have two possible wavefunctions - one odd, $\psi_{odd}$, and one even, $\psi_{even}$. Each is split into three separate functions, one for each interval - I will name those $\psi_1,\psi_2,\psi_3$. Now let me write them all out:
For the odd solutions we have the wavefunction:
\begin{align} \psi_{odd} = \left\{ \begin{aligned} \psi_1&=Ae^{Kx}\\ \psi_2&=- \frac{ Ae^{-K\frac{d}{2}} }{ \sin \left( L \frac{d}{2} \right) } \sin \left( Lx \right)\\ \psi_3&=-Ae^{-Kx} \end{aligned} \right. \end{align}
where $L=\sqrt{2mE / \hbar^2}$ and $K=\sqrt{-2m(E-E_p)/\hbar^2}$. These are the same for the even solutions.
For the even solutions we have the wavefunction:
\begin{align} \psi_{even} = \left\{ \begin{aligned} \psi_1&=Ae^{Kx} \longleftarrow\substack{\text{same as for the odd solutions}}\\ \psi_2&=\frac{ Ae^{-K\frac{d}{2}} }{ \cos \left( L \frac{d}{2} \right) } \cos \left( Lx \right)\\ \psi_3&=Ae^{-Kx} \end{aligned} \right. \end{align}
By applying the boundary condition for matching derivatives to these wavefunctions (even and odd) we always get a
"transcendental equation" - its LHS is different in the odd and even cases while its RHS is the same in both. For the odd solutions we have the transcendental equation:
\begin{align} -\sqrt{\frac{1}{E_p / E -1} } = \tan\left(\frac{\sqrt{2mE}}{\hbar}\frac{d}{2}\right) \end{align}
For the even solutions we have the transcendental equation:
\begin{align} \sqrt{\frac{E_p}{E} - 1 } = \tan\left(\frac{\sqrt{2mE}}{\hbar}\frac{d}{2}\right) \end{align}
Because the RHS is the same, we can use the fact that the tangent repeats every $N\pi$ and derive an equation for the energies, which (solved for $N$) looks like this:
$$ N = \frac{\sqrt{2mE}}{\pi\hbar} \frac{d}{2} $$
First, I would like to know if my equations up to this point are correct.

3. What I don't understand
I have to calculate the energy of the ground state, so I set $N=1$ in the last equation and calculated the energy. I got the result $37.64\,\mathrm{eV}$ while the book says it is $4.4\,\mathrm{eV}$... If I manage to calculate the energy I can afterwards calculate $L$ and $K$, which are needed by the wavefunction, so I need to start here I think.
Even if my obtained value for the energy were correct, I don't know by what criterion to decide which set of equations I should use (even or odd). I am guessing that for $N=1$ I should take the odd ones and that for $N=2$ I should take the even ones, but what about $N=3$?
Notice that the wavefunctions contain no $N$... How do I plug $N$ into my equations? Where should I put it, and how do I derive equations which include $N$?
Oh, and there is one more thing: I don't know how to normalize $\psi_{odd}$ or $\psi_{even}$. Do I have to take a superposition of their subfunctions $\psi_1,\psi_2,\psi_3$?
I will include a hyperlink to the derivation of the above equations (it is in Slovenian, but don't mind the language, everything is there - I wrote it myself in LaTeX, just in case it might come in handy).
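As a cross-check (not part of the original post): numerically solving the even-parity transcendental equation above for $E_p = 20\,\mathrm{eV}$, $d = 0.2\,\mathrm{nm}$ gives an energy close to the book's $4.4\,\mathrm{eV}$, so the ground state comes from the even branch; the $N$-equation, which (if I read it right) only constrains the tangent's period and drops the matching condition, is where $37.64\,\mathrm{eV}$ went wrong. A sketch with scipy:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.constants import hbar, m_e, e

Ep = 20.0        # well depth in eV
d = 0.2e-9       # well width in m
a = d / 2.0

def even_condition(E):
    # even-parity matching: sqrt(Ep/E - 1) = tan(L * d/2), L = sqrt(2 m E)/hbar
    L = np.sqrt(2.0 * m_e * E * e) / hbar   # E converted from eV via e
    return np.tan(L * a) - np.sqrt(Ep / E - 1.0)

# bracket the lowest root below the first pole of tan (around 9.4 eV here)
E0 = brentq(even_condition, 0.05, 9.0)
print(E0)   # close to the book's 4.4 eV
```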
|
TL;DR The initially published crystal structure of $\ce{[NEt4]2[InCl5]}$ [1] is, according to further investigations [3], not valid. The $\ce{InCl5^2-}$ ion does not have $C_\mathrm{4v}$ symmetry, and VSEPR theory pretty much explains the formation of numerous slightly distorted trigonal bipyramidal $\ce{InCl5^2-}$-containing complexes according to the most recent single-crystal structure experiments.
On the other hand, the bond lengths and geometry index point to a dominating character of a
square pyramidal environment, which in this case can be dictated by the crystal packing, towards which $\ce{[InCl5]^2-}$ is more sensitive than $\ce{[SnCl5]-}$ due to the noticeable size difference ($\ce{[InCl5]^2-}$ is bulkier). Either way, it looks like another crystallographic experiment (preferably at lower temperatures, to decrease the size of the thermal ellipsoids) is needed to determine the angles more precisely.
As a general advice, always pay attention to the $R_1$ and $R_2$ values of the crystal structure. A rule of thumb: a good structure refinement of small molecules should lead to $R_1 < 0.05$ and $R_2 < 0.12$.
Crystal structure of $\ce{[NEt4]2[InCl5]}$, determined in 1969 [1], has been subsequently criticized in several publications, mainly involving additional symmetry vibration analysis of Raman spectra for $\ce{MX5^2-}$ anions [2]. It's been established that $\ce{InCl5^2-}$ ion does not have $C_\mathrm{4v}$ symmetry (but retains $C_\mathrm{2v}$), and that the crystal may not be centrosymmetric.
The results are summarized in [3]:
Deconvolution of the single-crystal data was again performed with the aid
of the low temperature data, revealing $11$ $\ce{InCl5^2-}$ bands between $300$ and $\pu{100 cm-1}$. Two overlapping A stretches are clearly evident at $292$ and $\pu{286 cm-1}$, but no B stretch is apparent. The two E symmetry stretches expected for the anion of a $C_2$ site, but not for a $C_4$ site, are evident at $281$ and $\pu{271 cm-1}$. [...] With the aid of low-temperature data, the four E modes predicted for $\ce{InCl5^2-}$ at $C_2$ sites are found at $144$, $136$, $122$, and $\pu{103 cm-1}$ in the single-crystal data. With this reinterpretation of the spectrum it is no longer necessary to assume arbitrarily the presence of a lattice mode in this region.
In summary, the vibrational data for the tetraethylammonium salts of $\ce{InCl5^2-}$ and $\ce{TlCl5^2-}$ indicate that these complexes reside on sites lacking full $C_4$ symmetry. The most straightforward conclusion from the vibrational data is that the $\ce{InCl5^2-}$ ions have local $C_2$ symmetry, which would be inconsistent with the previous structure determination [1]. [...]
The structure of $\ce{[(C2H5)4N]2[InCl5]}$, as previously reported, consisted of centrosymmetrically related $\ce{InCl5^2-}$ ions situated about the
Wyckoff c positions of $C_4$ symmetry in space group $C_\mathrm{4h}-P4/n$.
Coaxial with the $\ce{InCl5^2-}$ ions are disordered $\ce{(C2H5)4N+}$ ions
also situated about the Wyckoff c positions. The N atoms of the two remaining $\ce{(C2H5)4N+}$ ions in the unit cell occupy the centrosymmetrically related Wyckoff b sites of $S_4$ symmetry, the cation as a whole again being disordered.
There are two disturbing features of this solution and its subsequent refinement: the disorder of the $\ce{(C2H5)4N+}$ ions and the significant residual electron density in the region of the basal Cl atoms of the tetragonal-pyramidal anion.
It is demonstrated further that for the ordered model the most satisfying crystallographic interpretation coexists with $C_2$ rather than $C_4$ symmetry. A correlation of allowed point symmetries implies switching from $P4/n$ to $P\bar{4}$ space group, thus reducing $R$-factor ($R_1$) from $9.5\%$ to $6.7\%$.
During the refinement in $P\bar{4}$,
small but significant changes
occur in the basal plane of the $\ce{InCl5^2-}$ ion. The basal Cl atoms
have moved by $\pu{-0.5 A}$ from their fourfold symmetric positions in $P4/n$, leading to variations in the bond distances and angles about the In atom (Figure 3) that are qualitatively sufficient to explain the vibrational results.
The geometry index $\tau_5$ for the $\ce{[InCl5]}$ fragment can be used to determine the formal coordination environment as follows:
$$\tau_5 = \frac{\beta - \alpha}{60^\circ},$$
where $\alpha$ and $\beta$ are the two greatest valence angles of the coordination center ($\angle \ce{Cl - In - Cl}$, $\alpha < \beta$).
\begin{align}\begin{cases}\tau_5 &= 0 \qquad &\text{square pyramidal geometry} \\\tau_5 &= 1 \qquad &\text{trigonal bipyramidal geometry}\end{cases}\end{align}
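The index is trivial to compute; as a check, the angles quoted below for bis(tetraphenylphosphonium) pentachloro-indate(III) reproduce the stated $\tau_5 = 0.84$:

```python
def tau5(alpha, beta):
    """Geometry index from the two greatest valence angles (degrees, alpha < beta)."""
    return (beta - alpha) / 60.0

# angles for [PPh4]2[InCl5] as quoted in the text
print(round(tau5(124.24, 174.74), 2))   # → 0.84
```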
For the original $\ce{[NEt4]2[InCl5]}$ structure refined in [3] I've taken the angular values from their supplementary materials. It turns out that changing the space group to $P\bar{4}$ still results in a dominating character of
square pyramidal geometry for $\ce{[InCl5]^2-}$, though one must remember that the rotation axis of highest order is $C_2$, not $C_4$:
$$\alpha = (150.6 \pm 5.0)^\circ, \beta = (154.1 \pm 6.0)^\circ, \bar{\tau_5} = \frac{154.1^\circ - 150.6^\circ}{60^\circ} = 0.06; \tau_5 \in [0.04; 0.24]$$
Paper [4] mentions that for isoelectronic $\ce{[SnCl5]-}$-containing structures with various cations only trigonal-bipyramidal geometries have been reported, and suggests a higher influence of crystal packing on the $\ce{[InCl5]^2-}$ geometry. This can also be explained by the higher flexibility of the $\ce{MX5}$ fragment in $\ce{[PPh4]2[InCl5]}$ in comparison with $\ce{[PPh4][SnCl5]}$ [5] due to the greater average bond length: $d(\ce{In - Cl}) = \pu{2.5 A}$ vs $d(\ce{Sn - Cl}) = \pu{2.3 A}$.
As for auxiliary experimental data, all crystal structures including the $\ce{[InCl5]^2-}$ fragment determined in the past two decades completely support VSEPR theory, which predicts trigonal bipyramidal $D_\mathrm{3h}$ symmetry (with one intermediate exception of $\tau_5 \approx 50\%$).
bis(Tetraphenylphosphonium) pentachloro-indate(III) [4], trigonal bipyramidal geometry:$$\alpha = 124.24^\circ, \beta = 174.74^\circ, \tau_5 = \frac{174.74^\circ - 124.24^\circ}{60^\circ} = 0.84$$
$\color{#EEEEEE}{\Large\bullet}~\ce{H}$;$\color{#909090}{\Large\bullet}~\ce{C}$;$\color{#FF8000}{\Large\bullet}~\ce{P}$;$\color{#1FF01F}{\Large\bullet}~\ce{Cl}$;$\color{#A67573}{\Large\bullet}~\ce{In}$.
bis(Diphenyldichlorophosphonium) pentachloro-indate(III) [6], trigonal bipyramidal geometry:$$\alpha = 120.93^\circ, \beta = 179.29^\circ, \tau_5 = \frac{179.29^\circ - 120.93^\circ}{60^\circ} = 0.97$$
$\color{#EEEEEE}{\Large\bullet}~\ce{H}$;$\color{#909090}{\Large\bullet}~\ce{C}$;$\color{#FF8000}{\Large\bullet}~\ce{P}$;$\color{#1FF01F}{\Large\bullet}~\ce{Cl}$;$\color{#A67573}{\Large\bullet}~\ce{In}$.
tetrakis($\mu_3$-Selenido)-tetrakis($\mu_2$-1,5-bis(diphenylphosphino)pentane)-deca-gold(I) pentachloro-indate(III) [7], intermediate geometry:$$\alpha = 138.45^\circ, \beta = 166.50^\circ, \tau_5 = \frac{166.50^\circ - 138.45^\circ}{60^\circ} = 0.47$$
$\color{#EEEEEE}{\Large\bullet}~\ce{H}$;$\color{#909090}{\Large\bullet}~\ce{C}$;$\color{#FF8000}{\Large\bullet}~\ce{P}$;$\color{#1FF01F}{\Large\bullet}~\ce{Cl}$;$\color{#FFA100}{\Large\bullet}~\ce{Se}$;$\color{#A67573}{\Large\bullet}~\ce{In}$;$\color{#FFD123}{\Large\bullet}~\ce{Au}$.
tetrakis($\mu_2$-chloro)-chloro-tetrakis(triphenylphosphine-P)-di-copper(I)-indium(III) tetrahydrofuran solvate [8], trigonal bipyramidal geometry (though the authors refer to it as "quasi square-pyramidal" coordination based on a single slightly shorter $\ce{In - Cl}$ bond distance $(d(\ce{In-Cl_\mathrm{ap}}) = \pu{2.36 A})$, shortest $d(\ce{In-Cl_\mathrm{eq}}) = \pu{2.42 A})$: $$\alpha = 123.90^\circ, \beta = 172.89^\circ, \tau_5 = \frac{172.89^\circ - 123.90^\circ}{60^\circ} = 0.82$$
$\color{#EEEEEE}{\Large\bullet}~\ce{H}$;$\color{#909090}{\Large\bullet}~\ce{C}$;$\color{#FF8000}{\Large\bullet}~\ce{P}$;$\color{#1FF01F}{\Large\bullet}~\ce{Cl}$;$\color{#C88033}{\Large\bullet}~\ce{Cu}$;$\color{#A67573}{\Large\bullet}~\ce{In}$.
Bibliography
[1] Brown, D. S.; Einstein, F. W. B.; Tuck, D. G. Inorganic Chemistry 1969, 8 (1), 14–18. DOI 10.1021/ic50071a004.
[2] Adams, D. M.; Smardzewski, R. R. Journal of the Chemical Society A: Inorganic, Physical, Theoretical 1971, 714. DOI 10.1039/j19710000714.
[3] Joy, G.; Gaughan, A. P.; Wharf, I.; Shriver, D. F.; Dougherty, J. P. Inorganic Chemistry 1975, 14 (8), 1795–1801. DOI 10.1021/ic50150a011.
[4] Bubenheim, W.; Frenzen, G.; Müller, U. Acta Crystallographica Section C 1995, 51 (6), 1120–1124. DOI 10.1107/S0108270194011789.
[5] Müller, U.; Siekmann, J. F. Acta Crystallographica Section C 1996, 52 (2), 330–333. DOI 10.1107/S0108270195011073.
[6] Taraba, J.; Zak, Z. Inorganic Chemistry 2003, 42 (11), 3591–3594. DOI 10.1021/ic034091n.
[7] Olkowska-Oetzel, J.; Sevillano, P.; Eichhöfer, A.; Fenske, D. European Journal of Inorganic Chemistry 2004 (5), 1100–1106. DOI 10.1002/ejic.200300774.
[8] Zhang, X.-Z.; Song, Y.-W.; HuiWu, F.; Zhang, Q.-F. Zeitschrift für Naturforschung B 2007, 62 (6). DOI 10.1515/znb-2007-0605.
|
In single-variable calculus, the functions that one encounters are functions of a variable (usually \(x\) or \(t\)) that varies over some subset of the real number line (which we denote by \(\mathbb{R}\)). For such a function, say, \(y = f(x)\), the \(\textbf{graph}\) of the function \(f\) consists of the points \((x, y) = (x, f(x))\). These points lie in the \(\textbf{Euclidean plane}\), which, in the \(\textbf{Cartesian}\) or \(\textbf{rectangular}\) coordinate system, consists of all ordered pairs of real numbers \((a, b)\). We use the word ``Euclidean'' to denote a system in which all the usual rules of Euclidean geometry hold. We denote the Euclidean plane by \(\mathbb{R}^{2}\); the ``2'' represents the number of \(\textit{dimensions}\) of the plane. The Euclidean plane has two perpendicular \(\textbf{coordinate axes}\): the \(x\)-axis and the \(y\)-axis.
In vector (or multivariable) calculus, we will deal with functions of two or three variables (usually \(x, y\) or \(x, y, z\), respectively). The graph of a function of two variables, say, \(z = f(x,y)\), lies in
Euclidean space, which in the Cartesian coordinate system consists of all ordered triples of real numbers \((a, b, c)\). Since Euclidean space is 3-dimensional, we denote it by \(\mathbb{R}^{3}\). The graph of \(f\) consists of the points \((x, y, z) = (x, y, f(x, y))\). The 3-dimensional coordinate system of Euclidean space can be represented on a flat surface, such as this page or a blackboard, only by giving the illusion of three dimensions, in the manner shown in Figure 1.1.1. Euclidean space has three mutually perpendicular coordinate axes (\(x, y\) and \(z\)), and three mutually perpendicular coordinate planes: the \(xy\)-plane, \(yz\)-plane and \(xz\)-plane (Figure 1.1.2).
Figure 1.1.1 Figure 1.1.2
The coordinate system shown in Figure 1.1.1 is known as a \(\textbf{right-handed coordinate system}\), because it is possible, using the right hand, to point the index finger in the positive direction of the \(x\)-axis, the middle finger in the positive direction of the \(y\)-axis, and the thumb in the positive direction of the \(z\)-axis, as in Figure 1.1.3.
Fig 1.1.3:
An equivalent way of defining a right-handed system is if you can point your thumb upwards in the positive \(z\)-axis direction while using the remaining four fingers to rotate the \(x\)-axis towards the \(y\)-axis. Doing the same thing with the left hand is what defines a \(\textbf{left-handed coordinate system}\). Notice that switching the \(x\)- and \(y\)-axes in a right-handed system results in a left-handed system, and that rotating either type of system does not change its ``handedness''. Throughout the book we will use a right-handed system.
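The handedness convention can also be stated algebraically: for the standard unit vectors, a right-handed system satisfies \(\hat{x} \times \hat{y} = \hat{z}\), and swapping two axes flips the sign. A quick numerical check (not part of the text):

```python
import numpy as np

# unit vectors along the x-, y-, z-axes
x_hat = np.array([1.0, 0.0, 0.0])
y_hat = np.array([0.0, 1.0, 0.0])
z_hat = np.array([0.0, 0.0, 1.0])

# right-handed: rotating x toward y advances a right-hand screw along z
assert np.allclose(np.cross(x_hat, y_hat), z_hat)

# swapping x and y gives a left-handed system: the cross product flips sign
assert np.allclose(np.cross(y_hat, x_hat), -z_hat)

# equivalently, det of the axis matrix is +1 for right-handed, -1 for left-handed
assert np.isclose(np.linalg.det(np.stack([x_hat, y_hat, z_hat])), 1.0)
```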
For functions of three variables, the graphs exist in 4-dimensional space (i.e. \(\mathbb{R}^{4}\)), which we cannot see in our 3-dimensional space, let alone simulate in 2-dimensional space. So we can only think of 4-dimensional space abstractly. For an entertaining discussion of this subject, see the book by Abbott.
So far, we have discussed the \(\textit{position}\) of an object in 2-dimensional or 3-dimensional space. But what about something such as the velocity of the object, or its acceleration? Or the gravitational force acting on the object? These phenomena all seem to involve motion and \(\textit{direction}\) in some way. This is where the idea of a \(\textit{vector}\) comes in.
You have already dealt with velocity and acceleration in single-variable calculus. For example, for motion along a straight line, if \(y = f(t)\) gives the displacement of an object after time \(t\), then \(dy/dt = f\,'(t)\) is the velocity of the object at time \(t\). The derivative \(f\,'(t)\) is just a number, which is positive if the object is moving in an agreed-upon ``positive'' direction, and negative if it moves in the opposite of that direction. So you can think of that number, which was called the velocity of the object, as having two components: a \(\textit{magnitude}\), indicated by a nonnegative number, preceded by a direction, indicated by a plus or minus symbol (representing motion in the positive direction or the negative direction, respectively), i.e. \(f\,'(t) = \pm a\) for some number \(a \ge 0\). Then \(a\) is the magnitude of the velocity (normally called the \(\textit{speed}\) of the object), and the \(\pm\) represents the direction of the velocity (though the \(+\) is usually omitted for the positive direction).
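The magnitude-plus-sign decomposition \(f\,'(t) = \pm a\) can be sketched in a few lines of code. This is an illustration of ours, not the book's; the function name and sample values are hypothetical.

```python
# Split a 1-D velocity f'(t) = ±a into a magnitude (the speed, a >= 0)
# and a direction (the sign). Hypothetical helper for illustration.
def speed_and_direction(velocity):
    speed = abs(velocity)                      # magnitude a >= 0
    direction = "+" if velocity >= 0 else "-"  # sign encodes the direction
    return speed, direction

print(speed_and_direction(-3.5))  # (3.5, '-'): moving in the negative direction
print(speed_and_direction(2.0))   # (2.0, '+'): moving in the positive direction
```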
For motion along a straight line, i.e. in a 1-dimensional space, the velocities are also contained in that 1-dimensional space, since they are just numbers. For general motion along a curve in 2- or 3-dimensional space, however, velocity will need to be represented by a multidimensional object which should have both a magnitude and a direction. A geometric object which has those features is an arrow, which in elementary geometry is called a ``directed line segment''. This is the motivation for how we will define a vector.
Definition 1.1
A (nonzero) \(\textbf{vector}\) is a directed line segment drawn from a point \(P\) (called its \(\textbf{initial point}\)) to a point \(Q\) (called its \(\textbf{terminal point}\)), with \(P\) and \(Q\) being distinct points. The vector is denoted by \(\overrightarrow{PQ}\). Its \(\textbf{magnitude}\) is the length of the line segment, denoted by \(\norm{\overrightarrow{PQ}}\), and its \(\textbf{direction}\) is the same as that of the directed line segment. The \(\textbf{zero vector}\) is just a point, and it is denoted by \(\textbf{0}\).
To indicate the direction of a vector, we draw an arrow from its initial point to its terminal point. We will often denote a vector by a single bold-faced letter (e.g. \(\textbf{v}\)) and use the terms ``magnitude" and ``length'' interchangeably. Note that our definition could apply to systems with any number of dimensions (Figure 1.1.4 (a)-(c)).
Figure 1.1.4 Vectors in different dimensions
A few things need to be noted about the zero vector. Our motivation for what a vector is included the notions of magnitude and direction. What is the magnitude of the zero vector? We define it to be zero, i.e. \(\norm{\textbf{0}} = 0\). This agrees with the definition of the zero vector as just a point, which has zero length. What about the direction of the zero vector? A single point really has no well-defined direction. Notice that we were careful to only define the direction of a \(\textit{nonzero}\) vector, which is well-defined since the initial and terminal points are distinct. Not everyone agrees on the direction of the zero vector. Some contend that the zero vector has \(\textit{arbitrary}\) direction (i.e. can take any direction), some say that it has \(\textit{indeterminate}\) direction (i.e. the direction cannot be determined), while others say that it has \(\textit{no}\) direction. Our definition of the zero vector, however, does not require it to have a direction, and we will leave it at that.
Now that we know what a vector is, we need a way of determining when two vectors are equal. This leads us to the following definition.
Definition 1.2
Two nonzero vectors are \(\textbf{equal}\) if they have the same magnitude and the same direction. Any vector with zero magnitude is equal to the zero vector.
By this definition, vectors with the same magnitude and direction but with different initial points would be equal. For example, in Figure 1.1.5 the vectors \(\textbf{u}\), \(\textbf{v}\) and \(\textbf{w}\) all have the same magnitude \(\sqrt{5}\) (by the Pythagorean Theorem). And we see that \(\textbf{u}\) and \(\textbf{w}\) are parallel, since they lie on lines having the same slope \(\frac{1}{2}\), and they point in the same direction. So \(\textbf{u} = \textbf{w}\), even though they have different initial points. We also see that \(\textbf{v}\) is parallel to \(\textbf{u}\) but points in the opposite direction. So \(\textbf{u} \ne \textbf{v}\).
Figure 1.1.5
So we can see that there are an infinite number of vectors for a given magnitude and direction, those vectors all being equal and differing only by their initial and terminal points. Is there a single vector which we can choose to represent all those equal vectors? The answer is yes, and is suggested by the vector \(\textbf{w}\) in Figure 1.1.5.
Unless otherwise indicated, when speaking of ``the vector'' with a given magnitude and direction, we will mean the one whose initial point is at the origin of the coordinate system.
Thinking of vectors as starting from the origin provides a way of dealing with vectors in a standard way, since every coordinate system has an origin. But there will be times when it is convenient to consider a different initial point for a vector (for example, when adding vectors, which we will do in the next section). Another advantage of using the origin as the initial point is that it provides an easy correspondence between a vector and its terminal point.
Example 1.1
Let \(\textbf{v}\) be the vector in \(\mathbb{R}^{3}\) whose initial point is at the origin and whose terminal point is \((3,4,5)\). Though the \(\textit{point}\) \((3,4,5)\) and the vector \(\textbf{v}\) are different objects, it is convenient to write \(\textbf{v} = (3,4,5)\). When doing this, it is understood that the initial point of \(\textbf{v}\) is at the origin \((0,0,0)\) and the terminal point is \((3,4,5)\).
Figure 1.1.6 Correspondence between points and vectors
Unless otherwise stated, when we refer to vectors as \(\textbf{v} = (a,b)\) in \(\mathbb{R}^{2}\) or \(\textbf{v} = (a,b,c)\) in \(\mathbb{R}^{3}\), we mean vectors in Cartesian coordinates starting at the origin. Also, we will write the zero vector \(\textbf{0}\) in \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\) as \((0,0)\) and \((0,0,0)\), respectively.
The point-vector correspondence provides an easy way to check if two vectors are equal, without having to determine their magnitude and direction. Similar to seeing if two points are the same, you are now seeing if the terminal points of vectors starting at the origin are the same. For each vector, find the (unique!) vector it equals whose initial point is the origin. Then compare the coordinates of the terminal points of these ``new'' vectors: if those coordinates are the same, then the original vectors are equal. To get the ``new'' vectors starting at the origin, you \(\textit{translate}\) each vector to start at the origin by subtracting the coordinates of the original initial point from the original terminal point. The resulting point will be the terminal point of the ``new'' vector whose initial point is the origin. Do this for each original vector then compare.
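The translate-then-compare procedure above can be sketched in code. This is a minimal sketch of ours (the function names are not from the book): subtract the initial point from the terminal point coordinatewise, then compare the resulting terminal points.

```python
# Translate a vector so its initial point is the origin:
# the new terminal point is (terminal - initial), coordinatewise.
def translate_to_origin(initial, terminal):
    return tuple(t - i for i, t in zip(initial, terminal))

# Two vectors PQ and RS are equal exactly when their translated
# terminal points coincide.
def vectors_equal(p, q, r, s):
    return translate_to_origin(p, q) == translate_to_origin(r, s)

# The vectors of Example 1.2 below are equal:
print(vectors_equal((2, 1, 5), (3, 5, 7), (1, -3, -2), (2, 1, 0)))  # True
```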
Example 1.2
Consider the vectors \(\overrightarrow{PQ}\) and \(\overrightarrow{RS}\) in \(\mathbb{R}^{3}\), where \(P = (2,1,5), Q = (3,5,7), R = (1,-3,-2)\) and \(S = (2,1,0)\). Does \(\overrightarrow{PQ} = \overrightarrow{RS}\)?
Solution
The vector \(\overrightarrow{PQ}\) is equal to the vector \(\textbf{v}\) with initial point \((0,0,0)\) and terminal point \(Q - P = (3,5,7) - (2,1,5) = (3 - 2,5 - 1,7 - 5) = (1,4,2)\).
Similarly, \(\overrightarrow{RS}\) is equal to the vector \(\textbf{w}\) with initial point \((0,0,0)\) and terminal point \(S - R = (2,1,0) - (1,-3,-2) = (2 - 1, 1 - (-3),0 - (-2)) = (1,4,2)\). So \(\overrightarrow{PQ} = \textbf{v} = (1,4,2)\) and \(\overrightarrow{RS} = \textbf{w} = (1,4,2)\). \(\therefore \overrightarrow{PQ} = \overrightarrow{RS}\)
Figure 1.1.7
Recall the distance formula for points in the Euclidean plane:
For points \(P = (x_{1}, y_{1})\), \(Q = (x_{2}, y_{2})\) in \(\mathbb{R}^{2}\), the distance \(d\) between \(P\) and \(Q\) is:
\[d = \sqrt{(x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2}}\]
By this formula, we have the following result:
Note
For a vector \(\overrightarrow{PQ}\) in \(\mathbb{R}^{2}\) with initial point \(P = (x_{1}, y_{1})\) and terminal point \(Q = (x_{2}, y_{2})\), the magnitude of \(\overrightarrow{PQ}\) is:
\[\norm{\overrightarrow{PQ}} = \sqrt{(x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2}}\]
Finding the magnitude of a vector \(\textbf{v} = (a,b)\) in \(\mathbb{R}^{2}\) is a special case of the above formula with \(P = (0,0)\) and \(Q = (a,b)\):
For a vector \(\textbf{v} = (a,b)\) in \(\mathbb{R}^{2}\), the magnitude of \(\textbf{v}\) is:
\[\norm{\textbf{v}} = \sqrt{a^{2} + b^{2}}\]
To calculate the magnitude of vectors in \(\mathbb{R}^{3}\), we need a distance formula for points in Euclidean space (we will postpone the proof until the next section):
Theorem 1.1
The distance \(d\) between points \(P = (x_{1}, y_{1}, z_{1})\) and \(Q = (x_{2}, y_{2}, z_{2})\) in \(\mathbb{R}^{3}\) is:
\[d = \sqrt{(x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2} + (z_{2} - z_{1})^{2}}\]
The proof will use the following result:
Theorem 1.2
For a vector \(\textbf{v} = (a,b,c)\) in \(\mathbb{R}^{3}\), the magnitude of \(\textbf{v}\) is:
\[\norm{\textbf{v}} = \sqrt{a^{2} + b^{2} + c^{2}}\]
Proof: There are four cases to consider: \(\textit{Case 1:}\) \(a = b = c = 0\). Then \(\textbf{v} = \textbf{0}\), so \(\norm{\textbf{v}} = 0 = \sqrt{0^{2} + 0^{2} + 0^{2}} = \sqrt{a^{2} + b^{2} + c^{2}}\).
\(\textit{Case 2:}\) \(\textit{exactly two of }\)\(a, b, c\) are \(0\). Without loss of generality, we assume that \(a = b = 0\) and \(c \ne 0\) (the other two possibilities are handled in a similar manner). Then \(\textbf{v} = (0,0,c)\), which is a vector of length \(|c|\) along the \(z\)-axis. So \(\norm{\textbf{v}} = | c | = \sqrt{c^{2}} = \sqrt{0^{2} + 0^{2} + c^{2}} = \sqrt{a^{2} + b^{2} + c^{2}}\).
\(\textit{Case 3:}\) \(\textit{exactly one of }\)\(a, b, c\) is \(0\). Without loss of generality, we assume that \(a = 0\), \(b \ne 0\) and \(c \ne 0\) (the other two possibilities are handled in a similar manner). Then \(\textbf{v} = (0,b,c)\), which is a vector in the \(yz\)-plane, so by the Pythagorean Theorem we have \(\norm{\textbf{v}} = \sqrt{b^{2} + c^{2}} = \sqrt{0^{2} + b^{2} + c^{2}} = \sqrt{a^{2} + b^{2} + c^{2}}\).

\(\textit{Case 4:}\) \(\textit{none of }\)\(a, b, c\) are \(0\). Without loss of generality, we can assume that \(a, b, c\) are all positive (the other seven possibilities are handled in a similar manner). Consider the points \(P = (0,0,0)\), \(Q = (a,b,c)\), \(R = (a,b,0)\) and \(S = (a,0,0)\), as shown in Figure 1.1.8. Applying the Pythagorean Theorem to the right triangle \(\triangle PSR\) gives \(\left\vert PR \right\vert^{2} = a^{2} + b^{2}\). A second application of the Pythagorean Theorem, this time to the right triangle \(\triangle PQR\) (in which \(\left\vert QR \right\vert = c\)), gives \(\norm{\textbf{v}} = \left\vert PQ \right\vert = \sqrt{\left\vert PR \right\vert^{2} + \left\vert QR \right\vert^{2}} = \sqrt{a^{2} + b^{2} + c^{2}}\). This proves the theorem.
\(\tag{\(\textbf{QED}\)}\)
Example 1.3
Calculate the following:
(a) The magnitude of the vector \(\overrightarrow{PQ}\) in \(\mathbb{R}^{2}\) with \(P = (-1,2)\) and \(Q = (5,5)\).
\(\textit{Solution:}\) By formula (1.2), \(\norm{\overrightarrow{PQ}} = \sqrt{(5 - (-1))^{2} + (5 - 2)^{2}} = \sqrt{36 + 9} = \sqrt{45} = 3\sqrt{5}\).

(b) The magnitude of the vector \(\textbf{v} = (8,3)\) in \(\mathbb{R}^{2}\).
\(\textit{Solution:}\) By formula (1.3), \(\norm{\textbf{v}} = \sqrt{8^{2} + 3^{2}} = \sqrt{73}\).

(c) The distance between the points \(P = (2, -1, 4)\) and \(Q = (4, 2, -3)\) in \(\mathbb{R}^{3}\).
\(\textit{Solution:}\) By formula (1.4), the distance \(d = \sqrt{(4 - 2)^{2} + (2 - (-1))^{2} + (-3 - 4)^{2}} = \sqrt{4 + 9 + 49} = \sqrt{62}\).

(d) The magnitude of the vector \(\textbf{v} = (5,8,-2)\) in \(\mathbb{R}^{3}\).
\(\textit{Solution:}\) By formula (1.5), \(\norm{\textbf{v}} = \sqrt{5^{2} + 8^{2} + (-2)^{2}} = \sqrt{25 + 64 + 4} = \sqrt{93}\).
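The magnitude and distance formulas used in Example 1.3 are easy to check numerically. Below is a minimal sketch of ours (function names are hypothetical, not from the book); the same two helpers cover \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\).

```python
import math

# Magnitude of a vector: the square root of the sum of squared components.
def magnitude(v):
    return math.sqrt(sum(c * c for c in v))

# Distance between points P and Q: the magnitude of the vector Q - P.
def distance(p, q):
    return magnitude(tuple(b - a for a, b in zip(p, q)))

print(distance((-1, 2), (5, 5)))         # 3*sqrt(5)  ≈ 6.7082
print(magnitude((8, 3)))                 # sqrt(73)   ≈ 8.5440
print(distance((2, -1, 4), (4, 2, -3)))  # sqrt(62)   ≈ 7.8740
print(magnitude((5, 8, -2)))             # sqrt(93)   ≈ 9.6437
```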