There is the instantaneous force between the mass and the scales, and then there's the reading of the scales. Both depend on many unknown factors - so here are just some general thoughts.
First, if you drop a mass $M$ from height $h$ onto scales with mass $m$, and the two then move as one, the collision is perfectly inelastic and the velocity $v'$ right after the collision can be derived from the velocity $v$ just before by putting
$$v = \sqrt{2gh}\\v'=\frac{M}{M+m}v$$
The time that it takes to accelerate the mass is given by the elasticity of the contact, and is not in general known. The shorter the time, the larger the force - with the average force given by
$$\int F\;dt = m v'\\F_\text{av}=\frac{mv'}{\Delta t}$$
if we consider the time-averaged force.
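As a quick numeric illustration of these formulas (all values below are assumptions chosen for the example):

```python
import math

# Illustrative numbers (assumptions): a 1 kg mass dropped 10 cm onto a
# 0.2 kg scale pan, with an assumed contact time of 10 ms.
M, m, h, g = 1.0, 0.2, 0.10, 9.81
dt = 0.01

v = math.sqrt(2 * g * h)        # speed just before impact
v_prime = M / (M + m) * v       # common speed just after the inelastic collision
F_av = m * v_prime / dt         # time-averaged force, F_av = m v' / dt
```

The shorter the assumed contact time, the larger this average force comes out, as stated above.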
This force, however, probably doesn't register on your scales. I can think of three "main" kinds of scales: the balance, the spring scale, and the "return to zero" scale. None of them would register the initial impact force.
For a balance, the mass of the object is determined by comparing it to a reference mass on the other arm; that's not what you are looking for here.
For a spring scale, the deflection of a spring under the load is a measure of the weight: this is the one that is most amenable to analysis here.
For a "return to zero" scale (I can't think of the proper name - comments, anyone?) the mechanism used generates a current in an electromagnet to restore the scale to the same position it was before the mass was added. These typically have a slow response but are the most accurate - since the force is exactly proportional to the current when the scale is in the same position, there are no effects of spring ageing, temperature etc.
I believe your question can only reasonably be answered for the spring scale. Note that such scales are typically "critically damped" (although that is only approximately true since the degree of damping depends on the mass - so it can only be critically damped for one mass). But let's initially leave that out.
If we have an object with mass $M$ dropping from height $h$ onto a spring loaded scale with mass $m\ll M$ and spring constant $k$, the deflection $d$ can be calculated from conservation of energy.
Total drop of the mass = $h+d$
Energy due to drop $E = Mg(h+d)$
This must be equal to the elastic energy stored at the maximum deflection:
$$\frac12 k d^2 = Mg(h+d)$$
We can rearrange and solve for $d$:
$$\frac12 k d^2 - Mgd - Mgh = 0\\d = \frac{Mg \pm \sqrt{M^2g^2 + 2 Mghk}}{k}$$
The two roots correspond to the extrema of the (simple harmonic) motion that the mass would execute: for your question, we need the positive root. Note that if $h=0$ the above shows that the maximum deflection of the scale will be twice the distance expected,
$$d = \frac{2Mg}{k}$$
and as the mass is dropped from greater height, the maximum deflection will rapidly increase more. Exactly how much more depends on the value of $k$ - the stiffer the spring, the greater the value of $k$, and the larger the value you read on the scales as the object is dropped from a greater height. All this is in agreement with your own analysis.
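A small numeric sketch of this deflection formula ($M$ and $k$ are illustrative assumptions), which also checks the $h=0$ doubling:

```python
import math

# Maximum deflection of an undamped spring scale for a mass dropped from
# height h; the mass and spring constant are illustrative assumptions.
M, g, k = 1.0, 9.81, 1000.0

def max_deflection(h):
    Mg = M * g
    return (Mg + math.sqrt(Mg**2 + 2 * Mg * h * k)) / k   # positive root

static = M * g / k                              # reading for a gently placed mass
assert abs(max_deflection(0.0) - 2 * static) < 1e-12  # h = 0: twice the static value
```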
This is of course where the damping will come into play: a heavily damped scale will dissipate all the oscillatory energy, and will slowly reach the equilibrium value without ever overshooting. An underdamped scale will oscillate - in the limit of very light damping, it will read the value calculated above. But a critically damped scale will overshoot just once, and we can compute the amount of overshoot.
The solution for a critically damped simple harmonic oscillator is given by
$$x(t) = (A + Bt)\;e^{-\omega_0t}$$
We find A and B from the initial conditions,
$$A = x(0)\\B = \dot{x}(0) + \omega_0 x(0)$$
If we put $x(0) = d = \frac{Mg}{k}$, the overshoot will be given by the initial velocity which we calculated above. For a light scale, this initial velocity is the velocity $v$ of the mass after falling height $h$ (I am going to ignore the additional height $d$ which I will assume to be small; otherwise the equations get a lot messier...) and the equation of motion becomes
$$x(t) = (d + (-v + \omega_0 d)t)e^{-\omega_0 t}$$
(The minus sign is there because the velocity is towards the equilibrium position). Taking the derivative with respect to $t$ we can compute the time when the overshoot is maximum and plug that back in to obtain the overshoot:
$$-\omega_0 (d + (-v + \omega_0 d)t)e^{-\omega_0 t} + (-v + \omega_0 d)e^{-\omega_0 t}=0\\-\omega_0 (d + (-v + \omega_0 d)t)+ (-v + \omega_0 d) = 0\\\omega_0 vt- \omega_0^2 dt - v = 0\\t = \frac{v}{\omega_0(v-\omega_0 d)}$$
For sufficiently large $v$ (i.e. $v>\omega_0 d$), this will result in an extremum (overshoot).
Substituting $t$ back into the expression for $x$ will give the size of the overshoot, from which the mass registered on the balance follows.
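Sketching the overshoot computation numerically (the values of $d$, $\omega_0$ and $v$ are illustrative assumptions, chosen with $v > \omega_0 d$ so the pointer actually overshoots; here $x$ measures displacement from the equilibrium reading, so a negative $x$ means the pointer has swung past it):

```python
import math

# Overshoot of a critically damped scale, using the expressions above.
# d = Mg/k, omega0, and the impact speed v are illustrative assumptions.
d, w0, v = 0.01, 50.0, 1.0

t_star = v / (w0 * (v - w0 * d))                       # time of maximum overshoot
x = lambda t: (d + (-v + w0 * d) * t) * math.exp(-w0 * t)

overshoot = x(t_star)      # negative: the pointer has passed equilibrium
assert overshoot < 0
```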
Need help solving the problem $\tan(\tan^{-1}(\frac{2}{5}))$ using inverse functions without the use of a calculator, I have no idea how to use $\frac{2}{5}$ as I am only familiar with numbers on the unit circle.
Think about what the inverse tangent function is.
In English:
$\tan^{-1} x$ takes an input number $x$ and returns a number (an angle) whose tangent is $x$. So $\tan^{-1} \left( \frac{2}{5} \right)$ returns a number whose $\tan$ is $\frac{2}{5}$. However, you're putting that number right back into the $\tan$ function ($\tan\left( \tan^{-1} x \right)$), so you're taking the tangent of a number whose tangent is $\frac{2}{5}$.
In math:
$$y = \tan x \implies x = \tan^{-1} y$$ $$ \tan \left( \tan^{-1} x \right) = x $$ In general, $f\left( f^{-1} (x) \right) = x$
Basically $\tan^{-1}(2/5)$ is some angle whose tan value is $2/5$. So $\tan(\tan^{-1}(2/5))$ is just $2/5$.
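A one-line numerical check of this identity:

```python
import math

# tan(arctan(2/5)) returns 2/5, since tan undoes arctan on its range.
x = 2 / 5
assert abs(math.tan(math.atan(x)) - x) < 1e-15
```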
Hint: $$\tan^{-1}(\tan \theta)=\theta $$ for $\theta \in (-\pi/2 , \pi/2)$
In the following, $x$ runs in the interval $J=[-1,1]$. We introduce the functions of $x\in J$
$$\begin{aligned}A(x) &= \sqrt{58-42 x}\ ,\\B(x) &= \sqrt{149-140\sqrt{ 1-x^2}}\ .\end{aligned}$$
Then we have
$$10^2(58-A^2)^2 + 3^2(149-B^2)^2 = 420^2\ .$$
So we can formulate an equivalent problem:
Minimize $a+b$ constrained to $a$ between $\sqrt{58\pm 42}$ (i.e. $4$ and $10$), and $b$ between $\sqrt{149\pm 140}$ (i.e. $3$ and $17$) and $$10^2(58-a^2)^2+3^2(149-b^2)^2= 420^2\ .$$
So we search for Lagrange multipliers for the function
$$F(a,b;t)=(a+b)-t(10^2(58-a^2)^2+3^2(149-b^2)^2-420^2)$$
to get the local extremal points. (Then we still have to compare with the marginal values.) We obtain the following system:
$$\left\{\begin{aligned}0 &= F'_a(a,b;t) = 1+10^2\;4at\;(58-a^2)\ ,\\0 &= F'_b(a,b;t) = 1+3^2\;4bt\;(149-b^2)\ ,\\0 &= F'_t(a,b;t) = 10^2(58-a^2)^2+3^2(149-b^2)^2- 420^2\ .\end{aligned}\right.$$
Algebraically this is only a slight improvement over the equation $f'(x)=0$ (the approach intended in the OP), which involves radicals: here we have a purely algebraic system, but we still have to solve it.
The idea is elimination.
We first eliminate $4t$, which appears linearly, getting:$$\left\{\begin{aligned}10^2\;a\;(58-a^2) &= 3^2\;b\;(149-b^2) \ ,\\10^2(58-a^2)^2 &+3^2(149-b^2)^2= 420^2\ .\end{aligned}\right.$$One possible way to eliminate $b$ from this point is as follows. Squaring the first equation gives an expression for $b^2(149-b^2)^2$ in terms of $a$, so $(149-b^2)^2$ is a polynomial in $a$ divided by $b^2$. Inserting this expression for $(149-b^2)^2$ into the second equation yields a formula for $b^2$ as a rational function of $a$. Inserting this $b^2$ back into the second equation then gives an equation in $a$ alone. Of course, I cannot stop here and say "from here it is simple, details left to the reader"... This bloody job is explicitly as follows, using a computer, sage in my case:
var('a,b,bb');
EXPR = ( 10^2*a*(58-a^2) / 3^2 )^2 / bb   # (149-b^2)^2, with bb a new variable for b^2
eq = solve( 10^2*(58-a^2)^2 + 3^2*EXPR == 420^2, bb )[0]
print("b^2 is the solution bb of:\n%s" % eq)
bb = eq.rhs()
a_poly = 10^2*(58-a^2)^2 + 3^2*(149-bb)^2 - 420^2
print("a is a zero point for the expression:")
print(a_poly.factor())
Results:
b^2 is the solution bb of:
bb == -100/9*(a^6 - 116*a^4 + 3364*a^2)/(a^4 - 116*a^2 + 1600)
a is a zero point for the expression:
1/9
*(100*a^8 - 13400*a^6 + 398509*a^4 + 2317356*a^2 + 47534400)
*(109*a^4 - 9044*a^2 + 174400)
/((a + 10)^2*(a + 4)^2*(a - 4)^2*(a - 10)^2)
(The last expression was manually broken to fit the page.) So $a$ is a root of one or the other polynomial in the numerator, i.e. $a^2$ is either a root of $100 U^4 - 13400U^3 + 398509 U^2 + 2317356 U + 47534400$, or a root of $109 U^2 - 9044 U + 174400$.
I did the above "in a human manner", against my taste and conviction. Let us put it another way. Of course, we cannot expect "simple solutions" and a "simple elimination by quick hint", so let the computers do the work for us; then we can still decide...
Using sage, we eliminate blindly:
sage: R.<a,b,t> = PolynomialRing(QQ)
sage: R
Multivariate Polynomial Ring in a, b, t over Rational Field
sage: F = a+b - t*( (10*(58-a^2))^2 + (3*(149-b^2))^2 - 420^2 )
sage: J = R.ideal( diff(F,a), diff(F,b), diff(F,t) )
sage: K = J.elimination_ideal([b,t])
sage: K
Ideal (10900*a^12 - 2365000*a^10 + 182067081*a^8 - 5688483592*a^6 + 53723051536*a^4 - 25754227200*a^2 + 8289999360000)
of Multivariate Polynomial Ring in a, b, t over Rational Field
sage: K.gens()[0].factor()
(109*a^4 - 9044*a^2 + 174400)
* (100*a^8 - 13400*a^6 + 398509*a^4 + 2317356*a^2 + 47534400)
and this rather reflects my way to work. We have thus all possible points / all candidates $a=\sqrt{58-42x}$, so that calculating the corresponding $x$ (or the corresponding $b$) and inserting into $f$ (or $F$) can lead to a local minimum.
Let us finally ask for the numerical values, to finish, and get an answer. For each root $a$ of the above polynomial we compute numerically (to a good precision) the value of $f$. (Since the OP may not be interested in $F$.)
f(x) = sqrt(58-42*x) + sqrt(149-140*sqrt(1-x^2))
R.<a> = PolynomialRing(QQ)
P = (109*a^4 - 9044*a^2 + 174400) \
    * (100*a^8 - 13400*a^6 + 398509*a^4 + 2317356*a^2 + 47534400)
for aroot in P.roots(ring=AA, multiplicities=False):
    if aroot < 3 or aroot > 10:
        print("a = %s :: REJECTED" % aroot)
        continue
    x = (58-aroot^2)/42
    print("a = %s  x = %s  f(x) = %s" % (aroot, x, QQbar(f(x))))
We get:
a = -8.857786578527434? :: REJECTED
a = -7.936142667572221? :: REJECTED
a = -7.245077360672018? :: REJECTED
a = -5.520990047273946? :: REJECTED
a = 5.520990047273946? x = 0.6552064023310009? f(x) = 12.09647575142790?
a = 7.245077360672018? x = 0.1311631913780425? f(x) = 10.44030650891055?
a = 7.936142667572221? x = -0.11862762952524599? f(x) = 11.096611974467387?
a = 8.857786578527434? x = -0.4871519778747795? f(x) = 14.02843358892943?
The minimal value of $f(x)$ among the above is obtained in the line:
a = 7.245077360672018? x = 0.1311631913780425? f(x) = 10.44030650891055?
(The value $f(x)$ is smaller than $f(\pm1)$ and $f(0)$, corresponding to possible global minimal values at the boundary.)
The corresponding $a$ is a root of $109a^4 - 9044a^2 + 174400=0$, explicitly:$$a_* = \sqrt{\frac1{109}(4522+18\sqrt{4441})}\ .$$And the corresponding $x_*=(58-{a_*}^2)/42$ is$$x_* = \frac 3{763}(100-\sqrt{4441})\approx 0.13116319137804\dots\ .$$
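These closed forms can also be checked in plain Python (independently of sage):

```python
import math

# Check of the closed form for the minimizer and the minimal value above.
x_star = 3 / 763 * (100 - math.sqrt(4441))

def f(x):
    return math.sqrt(58 - 42*x) + math.sqrt(149 - 140 * math.sqrt(1 - x*x))

assert abs(x_star - 0.1311631913780425) < 1e-10
assert abs(f(x_star) - 10.44030650891055) < 1e-8
```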
Note: The numerical computations in between were done with good enough precision to ensure we pick the right candidate. So we have a proof. The following (truly) numerical computation is a check.
sage: var('x');
sage: minimize( sqrt(58-42*x) + sqrt( 149-140*sqrt(1-x^2) ), [0.5] )
(0.13116313434376808)
I try to calculate the gradient of a function and the divergence of a vector field in spherical coordinates. Nothing special so far, but a formula that I learned in a general relativity lecture creates confusion.
First of all consider a vector field $f$ and a function $h$ on $\mathbb R^3$. Of course in spherical coordinates we have
\begin{align*} \operatorname{div} f &= \frac{1}{r^2}\partial_r (r^2 f^r) + \frac{1}{r \sin \theta} \partial_\theta (\sin \theta f^\theta) + \frac{1}{r \sin \theta} \partial_\varphi f^\varphi\\ \operatorname{grad} h &= e_r \partial_r h + e_\theta \frac{1}{r}\partial_\theta h + e_\varphi \frac{1}{r \sin \theta} \partial_\varphi h \end{align*}
In the general relativity lecture I learned the two formulas
$$\nabla_\mu f^\mu = \frac{1}{\sqrt{|\operatorname{det} g|}}\partial_\mu (\sqrt{|\operatorname{det}g|}f^\mu)$$
and
$$\nabla h = (\nabla h)^\nu \partial_\nu = g^{\mu\nu}\partial_\mu h \partial_\nu $$
where the Riemannian Metric in spherical coordinates reads
$$g = \text d r^2 + r^2 \text d \theta^2 + r^2 \sin^2 \theta \text d \varphi^2$$
so that
$$\sqrt{|\operatorname{det}g|} = r^2 \sin \theta$$
If I now use the two formulas I get different results for the gradient and the divergence:
\begin{align*} \nabla_\mu f^\mu &= \frac{1}{r^2} \partial_r (r^2 f^r) + \frac{1}{\sin\theta}\partial_\theta(\sin\theta f^\theta) + \partial_\varphi f^\varphi\\ \nabla h &= \partial_r h \partial_r + \frac{1}{r^2}\partial_\theta h \partial_\theta + \frac{1}{r^2 \sin^2 \theta}\partial_\varphi h \partial_\varphi \end{align*}
Can someone tell me what went wrong? I guess that something is wrong when I compare the unit vector notation $e_i$ with $\partial_i$.
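A quick numeric sketch supports the guess at the end: the coordinate basis vector $\partial_\theta$ has length $r$, so the coefficient $\frac{1}{r^2}\partial_\theta h$ of $\partial_\theta$ and the coefficient $\frac1r\partial_\theta h$ of the unit vector $e_\theta$ describe the same vector (and similarly for $\partial_\varphi$, of length $r\sin\theta$). The sample function $h = r^2\sin\theta$ and the evaluation point are assumptions:

```python
import math

# Sample function h = r^2 * sin(theta), evaluated at an arbitrary point.
r, theta = 2.0, 0.7
d_theta_h = r**2 * math.cos(theta)     # partial derivative of h w.r.t. theta

coord_component = d_theta_h / r**2     # coefficient of d/dtheta (the GR formula)
unit_component = d_theta_h / r         # coefficient of e_theta (vector calculus)

# The coordinate vector d/dtheta has length r, so both describe the same vector:
assert math.isclose(coord_component * r, unit_component)
```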
Say I have a rational number, $n$, that approximates an irrational number of the form
$$n \approx \frac{a+\sqrt b}{c}\,.$$
What is a good way of finding the unknown integer constants $a$, $b$, and $c$?
Ex: $n = {33385282\over 20633239} \approx 1.618034 \approx \phi \approx {1 + \sqrt 5 \over 2}$ (note that $\phi$ (the golden ratio) is merely for example)
Thus $a = 1$, $b = 5$, and $c = 2$ for this example.
I know that since $n^2 \approx \frac{a^2 + 2a\sqrt b + b}{c^2}$, you could find $a$ and $b$ by multiplying $n^2$ by $c^2$, and then subtracting $xnc$, finding the value of $x$ that makes the difference an integer. Then $x = 2a$, and $b$ can be solved using substitution algebra. But all that implies that $c$ is known, and that's where I'm stuck.
Ex: $c$ is known, thus $c = 2$
$n^2c^2 \approx 10.472135$
$n^2c^2 - nc \approx 7.2360680$
$n^2c^2 - 2nc \approx 4$ ($x$ is now $2$ because $4$ is an integer)
$a = x/2 = 2/2 = 1$, and $b$ follows by algebra: $b = 5$
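The brute-force $x$-search described above can be sketched like this (the tolerance and search range are assumptions for illustration):

```python
from fractions import Fraction

# Brute-force sketch of the x-search above, assuming c is already known.
n = Fraction(33385282, 20633239)   # the rational approximation of phi
c = 2                              # assumed known

nc = float(n) * c
found = None
for x in range(1, 20):
    val = nc * nc - x * nc             # n^2 c^2 - x n c
    if abs(val - round(val)) < 1e-5:   # difference is (nearly) an integer
        found = x
        break

a = Fraction(found, 2)             # x = 2a
b = round((nc - float(a)) ** 2)    # from n c - a ≈ sqrt(b)
```

Running this recovers $x=2$, hence $a=1$ and $b=5$; finding $c$ itself would still require an outer loop over candidate values of $c$.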
So how do I find $c$ first, or what is a better way to go about solving $a$, $b$, and $c$ other than by trial?
When we learn quantum mechanics, we are told that the only way to extract information from a system is to conduct measurements. We are told that if two observables commute then performing one measurement does not affect the outcome of the other.
This is how we are told measurements work: if $a$ is an eigenvalue of $\hat A$, then if we perform the measurement corresponding to $\hat A$, the probability of measuring $a$ is $\langle \psi| \hat P_a | \psi \rangle$ where $\hat P_a$ is the projection operator onto the $a$-eigenspace. $| \psi \rangle$ then collapses into the state $\frac{\hat P_a | \psi \rangle}{\sqrt{\langle \psi | \hat P_a |\psi \rangle}}$.
What if, however, we have two operators that commute but do not share eigenspaces? Take, for example, the two operators
$\hat A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}, \hat B = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}.$
Certainly, $[\hat A, \hat B] = 0$. However, these operators do not share eigenspaces for any eigenvalues. Let's say we conduct the measurement $\hat B$ right before we conduct the measurement $\hat A$. When we measure $\hat B$, we will measure an eigenvalue $b$ with probability $\langle \psi| \hat P_b | \psi \rangle$ and the state will collapse into $\frac{\hat P_b | \psi \rangle}{\sqrt{\langle \psi | \hat P_b |\psi \rangle}}$. The probability to measure $a$ after this is just $\frac{\langle \psi | \hat P_a \hat P_b | \psi \rangle }{\langle \psi | \hat P_b | \psi \rangle }$. Afterwards, the state will collapse into $\frac{ \hat P_a \hat P_b | \psi \rangle}{\sqrt{\langle \psi | \hat P_a \hat P_b |\psi \rangle}}$.
Now, the probability of measuring some eigenvalue $a$ does not change depending on whether or not we measured $\hat B$ beforehand. This can be easily checked, as the probability of measuring an eigenvalue $a$ is just, summing over all possible $b$ measurements,
$\sum_b \langle \psi | \hat P_b| \psi\rangle \frac{\langle \psi | \hat P_a \hat P_b | \psi \rangle }{\langle \psi | \hat P_b | \psi \rangle } = \sum_b \langle \psi | \hat P_a \hat P_b | \psi \rangle = \langle \psi | \hat P_a | \psi \rangle$
which is the same as before. Indeed, measuring $\hat B$ before $\hat A$ does not affect the probability of measuring any particular $a$, which is what we were told in class.
HOWEVER, the collapsed state after the $\hat A$ measurement is proportional to $\hat P_a \hat P_b |\psi \rangle$, NOT proportional to $\hat P_a | \psi \rangle$. If $\hat A$ and $\hat B$ share no eigenspaces, then this will always be a different state than if we had conducted no $\hat B$ measurement at all. In other words, the fact that we conducted the $\hat B$ measurement lives on in the fact that our state has collapsed further than it would have otherwise. How do we know that we won't be able to detect this? For example, maybe we could wait some time, let the system evolve in some way, and then pick up some interference effect. How does this extra collapse of the wave function affect causality in QFT with regards to commuting space-like separated local operators? And why is this never talked about in introductory QM books?
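The extra collapse is easy to exhibit for the diagonal operators above; a minimal numeric sketch (the state $|\psi\rangle$ is an arbitrary assumption):

```python
import math

# Projectors onto the a=1 eigenspace of A and the b=3 eigenspace of B are
# diagonal, so plain lists suffice here.
psi = [1 / math.sqrt(3)] * 3   # example state

P_a = [1, 1, 0]   # projects onto span{e1, e2} (eigenvalue a = 1 of A)
P_b = [1, 0, 0]   # projects onto span{e1} (eigenvalue b = 3 of B)

def collapse(P, s):
    v = [p * x for p, x in zip(P, s)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

only_A = collapse(P_a, psi)                   # measure A alone
B_then_A = collapse(P_a, collapse(P_b, psi))  # measure B first, then A
assert only_A != B_then_A                     # the states differ, though [A,B] = 0
```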
Hello, I am curious if I have this correct and if it has a name. A thin-walled cylinder is spinning on its axis along its length in a closed system. It begins to draw itself in, converting its invariant mass to kinetic energy. In polar coordinates ##E=\gamma_\theta m c^2, L=\gamma_\theta m...
So the universe is expanding, and galaxies are getting farther apart from one another on average. Does this motion count the same as ordinary motion, in that if a galaxy is being expanded away from us at 0.5c, that clocks in that galaxy would appear to tick slower at 0.866 the rate of clocks here?
Hello everyone! Recently I saw this paper: https://arxiv.org/pdf/1304.4801.pdf ("Any nonlocal model assuming “local parts” conflicts with relativity" by Antoine Suarez). He mentions a standard experimental configuration with beam-splitters and detectors. Then he distinguishes possible models...
I know that it would vary depending on the type of research a specific astronomer would be doing (Astrophysics/Cosmology research versus an Astronomer researching exoplanets); but in your opinion, “how much” or “how well” should an Astronomer with a graduate degree in Astronomy know General...
Hi, I have been reading and watching a lot of physics lately but I have come across this problem. I have the basics of special relativity down, and it all seems clear to me. This is not in question to me. For example, I am reading a book on string theory by Brian Greene, and in it he covers...
Why do we use the equation ##\frac {1}{2}mv^2 = \frac {GmM}{r}## to derive escape velocity, and then put that into the Lorentz factor in order to derive gravitational time dilation? Shouldn't we be using the relativistic definition of kinetic energy, ##mc^2(\gamma - 1)##, to derive the...
Since for the two events of Samir starting the stopwatch, and the stopwatch reaching 10.0s, Samir and his stopwatch are stationary from his own frame of reference, I said it was the proper time and that delta t0 = 10s. Then the speed of the moving frame of reference was 0.6c. I thought placing...
Suppose we take a charged particle and a magnet and place them at some particular distance apart. Now let's take 2 frames of reference [the charged particle and the magnet are at rest with respect to each other throughout the whole event; both frames are inertial]. Frame (a): this frame...
I've always wondered how we came to come up with such an idea. Was he one day sitting around and thinking, then made a random assumption and went "ah hah!"? Or did his idea come up through his calculations on the nature of how gravity should cause interaction? Is there a literal fabric of space...
Peeling this out into its own thread for clarity: How is time dilation of extreme reference frames (photons, black holes, intergalactic space-time) taken into account in Big Bang cosmology? Since from the POV of a singularity or a photon, their clocks have effectively stopped and any lower...
Say there is a circular fence that has a diameter of 10 meters, and a rocket ship that is normally 20 meters goes very quickly so that its relativistic length is 1 m from the position of an observer standing at rest with relation to the fence. The rocket ship starts to go in a circle inside the...
Something that crossed my mind recently; I know that satellites have to adjust their clock due to their relativistic time variations in relation to us. I was wondering do they adjust their times in accordance to general relativity or special relativity or both? Or is their speed so insignificant...
1. Homework Statement: You make repeated measurements of the electric field ##\vec E## due to a distant charge, and you find it is constant in magnitude and direction. At time ##t=0## your partner moves the charge. The electric field doesn't change for a while, but at time ##t=24## ns you...
In his book, Landau mentioned varying the relativistic Lagrangian. However, I do not understand how he got from varying the integral of ds to varying only the contravariant components. Would the general procedure not be varying $$\delta S = -mc\delta\int_a^b\frac{dx_idx^i}{\sqrt{ds}}$$ and...
Hi, it is usually claimed that a person in an elevator accelerating with an acceleration equal to the gravity of the earth cannot perform any experiment that lets him know whether he is in the elevator or on the surface of the earth. However, if this person projects two light beams...
Is it a fair prediction to state that in the next several years or so, globally, there will be major investments into gravitational wave research, and many more ‘LIGOs’ being developed?Is it a good idea to venture into that area of physics?
So I have sort of a conceptual question about the big bang and gravity. Imagine yourself in a universe in which existed about the number of particles/energy in a 3x3 metre room at any given moment. This universe has the same laws of physics and constants and is identical in every way to our...
Hello, there is a common setup used when describing the intimate relationship between electricity and magnetism. I have a question about the setup. Setup: there is some long current-carrying wire. Outside of that wire, there is some test charge. In the first situation, the test charge is...
Hi all. I was wondering if time is dilated whilst travelling in a stable magnetic field that is generated by the object travelling, and if so, does this vary if you reduce or intensify the magnetic field? Also, what happens if the object is generating two opposing magnetic fields; would...
1. Homework Statement: What is the speed of a photon with respect to another photon if: the two photons are going in the same direction; they are going in opposite directions? 2. The attempt at a solution: I think the answer to the first question should be zero and to the second one be 2xC...
Hello, to develop one interesting idea I need to be able to do calculations on (1) scattering of light from bodies in arbitrary motion, possibly at relativistic speeds; (2) propagation of light in electromagnetic media that are in arbitrary motion (possibly relativistic). For example, I would...
There are many popscience articles about relativity and how relativity is essential for GPS, and in many is mentioned something like this: if GPS weren't corrected for the time dilation predicted by GRT, there would be an error of 12 kilometers in GPS positioning per day. But when I checked how GPS...
So I'm kind of confused. The way I understand it, an electromagnetic field is just a regular electric field viewed from a relativistic point of view, meaning that since we see the charges moving relative to us, we feel like the particles and the fields created by them come closer together (I...
If you were to travel alongside a train, as fast the train, to you the train would seem stationary. I read that if you were to travel along a photon of light, as fast as the speed of light, that photon would not seem stationary. Is this true? If so, why?
The earth rotates around its own axis within 24 hours. Theoretically, if we build at the equator a tower of height H, ignoring its effect on the slowdown of the Earth's rotation (we assume that the material is of low density), objects and other complications, at a certain height the linear...
I am not a physicist—not even close—just a guy who, for some crazy reason, decided to try to understand some of the basics of relativity. I’d like to understand them well enough to be able to explain them (correctly) to another lay person. I’m trying to see how much I could explain without...
Can we truly have a rest frame, or should it be a close-to-rest frame? Even if I'm stationary, sitting on my porch, and the observer in the passing car is moving, I'm still not at 0 velocity. The earth is moving at 67,000 mph and the galaxy is moving at 250,000 mph. I'm never in a single...
Most often, the $S$-matrix is defined as an operator between asymptotic initial and final Hilbert spaces for a
time-dependent scattering process, i.e. between $t\to-\infty$ and $t\to\infty$. There unitarity encodes conservation of probabilities over time. On the other hand, the book that OP mentions, Ref. 1, talks about a time-independent scattering process. For a discussion of the connection between time-dependent and time-independent scattering, see this Phys.SE question.
In this answer we will only consider time-independent scattering. Ref. 1 defines for a 1D system (divided into three regions $I$, $II$, and $III$, with a localized potential $V(x)$ in the middle region $II$), a $2\times 2$ scattering matrix $S(k)$ as a matrix that tells how two asymptotic incoming (left- and right-moving) waves (of wave number $\mp k$ with $k>0$) are related to two asymptotic outgoing (left- and right-moving) waves. In formulas,
$$\left. \psi(x) \right|_{I}~=~ \underbrace{A(k)e^{ikx}}_{\text{incoming right-mover}} + \underbrace{B(k)e^{-ikx}}_{\text{outgoing left-mover}}, \qquad k>0, \tag{1} $$$$\left. \psi(x)\right|_{III}~=~ \underbrace{F(k)e^{ikx}}_{\text{outgoing right-mover}} + \underbrace{G(k)e^{-ikx}}_{\text{incoming left-mover}}, \qquad\qquad\qquad\tag{2}$$
$$ \begin{pmatrix} B(k) \\ F(k) \end{pmatrix}~=~ S(k) \begin{pmatrix} A(k) \\ G(k) \end{pmatrix}.\tag{3}$$
To show that a finite-dimensional matrix $S(k)$ is unitary, it is enough to show that $S(k)$ is an isometry,
$$ S(k)^{\dagger}S(k)~\stackrel{?}{=}~{\bf 1}_{2\times 2} \quad\Leftrightarrow\quad|A(k)|^2+ |G(k)|^2~\stackrel{?}{=}~|B(k)|^2+ |F(k)|^2,\tag{4}$$
or equivalently,
$$ |A(k)|^2-|B(k)|^2 ~\stackrel{?}{=}~|F(k)|^2-|G(k)|^2.\tag{5} $$
Equation (5) can be justified by the following comments and reasoning.
$\psi(x)$ is a solution to the time-independent Schrödinger equation (TISE)$$ \hat{H} \psi(x) ~=~ E \psi(x), \qquad \hat{H}~:=~\frac{\hat{p}^2}{2m}+V(x),\qquad \hat{p}~:=~\frac{\hbar}{i}\frac{\partial}{\partial x},\tag{6}$$ for positive energy $E>0$.
The solution space for the Schrödinger eq. $(6)$, which is a second-order linear ODE, is a two-dimensional vectors space.
It follows from eq. $(6)$ that the wave numbers $\pm k$, $$k ~:=~\frac{\sqrt{2mE}}{\hbar} ~\geq~ 0,\tag{7} $$ must be the same in the two asymptotic regions $I$ and $III$. This will imply that the $M$-matrix (to be defined below) and the $S$-matrix are diagonal in $k$-space.
Moreover, it follows that there exists a
bijective linear map $$ \begin{pmatrix} A(k) \\ B(k) \end{pmatrix} ~\mapsto~ \begin{pmatrix} F(k) \\ G(k) \end{pmatrix}.\tag{8} $$ In Ref. 2, the transfer matrix $M(k)$ is defined as the corresponding matrix $$ \begin{pmatrix} F(k) \\ G(k) \end{pmatrix}~=~ M(k) \begin{pmatrix} A(k) \\ B(k) \end{pmatrix}.\tag{9}$$ The $S$-matrix $(3)$ is a rearrangement of eq. $(9)$.
One may use the Schrödinger eq. $(6)$ (and the reality of $E$ and $V(x)$) to show that the Wronskian $$ W(\psi,\psi^{\ast})(x)~=~\psi(x)\psi^{\prime}(x)^{\ast}-\psi^{\prime}(x)\psi(x)^{\ast},\tag{10}$$or equivalently the probability current$$ J(x)~=~\frac{i\hbar}{2m} W(\psi,\psi^{\ast})(x),\tag{11}$$does not depend on the position $x$,$$ \frac{\mathrm dW(\psi,\psi^*)(x)}{\mathrm dx}~=~\psi(x)\psi^{\prime\prime}(x)^{\ast}-\psi^{\prime\prime}(x)\psi(x)^{\ast}~\stackrel{(6)}{=}~0.\tag{12}$$Unitarity (5) is equivalent to the statement that$$\left. W(\psi,\psi^*)\right|_{I}~=~\left. W(\psi,\psi^*) \right|_{III}.\tag{13}$$Ref. 3 mentions that eq. $(12)$ encodes conservation of energy in the scattering.
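As a concrete sketch, one can check the isometry $(4)$ numerically for a delta potential $V(x)=\alpha\,\delta(x)$, whose standard amplitudes are $t = 1/(1+i\beta)$ and $r = -i\beta/(1+i\beta)$ with $\beta = m\alpha/(\hbar^2 k)$ (the numeric value of $\beta$ below is an arbitrary assumption):

```python
# Unitarity check, eq. (4), for the S-matrix of a 1D delta potential.
# beta = m*alpha/(hbar^2 k) is dimensionless; its value is an assumption.
beta = 1.7

t = 1 / (1 + 1j * beta)            # transmission amplitude
r = -1j * beta / (1 + 1j * beta)   # reflection amplitude

# For this symmetric potential S = [[r, t], [t, r]], and S^dagger S = 1
# is equivalent to |r|^2 + |t|^2 = 1 together with r t* + t r* = 0:
assert abs(abs(r)**2 + abs(t)**2 - 1) < 1e-12
assert abs(r * t.conjugate() + t * r.conjugate()) < 1e-12
```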
References:
1. D.J. Griffiths, Introduction to Quantum Mechanics; Section 2.7 in 1st edition from 1994 and Problem 2.52 in 2nd edition from 1999.
2. D.J. Griffiths, Introduction to Quantum Mechanics; Problem 2.49 in 1st edition from 1994 and Problem 2.53 in 2nd edition from 1999.
3. P.G. Drazin & R.S. Johnson, Solitons: An Introduction, 2nd edition, 1989; Section 3.2.
An obstacle problem for Tug-of-War games
1. Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, United States
2. Departamento de Análisis Matemático, Universidad de Alicante, Ap 99, 03080, Alicante, Spain
3. Department of Mathematics, Dartmouth College, Hanover, NH 03755, United States
Mathematics Subject Classification: Primary: 35J60, 91A05, 49L25, 35J2.
Citation: Juan J. Manfredi, Julio D. Rossi, Stephanie J. Somersille. An obstacle problem for Tug-of-War games. Communications on Pure & Applied Analysis, 2015, 14 (1): 217-228. doi: 10.3934/cpaa.2015.14.217
[15]
Fabio Camilli, Paola Loreti, Naoki Yamada.
Systems of convex Hamilton-Jacobi equations with implicit obstacles and the obstacle problem.
[16]
José-Francisco Rodrigues, João Lita da Silva.
On a unilateral reaction-diffusion system and a nonlocal evolution obstacle problem.
[17]
J.I. Díaz, D. Gómez-Castro.
Steiner symmetrization for concave semilinear elliptic and parabolic
equations and the obstacle problem.
[18] [19]
Weisong Dong, Tingting Wang, Gejun Bao.
A priori estimates for the obstacle problem of Hessian type equations on Riemannian manifolds.
[20]
2018 Impact Factor: 0.925
Tools Metrics Other articles
by authors
[Back to Top] |
I recently read Khinchin's derivation of Liouville's theorem. I was able to follow the math for the most part, but I was hoping for an intuitive understanding of why the form of the measure on an energy surface in phase space differs from the form of the measure on the whole phase space.
If we have an $N$-dimensional phase space then the measure on that phase space is simply $dV$. I was able to follow the proof about why that measure is conserved under the natural motion. However, if we restrict our analysis to an energy surface to that phase space then the measure on that surface becomes $\frac{d \Sigma}{| \nabla H |}$, and not just $d \Sigma,$ the area element on that energy surface. From what I understand, the stated reason for this is that $d \Sigma \cdot dn = \frac{d \Sigma}{| \nabla H |}$ (where $dn$ is the normal to the energy surface) is just a volume element in the phase space, and we can then appeal to the fact that we already know that a differential volume element in phase space is conserved. However, if we view the energy surface itself as an $N-1$ dimensional phase space then your old $d \Sigma$ essentially becomes a volume element in this new $N-1$ dimensional space. Why, then, does Liouville's theorem as derived for the higher dimensional phase space not also apply to the $N-1$ dimensional space, thus making $d \Sigma$ the correct measure?
Edit to clarify what my quantities mean: $dV$ is a differential volume element in the phase space. $d \Sigma$ is a differential area element on an energy surface in the phase space. $H$ is my Hamiltonian, and for my $N$ dimensional phase space
$$| \nabla H | = \sqrt{ \sum_{i=1}^{N/2} \left( \frac{\partial H}{\partial q_i} \right)^2 + \left( \frac{\partial H}{\partial p_i} \right)^2}.$$
$dn$ is a differential normal vector to the energy surface.
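To spell out the relation used above (a standard step, stated here for reference): the shell between the surfaces $H = E$ and $H = E + dE$ has thickness $dn = dE/|\nabla H|$, since $H$ changes fastest along the normal, at rate $|\nabla H|$. Hence

$$dV = d\Sigma \, dn = \frac{d\Sigma \, dE}{|\nabla H|}.$$

Liouville's theorem conserves $dV$, and $dE$ is constant along trajectories, so the combination $d\Sigma/|\nabla H|$ is the invariant surface measure; $d\Sigma$ alone is not, because $|\nabla H|$ generally varies over the surface.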
Note that $n$ is really the momentum in the $\sigma$ direction so it has the units of the world sheet mass. The exponent $-n\epsilon$ in the regulator has to be dimensionless so $\epsilon$ has the units of the world sheet distance.
Consequently, the removed term $1/\epsilon^2$ has the units of the squared world sheet mass. These are the same units as the energy density in 1+1 dimensions. If you just redefine the stress energy tensor on the world sheet as $$T_{ab} \to T_{ab} + \frac{C}{\epsilon^2} g_{ab}$$ where $C$ is a particular number of order one you may calculate (it depends on conventions only), it will redefine your Hamiltonian so that the ground state energy is shifted in such a way that the $1/\epsilon^2$ term is removed.
This "cosmological constant" contribution to the stress-energy tensor may be derived from the cosmological constant term in the world sheet action, essentially $C\int d^2\sigma\sqrt{-h}$. Classically, this term violates the Weyl symmetry. However, quantum mechanically, there are also other loop effects that violate this symmetry – your regulated calculation of the ground state energy is a proof – and this added classical counterterm is needed to
restore the Weyl (scaling) symmetry.
It's important that this counterterm and all the considerations above are unable to change the value of the finite leftover, $-1/12$, which is the true physical finite part of the sum of positive integers. This is the conclusion we may obtain in numerous other regularization techniques. The result is unique because it really follows from the symmetries we demand – the world sheet Weyl symmetry or modular invariance.
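The finite part $-1/12$ can be checked numerically with the exponential regulator described above; a quick sketch (the cutoff $\epsilon$ and truncation point are my illustrative choices):

```python
import math

def regulated_sum(eps, nmax=100000):
    """Sum n * exp(-n*eps) over n >= 1, truncated once the tail is negligible."""
    total = 0.0
    for n in range(1, nmax + 1):
        total += n * math.exp(-n * eps)
        if n * eps > 50:           # remaining tail is far below double precision
            break
    return total

eps = 0.01
# Subtract the divergent 1/eps^2 piece; the leftover approaches -1/12 as eps -> 0
finite_part = regulated_sum(eps) - 1.0 / eps**2
print(finite_part)
```

The closed form is $\sum_{n\ge 1} n e^{-n\epsilon} = e^{-\epsilon}/(1-e^{-\epsilon})^2 = 1/\epsilon^2 - 1/12 + O(\epsilon^2)$, so the printed value sits within about $\epsilon^2/240$ of $-1/12$.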
It's hard to say just from the sheet music, not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here: do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site that has it, ideally one my university has a subscription to? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag-wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talked about a weird equation in general relativity that has a huge summation symbol, and then about gravitational waves emitted from a body. After that lecture, I asked the lecturer whether gravitational standing waves are possible, as I imagined the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken into account
The idea is that if the possible relaxations between energy levels are restricted so that, to relax from an excited state, the bottleneck must be passed, then we have a very high entropy, high energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state, plus the huge degeneracy at higher energy levels, should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear up my confusion. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directed at the close voter, not the question in meta. When you mention my original post, you think that it's a hopeless mess of confusion? Why? Except for being off-topic, it seems clear enough to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e. \begin{align}\mathrm{giga} && 10^9 \\ \mathrm{mega} && 10^6 \\ \mathrm{kilo} && 10^3 \\ \mathrm{milli} && 10^{-3} \\ \mathrm{micro} && 10^{-6} \\ \mathrm{nano} && 10^{-9}\end{align} chosen so that the exponents are multiples of 3? Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
If the computer starts in the middle of the stream, it has no way to know—it will be completely confused. Fortunately, that's not how the protocol works. The computer and terminal have to sync up before they can communicate. There are a few different ways of doing that, but once they're in sync (which includes agreeing on how long each 1 or 0 should last), ...
Suppose that your numbers are $n$-bit long. Then you can think of them as elements of the vector space $\mathbb{F}_2^n$. The number $X$ can be written as an XOR of $a_1,\ldots,a_m$ if $X$ is in the linear span of $a_1,\ldots,a_m$. In order to determine whether $X$ is in the linear span of $a_1,\ldots,a_m$, you can use Gaussian elimination.
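A minimal sketch of that check, with numbers as Python ints and the basis kept in echelon form (the function names and representation are my own choices):

```python
def in_span(x, vectors):
    """Check whether x is an XOR (GF(2) linear) combination of `vectors`.

    Gaussian elimination: maintain a basis in which every element has a
    distinct highest set bit, then try to reduce x to zero against it.
    """
    basis = []                       # echelonized basis of the span
    for v in vectors:
        for b in basis:              # basis is kept sorted, highest lead first
            v = min(v, v ^ b)        # xor with b iff it clears b's leading bit
        if v:
            basis.append(v)
            basis.sort(reverse=True)
    for b in basis:                  # now reduce the query the same way
        x = min(x, x ^ b)
    return x == 0

print(in_span(0b101, [0b110, 0b011]))   # 110 ^ 011 = 101 -> True
print(in_span(0b111, [0b110, 0b011]))   # span lacks 111 -> False
```

The `min(v, v ^ b)` trick works because xoring with `b` either clears `b`'s leading bit (making the value smaller) or sets it (making it larger).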
If you think about it, there are $2^4 = 16$ possible ways of combining two bits to give a single-bit output (AND, OR, NOR, NAND, XOR, ...). Can you work out what they all are? This is because there are four possible input combinations ($00$, $01$, $10$, $11$) and any subset of those can be mapped to $1$. But we could combine more than two bits to give a ...
Access into a bit array is constant time. In particular, if you have an array that is stored contiguously in memory, indexing into the array takes constant time: reading $A[i]$ takes constant time, regardless of how large $i$ is. That's all that you need to do to access a bit array. In particular, if you have an array of bytes, then you can read the $i$th ...
One can observe that, for any boolean values $a,b,c$, we have$a=b$ if and only if$a \text{ xor } c = b \text{ xor } c$.(To prove that we note that $\Rightarrow$ is trivial, and $\Leftarrow$ follows by xor-ing with $c$ once more and applying the cancellation law.)Hence, the equation$x \text{ xor } y = z$is equivalent to$(x \text{ xor } y) \text{ ...
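The cancellation law is what lets one solve XOR equations by xor-ing both sides; a tiny sketch:

```python
# Solve x ^ y == z for y by xor-ing both sides with x
# (x cancels on the left by the cancellation law).
x, z = 0b1010, 0b0110
y = x ^ z
print(bin(y))      # 0b1100
```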
If you want to look for an exact match, use a hashtable. Choose a hash function that hashes a 10,000-bit string to a short hashcode. This approach will be simple and highly efficient: the running time will basically be the time to compute the hash of the 10,000-bit value that you are searching for.
Popcount is going to be your best option here. As j_random_hacker mentions in a comment, popcount on a single word can be done in $O(\log W)$ time if implemented by hand, or $O(1)$ if the machine provides a dedicated opcode for it (which should take a constant number of clock cycles—but then again, $W$ should also be a constant for any given machine).(Note:...
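A hand-rolled popcount for comparison (Kernighan's clear-lowest-set-bit loop; this is a sketch, not the dedicated opcode mentioned above):

```python
def popcount(x):
    # x &= x - 1 clears the lowest set bit, so the loop body
    # executes exactly once per 1-bit in x.
    count = 0
    while x:
        x &= x - 1
        count += 1
    return count

print(popcount(0b10110101))  # 5
```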
P is in big-endian format.
P0 — 0111011011010011001011010010011001011001010010001011011010000001
The first bit, 0, means 2*0 + 1 = 1 is not a prime. The second bit, 1, means 2*1 + 1 = 3 is a prime. The third bit, 1, means 2*2 + 1 = 5 is a prime. The fourth bit, 1, means 2*3 + 1 = 7 is a prime. The fifth bit, 0, means 2*4 + 1 = 9 is not a prime. The sixth ...
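The quoted bitmap P0 can be reproduced directly; a sketch with a naive trial-division primality test (bit $k$, written left to right, encodes whether $2k+1$ is prime):

```python
def is_prime(n):
    # Naive trial division; fine for the small values encoded in P0.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Bit k (MSB first, "big-endian") encodes primality of the odd number 2*k + 1
p0 = ''.join('1' if is_prime(2 * k + 1) else '0' for k in range(64))
print(p0)
```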
Contrary to what you think, extracting bytes by shifting and masking is completely unrelated to the storage order (both little-endian and big-endian storage schemes exist and don't influence the results). An int variable is represented by 32 bits. The least significant byte is made of the eight least significant bits and you obtain them by i & 0xFF
0b ...
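A sketch of the shift-and-mask idea (Python ints stand in for C's 32-bit int; the point above carries over: the arithmetic, not the storage order, determines the result):

```python
x = 0x12345678  # a 32-bit value, hypothetical example

# Byte i (0 = least significant) is obtained by shifting it down and masking
extracted = [(x >> (8 * i)) & 0xFF for i in range(4)]
print([hex(b) for b in extracted])  # ['0x78', '0x56', '0x34', '0x12']
```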
For every $i \in \{0,\ldots,n\}$ (where $n$ is the length of the array) and for every $a,b,c \in \{0,\ldots,15\}$, we determine whether it is possible to partition the first $i$ elements of the array into four subsets, the first three of which XOR to $a,b,c$, respectively. We also compute the XOR of the entire array. Using the information for $i = n$, we can ...
Let $f(i, j) = \mathrm{AND}(A_i, A_{i+1}, \ldots, A_{j-1})$ for $0 \le i < j \le n$. $f(i, j)$ has the property that whenever $i \le i' < j' \le j$, $f(i, j) \le f(i', j')$. We also have $f(i, j) = \mathrm{AND}(f(i, k), f(k, j))$ for any $0 \le i < k < j \le n$. Because of these properties, we get an algorithm slightly better than brute force as follows: Keep track ...
This can be done by running an AND between n and a number with its ith bit set to 1, which can be obtained by 1 << i. The result of this operation will be zero if the ith bit in n is zero, or 1 << i otherwise.
Case of zero:
n = 10111
m = 01000 (1 << 3)
n & m = 00000
Case of one:
n = 10111
m = 00100 (1 << 2)
n &...
Yes, there are ways to improve the efficiency greatly.Let ${}_k{i}$ be the $k$-th digit of $i$ in binary representation, i.e., it is 0 if $\lfloor i/2^k\rfloor$ is even and 1 otherwise. For example, since $19=(10011)_2$, $_019=1$, $_119=1$, $_219=0$, $_319=0$, $_419=1$. In most programming languages, ${}_k{i}$ can be computed as $(i\text{>>}k)\%2$....
First, we have some basic observations: The minimum number of steps to convert $N$ to $0$ equals the minimum number of steps to convert $0$ to $N$. To convert $0$ to $N$, the optimal way would be to apply Operations 1 and 2 alternately. Now consider the bit sequence $b_1\ldots b_m0$. After performing Operations 1 and 2 alternately, we get $$b_1\ldots ...
I think you can just do a simple breadth-first search on this. First note that: There's only one way to do move #1 (you can perform move #1 in one place, if it's at all possible), and doing it multiple times wouldn't result in a loop. It doesn't make sense to do move #2 twice in a row. The important thing to do while searching is to note whether the number ...
So, a summary of the Wikipedia page on Butterfly Diagrams: In Computing Science, a Butterfly is the portion of a computation that combines the results of (two) smaller computations into the larger one. This is common in Discrete Fourier Transform algorithms, as well as in the otherwise unrelated Viterbi Algorithm for finding the most likely sequence of ...
Notations:
$b(x)$ is the beauty of $x$.
$S_c(n) = \sum_{x = 0}^{2^n-1} (b(x)+c)$ where $c\in\mathbb N$.
$\tilde O(\log t)$: polylogarithmic in $t$.
Argument: Let's first show that $S_c(n)$ is easy to calculate for any $c,n\in\mathbb N$. Each number in $[\![0, 2^{n-1}]\!]$ has at most $n$ bits, and for each $i\in[\![1, n]\!]$, exactly half of the numbers ...
You can obtain a formula in CNF form by writing down a truth table (with $2^{24}$ rows) and then converting each row to a single clause. This will yield a formula with $2^{24}$ clauses, but the method is straightforward. If you want a shorter formula, I suggest using logic synthesis, e.g., Quine-McCluskey or Espresso. The usual path to get a small formula is ...
If the numbers have width $c\log n$ then there is a randomized $O(n^{2-1/O(\log c)})$ algorithm that finds the maximum OR correctly with high probability. The idea is to find the maximum OR bit by bit, MSB to LSB. We can do this using $c\log n$ queries of the following form: "Is there an OR whose $k$ most significant bits are $b_1\ldots b_k$?". Each such ...
IMO, no sensible answer can be given without some knowledge of the distribution of the bits in those keys. It might very well be that the first, say, 32 bits of the keys are completely discriminating, and sorting or hashing on these will do the trick, with no need for full comparisons. It could also be that the first 600 bits are always the same and this would ...
The controlled NOT gate is represented by the matrix$$\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{pmatrix}$$In contrast, the matrix corresponding to negating the second bit is$$\begin{pmatrix}0&1&0&0\\1&0&0&0\\0&0&0&1\\0&0&1&0\end{pmatrix}...
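As a quick illustrative sketch (writing qubit basis states as classical bit pairs), the CNOT matrix above is the permutation $(a, b) \mapsto (a, a \oplus b)$ of basis states:

```python
# CNOT on a basis state |ab>: the control bit a is unchanged,
# the target bit b is flipped iff a == 1.
def cnot(a, b):
    return (a, a ^ b)

table = {(a, b): cnot(a, b) for a in (0, 1) for b in (0, 1)}
print(table)  # |10> and |11> are swapped; |00> and |01> are fixed
```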
It depends what operations are available to you for 64 bit integers. The product of two 64 bit integers is a 128 bit integer. Many programming languages don't provide such a product; for example, in C you will likely have an operation that produces the 64 bit (x * y) modulo $2^{64}$, but not an operation that produces the 128 bit number x * y. On the ...
A (binary) code $C$ is a collection of binary strings of some length $n$, known as codewords. The minimal distance of $C$ is the minimum Hamming distance between (different) codewords. Hamming codes have minimum distance 3, which means that (1) every two codewords differ in at least 3 places, (2) there exist two codewords which differ in exactly 3 places....
I) There are already several good answers. OP is asking about the momentum of the non-relativistic string with only transverse displacements, whose Lagrangian density usually is given as
$$ {\cal L}_T ~:=~\frac{\rho}{2} \dot{\eta}^2 - \frac{\tau}{2} \eta^{\prime 2} \tag{1}$$
in textbooks.
II) Let us fix notation: $\rho$ is the 1D mass density; $\tau$ is the string tension; $Y$ is the 1D Young modulus; dot denotes a derivative wrt. $x^0\equiv t$; prime denotes a derivative wrt. $x^1\equiv x$; $\xi$ is the longitudinal displacement in the $x$-direction; and $\eta$ is the transversal displacement in the $y$-direction.
III) First of all, note that the canonical stress-energy-momentum (SEM) tensor $T^{\mu}{}_{\nu}$ (which contains the momentum density $T^0{}_1$) is a pull-back to the world sheet (WS), which we identify with the $(x,t)$-plane. Therefore the momentum direction is often identified with the longitudinal $x$-direction, even if the physical target space (TS) vibrations are in the transverse $y$-direction.
Secondly, note that already for the (conceptually simpler) longitudinal wave model
$$ {\cal L}_L ~:=~\frac{\rho}{2} \dot{\xi}^2 - \frac{Y}{2} \xi^{\prime 2}, \tag{2}$$
(minus) the canonical momentum density
$$T^0{}_1~=~\rho\dot{\xi}\xi^{\prime} \tag{3}$$
is different from the kinetic momentum density $\rho\dot{\xi}$. This is related to the fact that the model (2) is constructed to describe wave excitations of the string, not overall translations thereof. The take-away message is that it is not necessarily a useful thing to try to make the canonical momentum and the kinetic momentum equal. (And in particular, Ref. 1 does not achieve this. Moreover, Ref. 1 only discusses chiral excitations, i.e. a left-mover or a right-mover, but not a superposition thereof, which is incomplete for a non-linear theory.)
Suffice it to say that the different momenta can be treated and understood separately, and that there are conservation laws associated with both types of momenta. Kinetic momentum conservation follows from Newton's laws, while canonical momentum conservation is a consequence of translation symmetry, cf. Noether's theorem. In this answer, we will focus on getting a more realistic physical model of the transversal wave than the Lagrangian density (1).
IV) Our starting point is the simple observation that for an unstretchable string $Y \gg \tau $, a small transversal displacement
$$\eta~=~{\cal O}(\varepsilon),\tag{4}$$
where $\varepsilon \ll 1$, must be accompanied with a longitudinal displacement
$$\xi~=~{\cal O}(\varepsilon^2),\tag{5}$$
cf. Fig. 1 below.
$\uparrow$ Fig. 1. An infinitesimal transversal sawtooth displacement $\varepsilon\ll 1$ of an unstretchable string must be accompanied with a longitudinal displacement $\frac{\varepsilon^2}{2}$.
V) We conclude that a realistic model for transversal excitations $\eta$ must include the possibility for longitudinal displacements $\xi$ as well. Let us therefore consider the Lagrangian density
$$ {\cal L}~:=~{\cal T}-{\cal V}, \qquad {\cal T}~:=~\frac{\rho}{2}\left(\dot{\xi}^2+\dot{\eta}^2\right),\tag{6}$$
where the potential density ${\cal V}$ should be given by Hooke's law. Let
$$ s^{\prime} ~=~ \sqrt{(1+\xi^{\prime})^2 +\eta^{\prime 2} }~=~1+\xi^{\prime} +\frac{\eta^{\prime 2}}{2} -\frac{\xi^{\prime}\eta^{\prime 2}}{2} -\frac{\eta^{\prime 4}}{8} +{\cal O}(\varepsilon^5) \tag{7}$$
be the derivative of the arc-length $s$ wrt. the $x$-coordinate. Modulo possible total derivative terms, the potential density ${\cal V}$ must be of the form
$${\cal V}~=~\frac{k}{2} \left( s^{\prime}-a\right)^2 ~=~ \frac{k}{2} (s^{\prime }-1)^2 + k(1-a) (s^{\prime}-1) + \frac{k}{2} (1-a)^2 \tag{8}$$
for suitable material constants $k$ and $a$, cf. Ref. 1. As will become apparent below, we should identify the two constants $k$ and $a$ as
$$ k ~=~Y+\tau \quad\text{and}\quad \tau~=~ k(1-a). \tag{9}$$
Therefore the potential density (8) becomes
$${\cal V}~\stackrel{(8)+(9)}{=}~ \frac{Y+\tau}{2} (s^{\prime }-1)^2 +\tau (s^{\prime}-1) +\frac{\tau^2}{2(Y+\tau)} $$$$~\stackrel{(7)}{=}~ \tau\left(\xi^{\prime} +\frac{\eta^{\prime 2}}{2} +\frac{\xi^{\prime 2}}{2} \right) + \frac{Y}{2}\left(\xi^{\prime} +\frac{\eta^{\prime 2}}{2} \right)^2 +{\cal O}(\varepsilon^5) +\frac{\tau^2}{2(Y+\tau)}.\tag{10}$$
Keeping only terms to quartic order, and discarding total derivative terms and constant terms, the potential density reads
$${\cal V}_4~:=~ \frac{\tau}{2}\left(\xi^{\prime 2} +\eta^{\prime 2}\right) +\frac{Y}{2}\chi^2 ,\tag{11}$$
where we have defined the shorthand notation
$$ \chi~:=~\xi^{\prime} +\frac{\eta^{\prime 2}}{2} .\tag{12}$$
The quartic potential (11) is surprisingly simple. For an unstretchable string $Y\gg\tau$, we recognize in eq. (11) the constraint
$$ \chi~\approx~0 ,\tag{13}$$
which is at the heart of Fig. 1. The constraint (13) implies that a transversal excitation (4) to the first order in $\varepsilon$ induces a longitudinal excitation (5) to the second order in $\varepsilon$. As we shall see below, even a stretchable string has an affinity for the constraint (13).
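As a numerical aside, the expansion (7) behind this bookkeeping can be sanity-checked directly (a sketch; the factors 0.7 and 1.3 are arbitrary test values):

```python
import math

def s_prime_exact(xi_p, eta_p):
    # s' = sqrt((1 + xi')^2 + eta'^2), the exact expression in eq. (7)
    return math.sqrt((1.0 + xi_p) ** 2 + eta_p ** 2)

def s_prime_series(xi_p, eta_p):
    # the truncated expansion on the right-hand side of eq. (7)
    return 1.0 + xi_p + eta_p ** 2 / 2 - xi_p * eta_p ** 2 / 2 - eta_p ** 4 / 8

eps = 1e-2
xi_p = 0.7 * eps ** 2   # xi'  = O(eps^2), cf. eq. (5); 0.7 is an arbitrary test value
eta_p = 1.3 * eps       # eta' = O(eps),   cf. eq. (4); 1.3 is an arbitrary test value
err = abs(s_prime_exact(xi_p, eta_p) - s_prime_series(xi_p, eta_p))
assert err < eps ** 5   # the neglected terms are O(eps^5)
```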
VI) As an aside, we may rewrite the quartic potential (11) as a cubic potential
$$ {\cal V}_3~:=~ \frac{\tau}{2}\left(\xi^{\prime 2} +\eta^{\prime 2}\right) -\frac{B^2}{2Y} + B\chi, \tag{14}$$
where $B$ is an auxiliary field. The Euler-Lagrange (EL) equation for $B$ is
$$ B ~\approx~Y\chi.\tag{15} $$
The EL equations for $\xi$ and $\eta$ read
$$ \rho \ddot{\xi}~\stackrel{(14)}{\approx}~ \tau\xi^{\prime\prime} + B^{\prime}~\stackrel{(12)+(15)}{\approx}~ (\tau+Y)\xi^{\prime\prime} + Y \eta^{\prime}\eta^{\prime\prime},\tag{16}$$$$ \rho \ddot{\eta}~\stackrel{(14)}{\approx}~ \tau\eta^{\prime\prime} +\left(B\eta^{\prime}\right)^{\prime}~\stackrel{(12)+(15)}{\approx}~\tau\eta^{\prime\prime}+\frac{3Y}{2}\eta^{\prime 2}\eta^{\prime\prime} + Y(\xi^{\prime}\eta^{\prime})^{\prime},\tag{17} $$
respectively.
VII) If we integrate out the $B$-field in the cubic potential (14),
$$ {\cal V}_3\quad\stackrel{B}{\longrightarrow}\quad{\cal V}_4,\tag{18}$$
we get back the quartic potential (11). The EL equations (16) & (17) become
$$\Box_L\xi~:=~ \ddot{\xi}- c_L^2 \xi^{\prime\prime} ~\approx~ \frac{Y}{\rho} \eta^{\prime}\eta^{\prime\prime}~=~ (c_L^2-c_M^2) \eta^{\prime}\eta^{\prime\prime},\tag{19} $$$$\Box_M\eta~:=~ \ddot{\eta}- c_M^2 \eta^{\prime\prime} ~\approx~ \frac{Y}{\rho}\left( \chi \eta^{\prime}\right)^{\prime}~=~ (c_L^2-c_M^2)\left( \chi \eta^{\prime}\right)^{\prime},\tag{20} $$
where we have defined two speeds
$$c_M^2~:=~\frac{\tau}{\rho}\quad\text{and}\quad c_L^2~:=~\frac{Y+\tau}{\rho}.\tag{21} $$
Let us consider waves travelling in a single direction only. A straightforward analysis shows that the EL equations (19) & (20) have two travelling modes:
A faster purely longitudinal $L$-mode $\xi_L(x\!-\!c_Lt) $ with $\eta_L(x\!-\!c_Lt)\approx 0$ (which formally violates the constraint (13), but recall eq. (5)).
A slower mixed $M$-mode $\xi_M(x\!-\!c_Mt) $ and $ \eta_M(x\!-\!c_Mt)$ that satisfies the constraint $\chi_M(x\!-\!c_Mt) \approx 0$ in eq. (13).
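These mode claims can be spot-checked numerically with finite differences (a sketch; $\rho=\tau=1$, $Y=3$ and the amplitude below are arbitrary illustrative values): take $\eta_M = A\sin k(x\!-\!c_Mt)$, fix $\xi_M$ from the constraint (13), and verify that eqs. (19) & (20) hold.

```python
import math

# illustrative (assumed) parameters: rho = tau = 1, Y = 3  =>  c_M = 1, c_L = 2
rho, tau, Y = 1.0, 1.0, 3.0
cM = math.sqrt(tau / rho)
cL = math.sqrt((Y + tau) / rho)
A, k = 0.01, 2.0  # small amplitude, so higher-order corrections are negligible

def eta(x, t):
    return A * math.sin(k * (x - cM * t))

def xi(x, t):
    # from the constraint chi = xi' + eta'^2/2 = 0, integrated in x
    u = x - cM * t
    return -(A * k) ** 2 / 2 * (u / 2 + math.sin(2 * k * u) / (4 * k))

def d1(f, x, t, dx=0.0, dt=0.0, h=1e-4):
    # central first difference along the direction (dx, dt)
    return (f(x + dx * h, t + dt * h) - f(x - dx * h, t - dt * h)) / (2 * h)

def d2(f, x, t, dx=0.0, dt=0.0, h=1e-4):
    # central second difference along the direction (dx, dt)
    return (f(x + dx * h, t + dt * h) - 2 * f(x, t) + f(x - dx * h, t - dt * h)) / h ** 2

x, t = 0.3, 0.7
# eq. (19): xi_tt - cL^2 xi_xx = (cL^2 - cM^2) * eta' * eta''
lhs19 = d2(xi, x, t, dt=1) - cL ** 2 * d2(xi, x, t, dx=1)
rhs19 = (cL ** 2 - cM ** 2) * d1(eta, x, t, dx=1) * d2(eta, x, t, dx=1)
assert abs(lhs19 - rhs19) < 1e-6
# eq. (20): eta_tt - cM^2 eta_xx = 0, since chi = 0 for the M-mode
lhs20 = d2(eta, x, t, dt=1) - cM ** 2 * d2(eta, x, t, dx=1)
assert abs(lhs20) < 1e-6
```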
VIII) The two travelling modes $L$ and $M$ are independent in the sense that they can pass through each other. However, the creation (and annihilation) of the $M$-mode is not independent of the $L$-mode. The constraint (13) has a lopsided effect: A transversal displacement is always associated with a longitudinal retraction. Recall that if we impose Dirichlet boundary conditions at the spatial ends of the string, then an overall longitudinal retraction is not possible. The creation (and annihilation) of an $M$-mode must therefore excite a compensating faster $L$-mode that counteracts the longitudinal component of the $M$-mode. See Ref. 1 for further details.
IX) Finally, it is interesting to try to integrate out the longitudinal field $\xi$ in the quartic model (11). We can solve eq. (19) for the longitudinal field
$$\xi~\approx~ \frac{Y}{2\rho}\int \! dt^{\prime}dx^{\prime}~G(x,t;x^{\prime},t^{\prime})\frac{d}{dx^{\prime}}\eta^{\prime}(x^{\prime},t^{\prime})^2$$$$~\stackrel{\text{int. by parts}}{=}~\frac{Y}{2\rho}\int \! dt^{\prime}dx^{\prime}\left\{-\frac{d}{dx^{\prime}}G(x,t;x^{\prime},t^{\prime})\right\}\eta^{\prime}(x^{\prime},t^{\prime})^2\tag{22} $$
by introducing a Green's function $G(x,t;x^{\prime},t^{\prime})$ and light-cone coordinates
$$ x^{\pm} ~:=~ t \pm \frac{x}{c_L}, \qquad \Delta x^{\pm} ~:=~ \Delta t \pm \frac{ \Delta x}{ c_L}, \qquad \Delta t ~:=~ t - t^{\prime}, \qquad \Delta x ~:=~ x - x^{\prime}.\tag{23}$$
Then the D'Alembertian in 1+1D becomes
$$\Box_L ~=~ 4\partial_+\partial_-\tag{24}. $$
The Green's function $G(x,t;x^{\prime},t^{\prime})$ satisfies by definition
$$\Box_L G(x,t;x^{\prime},t^{\prime}) ~=~ \delta(\Delta t)\delta(\Delta x) ~=~ \frac{2}{c_L} \delta(\Delta x^+)\delta(\Delta x^-).\tag{25}$$
The retarded Green's function is
$$ G_{\rm ret}(x,t;x^{\prime},t^{\prime}) ~=~ \frac{1}{2c_L}\theta(\Delta x^+)\theta(\Delta x^-).\tag{26}$$
However, to achieve a Lagrangian formulation (30) for the $\xi$-reduced quartic theory (11), we should use the symmetrized Green's function
$$ G(x,t;x^{\prime},t^{\prime})~=~\frac{1}{2} G_{\rm ret}(x,t;x^{\prime},t^{\prime})+\frac{1}{2} G_{\rm ret}(x^{\prime},t^{\prime};x,t).\tag{27}$$
It is convenient to introduce the notation
$$ K(x,t;x^{\prime},t^{\prime}) ~:=~ -\frac{d}{dx}\frac{d}{dx^{\prime}}G(x,t;x^{\prime},t^{\prime}) $$$$~=~ -\frac{1}{4c_L}\frac{d}{dx}\frac{d}{dx^{\prime}}\left[\theta(\Delta x^+)\theta(\Delta x^-) + \theta(-\Delta x^+)\theta(-\Delta x^-)\right]$$$$~=~ -\frac{1}{8c_L}\frac{d}{dx}\frac{d}{dx^{\prime}}\left[{\rm sgn}(\Delta x^+){\rm sgn}(\Delta x^-)\right].\tag{28}$$
Then the derivative $\xi^{\prime}$ of the longitudinal field is given simply by
$$ \xi^{\prime} (x,t) ~\approx~ \frac{Y}{2\rho} \int \! dt^{\prime}~dx^{\prime}~K(x,t;x^{\prime},t^{\prime}) ~\eta^{\prime}(x^{\prime},t^{\prime})^2.\tag{29}$$
Finally, we are able to write down an action
$$\begin{align} S_4 \quad\stackrel{\xi}{\longrightarrow}\quad &\int \! dt~dx \left(\frac{\rho}{2}\dot{\eta}^2-\frac{\tau}{2}\eta^{\prime 2} -\frac{Y}{8} \eta^{\prime 4}\right) \cr&-\frac{Y^2}{8\rho} \int dt~dx~dt^{\prime}dx^{\prime}~ \eta^{\prime}(x,t)^2 ~K(x,t;x^{\prime},t^{\prime})~ \eta^{\prime}(x^{\prime},t^{\prime})^2\end{align}\tag{30}$$
for the $\xi$-reduced quartic theory (11). It is easy to check that the corresponding EL equation for $\eta$ is eq. (17), where $\xi^{\prime}$ on the right-hand side of eq. (17) is given by eq. (29).
The action (30) is bi-local, which is expected. (On the bright side, at least the action (30) doesn't depend on higher spacetime derivatives!) However, the non-local nature challenges the concept of an SEM tensor (and thereby of the canonical momentum density, which was what OP originally asked about). It is still possible to derive Noether conservation laws associated with the WS translation symmetry, but we shall not pursue this here.
References:
D.R. Rowland & C. Pask, The missing wave momentum mystery, Am. J. Phys. 67 (1999) 378. (Hat tip: ACuriousMind.)
Schedule of the International Workshop on Logic and Algorithms in Group Theory

Monday, October 22
10:15 - 10:50 Registration & Welcome coffee
10:50 - 11:00 Opening remarks
11:00 - 12:00 Alex Lubotzky: First order rigidity of high-rank arithmetic groups
12:00 - 14:00 Lunch break
14:00 - 15:00 Zlil Sela: Basic conjectures and preliminary results in non-commutative algebraic geometry
15:00 - 16:00 Katrin Tent: Burnside groups of relatively small odd exponent
16:00 - 16:30 Coffee, tea and cake
16:30 - 17:30 Harald Andres Helfgott: Growth in linear algebraic groups and permutation groups: towards a unified perspective
afterwards Reception

Tuesday, October 23
09:30 - 10:30 Chloe Perin: Forking independence in the free group
10:30 - 11:00 Group photo and coffee break
11:00 - 12:00 Krzysztof Krupinski: Amenable theories
12:00 - 14:00 Lunch break
14:00 - 15:00 Gregory Cherlin: The Relational Complexity of a Finite Permutation Group
15:00 - 16:00 Todor Tsankov: A model-theoretic approach to rigidity in ergodic theory
16:00 - 16:30 Coffee, tea and cake

Wednesday, October 24
09:30 - 10:30 Dan Segal: Small profinite groups
10:30 - 11:00 Coffee break
11:00 - 12:00 Martin Kassabov: On the complexity of counting homomorphisms to finite groups
12:00 - 13:00 Anna Erschler: Arboreal structures, Poisson boundary and growth of Groups
19:00 - Dinner - Tuscolo Münsterblick, Gerhard-von-Are-Straße 8, Bonn

Thursday, October 25
09:30 - 10:30 Laura Ioana Ciobanu Radomirovic: Equations in groups, formal languages and complexity
10:30 - 11:00 Coffee break
11:00 - 12:00 James Wilson: Distinguishing Groups and the Group Isomorphism problem
12:00 - 14:00 Lunch break
14:00 - 15:00 Alan Reid: Distinguishing certain triangle groups by their finite quotients
15:00 - 16:00 Alla Detinko: Computing with infinite linear groups: methods, algorithms, and applications
16:00 - 16:30 Tea and cake

Friday, October 26
09:30 - 10:30 George Willis: Computing the scale
10:30 - 11:00 Coffee break
11:00 - 12:00 Agatha Atkarskaya: Towards a Group-like Small Cancellation Theory for Rings

Abstracts

Agatha Atkarskaya: Towards a Group-like Small Cancellation Theory for Rings
Let a group $G$ be given by generators and defining relations. It is known that we cannot extract specific information about the structure of $G$ from the defining relations in the general case. However, if these defining relations satisfy small cancellation conditions, then we possess a great deal of knowledge about $G$. In particular, such groups are hyperbolic, that is, we can express the multiplication in the group by means of thin triangles. It seems of interest to develop a similar theory for rings. Let $kF$ be the group algebra of the free group $F$ over some field $k$. Let $F$ have a fixed system of generators; then its elements are reduced words in these generators that we call monomials. Let $\mathcal{I}$ be the ideal of $kF$ generated by a set of polynomials, and let $kF / \mathcal{I}$ be the corresponding quotient algebra. In the present work we state conditions on these polynomials that enable a combinatorial description of the quotient algebra similar to small cancellation quotients of the free group. In particular, we construct a linear basis of $kF / \mathcal{I}$ and describe a special system of linear generators of $kF / \mathcal{I}$ for which the multiplication table amounts to a linear combination of thin triangles. Constructions of groups with exotic properties make extensive use of small cancellation theory and its generalizations. In a similar way, generalizations of our approach allow one to construct various examples of algebras with exotic properties. This is joint work with A. Kanel-Belov, E. Plotkin and E. Rips.
Gregory Cherlin: The Relational Complexity of a Finite Permutation Group
I am interested in a numerical invariant of finite permutation groups called the {\it relational complexity} which is suggested by the model theoretic point of view. (From a model theorist's perspective, the study of finite structures and the study of finite permutation groups are the same subject.) One conjecture about this invariant states that an almost simple primitive permutation group of relational complexity 2 must be the symmetric group acting naturally [1]. Considerable progress toward a proof has been made lately using a combination of theory and machine computation (e.g., [2]). A variety of computational methods have been devised which are very helpful in the limited context of the stated conjecture, and perhaps in other cases. I aim to provide a sense of what the invariant measures, and to discuss some ways that it can be determined, or estimated (structurally, group theoretically, or computationally).
References:
[1] Gregory Cherlin, Sporadic homogeneous structures, The Gelfand Mathematical Seminars, 1996-1999, pp.~15-48, Birkhäuser, 2000.
[2] Nick Gill and Pablo Spiga, Binary permutation groups: Alternating and Classical Groups, preprint. arXiv:1610.01792 [math.GR]
Related Talks:
* The relational complexity of a finite primitive structure, ICMS, Sep.~19, 2018
* Finite binary homogeneous structures, ICMS, July 10, 2014

Alla Detinko: Computing with infinite linear groups: methods, algorithms, and applications
In the talk we will survey our ongoing collaborative project in a novel domain of computational group theory: computing with linear groups over infinite fields. We provide an introduction to the area, and discuss available methods and algorithms. Special consideration will be given to the most recent developments in computing with Zariski dense groups and applications. This talk is aimed at a general algebraic audience (see also our expository article https://doi.org/10.1016/j.exmath.2018.07.002).
Anna Erschler: Arboreal structures, Poisson boundary and growth of groups
A group is said to be ICC if all non-identity elements have infinitely many conjugates. In joint work with Vadim Kaimanovich, given an ICC group G, we construct a forest F with vertex set G and a probability measure on G such that trajectories of the random walk tend almost surely to points of the boundary of F. We show that the Poisson boundary can be identified with the boundary of the forest, endowed with the hitting distribution; that the convergence to the Poisson boundary has a strong convergence property, resembling the case of simple random walks on free groups; and that the action of G on the Poisson boundary is free.
Our result is a development of a recent result of Joshua Frisch, Yair Hartman, Omer Tamuz and Pooya Vahidi Ferdowsi, who showed that any ICC group admits a measure with non-trivial Poisson boundary. In joint work with Tianyi Zheng, we construct measures with power-law decay on torsion Grigorchuk groups that have non-trivial Poisson boundary. As an application we obtain a near-optimal lower bound for the growth of these groups.
Harald Andres Helfgott: Growth in linear algebraic groups and permutation groups: towards a unified perspective
Given a finite group $G$ and a set $A$ of generators, the diameter $diam(\Gamma(G,A))$ of the Cayley graph $\Gamma(G,A)$ is the smallest $\ell$ such that every element of $G$ can be expressed as a word of length at most $\ell$ in $A \cup A^{-1}$. We are concerned with bounding $diam(G):= \max_A diam(\Gamma(G,A))$. It has long been conjectured that the diameter of the symmetric group of degree $n$ is polynomially bounded in $n$. In 2011, Helfgott and Seress gave a quasipolynomial bound $\exp((\log n)^{4+\epsilon})$. We will discuss a recent, much simplified version of the proof, emphasising the links in common with previous work on growth in linear algebraic groups.
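As a toy illustration of the definitions above (a small sketch, unrelated to the techniques of the talk), the diameter of a Cayley graph can be computed by breadth-first search from the identity, which suffices by vertex-transitivity; here for $S_3$ with a transposition and a 3-cycle as generators:

```python
from collections import deque

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]; permutations as tuples over {0, ..., n-1}
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def cayley_diameter(generators):
    # diam(Gamma(G, A)) for A the given generators, symmetrized with inverses
    n = len(generators[0])
    identity = tuple(range(n))
    gens = set(generators) | {inverse(g) for g in generators}
    dist = {identity: 0}
    frontier = deque([identity])
    while frontier:
        g = frontier.popleft()
        for a in gens:
            h = compose(a, g)
            if h not in dist:
                dist[h] = dist[g] + 1
                frontier.append(h)
    return max(dist.values())

s = (1, 0, 2)  # the transposition (0 1)
r = (1, 2, 0)  # the 3-cycle (0 1 2)
assert cayley_diameter([s, r]) == 2  # the two remaining transpositions need length-2 words
```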
Martin Kassabov: On the complexity of counting homomorphisms to finite groups (joint with Eric Samperton)
I will discuss the complexity of determining the number of homomorphisms from finitely presented groups to finite groups, which turns out to be a #P-complete problem. Somewhat surprisingly, this is even the case when the target is a finite nilpotent group.
Krzysztof Krupinski: Amenable theories
I will introduce the notion of an amenable theory as a natural counterpart of the notion of a definably amenable group. Roughly speaking, amenability means that there are invariant (under the action of the group of automorphisms of a sufficiently saturated model), Borel, probability measures on various type spaces. I will discuss several equivalent definitions and give some examples. Then I will discuss the result that each amenable theory is G-compact. This is a part of my recent paper (still in preparation) with Udi Hrushovski and Anand Pillay.
Alex Lubotzky: First order rigidity of high-rank arithmetic groups
The family of high rank arithmetic groups is a class of groups playing an important role in various areas of mathematics. It includes SL(n,Z), for n>2 , SL(n, Z[1/p] ) for n>1, their finite index subgroups and many more.
A number of remarkable results about them have been proven, including Mostow rigidity, Margulis superrigidity and quasi-isometric rigidity. We will talk about a further type of rigidity: "first order rigidity" (also called quasi-axiomatisable in some articles). Namely, if G is such a non-uniform characteristic zero arithmetic group and H a finitely generated group which is elementarily equivalent to it, then H is isomorphic to G. This stands in contrast with Zlil Sela's remarkable work, which implies that free groups, surface groups and hyperbolic groups (many of which are low-rank arithmetic groups) each have many non-isomorphic finitely generated groups which are elementarily equivalent to them. Joint work with Nir Avni and Chen Meiri.
Chloe Perin: Forking independence in the free group
Model theorists define, in structures whose first-order theory is "stable" (i.e. suitably nice), a notion of independence between elements. This notion coincides for example with linear independence when the structure considered is a vector space, and with algebraic independence when it is an algebraically closed field. Sela showed that the theory of the free group is stable. In a joint work with Rizos Sklinos, we give an interpretation of this model theoretic notion of independence in the free group using Grushko and JSJ decompositions.
Laura Ioana Ciobanu Radomirovic: Equations in groups, formal languages and complexity
For a group G, solving equations where the coefficients are elements in G and the solutions take values in G can be seen as akin to solving Diophantine equations in number theory, answering questions from linear algebra or, more generally, algebraic geometry. Moreover, the question of satisfiability of equations fits naturally into the framework of the first order theory of G. I will start the talk with a survey containing results from both mathematics and computer science about solving equations in infinite nonabelian groups, with emphasis on free and hyperbolic groups. I will then show how for these groups the solutions to equations can be beautifully described in terms of formal languages, and that the latest techniques involving string compression produce optimal space complexity algorithms. If time allows, I will show how some of the results can carry over to certain group extensions. This is joint work, in several projects, with Volker Diekert, Murray Elder, Derek Holt and Sarah Rees.
Alan Reid: Distinguishing certain triangle groups by their finite quotients
We prove that certain arithmetic Fuchsian triangle groups are profinitely rigid in the sense that they are determined by their set of finite quotients amongst all finitely generated residually finite groups. Amongst the examples are the (2,3,8) triangle group.
Dan Segal: Small profinite groups
I will explore the connections between various conditions of smallness on a profinite group, such as being (topologically) finitely generated, having only finitely many open subgroups of each finite index, or having all finite-index subgroups open, and the extent to which these can be characterized by algebraic properties of the associated system of finite groups.
Reference:
Remarks on profinite groups having few open subgroups, J. Combinatorial Algebra 2 (2018), 87-101.

Zlil Sela: Basic conjectures and preliminary results in non-commutative algebraic geometry
Algebraic geometry studies the structure of varieties over fields and commutative rings. Starting in the 1960s, ring theorists (Cohn, Bergman and others) have tried to study the structure of varieties over some non-commutative rings (notably free associative algebras). The lack of unique factorization that they tackled and studied in detail, and the pathologies that they were aware of, prevented any attempt to prove or even speculate about what the properties of such varieties can be. Using techniques and concepts from geometric group theory and from low dimensional topology, we formulate concrete conjectures about the structure of these varieties, and prove preliminary results in the direction of these conjectures.
Katrin Tent: Burnside groups of relatively small odd exponent
(joint work with A. Atkarskaya and E. Rips)
The free Burnside group B(n,m) of exponent m is the quotient of the free group on n generators by the normal subgroup generated by all mth powers. Novikov and Adyan showed that for odd m sufficiently large and n at least 2, the group B(n,m) is infinite. In subsequent work, Adyan improved the lower bound on m, the latest bound being 101.
Our approach to Burnside groups combines geometric and combinatorial aspects. We first define a collection of canonical relators for the normal subgroup with nice combinatorial properties. These properties allow us to use generalized small cancellation methods, leading to a version of Greendlinger's Lemma sufficient to deduce that these groups are infinite.
Todor Tsankov: A model-theoretic approach to rigidity in ergodic theory
(joint work with Tomás Ibarlucía)
I will discuss a new approach to some rigidity results in the ergodic theory of non-amenable groups via model theory. I will explain how ergodic theory can be formalized in continuous logic and how the model-theoretic notion of algebraic closure plays an important role in understanding strongly ergodic systems.
No prior knowledge of model theory or ergodic theory will be assumed.
George Willis: Computing the scale
The scale function on a totally disconnected, locally compact (t.d.l.c.) group is continuous and takes positive integer values. In special cases, such as when the group has a linear [1] or geometric [2] representation, it may be computed. However, not all t.d.l.c. groups have representations which allow computation of the scale and other features of the group, and the extent to which it is possible to obtain such representations is not known.
[1] H. Glöckner, Scale functions on linear groups over local skew fields, J. Algebra 205, 525-541 (1998).
[2] G. A. Willis, The structure of totally disconnected, locally compact groups, Math. Ann. 300, 341-363 (1994).
Let $f$ be defined for $x \in \mathbb{R}$ by $$ f\left(x\right)=2\arctan\left(e^x\right)-\frac{\pi}{2} $$ I've shown that $f$ is odd and satisfies for $x \in \mathbb{R}$ $$f'\left(x\right)=\cos\left(f\left(x\right)\right)$$ To prove it, I've used that $$ \cos\left(f\left(x\right)\right)=2\sin\left(\arctan\left(e^x\right)\right)\cos\left(\arctan\left(e^x\right)\right) $$ and then that $$ \cos\left(\arctan\left(e^x\right)\right)=\frac{1}{\sqrt{1+e^{2x}}} $$ I have two questions:
$\bullet$ Is that the unique solution with $f(0)=0$? I've tried to prove it by supposing two different solutions and showing they are in fact equal using trigonometric formulas, but it does not seem to work.
$\bullet$ Is there another (perhaps wiser or faster) way to prove it?
Thanks to those who take the time to answer.
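A quick numerical sanity check of the claimed identity and properties (a sketch, not a proof):

```python
import math

def f(x):
    # f(x) = 2*arctan(e^x) - pi/2
    return 2 * math.atan(math.exp(x)) - math.pi / 2

h = 1e-6
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    fprime = (f(x + h) - f(x - h)) / (2 * h)    # central-difference derivative
    assert abs(fprime - math.cos(f(x))) < 1e-8  # f'(x) = cos(f(x))

assert abs(f(0.0)) < 1e-12            # f(0) = 0
assert abs(f(1.0) + f(-1.0)) < 1e-12  # f is odd
```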
Distance, or farness, is a numerical description of how far apart objects are. In physics or everyday usage, distance may refer to a physical length, or an estimation based on other criteria (e.g. "two counties over"). In mathematics, a distance function or metric is a generalization of the concept of physical distance. A metric is a function that behaves according to a specific set of rules, and is a concrete way of describing what it means for elements of some space to be "close to" or "far away from" each other. In most cases, "distance from A to B" is interchangeable with "distance between B and A".
Mathematics

Geometry
In analytic geometry, the distance between two points of the xy-plane can be found using the distance formula. The distance between (x_1, y_1) and (x_2, y_2) is given by:

d=\sqrt{(\Delta x)^2+(\Delta y)^2}=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}.
Similarly, given points (x_1, y_1, z_1) and (x_2, y_2, z_2) in three-space, the distance between them is:

d=\sqrt{(\Delta x)^2+(\Delta y)^2+(\Delta z)^2}=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2+(z_2-z_1)^2}.
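Both formulas can be exercised directly; a short sketch (the points are chosen so the distances come out as exact integers):

```python
import math

# 2D distance formula: a 3-4-5 right triangle
p1, p2 = (1.0, 2.0), (4.0, 6.0)
d2d = math.sqrt((p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2)
assert d2d == 5.0

# 3D distance formula; math.dist (Python 3.8+) implements the same expression
q1, q2 = (1.0, 2.0, 3.0), (3.0, 5.0, 9.0)
d3d = math.dist(q1, q2)
assert d3d == 7.0  # sqrt(4 + 9 + 36) = 7
```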
These formulas are easily derived by constructing a right triangle with a leg on the hypotenuse of another (with the other leg orthogonal to the plane that contains the first triangle) and applying the Pythagorean theorem. In the study of complicated geometries, we call this (most common) type of distance Euclidean distance, as it is derived from the Pythagorean theorem, which does not hold in non-Euclidean geometries. This distance formula can also be expanded into the arc-length formula.
Distance in Euclidean space
In the Euclidean space R^n, the distance between two points is usually given by the Euclidean distance (2-norm distance). Other distances, based on other norms, are sometimes used instead.
For a point (x_1, x_2, \ldots, x_n) and a point (y_1, y_2, \ldots, y_n), the Minkowski distance of order p (p-norm distance) is defined as:

1-norm distance = \sum_{i=1}^n \left| x_i - y_i \right|
2-norm distance = \left( \sum_{i=1}^n \left| x_i - y_i \right|^2 \right)^{1/2}
p-norm distance = \left( \sum_{i=1}^n \left| x_i - y_i \right|^p \right)^{1/p}
infinity norm distance = \lim_{p \to \infty} \left( \sum_{i=1}^n \left| x_i - y_i \right|^p \right)^{1/p} = \max \left(|x_1 - y_1|, |x_2 - y_2|, \ldots, |x_n - y_n| \right).
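The family of norms above can be sketched in a few lines (the helper names are illustrative, not standard library functions):

```python
def minkowski(x, y, p):
    # p-norm distance between two points of R^n (requires p >= 1)
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def chebyshev(x, y):
    # the p -> infinity limit: the infinity-norm distance
    return max(abs(a - b) for a, b in zip(x, y))

x, y = (0.0, 0.0), (3.0, 4.0)
assert minkowski(x, y, 1) == 7.0   # taxicab / Manhattan
assert minkowski(x, y, 2) == 5.0   # Euclidean
assert chebyshev(x, y) == 4.0      # Chebyshev
```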
p need not be an integer, but it cannot be less than 1, because otherwise the triangle inequality does not hold.
The 2-norm distance is the Euclidean distance, a generalization of the Pythagorean theorem to more than two coordinates. It is what would be obtained if the distance between two points were measured with a ruler: the "intuitive" idea of distance.
The 1-norm distance is more colourfully called the taxicab norm or Manhattan distance, because it is the distance a car would drive in a city laid out in square blocks (if there are no one-way streets).
The infinity norm distance is also called Chebyshev distance. In 2D, it is the minimum number of moves a king requires to travel between two squares on a chessboard.
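The chessboard claim can be verified directly: a breadth-first search over the eight king moves reproduces max(|dx|, |dy|) (a sketch; the search is cut off at Chebyshev distance 8 from the goal just to keep it finite):

```python
from collections import deque

def king_moves(start, goal):
    # minimum number of king moves via breadth-first search
    if start == goal:
        return 0
    steps = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        (x, y), d = frontier.popleft()
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt == goal:
                return d + 1
            # prune to a finite region around the goal; optimal paths stay inside it
            if nxt not in seen and abs(nxt[0] - goal[0]) <= 8 and abs(nxt[1] - goal[1]) <= 8:
                seen.add(nxt)
                frontier.append((nxt, d + 1))

for goal in [(3, 1), (5, 5), (0, 7), (2, -6)]:
    # BFS distance equals the Chebyshev distance max(|dx|, |dy|)
    assert king_moves((0, 0), goal) == max(abs(goal[0]), abs(goal[1]))
```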
The p-norm is rarely used for values of p other than 1, 2, and infinity, but see superellipse.
In physical space the Euclidean distance is in a way the most natural one, because in this case the length of a rigid body does not change with rotation.
Variational formulation of distance
The Euclidean distance between two points in space (A = \vec{r}(0) and B = \vec{r}(T)) may be written in a variational form where the distance is the minimum value of an integral:
D = \int_0^T \sqrt{\left({\partial \vec{r}(t) \over \partial t}\right)^2} \, dt
Here \vec{r}(t) is the trajectory (path) between the two points. The value of the integral (D) represents the length of this trajectory. The distance is the minimal value of this integral and is obtained when r = r^{*}, where r^{*} is the optimal trajectory. In the familiar Euclidean case (the above integral) this optimal trajectory is simply a straight line. It is well known that the shortest path between two points is a straight line. Straight lines can formally be obtained by solving the Euler–Lagrange equations for the above functional. In non-Euclidean manifolds (curved spaces), where the nature of the space is represented by a metric g_{ab}, the integrand has to be modified to \sqrt{g^{ac}\dot{r}_c g_{ab}\dot{r}^b}, where the Einstein summation convention has been used.
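A discretized version of this variational statement is easy to check numerically: among sampled paths with the same endpoints, the straight line has the smallest length (a sketch with an arbitrary sinusoidal perturbation):

```python
import math

def path_length(path):
    # discretized arc length: sum of straight segments between consecutive samples
    return sum(math.dist(p, q) for p, q in zip(path, path[1:]))

n = 1000
ts = [i / n for i in range(n + 1)]
# straight line from (0, 0) to (1, 0)
straight = [(t, 0.0) for t in ts]
# a perturbed trajectory between the same endpoints
wiggly = [(t, 0.2 * math.sin(math.pi * t)) for t in ts]

assert abs(path_length(straight) - 1.0) < 1e-9
assert path_length(wiggly) > path_length(straight)
```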
Generalization to higher-dimensional objects
The Euclidean distance between two objects may also be generalized to the case where the objects are no longer points but are higher-dimensional manifolds, such as space curves, so in addition to talking about distance between two points one can discuss concepts of distance between two strings. Since the new objects that are dealt with are extended objects (not points anymore) additional concepts such as non-extensibility, curvature constraints, and non-local interactions that enforce non-crossing become central to the notion of distance. The distance between the two manifolds is the scalar quantity that results from minimizing the generalized distance functional, which represents a transformation between the two manifolds:
\mathcal {D} = \int_0^L\int_0^T \left \{ \sqrt{\left({\partial \vec{r}(s,t) \over \partial t}\right)^2} + \lambda \left[\sqrt{\left({\partial \vec{r}(s,t) \over \partial s}\right)^2} - 1\right] \right\} \, ds \, dt
The above double integral is the generalized distance functional between two polymer conformations. s is a spatial parameter and t is pseudo-time. This means that \vec{r}(s,t=t_i) is the polymer/string conformation at time t_i and is parameterized along the string length by s. Similarly \vec{r}(s=S,t) is the trajectory of an infinitesimal segment of the string during transformation of the entire string from conformation \vec{r}(s,0) to conformation \vec{r}(s,T). The term with cofactor \lambda is a Lagrange multiplier and its role is to ensure that the length of the polymer remains the same during the transformation. If two discrete polymers are inextensible, then the minimal-distance transformation between them no longer involves purely straight-line motion, even on a Euclidean metric. There is a potential application of such generalized distance to the problem of protein folding.
[1][2] This generalized distance is analogous to the Nambu-Goto action in string theory; however, there is no exact correspondence because the Euclidean distance in 3-space is inequivalent to the space-time distance minimized for the classical relativistic string.

Algebraic distance
This is a metric often used in computer vision that can be minimized by least squares estimation. [1][2] For curves or surfaces given by the equation x^T C x=0 (such as a conic in homogeneous coordinates), the algebraic distance from the point x' to the curve is simply x'^T C x'. It may serve as an "initial guess" for geometric distance to refine estimations of the curve by more accurate methods, such as non-linear least squares.
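As a minimal sketch of the definition (assuming the conic is the unit circle, so C = diag(1, 1, -1) in homogeneous coordinates):

```python
# Unit circle x^2 + y^2 - 1 = 0 as a conic in homogeneous coordinates:
C = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, -1.0]]

def algebraic_distance(point, C):
    # algebraic distance x'^T C x' of a 2D point to the conic x^T C x = 0
    x = (point[0], point[1], 1.0)  # homogenize
    return sum(x[i] * C[i][j] * x[j] for i in range(3) for j in range(3))

assert algebraic_distance((1.0, 0.0), C) == 0.0   # on the circle
assert algebraic_distance((2.0, 0.0), C) == 3.0   # outside: 4 - 1
assert algebraic_distance((0.0, 0.0), C) == -1.0  # inside: the value is signed
```

Note that the value is signed and does not equal the geometric (perpendicular) distance; that is exactly why it is only used as an initial guess.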
General case
In mathematics, in particular geometry, a distance function on a given set M is a function d: M × M → R, where R denotes the set of real numbers, that satisfies the following conditions:

d(x, y) ≥ 0, and d(x, y) = 0 if and only if x = y. (Distance is positive between two different points, and is zero precisely from a point to itself.)

It is symmetric: d(x, y) = d(y, x). (The distance between x and y is the same in either direction.)

It satisfies the triangle inequality: d(x, z) ≤ d(x, y) + d(y, z). (The direct distance between two points is never greater than the distance via any intermediate point.)
Such a distance function is known as a metric. Together with the set, it makes up a metric space.
For example, the usual definition of distance between two real numbers
x and y is: d(x, y) = |x − y|. This definition satisfies the three conditions above, and corresponds to the standard topology of the real line. But distance on a given set is a definitional choice. Another possible choice is to define: d(x, y) = 0 if x = y, and 1 otherwise. This also defines a metric, but gives a completely different topology, the "discrete topology"; with this definition numbers cannot be arbitrarily close.

Distances between sets and between a point and a set
(Figure: sets for which d(A, B) > d(A, C) + d(C, B), illustrating that the triangle inequality can fail for distances between sets.)
Various distance definitions are possible between objects. For example, between celestial bodies one should not confuse the surface-to-surface distance and the center-to-center distance. If the former is much less than the latter, as for a satellite in low Earth orbit (LEO), the first tends to be quoted (altitude); otherwise, e.g. for the Earth–Moon distance, the latter.
There are two common definitions for the distance between two non-empty subsets of a given set:
One version of distance between two non-empty sets is the infimum of the distances between any two of their respective points, which is the everyday meaning of the word, i.e. d(A,B)=\inf_{x\in A, y\in B} d(x,y). This is a symmetric premetric. On a collection of sets some of which touch or overlap each other, it is not "separating", because the distance between two different but touching or overlapping sets is zero. It is also not a hemimetric; i.e., the triangle inequality does not hold, except in special cases. Therefore only in special cases does this distance make a collection of sets a metric space. The Hausdorff distance is the larger of two values: one is the supremum, for a point ranging over one set, of the infimum, for a second point ranging over the other set, of the distance between the points; the other value is likewise defined but with the roles of the two sets swapped. This distance makes the set of non-empty compact subsets of a metric space itself a metric space.
The distance between a point and a set is the infimum of the distances between the point and those in the set. This corresponds to the distance, according to the first-mentioned definition above of the distance between sets, from the set containing only this point to the other set.
In terms of this, the definition of the Hausdorff distance can be simplified: it is the larger of two values, one being the supremum, for a point ranging over one set, of the distance between the point and the set, and the other value being likewise defined but with the roles of the two sets swapped.
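For finite point sets, both notions are directly computable. A minimal Python sketch (the helper names are my own, not standard library functions) contrasting the everyday infimum distance with the Hausdorff distance:

```python
import math

def point_dist(p, q):
    # Euclidean distance between two points in the plane
    return math.hypot(p[0] - q[0], p[1] - q[1])

def inf_dist(A, B):
    # infimum (here: minimum) over all pairs -- the "everyday" set distance
    return min(point_dist(a, b) for a in A for b in B)

def point_to_set(p, B):
    # distance from a point to a set: infimum over the set
    return min(point_dist(p, b) for b in B)

def hausdorff(A, B):
    # larger of the two directed sup-inf distances
    d_ab = max(point_to_set(a, B) for a in A)
    d_ba = max(point_to_set(b, A) for b in B)
    return max(d_ab, d_ba)

A = [(0, 0), (1, 0)]
B = [(0, 1), (5, 0)]
# inf_dist(A, B) is 1 (from (0,0) to (0,1)),
# while hausdorff(A, B) is 4 (the outlier (5,0) is far from all of A)
```

The example shows why the Hausdorff distance is "separating" while the infimum distance is not: a single far-away point dominates the Hausdorff value.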
Graph theory
In graph theory the distance between two vertices is the length of the shortest path between those vertices.
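In an unweighted graph this distance can be computed by breadth-first search; a small sketch (adjacency-list representation assumed):

```python
from collections import deque

def graph_distance(adj, src, dst):
    # BFS yields the length of a shortest path in an unweighted graph
    seen = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return seen[u]
        for v in adj.get(u, []):
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    return float('inf')  # no path: conventionally infinite distance

adj = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
# distance from 'a' to 'd' is 2 (a->b->d or a->c->d)
```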
Distance versus directed distance and displacement
Distance along a path compared with displacement
Distance cannot be negative and distance travelled never decreases. Distance is a scalar quantity or a magnitude, whereas displacement is a vector quantity with both magnitude and direction. Directed distance is a positive, zero, or negative scalar quantity.
The distance covered by a vehicle (for example as recorded by an odometer), person, animal, or object along a curved path from a point
A to a point B should be distinguished from the straight-line distance from A to B. For example, whatever the distance covered during a round trip from A to B and back to A, the displacement is zero, as start and end points coincide. In general the straight-line distance does not equal distance travelled, except for journeys in a straight line.

Directed distance
Directed distances are distances with a directional sense. They can be determined along straight lines and along curved lines. A directed distance of a point
C from point A in the direction of B on a line AB in a Euclidean vector space is the distance from A to C if C falls on the ray AB, but is the negative of that distance if C falls on the ray BA (i.e., if C is not on the same side of A as B is).
A directed distance along a curved line is not a vector and is represented by a segment of that curved line defined by endpoints
A and B, with some specific information indicating the sense (or direction) of an ideal or real motion from one endpoint of the segment to the other (see figure). For instance, just labelling the two endpoints as A and B can indicate the sense, if the ordered sequence (A, B) is assumed, which implies that A is the starting point.

Displacement
A displacement (see above) is a special kind of directed distance defined in mechanics. A directed distance is called displacement when it is the distance along a straight line (minimum distance) from
A to B, and when A and B are positions occupied by the same particle at two different instants of time. This implies motion of the particle. The distance traveled by a particle must always be greater than or equal to its displacement, with equality occurring only when the particle moves along a straight path.
Another kind of directed distance is that between two different particles or point masses at a given time. For instance, the distance from the center of gravity of the Earth
A to the center of gravity of the Moon B (which does not strictly imply motion from A to B) falls into this category.

Other "distances"
Circular distance is the distance traveled by a wheel. The circumference of the wheel is 2π × radius; taking the radius to be 1, each revolution of the wheel corresponds to a distance of 2π. In engineering ω = 2πƒ is often used, where ƒ is the frequency.

References
[1] S. S. Plotkin, PNAS 2007; 104: 14899–14904.
[2] A. R. Mohazab, S. S. Plotkin, "Minimal Folding Pathways for Coarse-Grained Biopolymer Fragments", Biophysical Journal, Volume 95, Issue 12, Pages 5496–5507.
Averaging the above two results.
PRESENTED IN PREPRINT ON FIG 3.
Statistical errors only.
WITH OMEGA/RHO DECAY PARAMETRIZATION.
WITH OMEGA/A1 DECAY PARAMETRIZATION.
Data read from graph.
Data read from graph.
Data read from graph.
Statistical errors only.
Statistical errors only. An entry 0.00 indicates a statistical error of < 0.005.
FROM THE CHANNEL PI- P --> LAMBDA K0 PI0 WHICH HAS A CROSS SECTION OF 72 +- 4 MUB.
FROM THE CHANNEL PI- P --> LAMBDA K+ PI- WHICH HAS A CROSS SECTION OF 79 +- 3 MUB.
FORWARD CROSS SECTION.
Additional systematic uncertainty of 0.4 pct.
Acceptance corrected cross section for cos(theta)<0.8 and for extrapolation to full solid angle. Additional systematic uncertainty of 0.8 pct.
Acceptance corrected cross section for cos(theta)<0.7 and for extrapolation to full solid angle. Additional systematic uncertainty of 2.1 pct.
DATA FROM 1989 RUN. The cross sections are quoted with their statistical and point-to-point systematic uncertainties from both the multihadron acceptance and the luminosity calculation.
DATA FROM 1990 RUN. The cross sections are quoted with their statistical and point-to-point systematic uncertainties from both the multihadron acceptance and the luminosity calculation.
Cross sections corrected for the effects of efficiency and kinematic cuts and background. Data from 1989 run, reanalysed.
Assuming additionally BR(D0-->K PI) of 0.56 +- 0.005.
Corresponding R value.
Inclusive production of ρ0, K*±(892), and f is studied in \(\bar p\)p interactions at 12 GeV/c. The inclusive cross sections for ρ0, K*±(892), and f are found to be 6.7±0.3 mb, 1.0±0.2 mb, and 1.4±0.3 mb, respectively. The differential cross sections are presented as a function of c.m. rapidity, Feynman x, and the square of the transverse momentum pT². Comparison with the corresponding pp data shows some interesting differences which can be attributed to \(\bar p\)p annihilation. The results are compared with the predictions of the quark fusion model.
Axis error includes +- 4/4 contribution.
The photon asymmetry in the reaction p(\vec{\gamma},\pi^{0})p close to threshold has been measured for the first time with the photon spectrometer TAPS using linearly polarized photons from the tagged-photon facility at the Mainz Microtron MAMI. The total and differential cross sections were also measured simultaneously with the photon asymmetry. This allowed determination of the S-wave and all three P-wave amplitudes. The low-energy theorems based on the parameter-free third-order calculations of heavy-baryon chiral perturbation theory for P1 and P2 agree with the experimental values.
Polarized photon beam.
SPIN ROTATION ANGLE MEASUREMENTS.
POLARIZATION MEASUREMENTS FROM THIS EXPERIMENT ALONE.
COMBINED WITH DATA FROM BAKER ET AL., AND SAXON ET AL., (SEE COMMENTS).
MASS DEPENDENCE OF NORMALIZED T-CHANNEL MOMENTS SCALED TO 100 PCT POLARIZED PROTONS.
T DEPENDENCE OF NORMALIZED T-CHANNEL MOMENTS IN THE RHO REGION SCALED TO 100 PCT POLARIZED PROTONS. |
The problem
Find where the following series converges and where it converges absolutely: $\sum_n \dfrac{(-4+4\sqrt3i)^n}{n} \dfrac{1}{(z+3)^{6n}}$.
My attempt. I first made the substitution $w=\dfrac{1}{(z+3)^6}$, so I'm left with the power series $\sum_n \dfrac{(-4+4\sqrt3i)^n}{n} w^n$, whose radius of convergence in terms of $|w|$ I can find using the Cauchy–Hadamard formula. Now I'm left to check whether there's convergence on the boundary of that region, and there I'm having trouble. I can't use the Dirichlet criterion because the main sequence is not decreasing, nor can I use the Dedekind criterion. So I'm led to believe that maybe the series doesn't converge on the boundary, but I can't find a suitable series to compare it to.
For what it's worth, if I made the problem a little bit easier, for example by taking $w=\dfrac{1}{(z+3)}$, I can find the radius of convergence of $\sum_n w^n$, but I can't relate that to $\sum_n a_n w^n$ if $a_n$ doesn't converge to 0.
Any hints would be appreciated.
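(Not a full answer, but the Cauchy–Hadamard step is easy to sanity-check numerically: $|-4+4\sqrt{3}i|=8$, so the radius in $w$ is $1/8$, and the boundary in $z$ is $|z+3|=8^{1/6}=\sqrt{2}$. A quick check, assuming nothing beyond the root test:)

```python
import math

a = complex(-4, 4 * math.sqrt(3))
modulus = abs(a)       # sqrt(16 + 48) = 8
R = 1 / modulus        # radius of convergence in w is 1/8
# |w| < 1/8 means |z+3|^6 > 8, i.e. |z+3| > 8**(1/6) = sqrt(2)
boundary = 8 ** (1 / 6)

# root test on the coefficients: |a^n / n|^(1/n) = 8 / n^(1/n) -> 8;
# computed in logs to avoid overflow for large n
n = 1000
root_test = math.exp((n * math.log(modulus) - math.log(n)) / n)
```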
M3: Introductory Calculus - Material for the year 2019-2020
16 lectures
These lectures are designed to give students a gentle introduction to applied mathematics in their first term at Oxford, allowing time for both students and tutors to work on developing and polishing the skills necessary for the course. It will have an `A-level' feel to it, helping in the transition from school to university. The emphasis will be on developing skills and familiarity with ideas using straightforward examples.
At the end of the course, students will be able to solve a range of ordinary differential equations (ODEs). They will also be able to evaluate partial derivatives and use them in a variety of applications.
General linear homogeneous ODEs: integrating factor for first order linear ODEs, second solution when one solution is known for second order linear ODEs. First and second order linear ODEs with constant coefficients. General solution of linear inhomogeneous ODE as particular solution plus solution of homogeneous equation. Simple examples of finding particular integrals by guesswork. [4]
Introduction to partial derivatives. Second order derivatives and statement of condition for equality of mixed partial derivatives. Chain rule, change of variable, including planar polar coordinates. Solving some simple partial differential equations (e.g. $f_{xy} = 0$, $f_x = f_y$). [3.5]
Parametric representation of curves, tangents. Arc length. Line integrals. [1]
Jacobians with examples including plane polar coordinates. Some simple double integrals calculating area and also $\int_{\mathbb{R}^2} e^{-(x^2+y^2)} dA$. [2]
Simple examples of surfaces, especially as level sets. Gradient vector; normal to surface; directional derivative; $\int^B_A \nabla \phi \cdot d\mathbf{r} = \phi(B)-\phi(A)$.[2]
Taylor's Theorem for a function of two variables (statement only). Critical points and classification using directional derivatives and Taylor's theorem. Informal (geometrical) treatment of Lagrange multipliers.[3.5]
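As a taste of the double-integral material, the Gaussian integral quoted in the synopsis can be checked numerically; a midpoint-rule sketch (truncating the plane to a square, which introduces only an exponentially small error since the integrand decays so fast):

```python
import math

def gaussian_integral(half_width=5.0, n=400):
    # midpoint rule for the double integral of exp(-(x^2 + y^2))
    # over the square [-half_width, half_width]^2
    h = 2 * half_width / n
    total = 0.0
    for i in range(n):
        x = -half_width + (i + 0.5) * h
        ex = math.exp(-x * x)
        for j in range(n):
            y = -half_width + (j + 0.5) * h
            total += ex * math.exp(-y * y)
    return total * h * h

# switching to plane polar coordinates gives the exact value pi
```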
1) M. L. Boas,
Mathematical Methods in the Physical Sciences (Wiley, 3rd Edition, 2005).
2) D. W. Jordan & P. Smith,
Mathematical Techniques (Oxford University Press, 3rd Edition, 2003).
3) E. Kreyszig,
Advanced Engineering Mathematics (Wiley, 10th Edition, 2011).
4) K. A. Stroud,
Advanced Engineering Mathematics (Palgrave Macmillan, 5th Edition, 2011). |
Let $X_1,X_2,\dots$ be a sequence of random variables, identically distributed following a Pareto distribution with parameter $\alpha>0$, such that $P(X_1>x)=x^{-\alpha}$ for $x\geqslant 1$. Show that for a sequence of constants $c_1,c_2,\dots$ we have:
$\limsup\frac{\log(X_n)}{c_n}=\frac{1}{\alpha}$ almost surely.
I thought of the following possible resolution however I am stuck:
I need to prove $P(\limsup\frac{\log(X_n)}{c_n}\neq \frac{1}{\alpha})=0$. By Borel–Cantelli, if I prove $\sum_{n=1}^{\infty}P(\frac{\log(X_n)}{c_n}\neq \frac{1}{\alpha})<\infty$ then it implies $P(\limsup\frac{\log(X_n)}{c_n}\neq \frac{1}{\alpha})=0$.
However I do not know how to make the computations.
Questions:
Is there something wrong with my reasoning? How should I solve this question?
Thanks in advance! |
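(Not an answer, but assuming the intended normalization is $c_n=\log n$ and the $X_i$ are independent, a simulation is consistent with the claim. Using the inverse transform $X=U^{-1/\alpha}$ for $U$ uniform on $(0,1)$, the supremum of $\log X_n/\log n$ over a long tail window settles near $1/\alpha$:)

```python
import math, random

random.seed(0)

def pareto_limsup_estimate(alpha, n_lo=10_000, n_hi=1_000_000):
    # inverse transform: if U ~ Uniform(0,1), then U**(-1/alpha) satisfies
    # P(X > x) = x**(-alpha) for x >= 1
    best = 0.0
    for n in range(n_lo, n_hi):
        x = random.random() ** (-1.0 / alpha)
        best = max(best, math.log(x) / math.log(n))
    return best

# for alpha = 2 the claimed limsup is 1/alpha = 0.5; the tail supremum
# should land in a band around it (this is a heuristic, not a proof)
est = pareto_limsup_estimate(2.0)
```

Note that $P(\log X_n/\log n > t) = n^{-\alpha t}$, which is summable exactly when $t > 1/\alpha$ — the two halves of Borel–Cantelli then give the limsup.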
ISO 217
The ISO 217:1995 standard defines the RA and SRA paper formats.
These paper series are untrimmed raw paper. RA stands for "raw format A" and SRA stands for "supplementary raw format A". The RA and SRA formats are slightly larger than the corresponding A series formats. This allows bleed (ink to the edge) on printed material that will be later cut down to size. After printing and binding, these sheets are cut to match the A format.
The ISO A0 format has an area of 1.00 m². The ISO RA0 format has an area of 1.05 m². The ISO SRA0 format has an area of 1.15 m².
RA series formats (mm)    SRA series formats (mm)
RA0  860 × 1220           SRA0  900 × 1280
RA1  610 × 860            SRA1  640 × 900
RA2  430 × 610            SRA2  450 × 640
RA3  305 × 430            SRA3  320 × 450
RA4  215 × 305            SRA4  225 × 320

Tolerances
Paper in the RA and SRA series formats is intended to have a $1:\sqrt{2}$ aspect ratio, but the dimensions of the start format have been rounded to whole centimetres.
For example, the RA0 format would be $\sqrt{1.05}\cdot 2^{-1/4}\ \text{m} \times \sqrt{1.05}\cdot 2^{1/4}\ \text{m} \approx 861.7\ \text{mm} \times 1218.6\ \text{mm}$, which has been rounded to $860\ \text{mm} \times 1220\ \text{mm}$.
The resulting real ratios are:
$43:61 \approx 1:1.4186$ for RA0, RA2, RA4; $61:86 \approx 1:1.4098$ for RA1, RA3; $45:64 \approx 1:1.4222$ for SRA0, SRA2, SRA4; $32:45 = 1:1.40625$ for SRA1, SRA3.

Other ISO paper standards

ISO 216:1975 defines two series of paper sizes: A and B. ISO 269:1985 defines a C series for envelopes.

International standard paper sizes: ISO 216 details and rationale
ISO 216 at iso.org
ISO 217 at iso.org
ISO 269 at iso.org
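The rounding step described above is easy to reproduce; a small sketch computing the unrounded dimensions from the target area and the $1:\sqrt{2}$ ratio:

```python
import math

def raw_dims_mm(area_m2):
    # width:height = 1:sqrt(2) and width * height = area,
    # so width = sqrt(area) * 2**(-1/4), height = sqrt(area) * 2**(1/4)
    width = math.sqrt(area_m2) * 2 ** -0.25
    height = math.sqrt(area_m2) * 2 ** 0.25
    return width * 1000, height * 1000  # in millimetres

ra0 = raw_dims_mm(1.05)   # ~ (861.7, 1218.6), rounded to 860 x 1220
sra0 = raw_dims_mm(1.15)  # ~ (901.8, 1275.3), rounded to 900 x 1280
```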
Separable Differential Equations
A first order differential equation is \( \textcolor{blue}{\mbox{separable}} \) if it can be written as
\begin{equation} \label{eq:3.5.1}
h(y)y' = g(x), \end{equation}
where the left side is a product of \(y'\) and a function of \(y\) and the right side is a function of \(x\). Rewriting a separable differential equation in this form is called \( \textcolor{blue}{\mbox{separation of variables}} \). In Section 2.1 we used separation of variables to solve homogeneous linear equations. In this section we'll apply this method to nonlinear equations.
To see how to solve \eqref{eq:3.5.1}, let's first assume that \(y\) is a solution. Let \(G(x)\) and \(H(y)\) be antiderivatives of \(g(x)\) and \(h(y)\); that is,
\begin{equation} \label{eq:3.5.2}
H'(y) = h(y) \quad \mbox{and} \quad G'(x) = g(x) \end{equation}
Then, from the chain rule,
\begin{eqnarray*}
{d \over dx} H(y(x)) = H'(y(x)) y'(x) = h(y(x)) y'(x). \end{eqnarray*}
Therefore \eqref{eq:3.5.1} is equivalent to
\begin{eqnarray*}
{d \over dx} H(y(x)) = {d \over dx} G(x). \end{eqnarray*}
Integrating both sides of this equation and combining the constants of integration yields
\begin{equation} \label{eq:3.5.3}
H(y(x)) = G(x) + c. \end{equation}
Although we derived this equation on the assumption that \(y\) is a solution of \eqref{eq:3.5.1}, we can now view it differently: Any differentiable function \(y\) that satisfies \eqref{eq:3.5.3} for some constant \(c\) is a solution of \eqref{eq:3.5.1}. To see this, we differentiate both sides of \eqref{eq:3.5.3}, using the chain rule on the left, to obtain
\begin{eqnarray*}
H'(y(x)) y'(x) = G'(x), \end{eqnarray*}
which is equivalent to
\begin{eqnarray*}
h(y(x))y'(x)=g(x) \end{eqnarray*}
because of \eqref{eq:3.5.2}.
In conclusion, to solve \eqref{eq:3.5.1} it suffices to find functions \(G=G(x)\) and \(H=H(y)\) that satisfy \eqref{eq:3.5.2}. Then any differentiable function \(y=y(x)\) that satisfies \eqref{eq:3.5.3} is a solution of \eqref{eq:3.5.1}.
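This recipe can be sanity-checked numerically. A sketch with an ODE chosen purely for illustration, $e^y y'=x$ (so $H(y)=e^y$, $G(x)=x^2/2$, giving $y=\ln(x^2/2+c)$):

```python
import math

# separable ODE: e^y * y' = x, i.e. h(y) = e^y, g(x) = x.
# Antiderivatives: H(y) = e^y, G(x) = x^2/2, so H(y) = G(x) + c gives
def y(x, c=1.0):
    return math.log(x * x / 2 + c)

# finite-difference check that y' = x * e^{-y} at a few points
h = 1e-6
for x in (0.5, 1.0, 2.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)  # central difference
    assert abs(dydx - x * math.exp(-y(x))) < 1e-5
```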
Example \(\PageIndex{1}\)
Solve the equation
\begin{eqnarray*}
y'=x(1+y^2). \end{eqnarray*} Answer
Separating variables yields
\begin{eqnarray*}
{y'\over 1+y^2}=x. \end{eqnarray*}
Integrating yields
\begin{eqnarray*}
\tan^{-1}y={x^2\over2}+c \end{eqnarray*}
Therefore
\begin{eqnarray*}
y=\tan\left({x^2\over2}+c\right). \end{eqnarray*} Example \(\PageIndex{2}\)
(a) Solve the equation
\begin{equation} \label{eq:3.5.4}
y'=-{x\over y}. \end{equation}
(b) Solve the initial value problem
\begin{equation} \label{eq:3.5.5}
y'=-{x\over y}, \quad y(1)=1. \end{equation}
(c) Solve the initial value problem
\begin{equation} \label{eq:3.5.6}
y'=-{x\over y}, \quad y(1)=-2. \end{equation} Answer
(a) Separating variables in \eqref{eq:3.5.4} yields
\begin{eqnarray*}
yy'=-x. \end{eqnarray*}
Integrating yields
\begin{eqnarray*}
{y^2\over2}=-{x^2\over2}+c, \quad \mbox{ or, equivalently, } \quad x^2+y^2=2c. \end{eqnarray*}
The last equation shows that \(c\) must be positive if \(y\) is to be a solution of \eqref{eq:3.5.4} on an open interval. Therefore we let \(2c=a^2\) (with \(a > 0\)) and rewrite the last equation as
\begin{equation} \label{eq:3.5.7}
x^2+y^2=a^2. \end{equation}
This equation has two differentiable solutions for \(y\) in terms of \(x\):
\begin{equation} \label{eq:3.5.8}
y=\phantom{-} \sqrt{a^2-x^2}, \quad -a < x < a, \end{equation}
and
\begin{equation} \label{eq:3.5.9}
y= - \sqrt{a^2-x^2}, \quad -a < x < a. \end{equation}
The solution curves defined by \eqref{eq:3.5.8} are semicircles above the \(x\)-axis and those defined by \eqref{eq:3.5.9} are semicircles below the \(x\)-axis (Figure \(3.5.1\)).
(b) The solution of \eqref{eq:3.5.5} is positive when \(x=1\); hence, it is of the form \eqref{eq:3.5.8}. Substituting \(x=1\) and \(y=1\) into \eqref{eq:3.5.7} to satisfy the initial condition yields \(a^2=2\); hence, the solution of \eqref{eq:3.5.5} is
\begin{eqnarray*}
y=\sqrt{2-x^2}, \quad - \sqrt{2}< x < \sqrt{2}. \end{eqnarray*}
(c) The solution of \eqref{eq:3.5.6} is negative when \(x=1\) and is therefore of the form \eqref{eq:3.5.9}. Substituting \(x=1\) and \(y=-2\) into \eqref{eq:3.5.7} to satisfy the initial condition yields \(a^2=5\). Hence, the solution of \eqref{eq:3.5.6} is
\begin{eqnarray*}
y=- \sqrt{5-x^2}, \quad -\sqrt{5} < x < \sqrt{5}. \end{eqnarray*} Figure \(3.5.1\)
(Figure \(3.5.1\): (a) \(y=\sqrt{2-x^{2}}\), \(-\sqrt{2}<x<\sqrt{2}\); (b) \(y=-\sqrt{5-x^{2}}\), \(-\sqrt{5}<x<\sqrt{5}\).)
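A quick finite-difference check of the part (b) solution:

```python
import math

def y(x):
    # the part (b) solution y = sqrt(2 - x^2) of y' = -x/y, y(1) = 1
    return math.sqrt(2 - x * x)

# initial condition
assert abs(y(1.0) - 1.0) < 1e-12

# the ODE itself, via a central difference at an interior point
x, h = 0.5, 1e-6
dydx = (y(x + h) - y(x - h)) / (2 * h)
assert abs(dydx - (-x / y(x))) < 1e-6
```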
Implicit Solutions of Separable Equations
In Examples \((3.5.1)\) and \((3.5.2)\) we were able to solve the equation \(H(y)=G(x)+c\) to obtain explicit formulas for solutions of the given separable differential equations. As we'll see in the next example, this isn't always possible. In this situation we must broaden our definition of a solution of a separable equation. The next theorem provides the basis for this modification. We omit the proof, which requires a result from advanced calculus called the \( \textcolor{blue}{\mbox{implicit function theorem}} \).
Theorem \(\PageIndex{1}\)
Suppose \(g=g(x)\) is continuous on \((a,b)\) and \(h=h(y)\) is continuous on \((c,d).\) Let \(G\) be an antiderivative of \(g\) on \((a,b)\) and let \(H\) be an antiderivative of \(h\) on \((c,d).\) Let \(x_0\) be an arbitrary point in \((a,b),\) let \(y_0\) be a point in \((c,d)\) such that \(h(y_0)\ne0,\) and define
\begin{equation} \label{eq:3.5.10}
c=H(y_0)-G(x_0). \end{equation}
Then there's a function \(y=y(x)\) defined on some open interval \((a_1,b_1),\) where \(a\le a_1<x_0<b_1\le b,\) such that \(y(x_0)=y_0\) and
\begin{equation} \label{eq:3.5.11}
H(y)=G(x)+c \end{equation}
for \(a_1<x<b_1\). Therefore \(y\) is a solution of the initial value problem
\begin{equation} \label{eq:3.5.12}
h(y)y'=g(x),\quad y(x_0)=y_0. \end{equation}
It's convenient to say that \eqref{eq:3.5.11} with \(c\) arbitrary is an \( \textcolor{blue}{\mbox{implicit solution}} \) of \(h(y)y'=g(x)\). Curves defined by \eqref{eq:3.5.11} are integral curves of \(h(y)y'=g(x)\). If \(c\) satisfies \eqref{eq:3.5.10}, we'll say that \eqref{eq:3.5.11} is an \( \textcolor{blue}{\mbox{implicit solution of the initial value problem}} \) \eqref{eq:3.5.12}. However, keep these points in mind:
a. For some choices of \(c\) there may not be any differentiable functions \(y\) that satisfy \eqref{eq:3.5.11}.
b. The function \(y\) in \eqref{eq:3.5.11} (not \eqref{eq:3.5.11} itself) is a solution of \(h(y)y'=g(x)\).
Example \(\PageIndex{3}\)
(a) Find implicit solutions of
\begin{equation} \label{eq:3.5.13}
y'={2x+1\over5y^4+1}. \end{equation}
(b) Find an implicit solution of
\begin{equation} \label{eq:3.5.14}
y'={2x+1\over5y^4+1},\quad y(2)=1. \end{equation} Answer
(a) Separating variables yields
\begin{eqnarray*}
(5y^4+1)y'=2x+1. \end{eqnarray*}
Integrating yields the implicit solution
\begin{equation} \label{eq:3.5.15}
y^5+y=x^2+x+ c. \end{equation}
of \eqref{eq:3.5.13}.
(b) Imposing the initial condition \(y(2)=1\) in \eqref{eq:3.5.15} yields \(1+1=4+2+c\), so \(c=-4\). Therefore
\begin{eqnarray*}
y^5+y=x^2+x-4 \end{eqnarray*}
is an implicit solution of the initial value problem \eqref{eq:3.5.14}. Although more than one differentiable function \(y=y(x)\) satisfies \eqref{eq:3.5.15} near \(x=2\), it can be shown that there's only one such function that satisfies the initial condition \(y(2)=1\).
Figure \(3.5.2\) shows a direction field and some integral curves for \eqref{eq:3.5.13}.
Figure \(3.5.2\)
(Figure \(3.5.2\): a direction field and integral curves for \(y'=\frac{2x+1}{5y^{4}+1}\).)
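Since $y^5+y=x^2+x+c$ cannot be inverted in closed form, values of the implicit solution of the initial value problem can be computed numerically; a bisection sketch (exploiting that $y\mapsto y^5+y$ is strictly increasing, so the root is unique):

```python
def F(y, x):
    # implicit relation y^5 + y - (x^2 + x - 4) = 0 from the example
    return y ** 5 + y - (x * x + x - 4)

def y_of_x(x, lo=-10.0, hi=10.0, tol=1e-12):
    # y^5 + y is strictly increasing in y, so bisection is safe,
    # provided the root lies in [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if F(mid, x) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# the initial condition y(2) = 1 is recovered: 1^5 + 1 = 4 + 2 - 4
```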
Constant Solutions of Separable Equations
An equation of the form
\begin{eqnarray*}
y'=g(x)p(y) \end{eqnarray*}
is separable, since it can be rewritten as
\begin{eqnarray*}
{1\over p(y)}y'=g(x). \end{eqnarray*}
However, the division by \(p(y)\) is not legitimate if \(p(y)=0\) for some values of \(y\). The next two examples show how to deal with this problem.
Example \(\PageIndex{4}\)
Find all solutions of
\begin{equation} \label{eq:3.5.16}
y'=2xy^2. \end{equation} Answer
Here we must divide by \(p(y)=y^2\) to separate variables. This isn't legitimate if \(y\) is a solution of \eqref{eq:3.5.16} that equals zero for some value of \(x\). One such solution can be found by inspection: \(y \equiv 0\). Now suppose \(y\) is a solution of \eqref{eq:3.5.16} that isn't identically zero. Since \(y\) is continuous there must be an interval on which \(y\) is never zero. Since division by \(y^2\) is legitimate for \(x\) in this interval, we can separate variables in \eqref{eq:3.5.16} to obtain
\begin{eqnarray*}
{y'\over y^2}=2x. \end{eqnarray*}
Integrating this yields
\begin{eqnarray*}
-{1\over y}=x^2+c, \end{eqnarray*}
which is equivalent to
\begin{equation} \label{eq:3.5.17}
y=-{1\over x^2+c}. \end{equation}
We've now shown that if \(y\) is a solution of \eqref{eq:3.5.16} that is not identically zero, then \(y\) must be of the form \eqref{eq:3.5.17}. By substituting \eqref{eq:3.5.17} into \eqref{eq:3.5.16}, you can verify that \eqref{eq:3.5.17} is a solution of \eqref{eq:3.5.16}. Thus, solutions of \eqref{eq:3.5.16} are \(y\equiv0\) and the functions of the form \eqref{eq:3.5.17}. Note that the solution \(y\equiv0\) isn't of the form \eqref{eq:3.5.17} for any value of \(c\).
Figure \(3.5.3\) shows a direction field and some integral curves for \eqref{eq:3.5.16}
Figure \(3.5.3\)
(Figure \(3.5.3\): a direction field and integral curves for \(y'=2xy^{2}\).)
Example \(\PageIndex{5}\)
Find all solutions of
\begin{equation} \label{eq:3.5.18}
y'={1\over2}x(1-y^2). \end{equation} Answer
Here we must divide by \(p(y)=1-y^2\) to separate variables. This isn't legitimate if \(y\) is a solution of \eqref{eq:3.5.18} that equals \(\pm1\) for some value of \(x\). Two such solutions can be found by inspection: \(y \equiv 1\) and \(y\equiv-1\). Now suppose \(y\) is a solution of \eqref{eq:3.5.18} such that \(1-y^2\) isn't identically zero. Since \(1-y^2\) is continuous there must be an interval on which \(1-y^2\) is never zero. Since division by \(1-y^2\) is legitimate for \(x\) in this interval, we can separate variables in \eqref{eq:3.5.18} to obtain
\begin{eqnarray*}
{2y'\over y^2-1}=-x. \end{eqnarray*}
A partial fraction expansion on the left yields
\begin{eqnarray*}
\left[{1\over y-1}-{1\over y+1}\right]y'=-x, \end{eqnarray*}
and integrating yields
\begin{eqnarray*}
\ln\left|{y-1\over y+1}\right|=-{x^2\over2}+k; \end{eqnarray*}
hence,
\begin{eqnarray*}
\left|{y-1\over y+1}\right|=e^ke^{-x^2/2}. \end{eqnarray*}
Since \(y(x)\ne\pm1\) for \(x\) on the interval under discussion, the quantity \((y-1)/(y+1)\) can't change sign in this interval. Therefore we can rewrite the last equation as
\begin{eqnarray*}
{y-1\over y+1}=ce^{-x^2/2}, \end{eqnarray*}
where \(c=\pm e^k\), depending upon the sign of \((y-1)/(y+1)\) on the interval. Solving for \(y\) yields
\begin{equation} \label{eq:3.5.19}
y={1+ce^{-x^2/2}\over 1-ce^{-x^2/2}}. \end{equation}
We've now shown that if \(y\) is a solution of \eqref{eq:3.5.18} that is not identically equal to \(\pm1\), then \(y\) must be as in \eqref{eq:3.5.19}. By substituting \eqref{eq:3.5.19} into \eqref{eq:3.5.18} you can verify that \eqref{eq:3.5.19} is a solution of \eqref{eq:3.5.18}. Thus, the solutions of \eqref{eq:3.5.18} are \(y\equiv1\), \(y\equiv-1\) and the functions of the form \eqref{eq:3.5.19}. Note that the constant solution \(y \equiv 1\) can be obtained from this formula by taking \(c=0\); however, the other constant solution, \(y \equiv -1\), can't be obtained in this way.
Figure \(3.5.4\) shows a direction field and some integral curves for \eqref{eq:3.5.18}.
Figure \(3.5.4\)
(Figure \(3.5.4\): a direction field and integral curves for \(y'=\frac{x(1-y^{2})}{2}\).)
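A finite-difference check of the one-parameter family \eqref{eq:3.5.19}, with \(c\) chosen arbitrarily:

```python
import math

def y(x, c=0.5):
    # the family y = (1 + c e^{-x^2/2}) / (1 - c e^{-x^2/2});
    # with |c| < 1 the denominator never vanishes for real x
    e = c * math.exp(-x * x / 2)
    return (1 + e) / (1 - e)

# verify y' = (1/2) x (1 - y^2) at a few points
h = 1e-6
for x in (0.0, 1.0, 2.0):
    lhs = (y(x + h) - y(x - h)) / (2 * h)  # central difference
    rhs = 0.5 * x * (1 - y(x) ** 2)
    assert abs(lhs - rhs) < 1e-4
```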
Differences Between Linear and Nonlinear Equations
Theorem \(3.4.2\) states that if \(p\) and \(f\) are continuous on \((a,b)\) then every solution of
\begin{eqnarray*}
y'+p(x)y=f(x) \end{eqnarray*}
on \((a,b)\) can be obtained by choosing a value for the constant \(c\) in the general solution, and if \(x_0\) is any point in \((a,b)\) and \(y_0\) is arbitrary, then the initial value problem
\begin{eqnarray*}
y'+p(x)y=f(x),\quad y(x_0)=y_0 \end{eqnarray*}
has a solution on \((a,b)\).
Neither statement is true for nonlinear equations. First, we saw in Examples \((3.5.4)\) and \((3.5.5)\) that a nonlinear equation may have solutions that can't be obtained by choosing a specific value of a constant appearing in a one-parameter family of solutions. Second, it is in general impossible to determine the interval of validity of a solution to an initial value problem for a nonlinear equation by simply examining the equation, since the interval of validity may depend on the initial condition. For instance, in Example \((3.5.2)\) we saw that the solution of
\begin{eqnarray*}
{dy\over dx}=-{x\over y},\quad y(x_0)=y_0 \end{eqnarray*}
is valid on \((-a,a)\), where \(a=\sqrt{x_0^2+y_0^2}\).
Example \(\PageIndex{6}\)
Solve the initial value problem
\begin{eqnarray*}
y'=2xy^2, \quad y(0)=y_0 \end{eqnarray*}
and determine the interval of validity of the solution.
Answer
First suppose \(y_0\ne0\). From Example \((3.5.4)\), we know that \(y\) must be of the form
\begin{equation} \label{eq:3.5.20}
y=-{1\over x^2+c}. \end{equation}
Imposing the initial condition shows that \(c=-1/y_0\). Substituting this into \eqref{eq:3.5.20} and rearranging terms yields the solution
\begin{eqnarray*}
y= {y_0\over 1-y_0x^2}. \end{eqnarray*}
This is also the solution if \(y_0=0\). If \(y_0<0\), the denominator isn't zero for any value of \(x\), so the solution is valid on \((-\infty,\infty)\). If \(y_0>0\), the solution is valid only on \((-1/\sqrt{y_0},1/\sqrt{y_0})\).
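The finite interval of validity for \(y_0>0\) also shows up in a direct numerical integration; an Euler-method sketch (step size and blow-up threshold chosen ad hoc):

```python
def euler_blowup_x(y0, h=1e-4, cap=1e6):
    # integrate y' = 2 x y^2 forward from x = 0 by Euler steps
    # until the solution exceeds a large cap
    x, y = 0.0, y0
    while y < cap:
        y += h * 2 * x * y * y
        x += h
        if x > 10:
            return None  # no blow-up detected in the window
    return x

# the exact solution y = y0 / (1 - y0 x^2) blows up at x = 1/sqrt(y0);
# for y0 = 1 that is x = 1, and the numerical estimate lands nearby
x_blow = euler_blowup_x(1.0)
```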
To define characteristic classes on smooth vector bundles $E\longrightarrow M$ there is a more or less standard procedure: choose a connection $\nabla$ and derive the curvature $\Omega$, which is an $End(E)$-valued 2-form. In each chart $U_\alpha$, $\Omega$ may be described by an $r\times r$ matrix ($r$ the rank of $E$) whose entries are 2-forms. The matrices change when the chart changes, but due to the tensoriality and the nature of $\Omega$, some quantities such as the trace or the determinant do not change on overlapping charts. (See, for instance, the first chapter of Lecture Notes on Seiberg-Witten Invariants.)
Now to take full advantage of these invariant quantities (= to define Chern classes) one considers powers of the curvature matrix and their traces:
$$\Bigl(\frac{i}{2\pi}\Omega_\alpha\Bigr)^k\qquad\text{tr}\Bigl[\Bigl(\frac{i}{2\pi}\Omega_\alpha\Bigr)^k\Bigr]$$
The procedure is fine, but if one studies it more closely, one realizes that we are in fact defining some 'pseudo-wedge' map
$$\Omega^p(End(E))\times\Omega^q(End(E))\longrightarrow\Omega^{p+q}(End(E))$$
by simply taking the product of matrices whose entries are forms. But the question is
Is there any way to define this pseudo-wedge product intrinsically, that is, by using only the classical wedge product $\Omega^p(M)\times\Omega^q(M)\longrightarrow\Omega^{p+q}(M)$ together with some linear algebra? Perhaps there is already some book making an explicit definition; in this case it would be most helpful for me to have good references.
Any idea or suggestion is welcome.
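Not an intrinsic definition, but one invariance underlying the construction is worth making explicit: on overlaps the local curvature matrices are conjugated, $\Omega_\beta = g\,\Omega_\alpha\,g^{-1}$, and $\operatorname{tr}(\Omega^k)$ is conjugation-invariant. Ignoring the 2-form grading (entries replaced by plain complex numbers, which commute just as even-degree forms do), this reduces to elementary linear algebra:

```python
# pure-Python 2x2 complex matrix helpers; entries stand in for the
# (commuting) even-degree forms appearing in a local trivialization
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def matpow(A, k):
    R = [[1, 0], [0, 1]]
    for _ in range(k):
        R = matmul(R, A)
    return R

def inv2(g):
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return [[g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det, g[0][0] / det]]

Omega = [[1 + 2j, 3], [0.5j, -1]]     # a made-up local curvature matrix
g = [[2, 1], [1, 1]]                  # an invertible transition "function"
conj = matmul(matmul(g, Omega), inv2(g))

# tr(Omega^k) is unchanged under conjugation, which is exactly why the
# Chern forms glue to globally defined forms
for k in (1, 2, 3):
    assert abs(trace(matpow(conj, k)) - trace(matpow(Omega, k))) < 1e-9
```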
EDIT: See the discussion below about the definition in Wikipedia and the relationship with the curvature of connections in vector bundles.
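For what it's worth, one standard intrinsic formulation (a sketch of the usual construction; the notation is mine) combines the scalar wedge product with composition in the endomorphism factor:

```latex
% For decomposable elements \alpha\otimes A \in \Omega^p(M)\otimes\Gamma(End(E))
% and \beta\otimes B \in \Omega^q(M)\otimes\Gamma(End(E)), define
(\alpha\otimes A)\wedge(\beta\otimes B) \;=\; (\alpha\wedge\beta)\otimes(A\circ B),
% extended bilinearly to all of \Omega^p(End(E))\times\Omega^q(End(E)).
% In a local frame this reproduces the product of matrices of 2-forms used above,
% and applying \mathrm{tr} afterwards is frame-independent.
```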
The question title says it all: We know that in general, specifying the short rate $r(t)$ does
not specify the bond prices $P(t, T)$. So how can a model for short rates—for example the Vasicek model—be powerful enough to price interest rate derivatives?
Let $r(s)$ be the process of a short rate. Then, by risk neutral pricing, $$ P(t,T) = \mathbb{E}^\mathbb{Q}\left[ \exp\left( -\int_t^T r(s)\mathrm{d}s\right) \Bigg| \mathcal{F}_t\right].$$ Thus, the zero-coupon bond is determined completely by the short rate process. Here, $P(t,T)$ denotes the time $t$ price of a zero-coupon bond maturing at time $T$. You just take the risk-neutral expectation of the discounted payoff. The payoff is $1$ for almost all states of the world $\omega\in\Omega$ (assuming no default risk). Thus, the price of the bond is the conditional expectation of the discount factor. The risk-neutral measure $\mathbb{Q}$ uses a bank account $(B_t)$ as numeraire with $\mathrm{d}B_t=r(t)B_t\mathrm{d}t$.
Short rate models (such as Vasicek, Hull-White, CIR, etc.) specify a stochastic model for $r(s)$, typically a (perhaps multidimensional) SDE and then, you can find (sometimes analytical) prices for bonds, bond options, swaptions etc.
The easiest case is a deterministic and constant short rate $r(s)\equiv r$. Then, $$P(t,T)=e^{-r(T-t)}$$ and clearly the short rate $r$ gives you the bond price.
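As an illustration (a sketch, not production code; parameter names are mine), the Vasicek model $\mathrm{d}r = a(b-r)\mathrm{d}t + \sigma\,\mathrm{d}W$ admits the closed-form bond price $P(t,T) = A\,e^{-B\,r(t)}$, which collapses to the deterministic case $e^{-r(T-t)}$ when $\sigma = 0$ and $r_0 = b$:

```python
import math

def vasicek_bond_price(r0, a, b, sigma, tau):
    """Zero-coupon bond price P(t, T), with tau = T - t, under the Vasicek model."""
    B = (1 - math.exp(-a * tau)) / a
    A = math.exp((B - tau) * (a**2 * b - sigma**2 / 2) / a**2
                 - sigma**2 * B**2 / (4 * a))
    return A * math.exp(-B * r0)

# With sigma = 0 and r0 = b the short rate stays constant,
# so the price reduces to exp(-r * tau).
p = vasicek_bond_price(r0=0.05, a=0.5, b=0.05, sigma=0.0, tau=1.0)
print(p, math.exp(-0.05))
```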
The more radical riders prefer Oristano and its surroundings, because when the frequent mistral wind rages at Capo Mannu you can jump the biggest waves in the Mediterranean (3-5 meters) with the wind perfectly side-on, or choose somewhere a little more comfortable logistically.
The great variety of the landscape, the changing scenery and an ever-different orography allow a great variety of sporting activities to be practiced at Punta Trettu.
Surely the beauty and transparency of the waters make snorkeling, underwater fishing, free diving, and scuba diving unforgettable experiences, and Camping Coccorrocci will provide you with all the assistance you need to practice them.
If you prefer to be immersed in the sea, the thrill of “surfing” above twenty knots, pulled by a sail in the sky pushed by the wind, the exceptional characteristics of the Coccorrocci coast will transform the practice of kite surfing or wind surfing into a mine of emotions, whether you are an expert athlete in these activities or a novice. Constant and never violent winds, long sandy coasts and a breathtaking landscape, will be the ideal stage to prove yourself and have fun among the waves.
For lovers of jogging, trail running or road or dirt cycling, the vast territory that acts as a stage at Camping Coccorrocci offers plenty of choice, and a trained staff will be able to advise and guide you for the best experience possible.
A short distance from the campsite, the coast of Museddu offers the possibility of a horseback ride on an almost endless track, given that for almost nine kilometers you will have the chance to gallop with the sea to your right and a splendid pine forest to your left. The essence of a holiday in Sardinia, the rediscovery of the bond with nature, with animals, with one’s body. Doing sport and at the same time discovering a culture, regenerating yourself in a vacation that will become a cure for your body and your spirit.
Kitefoil introduction
Foil is a technology that allows a hull (propelled by a motor or in this case a sail) to
emerge totally from the water, thanks to the hydrodynamic action of the submerged surface.
In fact, the water pressure under the wings, combined with the low pressure that forms above them, generates a lift force opposing the weight, and allows a great reduction in resistance to motion and consequently an increase in efficiency.
The curve in the figure qualitatively shows the rapid reduction in resistance once, having reached a certain speed, the hull comes out of the water.
Kitefoil is composed of the following elements:
Fuselage: It extends in length in the direction of motion and transmits the sustaining force to the hull through the mast, to which it is connected; Mast or Keel: It transmits the sustaining force to the hull, connecting it to the fuselage and to the immersed surfaces that create the lift; Supporting and stabilizing wings: These are the surfaces that create lift. The first is able to give all the lift required to separate the hull from the surface of the water, while the second balances the moment provided by the first, with a consequent stabilizing effect.
History of the hydrofoil
Hydrofoils have been used in different types of boats for over 100 years.
The first person to design and build a hydrofoil was an Italian named Enrico Forlanini, in 1906. For his hydrofoil Forlanini used a system of 4 groups of parallel wings (a pair in the bow and a pair in the stern) of decreasing width, unlike the single hydrofoil wings in use today.
Forlanini’s design was resumed and improved by various other inventors over the following decades (in particular Alexander Graham Bell and Casey Baldwin), until around the
1950s the world began to invest massively in boats using hydrofoil fins, for both military and commercial use. The boom was reached in the 1960s-70s, but since then their use in motor boats has gradually decreased, due to various problems: not only construction and maintenance costs, but also safety and environmental issues. Materials for hydrofoils were in fact metallic, the same used for the structure of the boat.
The same problems affected hydrofoils used in sailing and hobby disciplines, which began in the 1960s but were soon abandoned.
Since the turn of the century, investments in this technology have resumed, mainly because new composite materials have made it possible to produce extremely light and resistant appendages; hydrofoil research began again in order to identify the best shape and structure for every hull and wind. Wide interest in hydrofoil sailing technology spread through the media thanks to its use in the 2013 America’s Cup. Some sectors in which the foil has developed, however, are only now becoming popular. Unfortunately, research has already reached a moment of stagnation, because the significant risks involved in the sector do not attract investors’ interest.
Hydrofoil in kitesurfing
The application of hydrofoil to kitesurfing dates back to the 2000s. The design of modern hydrofoil for kitesurfing varies in geometry based on its type of use. The main categories are:
beginner; freestyle; racing boards. Kitefoils for beginners are designed to be stable at low speeds.
Those for
freestyle instead are more suitable for performing acrobatics and jumps and therefore have greater maneuverability, in addition to being structurally more resistant, in order to be able to withstand impacts on landing jumps. Racing kitefoils are designed to reach the highest possible speeds with the greatest stability for all the different wind conditions. To do this, the latter have a minimal design and are made of carbon fiber, to be as light and resistant as possible.
Hydrofoil for kitesurfing (also called kitefoil) is a combination of various components, each with a very precise function. Although it is easy to design a single fin suited to a certain sea condition, it is far more complicated to create a kitefoil that is best suited to a wide range of wind conditions, and therefore to a larger speed range.
To best explain the operation of the components of a hydrofoil fin, it is important initially to understand the most important
moments to which the board is subjected: roll, pitch and yaw.
Pitch Yaw and Roll in a Hydrofoil
To understand how a kitefoil works we can consider just the first two, roll and pitch. Yaw can be ignored because the load conditions are approximately symmetric and the mast twist can therefore be neglected.
Kitefoils must produce enough lift to rise out of the water, giving support to the kitesurfer, and at the same time produce a moment of such magnitude as to allow balancing. The lift created must be sufficient in a wide range of speeds from the starting speed (“take off” speed) to the maximum speed (“top” speed).
The
take-off speed is the speed at which lift begins to be sufficient to allow the kitesurfer and the board to separate from the water. As resistance decreases, due to the fact that the board is now no longer in contact with water but with air (whose density is about 1000 times lower than that of water), there is an increase in speed; this increase in speed corresponds to an increase in lift for the main foil, and a change in lift capacity of the stabilizer, which may vary depending on the type used, as will be discussed below.
There are two different functioning systems of the
stabilizer, which can have either a positive or a negative bearing capacity. In the case of the positive flow stabilizer, in order to balance the moment, the force Fp (the weight of the kiter minus the force exerted by the kite) must have an arm smaller than in the second case, and therefore the kiter must have a greater ability to stay in equilibrium. The balance of moments becomes evident in the behavior of every kitesurfer who uses kitefoils, who centers the back foot on the mast and uses the front foot to apply a force that balances the moment. In simplified terms, the board represents a lever on which the rider applies a force, balancing the force of the stabilizer with his front foot and counteracting the moment generated by the load-bearing and resistant forces.
The stabilizer moment and the rider’s need to counterbalance it, leads to a more stable equilibrium, and the rider’s
ability lies in maintaining the balance in situations of variable winds and during maneuvers such as tacks or jibes.
Contrary to its name, the negative flow stabilizer improves stability, even though its lift-to-area ratio is reduced and therefore its efficiency decreases. The task of the kitefoil designer is to create a geometry that provides both: a sufficient bearing capacity in a wide range of wind conditions; and a stabilizing moment sufficient to allow the achievement of equilibrium.
So, ultimately it is required to
maximize the lift/resistance ratio without unduly compromising stability. The design of a kitefoil is subject to a number of constraints that must be considered in the optimization phase. If one wants to design a kitefoil for racing, one should consider the rules imposed by the IKA (International Kitefoil Association), which specify that the maximum length of a kitefoil (measured perpendicularly to the board) cannot exceed 5000 mm (in the current state of the art, foils are about 1.2 m long, far from 5 m). Furthermore, at most one appendage is allowed, and its purpose must be mainly to create lift. No limitations are imposed regarding materials. Other limitations that must be considered in the design of a kitefoil concern the structural design, since the kitefoil must have an optimized geometry that is easy to build and at the same time able to withstand the stresses to which it is subjected.
Theoretical bases
To understand the functioning of the hydrofoil, we have to analyze the physics of a simple wing profile. The wing of the hydrofoil creates a lift force, perpendicular to the flow direction, and a drag force, oriented along the flow direction. The angle of attack α is the angle between the flow direction and the chord line.
The lift produced by a profile is directly proportional to the area of the wing surface $A$ and to the square of the relative velocity of the flow $v$; it also depends on the density of the fluid $\rho$ and on the lift coefficient $C_L$:
$$F_L = \frac{1}{2}\rho A C_L v^2$$
Drag is a function of the wing surface $A$, the relative speed of the water $v$, the drag coefficient $C_D$, and the water density $\rho$:
$$F_D = \frac{1}{2}\rho A C_D v^2$$
The dimensionless lift and drag coefficients $C_L$ and $C_D$ can be calculated analytically, numerically, or experimentally, and are functions of the profile shape:
$$C_L = \frac{F_L}{\frac{1}{2}\rho A v^2} \qquad C_D = \frac{F_D}{\frac{1}{2}\rho A v^2}$$
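These formulas are easy to evaluate numerically; here is a minimal sketch with illustrative values of my own choosing (not measured data for any real foil):

```python
RHO_WATER = 1025.0  # kg/m^3, typical seawater density

def lift_and_drag(area_m2, speed_ms, c_l, c_d, rho=RHO_WATER):
    """Lift and drag forces on a submerged wing: F = 0.5 * rho * A * C * v^2."""
    q = 0.5 * rho * area_m2 * speed_ms**2  # dynamic pressure times area
    return q * c_l, q * c_d

# Example: a 0.08 m^2 front wing at 8 m/s with assumed C_L = 0.6, C_D = 0.05
lift, drag = lift_and_drag(0.08, 8.0, 0.6, 0.05)
print(lift, drag)
```

With these numbers the lift (about 1.6 kN) comfortably exceeds the weight of a rider plus board, which is consistent with take-off being possible well below top speed.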
Golden Rule for beginners: the bigger the better. OK, so you have done a few lessons, rented a board a few times, and now your love for surfing has really grown. You are in the market for picking out a new board.
Which surfboard suits me?
Our advice for
beginners is always the bigger the better.
We are talking of a longboard around
8’ or longer, preferably “soft top”, for two reasons: the soft top makes it a lot more buoyant for finding your balance and adjusting your pop-ups; and it supplements your paddling skills. Believe in the magic of the soft tops for making it easier to paddle fast enough to catch that curling wave.
If you have
already rented a couple of times, you probably already know that you always have to check the board first for any dents or imperfections, because a smooth surface is a sweet ride.
If you are thinking: “A NEW BOARD! That will cost way too much!”, you should also be aware that there is always the option of
second-hand boards, and most local surf or skate shops have classic oldies kicking around for prices much more in the surf-and-chill bracket.
The longer you surf, the more comfortable you’ll get with your board. People think that as your skill level goes up, you will edge towards a smaller short board or Malibu board, but it really depends on what kind of wave you’re in the market for.
Our advice is that if you want to keep catching those small or even bigger waves, doing a bit of dancing and having chill fun on the board, you should look at staying with your longboard, possibly just moving from a soft top to a resin one.
If some strong carving, fast tricks and one day some barrel action is in your expectations, a smaller, sharper board is definitely more suited for you. There are three variables you need to always keep in mind when choosing the right board:
height, weight, and your own surfing ability. Once these bad boys are taken into consideration you just need to choose the best board for your personal taste, be it bright purple or charcoal black.
PS: If you are going to be surfing on a regular basis, it’s a good idea to build yourself a quiver of different boards of all shapes and sizes, so that you can be out in the water everyday.
ORISTANO. From the Oristano waves of Capo Mannu to the Spanish ones of Almerimar to win the silver medal of the first edition of the World Youth Windsurfing Championship organized by the Club Victor Fernàndez and branded Pwa Youth World Cup.
This is the great new conquest of the very young and promising waver Nicolò Spanu who, at just 13 years of age with his board, managed to tame the wind and ride the wave of success that led him to one of the most coveted podiums of youth windsurfing.
Following in his father’s footsteps, Nicolò inherited his passion from Matteo Spanu, 42 years old and from Oristano, one of the most famous Italian wavers and a federal instructor at the Eolo windsurfing school, who, in the same days, was awarded second place in the Master category and fourth place overall in the Spanish national windsurfing championship Cef, which also took place in the waters of western Almeria.
“After a few days of waiting – reports Matteo Spanu about the child – the climatic conditions have become favorable, the waves and strong winds have arrived that have allowed the six finalist boys to enter the water to compete in the discipline waves and to Nicolò to earn a deserved second place “.
The competition was attended by the 38 best young athletes from all over the world who faced each other in the different age categories, including the Youth Male Under 17 for which Nicolò competed supported by the sponsors ASD Lions fit club, Sabarrastyle, GAsails, Mormaii, 99bords, Maverx mart, AL360 and LSDfins.
Nicolò climbed to his first table when he was very small and grew up together with his great passion that he was able to cultivate by challenging the waves of the whole world following the trail trodden by his father, who is organizing the Italian windsurfing championship to be held in Funtana Meiga in the spring. Maui, Brazil, Cape Verde, South Africa are just some of the stops that have enriched the waver curriculum of the two Oristano athletes and allowed them to collect one medal after another.
But the young competitor proves to be a champion even out of the water, managing to balance his sporting commitments with school and to obtain excellent results in the classroom as well. “School has always been a priority – continues Matteo Spanu –. At the beginning of the year we collect the syllabus so that Nicolò can study even in the months when we are away for competitions, and for short periods he has sometimes attended the schools of the countries where the competitions are held. It is important to point out that Nicolò has these experiences because he deserves them, by constantly committing himself not only to sport, but also to his studies».
Hard training, correct nutrition, constancy, sacrifice and determination are just some of the keys that guarantee such a high level of success. “Both my son and I do two training sessions a week followed by our athletic trainer Daniele Concas of the Lions fit club of Oristano – concludes Matteo Spanu –. In Cagliari we instead receive muscle treatments from the athletic trainer Giuseppe Pugliese».
And after the Spanish experience the two wavers don’t stop and get ready to choose the best waves of the Moroccan sea, where the American Championship will be held in spring.
Physics > Atomic Physics
Title: Terahertz-driven phase transition applied as a room-temperature terahertz detector
(Submitted on 1 Sep 2017)
Abstract: There are few demonstrated examples of phase transitions that may be driven directly by terahertz-frequency electric fields, and those that are known require field strengths exceeding 1 MVcm$^{-1}$. Here we report a room-temperature phase transition driven by a weak ($\ll 1$ Vcm$^{-1}$), continuous-wave terahertz electric field. The system consists of caesium vapour under continuous optical excitation to a high-lying Rydberg state, which is resonantly coupled to a nearby level by the terahertz electric field. We use a simple model to understand the underlying physical behaviour, and we demonstrate two protocols to exploit the phase transition as a narrowband terahertz detector: the first with a fast (20 $\mu$s) nonlinear response to nano-Watts of incident radiation, and the second with a linearised response and effective noise equivalent power (NEP) $\leq 1$ pWHz$^{-1/2}$. The work opens the door to a new class of terahertz devices controlled with low field intensities and operating around room temperature.
Submission history: From Christopher Wade, [v1] Fri, 1 Sep 2017 11:56:57 GMT
I think the author of this problem forgot to add units to $K_c$ since it's
not a dimensionless entity:
$$K_c = \frac{[\ce{NO}]^2}{[\ce{N2O}][\ce{O2}]^{0.5}}$$
and should be $K_c = \pu{1.7e-13 mol^{0.5} L^{-0.5}}$. In general
$$[K_c] = \mathrm{dim}(c)^{Δn}$$
where square brackets denote the dimensions of the quantity $K_c$, $c$ is concentration and $Δn$ is the difference in the amounts between gaseous products and reactants, e.g. here
$$Δn = 2 - (1 + 0.5) = 0.5$$
On the other hand, the equilibrium constant $K$ you use for determining the standard Gibbs energy, must be dimensionless. Since we are dealing with gases only, the easiest way is to use $K_p$, which is dimensionless as required (when normalized to the standard state of pressure $p^\circ = \pu{1 bar}$):
$$K_p = \frac{\left(\frac{p(\ce{NO})}{p^\circ}\right)^2}{\left(\frac{p(\ce{N2O})}{p^\circ}\right) \left(\frac{p(\ce{O2})}{p^\circ}\right)^{0.5}}$$
so that in general
$$[K_p] = \mathrm{dim}(p)^{Δn}\cdot \mathrm{dim}(p^\circ)^{-Δn}$$
$K_p$ and $K_c$ are related (via the ideal gas law):
$$K_p = K_c (RT)^{Δn}$$
So, the equilibrium constant is
$$\begin{align}K &= K_p(p^\circ)^{-Δn}\\ &= K_c(RT)^{Δn}(p^\circ)^{-Δn} \\ &= \pu{1.7e-13 mol^{0.5} L^{-0.5}}\cdot(\pu{8.314e-2 L bar K-1 mol-1}\cdot\pu{298 K})^{0.5}(\pu{1 bar})^{-0.5} \\ &= \pu{8.5e-13}\end{align}$$
Note that here I used the gas constant expressed as $\pu{8.314e-2 L bar K-1 mol-1}$, since in this case all dimensions cancel out and $K$ is left dimensionless. Now we can finally find the standard Gibbs energy:
$$\begin{align}Δ G^\circ &= -RT\ln K\\ &= -\pu{8.314 J mol-1 K-1}\cdot\pu{298 K}\cdot\ln\left(\pu{8.5e-13}\right)\\ &= \pu{68.9 kJ mol-1} \end{align}$$
Here the product before the logarithm includes $R = \pu{8.314 J mol-1 K-1}$ to get answer in $\pu{kJ mol-1}$ straight away.
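The arithmetic above can be checked numerically; here is a short sketch mirroring the same steps:

```python
import math

Kc = 1.7e-13          # mol^0.5 L^-0.5
dn = 2 - (1 + 0.5)    # Δn = 0.5
R_Lbar = 8.314e-2     # L bar K^-1 mol^-1
R = 8.314             # J mol^-1 K^-1
T = 298.0             # K
p0 = 1.0              # bar, standard pressure

K = Kc * (R_Lbar * T) ** dn * p0 ** (-dn)   # dimensionless equilibrium constant
dG = -R * T * math.log(K)                   # standard Gibbs energy in J/mol
print(K)          # ≈ 8.5e-13
print(dG / 1000)  # ≈ 68.9 kJ/mol
```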
In Exercises [exer:7.4.1]–[exer:7.4.18] find the general solution of the given Euler equation on \((0,\infty)\).
[exer:7.4.1] \(x^2y''+7xy'+8y=0\)
[exer:7.4.2] \(x^2y''-7xy'+7y=0\)
[exer:7.4.3] \(x^2y''-xy'+y=0\)
[exer:7.4.4] \(x^2y''+5xy'+4y=0\)
[exer:7.4.5] \(x^2y''+xy'+y=0\)
[exer:7.4.6] \(x^2y''-3xy'+13y=0\)
[exer:7.4.7] \(x^2y''+3xy'-3y=0\)
[exer:7.4.8] \(12x^2y''-5xy'+6y=0\)
[exer:7.4.9] \(4x^2y''+8xy'+y=0\)
[exer:7.4.10] \(3x^2y''-xy'+y=0\)
[exer:7.4.11] \(2x^2y''-3xy'+2y=0\)
[exer:7.4.12] \(x^2y''+3xy'+5y=0\)
[exer:7.4.13] \(9x^2y''+15xy'+y=0\)
[exer:7.4.14] \(x^2y''-xy'+10y=0\)
[exer:7.4.15] \(x^2y''-6y=0\)
[exer:7.4.16] \(2x^2y''+3xy'-y=0\)
[exer:7.4.17] \(x^2y''-3xy'+4y=0\)
[exer:7.4.18] \(2x^2y''+10xy'+9y=0\)
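For instance, Exercise [exer:7.4.1] can be solved from the indicial polynomial \(r(r-1)+7r+8=0\) obtained with the trial solution \(y=x^r\); a quick SymPy check (a sketch, not part of the exercise set):

```python
import sympy as sp

r, x = sp.symbols('r x')
# Substituting y = x^r into x^2 y'' + 7x y' + 8y = 0 gives the indicial equation
indicial = sp.expand(r * (r - 1) + 7 * r + 8)   # r^2 + 6r + 8
roots = sp.solve(indicial, r)
print(roots)  # roots -4 and -2, so y = c1*x**-2 + c2*x**-4 on (0, oo)

# Verify that x^-2 actually solves the ODE
y = x**-2
residual = sp.simplify(x**2 * sp.diff(y, x, 2) + 7 * x * sp.diff(y, x) + 8 * y)
print(residual)  # 0
```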
[exer:7.4.19]
Adapt the proof of Theorem [thmtype:7.4.3] to show that \(y=y(x)\) satisfies the Euler equation
\[ax^2y''+bxy'+cy=0 \tag{A}\]
on \((-\infty,0)\) if and only if \(Y(t)=y(-e^t)\) satisfies
\[a {d^2Y\over dt^2}+(b-a){dY\over dt}+cY=0\nonumber\]
on \((-\infty,\infty)\).
Use (a) to show that the general solution of Equation (A) on \((-\infty,0)\) is
\[\begin{aligned} y&=c_1|x|^{r_1}+c_2|x|^{r_2}\mbox{ if $r_1$ and $r_2$ are distinct real numbers; } \\ y&=|x|^{r_1}(c_1+c_2\ln|x|)\mbox{ if $r_1=r_2$; } \\ y&=|x|^{\lambda}\left[c_1\cos\left(\omega\ln|x|\right)+ c_2\sin\left(\omega\ln|x| \right)\right]\mbox{ if $r_1,r_2=\lambda\pm i\omega$ with $\omega>0$}.\end{aligned}\nonumber\]
[exer:7.4.20] Use reduction of order to show that if
\[ar(r-1)+br+c=0\nonumber\]
has a repeated root \(r_1\) then \(y=x^{r_1}(c_1+c_2\ln x)\) is the general solution of
\[ax^2y''+bxy'+cy=0\nonumber\]
on \((0,\infty)\).
[exer:7.4.21] A nontrivial solution of
\[P_0(x)y''+P_1(x)y'+P_2(x)y=0\nonumber\]
is said to be
oscillatory on an interval \((a,b)\) if it has infinitely many zeros on \((a,b)\). Otherwise \(y\) is said to be nonoscillatory on \((a,b)\). Show that the equation
\[x^2y''+ky=0 \quad (k=\; \mbox{constant})\nonumber\]
has oscillatory solutions on \((0,\infty)\) if and only if \(k>1/4\).
[exer:7.4.22] In Example 7.4.2 we saw that \(x_0=1\) and \(x_0=-1\) are regular singular points of Legendre’s equation
\[(1-x^2)y''-2xy'+\alpha(\alpha+1)y=0. \tag{A}\]
Introduce the new variables \(t=x-1\) and \(Y(t)=y(t+1)\), and show that \(y\) is a solution of (A) if and only if \(Y\) is a solution of
\[t(2+t){d^2Y\over dt^2}+2(1+t){dY\over dt}-\alpha(\alpha+1)Y=0,\nonumber\]which has a regular singular point at \(t_0=0\).
Introduce the new variables \(t=x+1\) and \(Y(t)=y(t-1)\), and show that \(y\) is a solution of (A) if and only if \(Y\) is a solution of
\[t(2-t){d^2Y\over dt^2}+2(1-t){dY\over dt}+\alpha(\alpha+1)Y=0,\nonumber\]which has a regular singular point at \(t_0=0\).
[exer:7.4.23] Let \(P_0,P_1\), and \(P_2\) be polynomials with no common factor, and suppose \(x_0\ne0\) is a singular point of
\[P_0(x)y''+P_1(x)y'+P_2(x)y=0. \tag{A}\]Let \(t=x-x_0\) and \(Y(t)=y(t+x_0)\).
Show that \(y\) is a solution of (A) if and only if \(Y\) is a solution of
\[R_0(t){d^2Y\over dt^2}+R_1(t){dY\over dt}+R_2(t)Y=0 \tag{B}\]
where
\[R_i(t)=P_i(t+x_0),\quad i=0,1,2.\nonumber\]
Show that \(R_0\), \(R_1\), and \(R_2\) are polynomials in \(t\) with no common factors, and \(R_0(0)=0\); thus, \(t_0=0\) is a singular point of (B).
How do I
1. show that $M=[0,3]\subset \mathbb{R}$ is a manifold with boundary?
2. find a $C^2$ partition of unity for the open cover $M=[0,2)\cup(1,3]$?
3. show that $\omega=(x-2)dx$ is/is not an orientation on $M$?
What I know:
Let $M$ be a manifold in vector space $V$. Then it is covered by coordinate patches $f:A\rightarrow B$, where $A$ is open in a vector space $W$ and $B\subset M$ is open in $V$. Now for $a\in W^*$ we define the halfspace $H_a=a^{-1}[0,\infty)$. Its boundary is $\partial H_a=\ker a=a^{-1}\{0\}$. Now a coordinate patch with boundary is $f:A\rightarrow B$ if $A$ is open in $H_a$ and $B$ is open in $M$. And a manifold with boundary is the union of such coordinate patches with boundary.
For a manifold $M=\bigcup_i A_i$ there exist smooth functions $g_i:M\rightarrow [0,1]$ such that for all $x\in M$ and all $i$:
1. $\sum_i g_i(x)=1$, 2. $\text{supp}(g_i)=\overline{\{x\in M:g_i(x)\neq 0\}}\subset A_i$ 3. $x\in M$ has neighbourhood on which all but finitely many $g_i$ are zero. This is a partition of unity.
An orientation on $n$-dimensional $M$ is a $C^1$ differential $n$-form $\omega\in \Omega^n(M)$ that is nowhere $0$.
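For part 2, a concrete construction can be sketched numerically (the bump-based functions below are in fact $C^\infty$, hence $C^2$; the helper names and the transition interval $[1.25, 1.75]$ are my own choices):

```python
import math

def h(t):
    """Smooth function vanishing to all orders as t -> 0+; zero for t <= 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def g1(x):
    """1 on [0, 1.25], smoothly down to 0 on [1.25, 1.75]; supp(g1) = [0, 1.75] ⊂ [0, 2)."""
    return h(1.75 - x) / (h(1.75 - x) + h(x - 1.25))

def g2(x):
    """g2 = 1 - g1, so supp(g2) = [1.25, 3] ⊂ (1, 3] and g1 + g2 = 1 on [0, 3]."""
    return 1.0 - g1(x)

for x in (0.0, 0.5, 1.5, 2.5, 3.0):
    print(x, g1(x), g2(x), g1(x) + g2(x))
```

Note the transition interval is chosen strictly inside the overlap $(1,2)$ so that the supports are compactly contained in the respective open sets, as the definition requires.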
In the answers to this question, it is said that
The de-icing system on most turbine aircraft (including MD-82 involved in that accident) uses bleed air from the engines, that is it extracts some air from behind the (low pressure stage of the) compressor. This air is therefore not ejected from the nozzle and not producing thrust, so the thrust is reduced.
My question is:
Why is bleed air taken from some stage of the compressor used? Why not e.g. exhaust gas?
For de-icing, the temperature of the used medium must be above 0°C (melting ice), while the environmental temperature usually is far below. So the engine has to invest energy to generate bleed air with reasonable temperature, the air cools down to still >0°C during de-icing, and is then vented out to the environment. There, it expands and cools down far below the environmental temperature. This expansion means that energy is wasted, and also, when the bleed air is cooled down / expanded before inserting it into the de-icing system (I don't know if this is done), energy would be lost. In addition, air/pressure is lost in the compressor stage, which makes the combustion less effective. (Again, I don't know how much air is taken, and how big the effect is)
On the other hand, the exhaust gas of the engine is very hot due to the combustion and could be used without the need for extra compression. So, using some exhaust gas, one would not waste so much energy. If it's too dirty, heat exchangers could be used to heat fresh air.
I can think of these reasons:
Bleed air is used anyway for many purposes in an aircraft, so this is more economical than a completely separate system; de-icing is not used for a long time during flight, so again no need for a dedicated system
EDIT:
I'd like to expand the question to explain what makes me curious. In the comment, the correctness of this sentence is challenged:
There, it expands and cools down far below the environmental temperature.
This is a simple thermodynamic process. The air is compressed adiabatically, i.e. without adding heat to it. The heat comes from the thermal energy of the air, now also compressed to a lower volume. De-icing cools down the air, and when releasing the air to the environment, it expands to the original pressure. As thermal energy has been removed during de-icing, the temperature drops below environmental temperature.
Here is the math behind:
The relation of pressure and temperature in this case is:
$$ p_1^{1-\gamma}\cdot T_1^\gamma=p_2^{1-\gamma}\cdot T_2^\gamma\qquad \gamma \approx 1.4$$
Let's assume a pilot switches on de-icing during flight at 11 km altitude. There, environmental pressure is 0.25 bar and temperature is -50°C (223 K). It was also said here in the answers that the bleed air may be at about 200°C (473 K). The formula now gives a bleed air pressure of 3.47 bar, a pressure ratio of about 14. The air is now cooled down while the pressure is maintained by the engine. I assume de-icing will be effective for bleed air temperatures above 0°C. So if the air is released at this temperature, its temperature will fall to -144°C (128 K). Another number: if released at 100°C, the temperature will drop to -97°C (175 K).
(Of course, the air will mix with the environmental air immediately)
In principle, one can play with the numbers, increase/decrease altitude / temperatures and discuss how adiabatically this (de)compression processes are.
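The numbers above can be reproduced with the adiabatic relation $T_2 = T_1\,(p_2/p_1)^{(\gamma-1)/\gamma}$ (a sketch of the same idealized calculation):

```python
GAMMA = 1.4  # heat capacity ratio of air

def adiabatic_T2(T1, p1, p2, gamma=GAMMA):
    """Temperature after adiabatic compression/expansion from (p1, T1) to p2."""
    return T1 * (p2 / p1) ** ((gamma - 1) / gamma)

def adiabatic_p2(p1, T1, T2, gamma=GAMMA):
    """Pressure needed to reach T2 adiabatically starting from (p1, T1)."""
    return p1 * (T2 / T1) ** (gamma / (gamma - 1))

p_bleed = adiabatic_p2(0.25, 223.0, 473.0)   # ≈ 3.47 bar at 11 km altitude
T_vent = adiabatic_T2(273.0, p_bleed, 0.25)  # ≈ 128 K (-144 °C) if vented at 0 °C
print(p_bleed, T_vent)
```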
Anyway, this is a big air conditioner, using the thermal energy for de-icing and wasting the cooled air. If one only needs hot air, something coming from the exhaust system would always be more efficient.
This is not really efficient. Maybe the bleed air downstream of the de-icing system can still be used for other purposes, as it still has the pressure?
Can $n!$ be a perfect square when $n$ is an integer greater than 1? (And is it possible to prove this without Bertrand's postulate? Bertrand's postulate is quite a strong result.)
Assume $n\geq 4$. By Bertrand's postulate there is a prime, call it $p$, such that $\frac{n}{2}<p<n$. Suppose $p^2$ divides $n!$. Then there should be another number $m$ with $p<m\leq n$ such that $p$ divides $m$. So $\frac{m}{p}\geq 2$, hence $m\geq 2p > n$. This is a contradiction. So $p$ divides $n!$ but $p^2$ does not, and therefore $n!$ is not a perfect square.
That leaves two more cases. We check directly, $2!=2$ and $3!=6$ are not perfect squares.
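A quick empirical check of the claim (a brute-force sketch; no Bertrand needed for the computation itself):

```python
from math import factorial, isqrt

def is_perfect_square(m):
    """Exact integer square test using math.isqrt."""
    r = isqrt(m)
    return r * r == m

# No n! with 1 < n <= 60 is a perfect square, matching the argument above
squares = [n for n in range(2, 61) if is_perfect_square(factorial(n))]
print(squares)  # []
```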
There is a prime between n/2 and n, if I am not mistaken.
Hopefully this is a little more intuitive (although quite a bit longer) than the other answers up here.
Let's begin by stating a simple fact : (1) when factored into its prime factorization, any perfect square will have an even number of each prime factor.
If $n$ is a prime number, then $n$ will not repeat in any of the other factors of $n!$, meaning that $n!$ cannot be a perfect square (1). Consider if $n$ is composite. $n!$ will contain at least two prime factors ($n=4$ is the smallest composite number that qualifies the restraints), so let's call $p$ the largest prime factor of $n!$
The only way that $n!$ can be a perfect square is if $n!$ contains $p$ and a second multiple of $p$ (1). Obviously, this multiple must be greater than $p$ and at most $n$.
Using Bertrand's postulate, we know that there exists an additional prime number, let's say $p'$, such that $p < p' < 2p$. Because $p$ is the largest prime factor of $n!$, we know that $p' > n$ (If it were the opposite, then we would reach a contradiction).
Thus it follows that $2p > p' > n$. Because $2p$ is the smallest multiple of $p$ and $2p > n$, then $n!$ only contains one factor of $p$. Therefore it is impossible for $n!$ to be a perfect square.
If $n$ is prime, then for $n!$ to be a perfect square, one of $n-1, n-2, ... , 2$ must contain n as a factor. But this means one of $n-1, n-2, ... , 2 \geq n$, which is impossible.
If $n$ is not prime, then the largest prime less than $n$ will be $p = n-k$, $0<k<n-1$, $2\leq p<n$. No number less than $p$ contains $p$ as a factor, so for $n!$ to be a perfect square there must exist a multiple of $p$, call it $bp$, $1<b<n$, such that $p<bp\leq n$. Now, according to Chebyshev's theorem, for any number $p$ there exists a prime between $p$ and $2p$; since $p$ is the largest prime below $n$, that prime must exceed $n$, so $2p > n$ and no such multiple $bp\leq n$ exists. Hence such an $n!$ can never be a perfect square. Hope this helps.
Your statement has a generalization. There is a result by Erdős and Selfridge stating that the product of two or more consecutive natural numbers is never a perfect power. Here it is: http://ad.bolyai.hu/~p_erdos/1975-46.pdf
Suppose I wanted to travel to one of the recently discovered potentially Earth-like planets such as Kepler 186f that is 490 light years away. Assuming I had a powerful rocket and enough fuel, how long would it take me?
Start by considering what is seen by the people watching you from the Earth. Nothing can travel faster than the speed of light, $c$, so the quickest you could get to Kepler 186f would be if you were travelling at $c$ in which case it would take 490 years. In practice it would take longer than this because you have to accelerate from rest when you leave the Earth and decelerate to a halt again when you get to your destination.
So far this isn’t very interesting. What makes the problem interesting is that clocks on fast moving objects run slow due to time dilation. If you could travel near to the speed of light the time that passes for you will be less than 490 years, and in fact can be a lot less, as we’ll see below.
First let’s take the simple case where you travel at some constant velocity $v$, and we won’t worry about how you accelerated to $v$ or how you’re going to slow down again. We’ll call the distance to the star $d$. For the people watching from Earth the time taken is just the distance you travel divided by your velocity:
$$ t = \frac{d}{v} $$
So if the distance is 490 light years and you’re travelling at the speed of light the time taken is just 490 years. But how much time would you measure on your wristwatch? To do the calculation properly you need to use the Lorentz transformations, but in fact the answer turns out to be very simple. The time you measure, $\tau$, is given by:
$$ \tau = \frac{t}{\gamma} $$
where $t$ is the time measured on Earth and $\gamma$ is the Lorentz factor and is given by:
$$ \gamma = \frac{1}{\sqrt{1 - \tfrac{v^2}{c^2}}} $$
Or if you want the whole expression written out in full, the time you measure is:
$$ \tau = \frac{d}{v} \sqrt{1 - \frac{v^2}{c^2}} $$
To give you a feel for this I’ve done the calculation for the 490 light year trip to Kepler 186f and I’ve drawn a graph of the time you measure as a function of your speed:
The blue line is the travel time as measured on Earth, so it goes to 490 years as $v \rightarrow c$. The red line is the time measured on your wristwatch, which goes to zero as $v \rightarrow c$.
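To put numbers to the constant-velocity case, here is a minimal sketch (units of light years and years, so $c=1$; the function name is mine):

```python
import math

def travel_times(beta, d=490.0):
    """Earth time t and wristwatch time tau, in years, for a trip of d light
    years at constant speed v = beta * c (0 < beta < 1)."""
    t = d / beta                       # t = d / v, with c = 1 ly/yr
    tau = t * math.sqrt(1 - beta**2)   # tau = t / gamma
    return t, tau
```

For example, at $v = 0.99c$ the Earth clocks record about 495 years while the traveller ages roughly 70.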
But this isn’t very realistic since it ignores acceleration and deceleration. Suppose instead you travel halfway to the star at constant acceleration, then you flip over and travel halfway at constant deceleration. This allows you to start from rest and end at rest, and you also get a nice artificial gravity during the trip. But how can you calculate the time dilation for a trip that involves acceleration?
The details of the calculation are given in Chapter 6 of Gravitation by Misner, Thorne and Wheeler. I won’t reproduce the calculation here because it’s surprisingly boring. You solve a couple of simultaneous equations to get differential equations for the time, $t$, and distance, $x$, and you solve these two differential equations to get:
$$ t = \frac{c}{a} \sinh\left(\frac{a\tau}{c}\right) \tag{1} $$
$$ x = \frac{c^2}{a} \left(\cosh\left(\frac{a\tau}{c} \right) - 1 \right) \tag{2} $$
In these equations $\tau$ is the time measured on your wristwatch, $t$ is the time measured by the observers on Earth and $x$ is the distance travelled as measured by the observers on Earth. The times $t$ and $\tau$ start at zero at the moment you begin accelerating and leave the Earth. Finally $a$ is your constant acceleration. Note that $a$ is the acceleration you measure i.e. it’s the acceleration shown by an accelerometer you hold while you’re sat in the rocket.
To do the calculation, for example for the trip to Kepler 186f, you take the first half of the journey while the rocket is accelerating and set $x$ to this distance. So for Kepler 186f $x = 245$ light years. Then you solve equation (2) to get the elapsed time on the rocket $\tau$, and finally plug this into equation (1) to get the elapsed time on Earth. This is the time for half the trip, so just double it to get the time for the whole trip. I’ve done this for a range of accelerations to get this graph:
Again the blue line is the time measured on Earth and the red line is your time. At an acceleration of only 0.1g the travel time is already down to 76 years (just doable in a single lifetime) and at a more comfortable 1g the travel time is a shade over 12 years.
Since the values aren't that easy to read off the graph here are some representative values:
$$\begin{matrix} a (/g) & \tau (/\text{years}) & t (/\text{years}) \\ 0.01 & 374.9 & 655.9 \\ 0.1 & 76.8 & 509.0 \\ 1 & 12.1 & 491.9 \\ 10 & 1.7 & 490.2 \end{matrix}$$
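The table can be reproduced by inverting equation (2) for $\tau$ (giving an $\operatorname{arccosh}$) and substituting into equation (1). A sketch in Python — the only assumption beyond the text is the conversion of $g$ into light years per year squared:

```python
import math

C = 1.0                                   # speed of light in ly/yr
G = 9.80665 * 31557600 / 299792458        # 1 g in ly/yr^2 (about 1.03)

def trip_times(d_ly, a_g):
    """One-way travel times (tau on board, t on Earth, in years) for a journey
    of d_ly light years: accelerate at a_g (in units of g) to the midpoint,
    then decelerate at the same rate."""
    a = a_g * G
    x = d_ly / 2                                        # accelerating half
    tau_half = (C / a) * math.acosh(a * x / C**2 + 1)   # invert eq. (2)
    t_half = (C / a) * math.sinh(a * tau_half / C)      # eq. (1)
    return 2 * tau_half, 2 * t_half
```

`trip_times(490, 1)` gives roughly (12.1, 491.9), matching the 1 g row of the table.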
Footnotes for nerds
Assuming you have more than a casual interest in Physics (why else would you be reading this!) there is lots more interesting stuff about accelerated motion. For example you might wonder how the spaceship accelerating at 1g can travel 490 light years in 12.1 years if nothing can travel faster than light. The answer is that the spaceship doesn’t travel 490 light years - the Lorentz contraction caused by its high speed means it travels a much shorter distance.
We’ve got the equations for distance and time above, and you can combine them to work out the velocity as a function of spaceship time $\tau$. I won’t do this since it’s just algebra; instead I’ll just quote the result:
$$ v = c \tanh \left( \frac{a\tau}{c} \right) \tag{3} $$
If the spaceship is travelling at velocity $v$ relative to the Earth and destination star then the Earth and star are travelling at velocity $v$ relative to the spaceship, and the crew of the spaceship see distances contracted by the Lorentz factor:
$$ d' = \frac{d}{\gamma} = d\sqrt{1 - \frac{v^2}{c^2}} $$
When the spaceship sets off its distance to the star is 490 light years, but as it accelerates this distance decreases for two reasons. Firstly (obviously) the ship moves towards the star, but secondly Lorentz contraction makes the remaining distance smaller.
To calculate this effect you work out $x(\tau)$ using equation (2) for the first half of the trip. Since the trip is symmetrical you can reflect about the halfway point to get $x(\tau)$ for the second half of the journey. Then the distance left is just (for Kepler 186f) 490 light years - $x$. Calculate the velocity using equation (3) (again for the first half then reflect about the halfway point). Calculate the Lorentz factor from the velocity and multiply to get the contracted distance left. The results for 1g acceleration look like this:
To make the data clearer I’ve plotted the remaining distance for the last half of the trip on an expanded scale to the right. The discontinuity is where the spaceship switches from acceleration to deceleration. The graph shows that the occupants of the ship see the distance they have left to travel shrink rapidly as their speed increases. Conversely, as they start decelerating the Lorentz contraction decreases and the distance left to travel decreases only slowly until they are close to the destination.
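As a concrete check of this effect (same equations and unit conventions as above; the function is my own sketch): at 1 g, the 245 coordinate light years still ahead at turnover appear, to the crew, contracted to less than a single light year.

```python
import math

C = 1.0                                  # speed of light in ly/yr
G = 9.80665 * 31557600 / 299792458       # 1 g in ly/yr^2

def contracted_remaining(d_ly, a_g, tau):
    """Crew-frame distance still to go at wristwatch time tau, during the
    accelerating first half of a d_ly trip: (d - x(tau)) / gamma(tau)."""
    a = a_g * G
    x = (C**2 / a) * (math.cosh(a * tau / C) - 1)   # eq. (2)
    gamma = math.cosh(a * tau / C)                  # gamma of v = c tanh(a tau/c)
    return (d_ly - x) / gamma
```

At $\tau \approx 6.04$ years (the 1 g turnover time for Kepler 186f) the remaining distance in the crew frame is under one light year.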
Optimality conditions for $ E $-differentiable vector optimization problems with the multiple interval-valued objective function
1. Faculty of Mathematics and Computer Science, University of Łódź, Banacha 22, 90-238 Łódź, Poland
2. Department of Mathematics, Hadhramout University, P.O. Box (50511-50512), Al-Mahrah, Yemen
In this paper, a nonconvex vector optimization problem with multiple interval-valued objective function and both inequality and equality constraints is considered. The functions constituting it are not necessarily differentiable, but they are $ E $-differentiable. The so-called $ E $-Karush-Kuhn-Tucker necessary optimality conditions are established for the considered $ E $-differentiable vector optimization problem with the multiple interval-valued objective function. Also the sufficient optimality conditions are derived for such interval-valued vector optimization problems under appropriate (generalized) $ E $-convexity hypotheses.
Keywords: $E$-differentiable function, $E$-differentiable vector optimization problem with multiple interval-valued objective function, $E$-Karush-Kuhn-Tucker necessary optimality conditions, $E$-convex function.
Mathematics Subject Classification: Primary: 90C29, 90C30, 90C46, 90C26.
Citation: Tadeusz Antczak, Najeeb Abdulaleem. Optimality conditions for $E$-differentiable vector optimization problems with the multiple interval-valued objective function. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2019089
Suppose $f_1,f_2,\ldots$ is a sequence of convex functions that converges to a continuous convex $f$. Let $x_1^*,x_2^*,\ldots$ be their respective (not necessarily unique) minimizers, and let $y$ be a minimizer of $f$ (once again not necessarily unique). Can we prove that there exists a choice of the $x_n^*$ such that $x_n^*\rightarrow y$?
No; here's a counterexample: let $f = 0$ and consider the minimizer $y = 0.$ Then you can construct convex functions which converge to $0$ pointwise but whose minima are always moving away from $y =0,$ e.g. $f_n(x) = (x - n)^2/n^n.$
No. Let $f_n=x^2/n$ for $n$ odd and $(x-1)^2/n$ for $n$ even. Then $x_n^*$ is an alternating sequence of $1$s and $0$s, which does not converge to anything. But $f_n$ converges pointwise to $f=0$.
We can modify this to make the convergence uniform, by using an absolute value instead of a square, or to make $f$ nonconstant, by adding $\max(|x-1/2|,1)$ to $f_n$.
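A quick numerical illustration of the first counterexample above ($f_n(x) = (x-n)^2/n^n$): the minimizer marches off to infinity even though the functions collapse to zero on any bounded set. A plain-Python sketch (brute-force grid search; function names mine):

```python
def f(n, x):
    """Convex in x; converges to 0 pointwise, with minimizer at x = n."""
    return (x - n) ** 2 / n ** n

def argmin_on_grid(n, lo=-5.0, hi=50.0, steps=5501):
    """Brute-force minimizer of f(n, .) on a grid with spacing 0.01."""
    xs = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return min(xs, key=lambda x: f(n, x))
```

Here `argmin_on_grid(n)` tracks $x_n^* = n$, while $f_n$ is already vanishingly small everywhere near the origin.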
If $f$ has a unique minimum, the statement is true. Let $a$ be the $\liminf$ of $x_n^*$ and $b$ the $\limsup$. Let $y$ be the unique minimizer of $f$. Assume $a< y$. Clearly $f( (a+y)/2) > f(y)$. By convexity, $f(a)> f((a+y)/2)$. So for $n$ sufficiently large, $f_n(a)> f_n((a+y)/2)> f_n(y)$. But this implies that the minimizer of $f_n$ is greater than $(a+y)/2$. So the $\liminf$ of $x_n^*$ is at least $(a+y)/2$, which is greater than $a$. This is a contradiction, so $a\geq y$. By symmetry $b\leq y$. Hence, because $a \leq b$, we get $a=b=y$, and $y$ is the limit.
Finding the phase response of a biquad at a specific frequency is simple. Recall the transfer function of a biquad:
$$H(z) = \frac{b_0 + b_1z^{-1} + b_2z^{-2}}{a_0 + a_1z^{-1} + a_2z^{-2}}$$
The frequency response of a system can be calculated by letting $z = e^{j\omega}$, where $\omega$ is a normalized frequency in the range $[-\pi, \pi)$. So, it would look like this:
$$H(e^{j\omega}) = \frac{b_0 + b_1e^{-j\omega} + b_2e^{-j2\omega}}{a_0 + a_1e^{-j\omega} + a_2e^{-j2\omega}}$$
Because of the complex exponentials, the value of $H(e^{j\omega})$ will be complex. The phase response at the frequency $\omega$ is just the phase angle of the resulting complex number. The magnitude response at the same frequency is likewise equal to the magnitude of the number.
The only other detail you might need is how to arrive at $\omega$: given a signal sampled at sample rate $f_s$ Hz, if you want to know the frequency response at a given frequency $f$ Hz, you can use the above equation, and let:
$$\omega = \frac{2 \pi f}{f_s}$$
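In code this is only a couple of lines. A sketch in plain Python (no DSP library; the function name is mine):

```python
import cmath, math

def biquad_response(b, a, f, fs):
    """Magnitude and phase (radians) of a biquad with coefficients
    b = [b0, b1, b2], a = [a0, a1, a2], at frequency f Hz, sample rate fs Hz."""
    w = 2 * math.pi * f / fs       # normalized frequency in rad/sample
    z1 = cmath.exp(-1j * w)        # z^-1 evaluated on the unit circle
    H = (b[0] + b[1] * z1 + b[2] * z1**2) / (a[0] + a[1] * z1 + a[2] * z1**2)
    return abs(H), cmath.phase(H)
```

As a sanity check, a pure one-sample delay (b = [0, 1, 0], a = [1, 0, 0]) has unit magnitude and phase $-\omega$ at every frequency.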
Homework Statement: A pendulum is made up of a light rigid beam of length [itex]L=0.50[/itex] m and two point masses. The beam is attached to a fixed point at one end. One of the masses is of mass [itex]M=2.0[/itex] kg and is attached to the beam at the opposite end to this fixed point. The other mass is of mass [itex]m=0.80[/itex] kg and is attached to the beam a distance [itex]x=0.30[/itex] m away from the fixed point.

Homework Equations: $$I = \Sigma m r^{2}$$
I completed this problem in two different ways, and wonder why they give different answers.
Firstly, I calculate the moment of inertia of the system as [itex]I = 0.572\ \mathrm{kg\,m^2}[/itex], and the restoring torque on the system per radian of displacement as [itex]12.152\ \mathrm{N\,m}[/itex] (taking [itex]g = 9.8\ \mathrm{m/s^2}[/itex]). Thus I can apply the rotational analogue of NII to write $$-12.152\theta = 0.572\ddot{\theta}$$ which is the SHM condition, with time period of
1.36 seconds. This is the correct answer.
For the second method, I calculated the centre of mass of the rod/particles as being 0.443 m from the pivot, and worked this through in the normal way to obtain the standard [itex]T=2 \pi \sqrt{ \frac{l}{g} }[/itex] relation. which gives a value of
1.33 seconds.
For reference, this is what I did explicitly: $$-m_{tot}g\theta = ma$$ I used [itex]x = l\theta[/itex], where [itex]x[/itex] is the tangential displacement of the centre of mass and [itex]l[/itex] is the distance of the centre of mass from the pivot.
I know that the moment of inertia of a system is
not necessarily equal to the moment of inertia of its centre of mass, so it would obviously be wrong to use the moment of inertia of the centre of mass in the first method. I believe the mistake in the second method has something to do with this line of reasoning, but I can't pinpoint it, since my second method makes no reference to moment of inertia.
Why is it that using the centre of mass of the pendulum does not give the correct answer for time period?
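For what it's worth, both computations can be checked side by side numerically (a sketch using the numbers from the problem and g = 9.8 m/s²; variable names are mine). The second method is only correct when the whole mass actually sits at the centre of mass, i.e. when the moment of inertia equals $(M+m)\,l_{com}^2$, which fails here:

```python
import math

g, L, M, m, x = 9.8, 0.50, 2.0, 0.80, 0.30

I = M * L**2 + m * x**2            # moment of inertia about the pivot (kg m^2)
k = (M * L + m * x) * g            # restoring torque per radian (N m)
T_correct = 2 * math.pi * math.sqrt(I / k)          # compound-pendulum period

l_com = (M * L + m * x) / (M + m)  # centre of mass distance from pivot (m)
T_naive = 2 * math.pi * math.sqrt(l_com / g)        # point-mass-at-COM period

# The two agree only if I equals (M + m) * l_com**2 (here 0.572 vs ~0.549).
```

The first period comes out near 1.36 s and the second near 1.34 s, reproducing the discrepancy in the post.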
Connell D’Souza, our co-blogger, has worked with a team that develops robotic boats. The outcome is clearly impressive.
—
For today’s post, I would like to introduce you to Alejandro Gonzalez. Alex is a member of the RoboBoat team – VantTec of Tecnológico de Monterrey in Monterrey, Mexico. I met Alex at RoboBoat 2018 where I got a chance to see his team’s innovative solution to the tasks at the competition – an Unmanned Aerial Vehicle (UAV) guiding an Unmanned Surface Vehicle (USV) through the course, and I was glad when Alex offered to write about his team’s work with MATLAB and Simulink for the Racing Lounge Blog! Alex will talk about using the Robotics System Toolbox to develop a path planning algorithm and the Aerospace Blockset to build a dynamic model of their boat to tune controllers. So, let me hand it off to Alex to take it away – Alex, the stage is yours!
—
I lead VantTec, a student robotics group and we build an autonomous robotic boat for RoboBoat. The competition encourages collaboration between unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) for the docking task – the UAV tells the USV where to dock. We decided to take this collaboration further and use the UAV to develop a path for the USV to follow through the course.
The challenges with autonomous navigation of a robot are threefold:
1. Creating a map of the environment
2. Choosing/developing a path planning algorithm
3. Developing a robust controller to follow the desired path
So, how do we tackle these challenges?
Creating a Map of the Environment
To create a map of the environment we use our UAV to take a bird’s-eye-view photo of the course. This photo can be used to create a grid to work on. Here, computer vision and artificial intelligence algorithms can give a relative position of obstacles, referenced to the picture dimensions or to the vehicle itself.
For this to work, first we take a picture from above with the aerial vehicle. We use a DJI Phantom 4 with a mobile application we developed for autonomous waypoint navigation; the UAV takes the picture and sends it to a mobile phone. Next, we send this picture to our central ground station, where a neural network we developed detects the buoys and creates bounding boxes around them. From these bounding boxes, we obtain the center of each buoy, which we arrange in a matrix to create the map.
Using the Robotics System Toolbox’s binary occupancy grid, the data gathered creates a map where the robot can navigate. Here, the obstacles’ relative coordinates will set their location inside the grid.
Below is an example of how to create the grid; the values 50 and 10 should be changed to the dimensions in meters that the UAV camera frames in the taken picture. Then, the variable xy is the set of obstacles, taken from their centers. The sample code is an example of the kind of matrix that should be introduced. Our computer vision module creates a similar matrix with a corresponding vector of coordinates for each obstacle.
map = robotics.BinaryOccupancyGrid(50,10,30);
xy = [3 2; 8 5; 13 7; 20 1; 25 8; 32 6; 38 3; 40 9; 42 4; 23 2; 28 5; 33 7];
setOccupancy(map, xy, 1);
Then, the function inflate can change the obstacle dimensions by a known or obtained radius.
inflate(map,0.3);

Choosing/Developing a Path Planning Algorithm
The Robotics System Toolbox presents another solution, this time using a sampling-based path planning algorithm called the Probabilistic Roadmap. In this case, a tag on the vehicle can help with its aerial recognition, which yields the start location; the start and end location coordinates and the number of nodes are required to get the route the vehicle needs to follow.
prm = robotics.PRM;
prm.Map = map;
startLocation = [3 3];
endLocation = [47 7];
prm.NumNodes = 25;
% Search for a solution between start and end location.
path = findpath(prm, startLocation, endLocation);
while isempty(path)
    prm.NumNodes = prm.NumNodes + 25;
    update(prm);
    path = findpath(prm, startLocation, endLocation);
end

Developing a Robust Controller
The challenge of developing a robust controller is easier with a model of the vehicle to reference it. The better your model, the better your controller. A kinematic model serves as a start, but a dynamic model of the robot is better suited to create a simulation environment. For an underactuated USV, a 3 DOF dynamic model can achieve the environment needed to work with.
Simulink is a great tool to develop these kinds of models, even more so using the toolboxes available. The Aerospace Blockset presents Utilities blocks, which include math operations with 3×3 matrices, needed for 3 DOF dynamic models.
Building the Model
The equation for the dynamic model is:
$ \tau = M \dot{\nu} + C(\nu)\nu + D(\nu)\nu $
or rewritten:
$ \dot{\nu} = M^{-1} [\tau - C(\nu)\nu - D(\nu)\nu] $
The first matrix in the equation is the inertia tensor. This M matrix is constructed using the 3×3 Matrix utility block from the Aerospace Blockset.
$M = \begin{pmatrix} m - X_{\dot{u}} & 0 & -m y_G \\ 0 & m - Y_{\dot{\nu}} & m x_{G} - Y_{\dot{r}} \\ -my_{G} & m x_{G} - N_{\dot{\nu}} & I_{Z} - N_{\dot{r}} \end{pmatrix} $
Then a subsystem was made for the overall dynamic model, having the vehicle's physical constants ($m$, $x_G$, $y_G$, $I_Z$) and the needed hydrodynamic coefficients as inputs and the matrix ($M$) as output.
The second matrix in the system is a vector of forces ($\tau $-matrix) which is programmed as shown below. Then, it was inserted into a subsystem for the overall model, with the boat beam (B) and individual thrust (Tport & Tstbd) as inputs and the vector of forces (T) as output.
$ \tau = \begin{pmatrix} \tau_{x} \\ \tau_{y} \\ \tau_{z} \end{pmatrix} = \begin{pmatrix} (T_{port} + T_{stbd}) \\ 0 \\ 0.5\,B (T_{port} - T_{stbd}) \end{pmatrix} $
Similarly, the next matrix is the Coriolis matrix (C matrix). As shown below, the sum of two 3×3 matrices is needed and hence the matrix sum block was used. Then, a subsystem was created which has, as inputs, physical parameters (X_G, Y_G, m), hydrodynamic coefficients and the values of the surge and sway speed as well as the yaw rate (V local) and the Coriolis matrix as the output:
$ C(\nu) = \begin{pmatrix} 0 & 0& -m(x_G r + \nu) \\ 0 & 0& -m(y_G r - u) \\ m(x_G r + \nu) & m(y_G r - u) & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0& \frac{Y_{\dot{\nu}} \nu +\frac{Y_{\dot{r}} + N_{\dot{\nu}}}{2}r}{200}\\ 0 & 0 & -X_{\dot{u}} u \\ \frac{-Y_{\dot{\nu}} \nu -\frac{Y_{\dot{r}} + N_{\dot{\nu}}}{2}r}{200} & X_{\dot{u}} u & 0 \end{pmatrix} $
The next matrix is the drag matrix ($D$ matrix). Like the Coriolis matrix, the drag matrix is a sum of two matrices, but this time with a negative sign. Again, a subsystem was created, with all the hydrodynamic coefficients required, the surge and sway speeds, and the yaw rate as inputs and the matrix as output.
$D(\nu) = \begin{pmatrix} X_u & 0 & 0 \\ 0 & Y_{\nu} & Y_r \\ 0 & N_{\nu} & N_r \end{pmatrix} - \begin{pmatrix} X_{u\mid u \mid}\mid u \mid & 0 & 0 \\ 0 & Y_{\nu \mid \nu \mid} \mid \nu \mid + Y_{\nu \mid r \mid} \mid r \mid & Y_{r \mid \nu \mid} \mid \nu \mid + Y_{r \mid r \mid} \mid r \mid \\ 0 & N_{\nu \mid \nu \mid} \mid \nu \mid + N_{\nu \mid r \mid} \mid r \mid & N_{r \mid \nu \mid} \mid \nu \mid + N_{r \mid r \mid} \mid r \mid \end{pmatrix} $
Afterwards, a matrix sum was used for the first algebraic part of the equation.
Then, the resultant matrix is multiplied with the inverted M matrix. The result is the derivative of the local reference frame velocity vector, and it is subsequently integrated.
The transformation matrix is represented as shown below and is used to relate the local reference frame with the global reference frame:
$ J(\eta) = \begin{pmatrix} \cos \psi & -\sin \psi & 0 \\ \sin \psi & \cos \psi & 0\\ 0 & 0 & 1 \\ \end{pmatrix} $
The local velocity vector represented by V-local is transformed to the global reference frame and then integrated to obtain the x,y and orientation or heading of the boat and is stored in the vector defined by “n_global” as shown below. You can use a demux block to index into the individual elements of the vector.
Finally, a subsystem was created with the equations necessary to obtain the hydrodynamic coefficients, after introducing parameters that can be measured or estimated. These hydrodynamic coefficients are collected into a Simulink Bus to enable data transfer to other subsystems of the model.
Developing a Model-Based Controller
The equations programmed above give a dynamic boat model to base a controller on. Here the body-fixed frame velocity ($v$) and the North-East-Down-fixed frame pose ($n$) are the outputs, and the thruster values or control commands are the inputs to the model. You can also set up the boat parameters to be accepted as mask variables; this will give you a parameterized model that can be modified as you make physical changes to your boat. With this parameterized model, you can use Control System Toolbox and Simulink Control Design to design a controller that can follow the desired path generated earlier.
Here I show you an example surge speed and heading controller that we developed. To test this controller, we used the Signal Builder block to create an example sinusoidal trajectory that represents the desired heading. As you can imagine, in our complete system this trajectory is generated from the map as we discussed earlier, but we are showing a test input for now.
From the plots below, we can see that our controller is able to track the heading fairly well; this can be improved by tuning the controller gains. The XY Graph below shows the trajectory of our boat with our test control inputs.
In this simple note http://arxiv.org/abs/0907.1813 (to appear in Colloq. Math.), Rossi and I proved a characterization in terms of "inversion of Riesz representation theorem".
Here is the result: let $X$ be a normed space and recall Birkhoff-James orthogonality: $x\in X$ is orthogonal to $y\in X$ iff for all scalars $\lambda$, one has $||x||\leq||x+\lambda y||$.
Let $H$ be a Hilbert space and $x\rightarrow f_x$ be the Riesz representation. Observe that $x\in Ker(f_x)^\perp$, which can be required using Birkhoff-James orthogonality:
Theorem: Let $X$ be a normed (resp. Banach) space and $x\rightarrow f_x$ be an isometric mapping from $X$ to $X^*$ such that
1) $f_x(y)=\overline{f_y(x)}$
2) $x\in Ker(f_x)^\perp$ (in the sense of Birkhoff and James)
Then $X$ is a pre-Hilbert (resp. Hilbert) space and the mapping $x\rightarrow f_x$ is the Riesz representation.
I am using JHEP class for my physics thesis, but everything written in math environment becomes automatically in bold font. I have tried with different compilers like: TeXworks and TeXmaker. I even send the tex file to my mentor who compiled it on an apple computer and then it works just fine, no bold font.
So the problem has to be on my end, but I am running out of ideas on why it doesn't work. I have tried searching for a solution but it doesn't seem to be a common issue.
Here is the LaTeX code:
\documentclass[a4paper,11pt]{article}
\usepackage{jheppub}
\usepackage[T1]{fontenc}

\begin{document}

Testing $X=56\in \mathcal{H}$
\begin{equation}
\label{eq:x}
\begin{split}
x &= 1 \,, \qquad y = 2 \,, \\
z &= 3 \,.
\end{split}
\end{equation}

\end{document}
and I added a link to the package with the full example provided by JHEP.
Everything inside the math environment is written in a bold font, and I stress that when someone else compiles the document, this does not happen.
I added the package as recommended, and it fixed the bold-font issue. But it created an additional problem: it either completely removes the parentheses/symbols or just shifts them.
\documentclass[a4paper,11pt]{article}\pdfoutput=1
\usepackage{jheppub}
\usepackage[swedish,english]{babel}
\usepackage[T1]{fontenc}
\usepackage{MnSymbol}
\usepackage{lmodern}
\begin{document}
\begin{subequations}
\begin{align}
\omega (\alpha X + \beta Y,Z) &= \alpha\omega(X,Z)+\beta\omega(Y,Z)\\
\omega (X,Y)&=-\omega (Y,X) \\
X\minushookup \omega &=0\quad \text{iff} X=0\quad
\end{align}
\end{subequations}
\begin{equation}
N^{\perp} = \{X\in V \,|\,\omega (X,Y)=0\, \forall \, Y\in N \}.
\end{equation}
\end{document} |
I have two identical fermions in an infinite potential well. They are non-interacting. How should I show that the first excited state is four-fold degenerate? Is the wavefunction just the superposition of the wavefunction of each fermion?
In the ground state, both electrons are in the state with the lowest value of "n". E.g., in the case of an infinite potential well, the lowest quantum number is n=1. In this case, both electrons have n=1, but one electron is spin up and one electron is spin down, because of exclusion.
The first excited state is one in which one of the electrons has n=1 (lowest single particle level) and one of the electrons has n=2 (first excited single particle level). In this case, either spin is okay for either electron.
So there are four states: (n=1,up; n=2,up), (n=1,up; n=2, down), (n=1,down; n=2, up), (n=1,down; n=2,down).
And no, the wave function is not just a superposition of the two single-particle wavefunctions. The wave function is a Slater determinant of the single-particle wavefunctions.
For example, in the case of the ground state, the spatial part of the wavefunction is symmetric $$ \sin(x_1\pi/L)\sin(x_2\pi/L) $$ and the spin part of the wavefunction is anti-symmetric $$ |{\uparrow\downarrow}\rangle-|{\downarrow\uparrow}\rangle\;. $$
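As a quick numerical illustration of this last point (a sketch assuming a well of width $L=1$ and the standard particle-in-a-box orbitals), the ground-state Slater determinant built from the two $n=1$ spin-orbitals is antisymmetric under exchange and vanishes when both electrons share all coordinates:

```python
import math

L = 1.0  # assumed well width

def phi(n, x):
    # particle-in-a-box orbital for an infinite well of width L
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

def spin_orbital(n, s):
    # product of a spatial orbital and a spin function chi_s(sigma)
    return lambda x, sigma: phi(n, x) * (1.0 if sigma == s else 0.0)

def slater(orb_a, orb_b, q1, q2):
    # normalized 2x2 Slater determinant; each q = (position, spin)
    return (orb_a(*q1) * orb_b(*q2) - orb_a(*q2) * orb_b(*q1)) / math.sqrt(2.0)

# Ground state: both electrons in n=1, opposite spins
a = spin_orbital(1, 'up')
b = spin_orbital(1, 'down')
q1, q2 = (0.3, 'up'), (0.7, 'down')

print(slater(a, b, q1, q2), slater(a, b, q2, q1))  # opposite signs (antisymmetry)
print(slater(a, b, q1, q1))                        # 0.0 (Pauli exclusion)
```

Swapping in an $n=2$ orbital for `b` gives the excited-state determinants discussed above.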
You can work out the four excited states similarly. |
Existence and regularity of time-dependent global attractors for the nonclassical reaction-diffusion equations with lower forcing term. Boundary Value Problems, volume 2016, Article number 10 (2016)
Abstract
Based on the notion of time-dependent attractors, we prove the existence and the regularity of time-dependent global attractors for a class of nonclassical reaction-diffusion equations when the forcing term \(g(x)\in H^{-1}(\Omega)\) and the nonlinear function satisfies critical exponent growth; these conditions are weaker than those used in (Jing and Liu in Appl. Anal. 94(7):1439-1449, 2015).
Introduction
Recently, Conti, Di Plinio et al. [2–4] introduced the notion of time-dependent global attractors and studied the long-time behavior of wave equations and oscillation equations in topological spaces equipped with norms depending on time. Motivated by these results, we investigate the existence and regularity of time-dependent global attractors for a class of nonclassical reaction-diffusion equations
Here Ω is a bounded set of \(\mathbb{R}^{n}\) (\(n\geq3\)) with smooth boundary ∂Ω, \(\lambda>0\), \(\tau\in\mathbb{R}\), and \(\varepsilon(t)\) is a decreasing bounded function satisfying
and there exists \(\nu>0\) such that
The nonlinearity \(f\in C^{1}(\mathbb{R})\) with \(f(0)=0\), is assumed to satisfy the following conditions:
where \(\lambda_{1}\) is the first eigenvalue of −△ in \(H_{0}^{1}(\Omega)\) and C is a positive constant.
The nonclassical reaction-diffusion equation arises as a mathematical model to describe physical phenomena, such as non-Newtonian flows, solid mechanics, and heat conduction [5–7]. Aifantis provides a quite general approach for obtaining these equations (see [5, 8]).
When \(\varepsilon(t)\) in (1.1) is only a positive constant, the long-time behavior of solutions for (1.1) has been extensively studied by several authors in [9–19] and the references therein. For instance, some authors obtained the existence of global (pullback) attractors of solutions for both the autonomous case [9, 10, 12, 14, 16] and the nonautonomous case [15, 17, 19]. Anh and Toan [18] investigated the existence and upper semicontinuity of the uniform attractor in \(H^{1}(\mathbb{R^{N}})\) for this problem; besides, they also considered the case of singularly oscillating external forces on \(\mathbb{R^{N}}\) [20]. The existence of exponential attractors was obtained in [11, 13, 15]. In the general case of a time dependence, to the best of our knowledge, only Jing and Liu [1] proved the existence and regularity of time-dependent global attractors of (1.1), when the forcing term \(g\in L^{2}(\Omega)\) (\(\Omega\subset\mathbb{R}^{3}\)) and the nonlinear term f satisfies the following conditions:
In this paper, following the general lines of the approach used in [2–4], we investigate the existence and regularity of the time-dependent attractors for the process \(U(t,\tau)\) generated by (1.1) under weaker conditions than [1].
Preliminaries
Without loss of generality, denote \(H=L^{2}(\Omega)\) with inner product \(\langle\cdot,\cdot\rangle\) and norm \(\Vert \cdot \Vert \). For \(0\leq \sigma\leq2\), we define the hierarchy of compactly nested Hilbert spaces
Then, for \(t\in\mathbb{R}\) and \(-1\leq\sigma\leq1\), we introduce the time-dependent spaces
endowed with the time-dependent norms
The symbol σ is always omitted whenever it is zero. In particular, the time-dependent phase space where we settle the problem is
then we have the compact embeddings
with injection constants independent of \(t\in\mathbb{R}\). Note that the spaces \(\mathcal{H}_{t}\) are all the same as linear spaces; besides, since \(\varepsilon(t)\) is a decreasing function of t, for every \(u\in H_{1}\) and \(t\geq\tau\in\mathbb{R}\) we have
Hence the norms \(\Vert u\Vert _{\mathcal{H}_{t}}^{2}\) and \(\Vert u\Vert _{\mathcal {H}_{\tau}}^{2}\) are equivalent for any fixed \(t, \tau\in\mathbb{R}\), but the equivalence constant blows up as \(t\rightarrow+\infty\).
The main results
A priori estimates
Under the assumptions of (1.2)-(1.5), if \(g\in H^{-1}(\Omega)\), then using the standard Galerkin approximation method ([21]), we can obtain the result concerning the existence and uniqueness of solution for the problem (1.1); see, for example, [6, 9, 10]. Thus, based on the subsequent Lemma 3.2 we get the following results.
Lemma 3.1. Let \(u_{i}(\tau)\in\mathcal{H}_{\tau}\) (\(i=1,2\)) be two initial conditions such that \(\Vert u_{i}(\tau)\Vert _{\mathcal{H}_{\tau}}\leq R\), and denote by \(u_{i}(t)\) the corresponding solutions to problem (1.1). Then the following estimate holds for some constant \(K=K(R)>0\).
Proof.
We write \(u_{i}(t)=U(t,\tau)u_{i}(\tau)\), \(\overline{u}= U(t,\tau )u_{1}(\tau)-U(t,\tau)u_{2}(\tau)\). Then the difference between the two solutions satisfies
with initial datum \(\overline{u}(\tau)=u_{1}(\tau)-u_{2}(\tau)\). Multiplying by \(2\overline{u}\) in \(L^{2}(\Omega)\) we obtain
Thus, we end up with the differential inequality
and an application of the Gronwall lemma on \([\tau, t]\) completes the proof. □
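For the reader's convenience, the form of the Gronwall lemma used throughout (a standard textbook statement, supplied here rather than taken from the paper) is:

```latex
\textbf{Lemma (Gronwall).} Let $y$ be a nonnegative absolutely continuous
function on $[\tau, t]$ satisfying
\[
  y'(s) \le a(s)\, y(s) + b(s), \qquad s \in [\tau, t],
\]
with $a, b \in L^{1}(\tau, t)$ nonnegative. Then
\[
  y(t) \le y(\tau)\, e^{\int_{\tau}^{t} a(s)\,ds}
        + \int_{\tau}^{t} b(s)\, e^{\int_{s}^{t} a(r)\,dr}\, ds .
\]
```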
By means of Lemma 3.1, the family of maps with \(t\geq\tau\in\mathbb{R}\) defines a strongly continuous process on the family of spaces \(\{\mathcal {H}_{t}\}_{t\in\mathbb{R}}\).
Lemma 3.2. Assume that (1.2)-(1.5) hold. For any \(u_{\tau}\in \mathcal{H}_{\tau}\), \(t\geq\tau\), let \(U(t,\tau)u_{\tau}\) be the solution of (1.1) with initial value \(u_{\tau}\). Then there is a positive constant K such that
Proof.
Multiplying (1.1) by \(2u+2u_{t}\) in H we obtain
Let
it yields
namely
where
In view of the condition (1.5), there are \(0<\nu<1\) and \(c\geq0\), such that
and
So we deduce that
Therefore, for any \(K> M_{2}\), there exists \(t_{0}>\tau\) such that
As a result, if u is a solution of system (1.1) and we let \(B_{t}=\bigcup_{t \geq\tau}U(t,\tau)B_{\tau}\), where
then \(B_{t}\) is a bounded time-dependent absorbing set of \(\{U(t,\tau)\} _{t\geq\tau}\). Moreover, \(B_{t}\) is positively invariant. □
On the other hand, from the above discussion, for every \(R\geq0\) there exist positive constants μ and \(t_{0}=t_{0}(R)\) such that
The time-dependent global attractors and regularity
The main result concerning the asymptotic behavior of problem (1.1) is contained in the following theorem.
Theorem 3.3 The process \(U(t,\tau)\) generated by problem (1.1) admits an invariant time-dependent global attractor \(\mathscr{U}=\{ A_{t}\}_{t\in\mathbb{R}}\) in \(\mathcal{H}_{t}\). Besides, \(A_{t}\) is bounded in \(\mathcal{H}_{t}^{1}\), with a bound independent of t.
In order to show that the process is asymptotically compact, we shall exhibit a pullback attracting family of (non-void) compact sets. For this purpose, we exploit a suitable decomposition of the process in the sum of a decaying part and of a compact one.
The decomposition
Since the injection \(i: L^{2}(\Omega)\hookrightarrow H^{-1}(\Omega)\) is dense, we know that for every \(g\in H^{-1}(\Omega)\) and any \(\eta>0\) there is a \(g^{\eta}\in L^{2}(\Omega)\), depending on g and η, such that
Let \(\mathfrak{B}=\{\mathbb{B}_{t}(R_{0})\}_{t\in\mathbb{R}}\) be a time-dependent absorbing set as in Lemma 3.2 and let \(\tau\in\mathbb {R}\) be fixed. Then, for any \(u_{\tau}\in\mathbb{B}_{\tau}(R_{0})\), we divide \(U(t,\tau)u_{\tau}\) into the sum
where
respectively, solve the following systems:
and
In the following, the generic constant \(C\geq0\) depends only on \(\mathfrak{B}\).
Lemma 3.4.
Proof.
Multiplying (3.12) by \(2v^{\eta}\) in H we obtain
By (3.5), we have
and using the Cauchy and Young inequalities we get
In view of (1.3) we get \(1-\varepsilon'(t)\geq\varepsilon(t)>0\), thus we find
Taking \(\delta=\min\{2\lambda, 1\}>0\), then
Applying the Gronwall lemma on the interval \([\tau, t]\) with \(t\geq\tau \), it follows that
The proof is complete. □
Summing up, the following uniform boundedness holds:
In order to prove our further result, we also need the condition
Lemma 3.5.
Proof.
Multiplying (3.13) by \(2A^{1/3}w^{\eta}\) in H we have
Using the Young inequality, it leads to
Since \(\frac{3(n-2)\gamma}{3n+4}<1\), from (3.9) it follows that
where we have used the embedding \(H_{1}=D(A^{\frac {1}{2}})\hookrightarrow L^{\frac{2n}{n-2}}\) with the facts that \(\frac {6n\gamma}{3n+4}\leq\frac{2n}{n-2}\) and \(H_{2/3}=D(A^{1/3})\hookrightarrow L^{\frac{6n}{3n-4}}\). Moreover, making use of the embedding \(H_{2/3}\subset L^{18/5}(\Omega )\), we have
As a result, we deduce
Using \(1-\varepsilon'(t)\geq\varepsilon(t)>0\), we conclude
Taking \(\delta=\min\{2\lambda, 1\}>0\), we have
Applying the Gronwall lemma on the interval \([\tau, t]\) with \(t\geq\tau \) we obtain
The proof is complete. □
Existence of the invariant attractor
In line with the Lemma 3.5, we consider a family of \(\mathscr{K}=\{ K_{t}\}_{t\in\mathbb{R}}\), where
It is clear that \(K_{t}\) is compact since the embedding \(\mathcal {H}_{t}^{1/3}\Subset\mathcal{H}_{t}\) is compact; besides, since the injection constants are independent of t, \(\mathscr{K}\) is uniform. Finally, Lemma 3.2, Lemma 3.4, and Lemma 3.5 imply that \(\mathscr {K}\) is pullback attracting; indeed,
where \(\delta_{t}(B,C)\) is the Hausdorff semidistance of two nonempty sets B, C.
Hence the process \(U(t, \tau)\) is asymptotically compact, which proves the existence of the unique time-dependent global attractor \(\mathscr {U}=\{A_{t}\}_{t\in\mathbb{R}}\). The invariance of \(\mathscr {U}\) follows by the strong continuity of the process stated in Lemma 3.1.
Regularity of the attractor
The minimality of \(\mathscr{U}\) in \(\mathscr{K}\) establishes that \(A_{t}\subset K_{t}\) for all \(t\in\mathbb{R}\). Therefore, we immediately obtain the following regularity result.
Lemma 3.6. \(A_{t}\) is bounded in \(\mathcal{H}_{t}^{1/3}\) (with a bound independent of t).
To prove that \(A_{t}\) is uniformly bounded in \(\mathcal{H}_{t}^{1}\), as claimed in Theorem 3.3, we argue as follows. Fix \(\tau\in\mathbb{R}\), for \(u_{\tau}\in A_{\tau}\), we split the solution \(U(t,\tau)u_{\tau }=u(t)\) into the sum \(U_{0}(t,\tau)u_{\tau}+U_{1}(t,\tau)u_{\tau}\), where \(U_{0}(t,\tau)u_{\tau}=v^{\eta}(t)\) and \(U_{1}(t,\tau)u_{\tau }=w^{\eta}(t)\), instead of (3.12)-(3.13), solving, respectively,
As a particular case of Lemma 3.4, we know that
Lemma 3.7. The following estimate holds for some \(M_{1}=M_{1}(\mathscr{U}) > 0\).
Proof.
Multiplying (3.17) by \(2Aw^{\eta}\) in H we obtain
Therefore, we conclude
From (1.3) we have \(1-\varepsilon'(t)\geq\varepsilon(t)>0\), so we deduce
Applying the Gronwall lemma on the interval \([\tau, t]\) with \(t\geq\tau \), we obtain
The proof is complete. □
Proof of Theorem 3.3
where
Since \(\mathscr{U}\) is invariant, this means
Hence, \(A_{t}\subset\overline{K_{t}^{1}}= K_{t}^{1}\); that is, \(A_{t}\) is bounded in \(\mathcal{H}_{t}^{1}\) with a bound independent of \(t\in\mathbb{R}\). □
References
1. Jing, D, Liu, YF: Time-dependent global attractor for the nonclassical diffusion equations. Appl. Anal. 94(7), 1439-1449 (2015)
2. Di Plinio, F, Duane, GS, Temam, R: Time-dependent attractor for the oscillation equation. Discrete Contin. Dyn. Syst. 29, 141-167 (2011)
3. Conti, M, Pata, V, Temam, R: Attractors for processes on time-dependent spaces. Applications to wave equation. J. Differ. Equ. 255, 1254-1277 (2013)
4. Conti, M, Pata, V: Asymptotic structure of the attractor for processes on time-dependent spaces. Nonlinear Anal., Real World Appl. 19, 1-10 (2014)
5. Aifantis, EC: On the problem of diffusion in solids. Acta Mech. 37, 265-296 (1980)
6. Kuttler, K, Aifantis, E: Quasilinear evolution equation in nonclassical diffusion. SIAM J. Math. Anal. 19, 110-120 (1988)
7. Chen, PJ, Gurtin, ME: On a theory of heat conduction involving two temperatures. Z. Angew. Math. Phys. 19, 614-627 (1968)
8. Aifantis, EC: Gradient nanomechanics: applications to deformation, fracture, and diffusion in nanopolycrystals. Metall. Trans. A, Phys. Metall. Mater. Sci. 42(10), 2985-2998 (2011)
9. Xiao, YL: Attractors for a nonclassical diffusion equation. Acta Math. Appl. Sinica (Engl. Ser.) 18, 273-276 (2002)
10. Sun, CY, Wang, SY, Zhong, CK: Global attractors for a nonclassical diffusion equation. Acta Math. Sin. Engl. Ser. 23, 1271-1280 (2007)
11. Liu, YF, Ma, QZ: Exponential attractors for a nonclassical diffusion equation. Electron. J. Differ. Equ. 2009, 9 (2009)
12. Ma, QZ, Liu, YF, Zhang, FH: Global attractors in \(H_{1}(\mathbb{R}^{N})\) for nonclassical diffusion equation. Discrete Dyn. Nat. Soc. 2012, Article ID 672762 (2012)
13. Zhang, YJ, Ma, QZ: Exponential attractors for nonclassical diffusion equation with lower regular forcing term. Int. J. Mod. Nonlinear Theory Appl. 3, 15-22 (2014)
14. Wang, SY, Li, DS, Zhong, CK: On the dynamics of a class of nonclassical parabolic equations. J. Math. Anal. Appl. 317, 565-582 (2006)
15. Sun, CY, Yang, MH: Dynamics of the nonclassical diffusion equations. Asymptot. Anal. 59, 51-81 (2008)
16. Wu, HQ, Zhang, ZY: Asymptotic regularity for the nonclassical diffusion equation with lower regular forcing term. Dyn. Syst. 26, 391-400 (2011)
17. Zhang, FH, Liu, YF: Pullback attractors in \(H^{1}(R^{N})\) for non-autonomous nonclassical diffusion equations. Dyn. Syst. 29, 106-118 (2014)
18. Anh, CT, Toan, ND: Existence and upper semicontinuity of uniform attractors in \(H^{1}(\mathbb{R}^{N})\) for nonautonomous nonclassical diffusion equations. Ann. Pol. Math. 111(3), 271-295 (2014)
19. Anh, CT, Bao, TQ: Dynamics of non-autonomous nonclassical diffusion equations on \(\mathbb{R}^{N}\). Commun. Pure Appl. Anal. 11, 1231-1252 (2012)
20. Anh, CT, Toan, ND: Nonclassical diffusion equations on \(\mathbb{R}^{N}\) with singularly oscillating external forces. Appl. Math. Lett. 38, 20-26 (2014)
21. Temam, R: Infinite-Dimensional Dynamical Systems in Mechanics and Physics. Springer, New York (1997)
Acknowledgements
This work was partly supported by the NSFC (11561064, 11361053) and the NSF of Gansu Province (145RJZA112), in part by the Fundamental Research Funds of Gansu Universities.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript. |
Just like other systems, microwave systems consist of many microwave components, mainly with a source at one end and a load at the other, all connected by waveguides, coaxial cables, or transmission-line systems.
Following are the properties of waveguides.
Consider a waveguide junction having 4 ports. If power is applied to one port, it emerges from the other 3 ports in some proportion, and some of it may reflect back from the same port. This concept is clearly depicted in the following figure.
For a two-port network, as shown in the following figure, if power is applied at one port, most of the power escapes from the other port, while some of it reflects back to the same port. If the source is instead applied to the opposite port, another two combinations arise. So, for a two-port network, 2 × 2 = 4 combinations are possible.
As the travelling waves and their associated powers scatter out through the ports, the microwave junction can be described by S-Parameters, or Scattering Parameters, which are represented in matrix form, called the "Scattering Matrix".
It is a square matrix which gives all the combinations of power relationships between the various input and output ports of a microwave junction. The elements of this matrix are called "Scattering Coefficients" or "Scattering (S) Parameters".
Consider the following figure.
Here, the source is connected through $i^{th}$ line while $a_1$ is the incident wave and $b_1$ is the reflected wave.
If a relation is given between $b_1$ and $a_1$,
$$b_1 = (\text{reflection coefficient})\, a_1 = S_{1i}a_1$$
Where
$S_{1i}$ = Reflection coefficient of $1^{st}$ line (where $i$ is the input port and $1$ is the output port)
$1$ = Reflection from $1^{st}$ line
$i$ = Source connected at $i^{th}$ line
If the impedances match, the power is transferred to the load. If, however, the load impedance does not match the characteristic impedance, reflection occurs. That means reflection occurs if
$$Z_l \neq Z_o$$
However, if this mismatch exists at more than one port, say at $n$ ports, then $i = 1$ to $n$ (since $i$ can be any line from $1$ to $n$).
Therefore, we have
$$b_1 = S_{11}a_1 + S_{12}a_2 + S_{13}a_3 + ............... + S_{1n}a_n$$
$$b_2 = S_{21}a_1 + S_{22}a_2 + S_{23}a_3 + ............... + S_{2n}a_n$$
$$.$$
$$.$$
$$.$$
$$.$$
$$.$$
$$b_n = S_{n1}a_1 + S_{n2}a_2 + S_{n3}a_3 + ............... + S_{nn}a_n$$
When this whole thing is kept in a matrix form,
$$\begin{bmatrix} b_1\\ b_2\\ b_3\\ .\\ .\\ .\\ b_n \end{bmatrix} = \begin{bmatrix} S_{11}& S_{12}& S_{13}& ...& S_{1n}\\ S_{21}& S_{22}& S_{23}& ...& S_{2n}\\ .& .& .& ...& . \\ .& .& .& ...& . \\ .& .& .& ...& . \\ S_{n1}& S_{n2}& S_{n3}& ...& S_{nn}\\ \end{bmatrix} \times \begin{bmatrix} a_1\\ a_2\\ a_3\\ .\\ .\\ .\\ a_n \end{bmatrix}$$
(column matrix $[b]$, scattering matrix $[S]$, column matrix $[a]$)
The column matrix $\left [ b \right ]$ corresponds to the reflected waves or the output, while the matrix $\left [ a \right ]$ corresponds to the incident waves or the input. The scattering matrix $\left [ S \right ]$, which is of order $n \times n$, contains the reflection coefficients and transmission coefficients. Therefore,
$$\left [ b \right ] = \left [ S \right ]\left [ a \right ]$$
The scattering matrix is indicated as $[S]$ matrix. There are few standard properties for $[S]$ matrix. They are −
$[S]$ is always a square matrix of order $n \times n$, i.e., $[S]_{n \times n}$
$[S]$ is a symmetric matrix (for a reciprocal network), i.e., $S_{ij} = S_{ji}$
$[S]$ is a unitary matrix (for a lossless network), i.e., $[S][S]^* = I$
The sum of the products of each term of any row or column multiplied by the complex conjugate of the corresponding terms of any other row or column is zero. i.e.,
$$\sum_{i=1}^{n} S_{ik} \: S_{ij}^{*} = 0 \: \text{for} \: k \neq j$$
$$( k = 1,2,3, ... \: n ) \: and \: (j = 1,2,3, ... \: n)$$
If the electrical distance between some $k^{th}$ port and the junction is $\beta_k l_k$, then the coefficients of $S_{ij}$ involving $k$ will be multiplied by the factor $e^{-j\beta_k l_k}$
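These properties are easy to check numerically. The sketch below builds the S-matrix of an ideal lossless, reciprocal 2-port (a matched transmission-line section; the electrical length 0.7 rad is an illustrative assumption) and verifies symmetry and unitarity. Since this $S$ is symmetric, computing $\sum_k S_{ik}S_{jk}^{*}$ realizes the $[S][S]^{*}=I$ property:

```python
import cmath

# S-matrix of an ideal lossless, reciprocal 2-port: a matched transmission-line
# section of (assumed) electrical length beta*l = 0.7 rad.
t = cmath.exp(-1j * 0.7)
S = [[0, t],
     [t, 0]]

def s_times_s_conj(S):
    # entry (i, j) is sum_k S_ik * conj(S_jk); the identity iff the network is lossless
    n = len(S)
    return [[sum(S[i][k] * S[j][k].conjugate() for k in range(n))
             for j in range(n)] for i in range(n)]

P = s_times_s_conj(S)
print(S[0][1] == S[1][0])                                 # True: symmetric (reciprocal)
print(abs(P[0][0] - 1) < 1e-12 and abs(P[0][1]) < 1e-12)  # True: unitary (lossless)
```

A mismatched or lossy junction would fail the unitarity check while still satisfying $b = S a$.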
In the next few chapters, we will take a look at different types of Microwave Tee junctions. |
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
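The planar case is easy to verify numerically: composing reflections across lines at angles $\theta_1$ and $\theta_2$ gives the rotation by $2(\theta_2-\theta_1)$. A small pure-Python sketch (the particular angles are arbitrary):

```python
import math

def reflection(theta):
    # 2x2 matrix reflecting across the line through the origin at angle theta
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[c, s], [s, -c]]

def rotation(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t1, t2 = 0.3, 1.1
R = matmul(reflection(t2), reflection(t1))
expected = rotation(2 * (t2 - t1))
print(max(abs(R[i][j] - expected[i][j]) for i in range(2) for j in range(2)))  # ~0
```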
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? It looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and taking a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$
Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection-type operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?
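A Monte Carlo check is quick to run. This sketch assumes the shape-scale reading of $\Gamma(2, 2/\lambda)$ (so each $X_i$ has mean $4/\lambda$ and variance $8/\lambda^2$), uses the illustrative values $\lambda = 1$, $n = 5$, and compares the simulation against the closed form $\mathbb{E}[\bar X^2]/2 = (\operatorname{Var}(\bar X) + \mathbb{E}[\bar X]^2)/2$; the claimed expression can then be compared with both:

```python
import random

# Monte Carlo sanity check, assuming the shape-scale reading of Gamma(2, 2/lambda).
# lam and n are illustrative; compare the printed values with the claimed
# 1/(n^2 lambda^2) + 2/lambda^2.
random.seed(0)
lam, n, reps = 1.0, 5, 100_000
shape, scale = 2.0, 2.0 / lam

def half_mean_sq():
    xs = [random.gammavariate(shape, scale) for _ in range(n)]
    m = sum(xs) / n
    return 0.5 * m * m

mc = sum(half_mean_sq() for _ in range(reps)) / reps
# E[Xbar^2]/2 = (Var(Xbar) + E[Xbar]^2)/2
analytic = (shape * scale**2 / n + (shape * scale)**2) / 2.0
print(mc, analytic)
```

Under the rate convention for the second parameter the numbers change, so the convention is the first thing to pin down.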
Uh apparenty there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$ itself. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seen very elegant. Is there a better way?
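The brute-force list is at least easy to verify by machine. A pure-Python sketch (the generator choices are the standard ones mentioned above, zero-indexed):

```python
# Check that S_4 has a subgroup of every order d dividing 24.
# Permutations of {0,1,2,3} are tuples of images.

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def generated(gens):
    # closure of a generating set (inverses arise automatically in a finite group)
    e = tuple(range(4))
    group, frontier = {e}, {e}
    while frontier:
        new = {compose(g, h) for g in gens for h in frontier} - group
        group |= new
        frontier = new
    return group

t12   = (1, 0, 2, 3)   # the 2-cycle (1 2)
c123  = (1, 2, 0, 3)   # the 3-cycle (1 2 3)
c1234 = (1, 2, 3, 0)   # the 4-cycle (1 2 3 4)

subgroups = {
    2:  [t12],
    3:  [c123],
    4:  [c1234],
    6:  [t12, c123],            # the S_3 generated by (1 2) and (1 2 3)
    8:  [c1234, (2, 1, 0, 3)],  # a dihedral 2-Sylow: <(1 2 3 4), (1 3)>
    12: [c123, (1, 0, 3, 2)],   # A_4 = <(1 2 3), (1 2)(3 4)>
    24: [t12, c1234],           # all of S_4
}
for d, gens in subgroups.items():
    print(d, len(generated(gens)))  # the order of each subgroup equals d
```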
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
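The relation $ab = c$ can be checked directly with the 180-degree rotation matrices about the coordinate axes (a small sketch; these are the standard diagonal matrices):

```python
# 180-degree rotations of the cube about the x, y, z axes, as matrices.
Rx = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]
Ry = [[-1, 0, 0], [0, 1, 0], [0, 0, -1]]
Rz = [[-1, 0, 0], [0, -1, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

print(matmul(Rx, Ry) == Rz)              # True: the ab = c relation
print(matmul(Rx, Rx) == matmul(Ry, Ry))  # True: every element squares to e
```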
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
I would like to derive the Fourier transform of $f(x)=\ln(x^2+a^2)$, where $a\in \mathbb{R}^+$ by making use of the properties:
\begin{equation} \mathcal{F}[f'(x)]=(ik)\hat{f}(k)\\ \mathcal{F}[-ixf(x)]=\hat{f}'(k) \end{equation} For the Fourier transform I use the definition given by:
\begin{equation} \hat{f}(k)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(x)e^{-ikx}dx, k \in \mathbb{R} \end{equation} Until now I found out that by taking the derivative of $f$ and finding the Fourier transform of $f'$ I can then use the relation $\mathcal{F}[f'(x)]=(ik)\hat{f}(k)$ and find $\hat{f}$. The derivative of $f$ would be: \begin{equation} f'(x)=\frac{2x}{x^2+a^2} \end{equation} and by considering $g(x)=1/(x^2+a^2)$, I then have: \begin{equation} f'(x)=2xg(x) \end{equation} Now I know that the Fourier transform of $g$ is given by:
\begin{equation} \hat{g}(k)=\frac{1}{a}\sqrt{\frac{\pi}{2}}e^{-a|k|}, a \in \mathbb{R}, k\in \mathbb{R} \end{equation} Now I must find the Fourier transform of $xg(x)$ which would be given by the derivative of $\hat{g}$, right? But how can this be possible, since $\hat{g}$ is not differentiable at $k=0$?
I think I am really close now but I need that extra tip.
Thank you! |
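As a numerical sanity check of the stated transform of $g(x)=1/(x^2+a^2)$ (my own illustration, with the unitary convention used in the question; scipy's oscillatory QAWF weight handles the infinite cosine integral):

```python
import numpy as np
from scipy.integrate import quad

# Check ghat(k) = (1/a) sqrt(pi/2) e^{-a|k|} for g(x) = 1/(x^2 + a^2),
# with ghat(k) = (1/sqrt(2 pi)) \int g(x) e^{-ikx} dx; g is even, so the
# transform reduces to a cosine integral over [0, inf).
a, k = 2.0, 1.5
val, _ = quad(lambda x: 1.0 / (x**2 + a**2), 0, np.inf,
              weight='cos', wvar=k)          # \int_0^inf g(x) cos(kx) dx
ghat = 2 * val / np.sqrt(2 * np.pi)
exact = (1 / a) * np.sqrt(np.pi / 2) * np.exp(-a * abs(k))
assert abs(ghat - exact) < 1e-6
```

Note that $\hat g$ is smooth away from $k=0$; only the $|k|$ kink at the origin obstructs differentiability.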
Maass forms of levels 1 to 10 with $0 \leq R\leq 10$
The horizontal axis is the spectral parameter $R$, with the Laplace eigenvalue satisfying $\lambda=1/4+R^2$. The vertical axis is the level $N$. Each point corresponds to a Maass form of weight 0 and trivial character on $\Gamma_0(N)$, with the color showing whether the symmetry is even or odd. For $N>100$ there are only results for prime level.
Examples of some ranges with complete data:
- $1\leq N\leq10$, $0\leq R\leq 10$
- $N$ prime and $100\leq N \leq 250$, $0\leq R\leq 2$
- $N$ prime and $100\leq N \leq 1000$, $0\leq R\leq 1$

Clicking on a dot takes you to the homepage of the Maass form. |
Given two groups $G_1$ and $G_2$ with operations $*_1$ and $*_2$, the direct product of sets $G_1\times G_2$ is a group under the operation \[ (a_1, a_2)*(b_1, b_2) = (a_1*_1 b_1, a_2 *_2 b_2). \] This has normal subgroups \[ H_1=\{(a,e_2) \mid a \in G_1\}\cong G_1\] and \[ H_2=\{(e_1,a) \mid a \in G_2\}\cong G_2\] such that $H_1\cap H_2 = \{(e_1,e_2)\}$ and $H_1H_2=G_1\times G_2$.
Conversely, if a group $G$ has normal subgroups $H$ and $K$ such that $H\cap K=\{e\}$ and $HK=G$, then $G\cong H\times K$.
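The two conditions can be checked concretely on a small example (my own illustration with sympy; the example $G = S_3 \times C_2$ and the point labels are mine):

```python
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.group_constructs import DirectProduct
from sympy.combinatorics.named_groups import SymmetricGroup, CyclicGroup

# Realize G = S_3 x C_2 on the points {0,...,4}: the S_3 factor acts on
# {0,1,2} and the C_2 factor on {3,4}, mirroring H_1 and H_2 above.
G = DirectProduct(SymmetricGroup(3), CyclicGroup(2))
H1 = PermutationGroup(Permutation([1, 0, 2, 3, 4]),   # (0 1)
                      Permutation([1, 2, 0, 3, 4]))   # (0 1 2)
H2 = PermutationGroup(Permutation([0, 1, 2, 4, 3]))   # (3 4)

assert H1.is_normal(G) and H2.is_normal(G)
assert H1.elements & H2.elements == {Permutation([0, 1, 2, 3, 4])}  # trivial intersection
assert G.order() == H1.order() * H2.order() == 12     # so H1 H2 = G
```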
Last edited by John Jones on 2019-05-23 18:50:36. |
I wrote the following proof. I used the code below, and it produced the output shown:
\begin{proof}
\[S_n = \frac{a(1-r^n)}{1-r}\]
\begin{align*}
\lim _{n\to \infty} S_n &= \lim _{n\to \infty} \frac{a(1-r^n)}{1-r}\\
&= \lim _{n\to \infty} \frac {a}{1-r} - \lim _{n\to \infty} \frac{ar^n}{1-r}\\
&= \lim _{n\to \infty} \frac {a}{1-r} &&\tag{since $|r| < 1$}\\
&= \frac {a}{1-r} && \qedhere
\end{align*}
\end{proof}
The problem is that the bottom bit which was made using the align* environment is not aligned nicely with the first line so it doesn't look that nice. Could anyone please suggest a way to make it look nicer?
Thanks for your help |
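One common remedy (a sketch, not from the thread): drop the separate `\[...\]` display and keep every line inside a single `align*`, so all the equals signs share one alignment point, and put the side condition in `\text{...}`:

```latex
\begin{proof}
\begin{align*}
S_n &= \frac{a(1-r^n)}{1-r}\\
\lim_{n\to\infty} S_n
    &= \lim_{n\to\infty} \frac{a(1-r^n)}{1-r}\\
    &= \frac{a}{1-r} - \lim_{n\to\infty} \frac{ar^n}{1-r}\\
    &= \frac{a}{1-r} && \text{since $|r| < 1$} \qedhere
\end{align*}
\end{proof}
```

This assumes amsmath (for `align*`) and amsthm (for `proof` and `\qedhere`), which the original code already uses.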
Convert between Percent, Fractions and Decimals: Video Transcript
This is Toni. He's the owner of "Toni's 100% Pizza," which is famous for its delicious, customized pizzas. At the moment, he's taking a customer's order.
The customer orders a pizza with 50% mushrooms, 0.125 of pepperoni and three-eighths tomatoes. Hmm, that is a weird order! How can Toni make such a pizza?
What does Percent mean?
To fulfill this order, Toni has to
convert between percents, decimals, and fractions.
Let's start with
converting decimals and fractions to percents. Percent means 'per hundred' and describes part of a whole just like fractions and decimals.
If the pizza were cut into 100 slices, 1 slice would equal 1 percent. The percent sign looks like two little zeros with a diagonal line dividing them... or maybe a division sign that lost its balance.
Converting Decimals and Fractions to Percents
Let's
convert decimals to percents. To convert a decimal to a percent you first multiply by 100. 0.125 multiplied by 100 is 12.5. Now we add the percent sign. So the answer is 12.5%.
Now let's
convert fractions into decimals. The numerator is divided by the denominator using long division.
8 is written
outside of the division sign and 3 is written under the division sign. As you know, 8 can't go into 3, so you put a zero followed by a decimal point above the division bar and add point zero zero zero after the 3.
Then, see if 8 can go into 30; yup! 3 times. 3 times 8, is 24. We subtract this from 30, leaving us with 6. The next 0 is brought down to be the next digit. This process is repeated until the remainder is 0. The result is 0.375.
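The long-division steps just described can be mirrored in a few lines of code (a sketch; the function name is mine):

```python
# Compute the decimal digits of num / den, one digit at a time,
# exactly as in the long-division procedure above.
def long_division_digits(num, den, places):
    digits = []
    r = num
    for _ in range(places):
        r *= 10                  # "bring down" the next 0
        digits.append(r // den)  # how many times den goes in
        r = r % den              # the remainder carries over
    return digits

assert long_division_digits(3, 8, 3) == [3, 7, 5]   # 3/8 = 0.375
```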
Now you just have to convert the decimal 0.375 to a percent. You already know: multiply by 100 and then add the percent sign! Finally you get 37.5%.
Converting Percent and Decimals into Fractions
Now, Toni can make the pizza. 50 percent mushrooms, which is easy. But, what about 12.5 and 37.5 percent?Maybe we should
convert everything into fractions.
Let's start with 50%. Write the
percent divided by 100. So you have 50 over 100. Then reduce the fraction. This gives us one-half.
Now let's convert 0.125. 0.125 means 125 thousandths. So you write 125 as the numerator and 1000 as the denominator. Now
reduce the fraction. The answer is one-eighth.
Now, Toni can make the pizza. The remaining part has one-eighth pepperoni. And there are three-eighths tomatoes.
Remember how to convert between Percents, Fractions and Decimals
To
remember better how to convert between percents, fractions and decimals, you can use this triangle. You can convert percents to fractions by dividing them by 100. Remember to reduce the fraction. If you want to convert a fraction to a decimal, just divide the two numbers and there you have it! To convert decimals to a percent, multiply the decimal by 100 and add a percent sign.
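The conversion triangle can also be written out as code (a sketch; the function names are mine, and Python's `fractions.Fraction` does the reducing automatically):

```python
from fractions import Fraction

def percent_to_fraction(p):
    return Fraction(p, 100)      # divide by 100; Fraction reduces automatically

def decimal_to_fraction(d):
    return Fraction(d)           # accepts decimal strings like "0.125"

def fraction_to_percent(fr):
    return float(fr) * 100       # divide the two numbers, then multiply by 100

assert percent_to_fraction(50) == Fraction(1, 2)
assert decimal_to_fraction("0.125") == Fraction(1, 8)
assert fraction_to_percent(Fraction(3, 8)) == 37.5
```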
Let's see what Toni does as payback for this prankster.
Convert between Percent, Fractions and Decimals: Exercise. Would you like to apply what you have learned? With the exercises for the video Convert between Percent, Fractions and Decimals you can review and practice it. Describe how to convert a decimal number or a fraction into a percent. Hints
As you know, $\frac12$ and $0.5$ are the same.
You probably know how to write this as a percent.
The picture on the right shows how to convert the fraction $\frac 38$ into its decimal form.
Solution
If you need to compare numbers, it's helpful if all the numbers appear in the same form.
Compare Like Forms fractions $~\leftrightarrow~$ fractions decimals $~\leftrightarrow~$ decimals percents $~\leftrightarrow~$ percents
To convert a:
- decimal number into a percent: multiply the decimal by $100$ and put a percent sign after the number: $0.125\xrightarrow{\times 100}12.5\%$
- fraction into a percent: find the decimal form of the fraction by dividing the numerator by the denominator, then change the decimal into a percent as above: $\frac38=0.375\xrightarrow{\times 100}37.5\%$

Explain the meaning of percent. Hints
$100¢$ are in $\$1$.
Here we have a customary percent sign.
$1\%$ of $\$1$ is $1¢$.
Solution
We can remember the meaning of percent, for example, by the conversion from cents to dollars:
$100¢$ are in $\$1$, and $\frac1{100}$ of $\$1$ is $1¢$: one per hundred.
If Toni divides his pizza in $100$ even slices, we can say that one slice equals $1\%$ of the pizza.
Help Toni convert the customer's strange order. Hints
Did you know you can also reduce fractions using factors? To simplify a fraction to its lowest terms, divide the numerator as well as the denominator by a common factor. Repeat until there are no more common factors.
Solution
To convert a:
- percent into a fraction: write the percent as a fraction over $100$, then simplify by dividing the numerator as well as the denominator by the greatest common factor: $50~\%=\frac{50}{100}=\frac12$
- percent into a decimal: either transform it to a fraction first and divide the numerator by the denominator, or take away the percent sign and divide the resulting number by $100$: $50~\%=\frac{50}{100}=50 \div 100= 0.5$
- decimal into a fraction: count the number of places the decimal point must move to give a whole number, write that many zeros to the right of $1$ in the denominator, then simplify by dividing the numerator as well as the denominator by the greatest common factor: $0.125=\frac{125}{1000}=\frac18$
- decimal into a percent: multiply it by $100$ and add a percent sign at the end: $0.125=0.125 \times 100 \%=12.5\%$
- fraction into a decimal: divide the numerator by the denominator: $\frac {3}{8}=3 \div 8= 0.375$
- fraction into a percent: first convert the number to a decimal by dividing the numerator by the denominator, then multiply it by $100$ and add a percent sign at the end: $\frac {3}{8}=3 \div 8= 0.375=0.375 \times 100 \%=37.5\%$

Prepare your own pizza. Hints
Each value is given as a fraction. If possible, simplify each fraction.
The pizza is divided in $10$ slices. Write each fraction with the denominator $10$. The number of slices is written in the numerator.
Solution
Since the pizza is divided into $10$ slices, we should try to convert each fraction so that we get $10$ in the denominator. Then, we can easily compare the fractions to each other.
- $\frac3{15}=\frac15=\frac2{10}$, or $2$ slices with pepperoni
- $\frac{16}{40}=\frac{16\div 4}{40\div 4}=\frac4{10}$, or $4$ slices with cheese
- $\frac{15}{50}=\frac{15\div 5}{50\div 5}=\frac3{10}$, or $3$ slices with mushrooms
- $\frac{3}{30}=\frac{3\div 3}{30\div 3}=\frac1{10}$, or $1$ slice with bacon

Convert each number from its current form to percent form. Hints
You can simplify a fraction by dividing the numerator and the denominator by the GCF: $\frac{10}{20}=\frac{10\div 10}{20\div 10}=\frac12$.
Solution
There are $4$ percent values given. We can convert percent into a fraction by writing $8\%$ as $\frac8{100}$. This fraction can be simplified to $\frac2{25}$ by dividing the numerator and denominator by $4$. You can convert percent into decimal form by dividing the percent by $100$. In our example above, we have $8\%=0.08$.
- $75~\%=0.75~\longleftrightarrow~75\%=\frac{75}{100}=\frac34$
- $25~\%=0.25~\longleftrightarrow~25\%=\frac{25}{100}=\frac14$
- $40~\%=0.40~\longleftrightarrow~40\%=\frac{40}{100}=\frac25$
- $55~\%=0.55~\longleftrightarrow~55\%=\frac{55}{100}=\frac{11}{20}$

Change fractions or decimals into percent form. Hints
All the percentages should add up to $100\%$.
To convert a decimal number into percent, multiply by $100$, and then write a percent sign at the end of the resulting number.
To transform a fraction to percent:
Convert the fraction into a decimal and then multiply by $100$, or raise terms so that the denominator is equal to $100$, as shown in the picture.

Solution
Toni's guest orders a pizza with the following ingredients:
- Olives: $\frac15=\frac{1\times 20}{5\times 20}=\frac{20}{100}=20\%$
- Onions: $0.30\xrightarrow{\times 100}30\%$
- Pepperoni: $\frac3{20}=0.15\xrightarrow{\times 100}15\%$
- Chili Peppers: $0.35\xrightarrow{\times 100}35\%$ |
Illinois Journal of Mathematics, Volume 48, Number 3 (2004), 965-976. On the structure of the set of semidualizing complexes. Abstract
We study the structure of the set of semidualizing complexes over a local ring. In particular, we prove that for a pair of semidualizing complexes $X_1$ and $X_2$ such that $G_{X_{2}}\dim X_{1}<\infty $ we have $X_2\simeq X_1\otimes^{L}_R\mathbf{R}\mathrm{Hom}_R(X_{1},X_{2})$. Specializing to the case of semidualizing modules over artinian rings we obtain a number of quantitative results for rings possessing a configuration of semidualizing modules of special form. For rings with ${\mathfrak m}^3=0$ this condition reduces to the existence of a nontrivial semidualizing module and we prove a number of structural results in this case.
Article information: Illinois J. Math., Volume 48, Number 3 (2004), 965-976. First available in Project Euclid: 13 November 2009. Permanent link: https://projecteuclid.org/euclid.ijm/1258131064. DOI: 10.1215/ijm/1258131064. MathSciNet: MR2114263. Zentralblatt MATH: 1080.13009. Primary subject: 13D25. Citation
Gerko, A. On the structure of the set of semidualizing complexes. Illinois J. Math. 48 (2004), no. 3, 965--976. doi:10.1215/ijm/1258131064. https://projecteuclid.org/euclid.ijm/1258131064 |
I have come across this question that asks to find Fourier series coefficients of the following signal. $$1+\sin (\omega_0 t) + \cos (\omega_0 t) + \cos (2\omega_0 t + \pi / 4) $$
In my intuition, the signal is already in Fourier series form, and the question asks just to find the trigonometric Fourier coefficients.
I started breaking down the original signal and rearranging, which gives: $$1+\cos(\omega_0 t) + \frac{1}{\sqrt 2}\cos (2\omega_0 t) + \sin(\omega_0 t) - \frac{1}{\sqrt 2}\sin(2\omega_0 t)$$
While figuring out the pattern in the signal: the coefficient of the cosine term is $(\frac{1}{\sqrt 2})^{n-1}$, while the coefficient of the sine term has the same magnitude but alternates in sign. So, in more general terms, $$1+\sum_{n=1}^{2}\left[\left(\frac{1}{\sqrt 2}\right)^{n-1}\cos(n\omega_0 t) + \left(-\frac{1}{\sqrt 2}\right)^{n-1}\sin(n\omega_0 t)\right]$$
With an analogy to the trigonometric Fourier series, the coefficients are found out to be $$a_0 = 1, a_n = (\frac{1}{\sqrt 2})^{n-1}, b_n=(-\frac{1}{\sqrt 2})^{n-1}$$
Is this what I am required to do when asked to find the Fourier coefficients? Or should I take that signal and then follow the whole procedure of finding $a_0$, $a_n$ and $b_n$ using Euler's coefficient formulas? |
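As a cross-check (my own, not part of the question), the coefficients read off above agree with Euler's formulas $a_n = \frac{2}{T}\int_0^T x(t)\cos(n\omega_0 t)\,dt$, $b_n = \frac{2}{T}\int_0^T x(t)\sin(n\omega_0 t)\,dt$ evaluated numerically over one period:

```python
import numpy as np

# Euler's coefficient formulas evaluated by a Riemann sum over one period.
w0 = 2 * np.pi                 # arbitrary fundamental frequency, so T = 1
N = 4096
t = np.arange(N) / N           # one full period, endpoint excluded
x = 1 + np.sin(w0 * t) + np.cos(w0 * t) + np.cos(2 * w0 * t + np.pi / 4)

a = lambda n: 2 / N * np.sum(x * np.cos(n * w0 * t))
b = lambda n: 2 / N * np.sum(x * np.sin(n * w0 * t))

assert abs(np.mean(x) - 1) < 1e-12                      # a_0 = 1
assert abs(a(1) - 1) < 1e-12 and abs(b(1) - 1) < 1e-12  # a_1 = b_1 = 1
assert abs(a(2) - 1 / np.sqrt(2)) < 1e-12               # a_2 = 1/sqrt(2)
assert abs(b(2) + 1 / np.sqrt(2)) < 1e-12               # b_2 = -1/sqrt(2)
```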
In Wikipedia's ideal gas article there is a derivation for the thermodynamic entropy that results in $$S = Nk\ln\left[\frac{V}{N} \left(\frac{U}{\hat{c}_V Nk}\right)^{\hat{c}_V} \frac{1}{\Phi}\right],$$ and the article states:
[...] $\Phi$ may vary for different gases, but will be independent of the thermodynamic state of the gas.
I have a notion that $\Phi$ should look something like the volume of the individual gas molecules multiplied by the latent heat of vaporization per molecule divided by $k$ to the power of $\hat{c}_V$, which means it would have the following (approximate) values for the named gases
- $\mathrm{N}_2$: $\Phi = 2\times \frac{4}{3}\pi(56\operatorname{pm})^3\times(58 \operatorname{meV} / k)^{5/2}= 1.7\times10^{-23} \operatorname{K}^{5/2} \operatorname{m}^3$
- $\mathrm{O}_2$: $\Phi = 2\times \frac{4}{3}\pi(48\operatorname{pm})^3\times (71 \operatorname{meV} / k)^{5/2}= 1.8\times10^{-23} \operatorname{K}^{5/2} \operatorname{m}^3$
- $\mathrm{H}_2$: $\Phi = 2\times \frac{4}{3}\pi(53\operatorname{pm})^3 \times (9.4 \operatorname{meV} / k)^{5/2}= 1.5\times10^{-25} \operatorname{K}^{5/2} \operatorname{m}^3$
- $\mathrm{He}$: $\Phi = \frac{4}{3}\pi(31\operatorname{pm})^3\times (0.86 \operatorname{meV} / k)^{3/2}= 3.9\times10^{-30} \operatorname{K}^{3/2} \operatorname{m}^3$
Basically, $\Phi$ should give some information of where the ideal gas model breaks down for the gases in question. At $P=1\operatorname{atm}$ these values of $\Phi$ imply that entropy hits zero at $T = 29,\ 29,\ 7.4,\ \mathrm{and}\ 0.25 \operatorname{K}$ for each of these gases, which doesn't seem too far off from their boiling points of $77$, $90$, $20$, and $4.2$ Kelvin. Even so, this is about as well as you'd expect to do from unit analysis. What are the actual factors that go in to determining $\Phi$? |
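The arithmetic in the question can be packaged as a short script (my own restatement of the estimates above; constants are SI values, $\hat c_V = 5/2$ for the diatomic case, and the $S=0$ temperature comes from setting $V/N = kT/P$ and $U/(\hat c_V N k) = T$, i.e. $T^{\hat c_V + 1} = \Phi P/k$):

```python
import numpy as np

# Reproduce the N2 estimate: Phi = 2 * (4/3) pi r^3 * (eps/k)^{c_V},
# then the temperature where S = 0 at P = 1 atm.
k = 1.380649e-23                 # Boltzmann constant, J/K
eV = 1.602177e-19                # J
r = 56e-12                       # m, radius used in the question for N2
eps = 0.058 * eV / k             # 58 meV expressed as a temperature, K
cV = 5 / 2                       # diatomic
Phi = 2 * (4 / 3) * np.pi * r**3 * eps**cV
assert abs(Phi - 1.7e-23) / 1.7e-23 < 0.05       # K^{5/2} m^3, as quoted

P = 101325.0                     # 1 atm in Pa
T0 = (Phi * P / k) ** (1 / (cV + 1))             # S = 0 temperature
assert abs(T0 - 29) < 1                          # ~29 K, as quoted
```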
PACM, Princeton University
Abstract: We consider a learning problem in statistical physics: given an ensemble $\rho(x)=e^{-\beta H(x)}/Z$ and a coarse-graining procedure $y=y(x)$ that maps $x$ to a reduced set of variables $y$, we need to model the free energy surface $F(y) = -(1/\beta) \log \int dx\,\rho(x)\,\delta(y-y(x))$. This problem is relevant to situations in several different fields, and we will discuss examples from statistical lattice models [1] and molecular dynamics [2,3]. A common challenge to these examples is the so-called curse of dimensionality, i.e., when $y$ is in a high-dimensional space, three non-trivial issues will be intertwined with each other: the representation of the free energy surface, the optimization of the parameters in the representation, and the exploration of relevant phase space points. We will see how methods in Refs. [1-3] address these issues.
[1] Yantao Wu and Roberto Car. "Variational approach to monte carlo renormalization group." Physical review letters 119.22 (2017): 220602.
[2] Linfeng Zhang, Han Wang, and Weinan E. "Reinforced dynamics for enhanced sampling in large atomic and molecular systems." The Journal of chemical physics 148.12 (2018): 124113.
[3] Linfeng Zhang, De-Ye Lin, Han Wang, Roberto Car, and Weinan E. "Active learning of uniformly accurate interatomic potentials for materials simulation." Physical Review Materials. 2019 Feb 25;3(2):023804.
Contact: Lei Wang, 9853 |
That the moment of inertia about an axis passing through the CM is minimized, with respect to any other parallel axes, is a consequence of the quadratic (squared) dependence of the moment of inertia on distance. In other words, the ${r^2}$ term in ${I=mr^2}$ makes it so that masses at farther distances are preferentially weighted in their contribution to the overall moment. As you said, for a given object, the moment of inertia will thus depend on the distribution (distances) of masses about the chosen axis.
For a given torque, one can impart a greater angular acceleration on objects of lesser moment. This is seen in the relation ${\tau=I\alpha}$ (analogous to $F=ma$), where $\alpha$ is angular acceleration. Rearranging, we get $\alpha=\tau/I$, so $\alpha$ is largest when $I$ is smallest.
Intuitively, one can understand how angular acceleration is maximized about the CM by picturing twisting a metal rod. Imagine holding the rod at its end and twisting-- it's difficult. Imagine holding the same rod at its center and twisting-- it's slightly easier.
To describe the above scenario mathematically, we can consider a one dimensional rod of mass $m$ running from ${x=0}$ to $x=l$. The moment of inertia about an axis that runs perpendicular to the rod at $x=0$ (twisting the rod about its end) is given by
$I_{end}=\int_{0}^{l}\rho x^2 dx = \frac{m}{3}l^2$, where $\rho=m/l$ is the mass density of the rod.
The moment about an axis through the middle of the rod, $x=l/2$, is
$I_{mid}=\int_{-l/2}^{l/2}\rho x^2 dx = \frac{m}{12}l^2$.
Note that $I_{mid}<I_{end}$.
Taking a more general approach, we can calculate the moment about a perpendicular axis placed at any position $x$ along the rod (with $u$ measuring signed distance from the axis) as

$I(x)=\int_{-x}^{l-x}\rho u^2\, du = \frac{1}{3}\rho(l-x)^3+\frac{1}{3}\rho x^3$.

This function has a minimum at the center of mass, $x=l/2$. |
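The rod computation can be confirmed symbolically (my own illustration with sympy):

```python
import sympy as sp

# Moment of inertia of a uniform rod about a perpendicular axis at
# position x from one end; u measures distance from the axis.
x, u, l, m = sp.symbols('x u l m', positive=True)
rho = m / l                                   # linear mass density
I = sp.integrate(rho * u**2, (u, -x, l - x))
assert sp.simplify(I.subs(x, 0) - m * l**2 / 3) == 0        # I_end
assert sp.simplify(I.subs(x, l / 2) - m * l**2 / 12) == 0   # I_mid
# The minimum sits at the center of mass:
assert sp.solve(sp.diff(I, x), x) == [l / 2]
```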
Just doing some revision for ODEs and came across this problem. Find the general solution to $$u''+4u=0.$$
So far I've applied the characteristic polynomial: $$\begin{array}{r c l} \lambda^2 +4 & = & 0 \\ \lambda^2 & = & -4 \\ \lambda & = & \pm i\sqrt{4} \\ \lambda & = & 2i, -2i. \\ \end{array}$$
So the general solution should be: $$\begin{array}{l c l} u_H & = & Ae^{2ix}+Be^{-2ix} \\ & = & A(\cos{2x}+i\sin{2x})+B(\cos{(-2x)}+i\sin{(-2x)}) \\ & = & A\cos{2x}+iA\sin{2x}+B\cos{2x}-iB\sin{2x} \\ & = & (A+B)\cos{2x}+i(A-B)\sin{2x} \\ & = & C_1\cos{2x}+iC_2\sin{2x}. \\ \end{array}$$
The answers have $u=C_1\cos{2x}+C_2\sin{2x}$, and my question is "what happened to the $i$?" Does it drop out somewhere or is there an error in the answers?
Many thanks for a quick explanation/link to the appropriate website explaining this. :) |
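A quick symbolic check of where the $i$ goes (my own illustration): for a real-valued solution one must take $B = \bar A$, and then both $C_1 = A+B$ and $C_2 = i(A-B)$ come out real, so the $i$ is absorbed into the arbitrary constant.

```python
import sympy as sp

# Write A = p + i q with p, q real (my notation), and B = conjugate(A).
# Then A e^{2ix} + B e^{-2ix} is real, with C1 = A + B = 2p and
# C2 = i(A - B) = -2q both real.
x, p, q = sp.symbols('x p q', real=True)
A = p + sp.I * q
B = p - sp.I * q
u = A * sp.exp(2 * sp.I * x) + B * sp.exp(-2 * sp.I * x)
u_expanded = sp.expand(u, complex=True)
assert sp.simplify(sp.im(u_expanded)) == 0          # the solution is real
assert sp.simplify(u_expanded
                   - (2 * p * sp.cos(2 * x) - 2 * q * sp.sin(2 * x))) == 0
```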
Hi, I am trying to evaluate the integral $$ \mathcal{I}(\omega)=\int_{-\infty}^\infty J^3_0(x) e^{i\omega x}\mathrm dx $$ analytically. We can also write $$ \mathcal{I}(\omega)=\mathcal{FT}\big(J^3_0(x)\big) $$ which is the Fourier Transform of the cube of the Bessel function. The Bessel function $J_0$ is given by $$ J_0(x)=\frac{1}{2\pi}\int_{-\pi}^\pi e^{-ix\sin t} \mathrm dt. $$ If it helps, we can represent the cube of the Bessel function by $$ J^3_0(x)=-3\int J^2_0(x) J_1(x) \mathrm dx, \ \ \ \ \ J_1(x)=\frac{1}{2\pi}\int_{-\pi}^\pi e^{i(t-x\sin t)} \mathrm dt. $$ In general $$ J_n(x)=\frac{1}{2\pi}\int_{-\pi}^\pi e^{i(nt-x\sin t)}\mathrm dt. $$ The Fourier Transforms of the Bessel function and its square are given by $$ \mathcal{FT}\big(J_0(x)\big)=\sqrt{\frac{2}{\pi}}\frac{\theta(\omega+1)-\theta(\omega-1)}{\sqrt{1-\omega^2}} $$ and $$ \mathcal{FT}\big(J^2_0(x)\big)=\frac{\sqrt{2}K\big(1-\frac{\omega^2}{4}\big)\big(\theta(-\omega-2)-1\big)\big(\theta(\omega-2)-1\big)}{\pi^{3/2}} $$ where $K$ is the elliptic-K function and $\theta$ is the Heaviside step function. However I need the cube...
It turns out that the Fourier transform of $J_0^3$ can still be expressed in terms of complete elliptic integrals, but it's considerably more complicated than the formula for ${\cal FT}(J_0^2)$: for starters, it involves the periods of a curve $E$ defined over ${\bf C}$ but (except for a few special values of $\omega$) not over ${\bf R}$.
Assume $|\omega| < 3$, else $I(\omega) = 0$. Then the relevant curve is $$E : Y^2 = X^3 - \bigl(\frac{3}{4} f^2 + \frac{27}{2} f - \frac{81}{4}\bigr) X^2 + 9 f^3 X$$ where $$f = \frac12 \bigl( e + 1 + \sqrt{e^2-34e+1} \bigr)$$ and $$e = \bigl( |\omega| + \sqrt{\omega^2-1} \, \bigr)^2.$$ Let $\lambda_1, \lambda_2$ be generators of the period lattice of $E$ with respect to the differential $dx/y$ (note that these are twice the periods that gp reports, because gp integrates $dx/2y$ for reasons coming from the arithmetic of elliptic curves). Then: if $|\omega| \leq 1$ then $$I(\omega) = \left|\,f\,\right|^{5/2}\, \left|\,f-1\right| \frac{\Delta}{(2\pi)^2},$$ where $\Delta = \bigl|{\rm Im} (\lambda_1 \overline{\lambda_2}) \bigr|$ is the area of the period lattice of $E$. If $1 \leq |\omega| \leq 3$ then $$I(\omega) = \left|\,f\,\right|^{-4}\, \left|\,f-1\right|^5 (3/2)^{13/2} \frac{\Delta'}{(2\pi)^2},$$ where $\Delta' = \bigl| {\rm Re}(\lambda_1 \overline{\lambda_2}) \bigr|$ for an appropriate choice of generators $\lambda_1,\lambda_2$ (these "appropriate" generators satisfy $|\lambda_1|^2 = \frac32 |\lambda_2|^2$, which determines them uniquely up to $\pm$ except for finitely many choices of $\omega$).
The proof, alas, is too long to reproduce here, but here's the basic idea. The Fourier transform of $J_0$ is $(1-\omega^2)^{-1/2}$ for $|\omega|<1$ and zero else. Hence the Fourier transforms of $J_0^2$ and $J_0^3$ are the convolution square and cube of $(1-\omega^2)^{-1/2}$. For $J_0^2$, this convolution square is supported on $|\omega| \leq 2$, and in this range equals $$ \int_{t=|\omega|-1}^1 \left( (1-t^2) (1-(|\omega|-t)^2) \right)^{-1/2} \, dt, $$ which is a period of an elliptic curve [namely the curve $u^2 = (1-t^2) (1-(|\omega|-t)^2)$], a.k.a. a complete eliptic integral. For $J_0^3$, we likewise get a two-dimensional integral, over a hexagon for $|\omega|<1$ and a triangle for $1 \leq |\omega| < 3$, that is a period of the K3 surface $$ u^2 = (1-s^2) (1-t^2) (1-(|\omega|-s-t)^2). $$ (The phase change at $|\omega|=1$ was already noted here in a now-deleted partial answer.) In general, periods of K3 surfaces are hard to compute, but this one turns out to have enough structure that we can convert the period into a period of the surface $E \times \overline E$ where $\overline E$ is the complex conjugate.
Now to be honest I have only the formulas for the "correspondence" between our K3 surface and $E \times \overline E$, which was hard enough to do, but didn't keep track of the elementary multiplying factor that I claim to be $\left|\,f\,\right|^{5/2}\, \left|\,f-1\right|$ or $\left|\,f\,\right|^{-4}\, \left|\,f-1\right|^5 (3/2)^{13/2}$. I obtained these factors by comparing numerical values for the few choices of $\omega$ for which I was able to compute $I(\omega)$ to high precision (basically rational numbers with an even numerator or denominator); for example $I(2/5)$ can be computed in gp in under a minute as
intnum(x=0,5*Pi,2*cos(2*x/5) * sumalt(n=0,besselj(0,x+5*n*Pi)^3))
There were enough such $\omega$, and the formulas are sufficiently simple, that they're virtually certain to be correct.
Here's gp code to get $e$, $f$, $E$, and generators $\lambda_1,\lambda_2$ of the period lattice:
e = (omega+sqrt(omega^2-1))^2
f = (sqrt(e^2-34*e+1)+(e+1)) / 2
E = ellinit( [0, -3/4*f^2-27/2*f+81/4, 0, 9*f^3, 0] )
L = 2*ellperiods(E)
lambda1 = L[1]
lambda2 = L[2]
NB the last line requires use of gp version 2.6.x; earlier versions did not directly implement periods of curves over $\bf C$.
For $\omega=0$ we have $e=1$, $f=3$, and $E$ is the curve $Y^2 = X^3 - 27 X^2 + 243 X = (X-9)^3 + 3^6$, so the periods can be expressed in terms of beta functions and we recover the case $\nu=0$ of Question 404222, How to prove $\int_0^\infty J_\nu(x)^3dx\stackrel?=\frac{\Gamma(1/6)\ \Gamma(1/6+\nu/2)}{2^{5/3}\ 3^{1/2}\ \pi^{3/2}\ \Gamma(5/6+\nu/2)}$? . |
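The $\nu=0$ evaluation just cited can be checked numerically (my own sketch, using mpmath's oscillatory quadrature with the zeros of $J_0$ as subdivision points):

```python
import mpmath as mp

# Check: int_0^inf J0(x)^3 dx
#   = Gamma(1/6)^2 / (2^(5/3) * 3^(1/2) * pi^(3/2) * Gamma(5/6)).
mp.mp.dps = 25
lhs = mp.quadosc(lambda x: mp.besselj(0, x)**3, [0, mp.inf],
                 zeros=lambda n: mp.besseljzero(0, n))
rhs = (mp.gamma(mp.mpf(1) / 6)**2
       / (2**(mp.mpf(5) / 3) * mp.sqrt(3)
          * mp.pi**(mp.mpf(3) / 2) * mp.gamma(mp.mpf(5) / 6)))
assert abs(lhs - rhs) < mp.mpf('1e-8')
```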
This is a crosspost from MSE. It's been up there for a few weeks now. A 200 rep bounty yielded no results (or even comments). I'm hoping someone here has some helpful ideas. See this post for the original.
Consider $U$ a nice compact region in $\mathbb{C}$ with boundary $\Gamma$. Let $S_1$ be the ideal of trace class operators on a separable complex Hilbert space $H$. We will let $\|\cdot \|$ be the operator norm and $\|\cdot \|_1$ be the trace norm. Suppose $W:U\to S_1$ is complex analytic in the operator norm.
Under what conditions is $W(\lambda)$ analytic in the $\| \cdot \|_1$ norm?
I have proved that the following are equivalent when $W(\lambda)$ is operator analytic:
1. $W(\lambda)$ is continuous in the operator norm;
2. for a fixed $M$, we have $\|W(\lambda)\|_1 <M$ for each $\lambda \in \Gamma$;
3. tr $W(\lambda)B$ is analytic for each bounded operator $B$;
4. $W(\lambda)$ is analytic in the $\|\cdot\|_1$ norm.
I can provide some ideas for these proofs if that would be helpful.
This leads us to
Question 1: What if we know that tr $W(\lambda)$ is analytic?
Is there a nice way to compare tr $W(\lambda)$ with tr $W(\lambda)B$? I would love an inequality like $$ |\text{tr }AB| \leq\|B\||\text{tr }A| $$ for $A \in S_1$ and $B$ bounded. Although it would probably be greedy to expect this in general.
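Indeed, a $2\times 2$ example (mine, not from the post) shows the hoped-for inequality with $|\operatorname{tr} A|$ in place of $\|A\|_1$ cannot hold, while the standard bound $|\operatorname{tr}(AB)|\leq\|B\|\,\|A\|_1$ does:

```python
import numpy as np

# Counterexample to |tr(AB)| <= ||B|| |tr(A)|, and a check of the
# standard bound |tr(AB)| <= ||B|| ||A||_1 (trace norm, not trace).
A = np.diag([1.0, -1.0])          # tr A = 0, but ||A||_1 = 2
B = np.diag([1.0, -1.0])          # operator norm ||B|| = 1
lhs = abs(np.trace(A @ B))        # = 2
assert lhs > np.linalg.norm(B, 2) * abs(np.trace(A))   # 2 > 1 * 0: fails
trace_norm_A = np.linalg.norm(A, 'nuc')                # nuclear norm = 2
assert lhs <= np.linalg.norm(B, 2) * trace_norm_A      # 2 <= 1 * 2: holds
```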
Also, I'm willing to impose even stronger assumptions on $W$ if necessary. One very strong constraint is to assume that $W$ has a rank bound along $\Gamma$, i.e., for a fixed $N$ we have rank $W(\lambda)<N$ for each $\lambda \in \Gamma$. This actually guarantees analyticity, as $$\|W(\lambda)\|_1 \leq N\sup_{\lambda \in \Gamma} \|W(\lambda)\|$$
Attempting to weaken this condition, we arrive at
Question 2: What happens if $W(\lambda)$ is finite rank for each $\lambda$, but has no rank bound?
I suspect that finite rank and analytic actually implies rank bounded, but I do not know.
Edit 1: Here's a fun idea that might help prove finite rank implies rank bounded. The set $$S_n = \{\lambda : W(\lambda) \text{ has rank at most } n\}$$
is closed (continuity of $W$ tells us singular values are continuous). So Baire Category theorem tells us that some $S_n$ is dense somewhere. So in some open set, $W$ is rank bounded. So can I use an analytic extension in the trace norm to do something? This looks like the proofs of the open mapping theorem and whatnot...
Edit 2: Here's another fact that may be helpful. Consider a sequence of complex analytic functions $f_n:U\to \mathbb{C}$. Suppose they converge pointwise to a function $f$. Then $f$ is analytic on an open dense subset of $U$. This is potentially helpful because for any orthonormal basis $\phi_i$, $$\text{tr}\, W(\lambda) B = \sum_{i=1}^\infty \langle W(\lambda)B\phi_i,\phi_i\rangle$$ and because $W(\lambda)B$ is analytic in the operator norm, each inner product is analytic as well.
I am fairly familiar with Gohberg's work on trace class operators. Unfortunately, despite all of the great theorems on bounds for singular values, knowing that tr $W(\lambda)$ is analytic gives no information about the singular values. |
== Thursday, April 16, Scott Hottovy, UW-Madison ==
Title: '''An SDE approximation for stochastic differential delay equations with colored state-dependent noise'''
Abstract:
== Thursday, April 23, [http://people.math.osu.edu/nguyen.1261/ Hoi Nguyen], [http://math.osu.edu/ Ohio State University] ==
Revision as of 12:35, 9 April 2015. Probability Seminar, Spring 2015: Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu.
Thursday, January 15, Miklos Racz, UC-Berkeley Stats
Title: Testing for high-dimensional geometry in random graphs
Abstract: I will talk about a random geometric graph model, where connections between vertices depend on distances between latent d-dimensional labels; we are particularly interested in the high-dimensional case when d is large. Upon observing a graph, we want to tell if it was generated from this geometric model, or from an Erdos-Renyi random graph. We show that there exists a computationally efficient procedure to do this which is almost optimal (in an information-theoretic sense). The key insight is based on a new statistic which we call "signed triangles". To prove optimality we use a bound on the total variation distance between Wishart matrices and the Gaussian Orthogonal Ensemble. This is joint work with Sebastien Bubeck, Jian Ding, and Ronen Eldan.
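The statistic can be sketched in a few lines (my reading of the abstract; the centered-triangle definition below is assumed from the authors' paper, not stated in the abstract):

```python
import numpy as np

def signed_triangles(A, p):
    # Center each edge indicator at p, then count triangles of the
    # centered matrix: tr(B^3) counts each unordered triangle 6 times.
    B = np.triu(A - p, k=1)
    B = B + B.T
    return np.trace(B @ B @ B) / 6

rng = np.random.default_rng(0)
n, p = 200, 0.5
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                     # symmetric Erdos-Renyi adjacency matrix
t = signed_triangles(A, p)
# Under G(n, p) the statistic has mean 0 and standard deviation roughly
# sqrt(C(n,3)) * (p(1-p))^(3/2), about 144 for these parameters.
assert abs(t) < 5000
```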
Thursday, January 22, No Seminar

Thursday, January 29, Arnab Sen, University of Minnesota
Title: Double Roots of Random Littlewood Polynomials
Abstract: We consider random polynomials whose coefficients are independent and uniform on {-1,1}. We will show that the probability that such a polynomial of degree n has a double root is o(n^{-2}) when n+1 is not divisible by 4 and is of the order n^{-2} otherwise. We will also discuss extensions to random polynomials with more general coefficient distributions.
This is joint work with Ron Peled and Ofer Zeitouni.
Thursday, February 5, No seminar this week

Thursday, February 12, No Seminar this week

Thursday, February 19, Xiaoqin Guo, Purdue
Title: Quenched invariance principle for random walks in time-dependent random environment
Abstract: In this talk we discuss random walks in a time-dependent zero-drift random environment in [math]Z^d[/math]. We prove a quenched invariance principle under an appropriate moment condition. The proof is based on the use of a maximum principle for parabolic difference operators. This is a joint work with Jean-Dominique Deuschel and Alejandro Ramirez.
Thursday, February 26, Dan Crisan, Imperial College London
Title: Smoothness properties of randomly perturbed semigroups with application to nonlinear filtering
Abstract: In this talk I will discuss sharp gradient bounds for perturbed diffusion semigroups. In contrast with existing results, the perturbation is here random and the bounds obtained are pathwise. Our approach builds on the classical work of Kusuoka and Stroock and extends their program developed for the heat semi-group to solutions of stochastic partial differential equations. The work is motivated by and applied to nonlinear filtering. The analysis allows us to derive pathwise gradient bounds for the un-normalised conditional distribution of a partially observed signal. The estimates we derive have sharp small time asymptotics
This is joint work with Terry Lyons (Oxford) and Christian Literrer (Ecole Polytechnique) and is based on the paper
D. Crisan, C. Litterer, T. Lyons, Kusuoka–Stroock gradient bounds for the solution of the filtering equation, Journal of Functional Analysis, 2015
Wednesday, March 4, Sam Stechmann, UW-Madison, 2:25pm Van Vleck B113
Please note the unusual time and room.
Title: Stochastic Models for Rainfall: Extreme Events and Critical Phenomena
Abstract: In recent years, tropical rainfall statistics have been shown to conform to paradigms of critical phenomena and statistical physics. In this talk, stochastic models will be presented as prototypes for understanding the atmospheric dynamics that leads to these statistics and extreme events. Key nonlinear ingredients in the models include either stochastic jump processes or thresholds (Heaviside functions). First, both exact solutions and simple numerics are used to verify that a suite of observed rainfall statistics is reproduced by the models, including power-law distributions and long-range correlations. Second, we prove that a stochastic trigger, which is a time-evolving indicator of whether it is raining or not, will converge to a deterministic threshold in an appropriate limit. Finally, we discuss the connections among these rainfall models, stochastic PDEs, and traditional models for critical phenomena.
Thursday, March 12, Ohad Feldheim, IMA
Title:
The 3-states AF-Potts model in high dimension
Abstract: Take a bounded odd domain of the bipartite graph [math]\mathbb{Z}^d[/math]. Color the boundary of the set by [math]0[/math], then color the rest of the domain at random with the colors [math]\{0,\dots,q-1\}[/math], penalizing every configuration with proportion to the number of improper edges at a given rate [math]\beta\gt 0[/math] (the "inverse temperature"). Q: "What is the structure of such a coloring?"
This model is called the [math]q[/math]-states Potts antiferromagnet (AF), a classical spin glass model in statistical mechanics. The [math]2[/math]-states case is the famous Ising model, which is relatively well understood. The [math]3[/math]-states case in high dimension has been studied for [math]\beta=\infty[/math], when the model reduces to a uniformly chosen proper three-coloring of the domain. Several works, by Galvin, Kahn, Peled, Randall and Sorkin, established the structure of the model, showing long-range correlations and phase coexistence. In this work, we generalize this result to positive temperature, showing that for large enough [math]\beta[/math] (low enough temperature) the rigid structure persists. This is the first rigorous result for [math]\beta\lt \infty[/math].
In the talk, assuming no acquaintance with the model, we shall give the physical background, introduce all the relevant definitions and shed some light on how such results are proved using only combinatorial methods. Joint work with Yinon Spinka.
Thursday, March 19, Mark Huber, Claremont McKenna Math
Title: Understanding relative error in Monte Carlo simulations
Abstract: The problem of estimating the probability [math]p[/math] of heads on an unfair coin has been around for centuries, and has inspired numerous advances in probability such as the Strong Law of Large Numbers and the Central Limit Theorem. In this talk, I'll consider a new twist: given an estimate [math]\hat p[/math], suppose we want to understand the behavior of the relative error [math](\hat p - p)/p[/math]. In classic estimators, the values that the relative error can take on depend on the value of [math]p[/math]. I will present a new estimate with the remarkable property that the distribution of the relative error does not depend in any way on the value of [math]p[/math]. Moreover, this new estimate is very fast: it takes a number of coin flips that is very close to the theoretical minimum. Time permitting, I will also discuss new ways to use concentration results for estimating the mean of random variables where normal approximations do not apply.
Thursday, March 26, Ji Oon Lee, KAIST
Title: Tracy-Widom Distribution for Sample Covariance Matrices with General Population
Abstract: Consider the sample covariance matrix [math](\Sigma^{1/2} X)(\Sigma^{1/2} X)^*[/math], where the sample [math]X[/math] is an [math]M \times N[/math] random matrix whose entries are real independent random variables with variance [math]1/N[/math] and [math]\Sigma[/math] is an [math]M \times M[/math] positive-definite deterministic diagonal matrix. We show that the fluctuation of its rescaled largest eigenvalue is given by the type-1 Tracy-Widom distribution. This is a joint work with Kevin Schnelli.
Thursday, April 2, No Seminar, Spring Break
Thursday, April 9, Elnur Emrah, UW-Madison
Title: The shape functions of certain exactly solvable inhomogeneous planar corner growth models
Abstract: I will talk about two kinds of inhomogeneous corner growth models with independent waiting times {W(i, j): i, j positive integers}: (1) W(i, j) is distributed exponentially with parameter [math]a_i+b_j[/math] for each i, j. (2) W(i, j) is distributed geometrically with fail parameter [math]a_ib_j[/math] for each i, j. These generalize the exactly solvable i.i.d. models with exponential or geometric waiting times. The parameters (a_n) and (b_n) are random with a joint distribution that is stationary with respect to the nonnegative shifts and ergodic (separately) with respect to the positive shifts of the indices. The shape functions of models (1) and (2) then satisfy variational formulas in terms of the marginal distributions of (a_n) and (b_n). For certain choices of these marginal distributions, we still get closed-form expressions for the shape function, as in the i.i.d. models.
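Not part of the abstract itself, but a minimal sketch of how such a corner growth model can be simulated, assuming the standard last-passage recursion G(i, j) = W(i, j) + max(G(i-1, j), G(i, j-1)) with G = 0 outside the quadrant; the function names are illustrative:

```python
import random

def last_passage_times(W):
    """Corner growth / last-passage recursion:
    G(i, j) = W(i, j) + max(G(i-1, j), G(i, j-1)),
    with G taken as 0 outside the quadrant."""
    n, m = len(W), len(W[0])
    G = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            G[i][j] = W[i][j] + max(G[i - 1][j] if i > 0 else 0.0,
                                    G[i][j - 1] if j > 0 else 0.0)
    return G

def sample_waiting_times(a, b):
    """Model (1) of the abstract: W(i, j) ~ Exponential(a_i + b_j)."""
    return [[random.expovariate(ai + bj) for bj in b] for ai in a]
```

On deterministic waiting times [[1, 2], [3, 4]] the recursion gives last-passage times [[1, 3], [4, 8]], which is easy to check by hand.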
Thursday, April 16, Scott Hottovy, UW-Madison
Title:
An SDE approximation for stochastic differential delay equations with colored state-dependent noise
Abstract: In this talk I will introduce a stochastic differential delay equation with state-dependent colored noise which arises from a noisy circuit experiment. In the experimental paper, a small delay and correlation time limit was performed by using a Taylor expansion of the delay. However, a time substitution was first performed to obtain a good match with experimental results. I will discuss how this limit can be proved without the use of a Taylor expansion by using a theory of convergence of stochastic processes developed by Kurtz and Protter. To obtain a necessary bound, the theory of sums of weakly dependent random variables is used. This analysis leads to the explanation of why the time substitution was needed in the previous case.
Thursday, April 23, Hoi Nguyen, Ohio State University
Title: On eigenvalue repulsion of random matrices
Abstract:
I will address certain repulsion behavior of roots of random polynomials and of eigenvalues of Wigner matrices, and their applications. Among other things, we show a Wegner-type estimate for the number of eigenvalues inside an extremely small interval for quite general matrix ensembles.
Thursday, April 30, Chris Janjigian, UW-Madison
Title: TBA
Abstract:
Thursday, May 7, Jessica Lin, UW-Madison
Title: TBA
Abstract:
Research | Open Access | Monotone positive solution of a fourth-order BVP with integral boundary conditions. Boundary Value Problems, volume 2015, Article number: 172 (2015)
Abstract
In this paper, we investigate the existence of concave and monotone positive solutions for a nonlinear fourth-order differential equation with integral boundary conditions of the form \(x^{(4)}(t)=f(t,x(t),x'(t),x''(t))\), \(t\in[0,1]\), \(x(0)=x'(1)=x'''(1)=0\), \(x''(0)=\int_{0}^{1}g(s)x''(s) \, \mathrm{d}s\), where \(f\in C([0,1]\times[0,+\infty)^{2}\times(-\infty,0],[0,+\infty))\), \(g\in C([0,1],[0,+\infty))\). By using a fixed point theorem of cone expansion and compression of norm type, the existence and nonexistence of concave and monotone positive solutions for the above boundary value problems are obtained. Meanwhile, as applications of our results, some examples are given.
Introduction
This paper is the follow-up of [1]. In [1], by using a fixed point theorem for the sum of two operators due to O’Regan [2], we obtained existence of solutions for a fully nonlinear fourth-order equation with integral boundary conditions of type
In this paper, we study the existence of concave and monotone positive solutions for its simplified form
subject to the integral boundary conditions
where \(f\in C([0,1]\times[0,+\infty)^{2}\times(-\infty,0],[0,+\infty))\), \(g\in C([0,1],[0,+\infty))\).
It is well known that fourth-order boundary value problems model bending equilibria of elastic beams, and they have been studied extensively. Among the substantial number of works dealing with fourth-order boundary value problems, we mention [1, 3–31]. We notice that if \(g(\cdot)\equiv0\) in (1.2), the model is known as the beam that is simply supported at one endpoint and sliding clamped at the other. This class of problems has been considered by several authors via various methods; we refer the reader to [4, 7, 10, 14, 15, 23, 26].
The aim of this paper is to establish the existence and nonexistence results of concave and monotone positive solutions for the problems (1.1), (1.2). Here, a solution \(x(t)\) of the BVP (1.1), (1.2) is said to be monotone and positive if \(x'(t)\geq0\) on \([0,1]\) and \(x(t)>0\) on \(t\in(0,1]\). Our main tool is the fixed point theorem of cone expansion and compression of norm type [32]. The paper [33] motivated our study.
Preliminary
In this section, we present some lemmas which are needed for our main results.
Throughout this paper, we assume that \(f:[0,1]\times [0,+\infty)^{2}\times(-\infty,0]\rightarrow[0,+\infty)\) and \(g: [0,1]\rightarrow[0,+\infty)\) are continuous, moreover, \(\mu:=\int_{0}^{1}g(s)\, \mathrm{d}s<1\).
Simple computations lead to the following lemma.
Lemma 2.1 For any \(h\in C[0,1]\), the BVP has a unique solution where
Lemma 2.2 Let \(G_{1}(t,s)\) be as in Lemma 2.1. Then
Proof
For \(0\leq t\leq s \leq1\), one has
On the other hand, for \(0\leq s\leq t \leq1\), we have \(\frac{1}{6}s^{2}+\frac{1}{6}t^{2}\leq\frac{1}{3}t\), and then
This completes the proof of the lemma. □
Lemma 2.3 If \(h\in C[0,1]\) with \(h(t)\geq0\) on \([0,1]\), then the unique solution \(x=x(t)\) of the BVP (2.1) satisfies: (1)
\(x(t)\geq0\)
for\(t\in[0,1]\); (2)
\(x'(t)\geq0\), \(x''(t)\leq0\)
for\(t\in[0,1]\), and$$x(t)\geq\frac{1}{2}\biggl(t-\frac{1}{2}t^{2}\biggr)\bigl\Vert x''\bigr\Vert _{\infty}, \quad t\in[0,1]. $$ Proof
(1) From Lemma 2.2 and the fact
it follows that
(2) Note that whenever \((t,s)\in[0,1]\times[0,1]\),
it follows that
On the one hand, by (2.3), we have
This completes the proof of the lemma. □
Let
be endowed with the norm \(\|x\|=\max_{t\in[0,1]}|x''(t)|=:\|x''\| _{\infty}\). Then
E is a Banach space. If we denote
then it is easy to see that
K is a cone in E.
Now, we define an operator
T on K as follows: for \(x\in K\), Lemma 2.4
\(T:K\rightarrow K\)
is completely continuous. Proof
First, we show that
T is continuous. To do this, suppose \(x_{n}, x_{0}\in K\) and \(\|x_{n}-x_{0}\|\rightarrow0\) (\(n\rightarrow\infty\)). Then there exists \(M_{1}>0\) such that \(\|x_{0}\|, \|x_{n}\|\leq M_{1}\) for all \(n\in\mathbb{N}=\{ 1,2,\ldots\}\). Hence from the continuity of f on \([0,1]\times[0,M_{1}]^{2}\times[-M_{1},0]\), we have
uniformly on \([0,1]\). Also, since
we have
i.e.,
Therefore \(T:K\rightarrow K\) is continuous.
Next, we prove that
T is relatively compact. With this aim, let \(D\subset K\) be a bounded set, then there exists a constant \(M_{2}>0\) such that \(\|x\|\leq M_{2}\) for all \(x\in D\). Suppose that \(\{y_{n}\}\subset T(D)\), there exist \(\{x_{n}\} \subset D\) such that \(Tx_{n}=y_{n}\). Let
For all \(n\in\mathbb{N}\), we have
and
Consequently there exists a constant \(M_{4}>0\) such that, for all \(n\in \mathbb{N}\),
By the Arzela-Ascoli theorem, we know that \(\{y_{n}''\}\) has a convergent subsequence in supremum norm,
i.e., \(\{y_{n}\}\) has a convergent subsequence in E, which indicates that \(T(D)\subset K\) is relatively compact in E. This completes the proof of the lemma. □
The following fixed point theorem of cone expansion and compression of norm type plays a crucial role in our paper.
Lemma 2.5
([32])
Let E be a Banach space and let K be a cone in E. Assume that \(\Omega_{1}\) and \(\Omega_{2}\) are bounded open subsets of E such that \(\theta\in\Omega_{1}\subset\overline{\Omega}_{1}\subset\Omega_{2}\), and let \(T:K\cap(\overline{\Omega}_{2}\setminus\Omega_{1})\rightarrow K\) be a completely continuous operator such that either (i)
\(\|Tx\|\leq\|x\|\)
for\(x\in K\cap\partial\Omega_{1}\) and\(\|Tx\|\geq\|x\|\) for\(x\in K\cap\partial\Omega_{2}\), or (ii)
\(\|Tx\|\geq\|x\|\)
for \(x\in K\cap\partial\Omega_{1}\) and \(\|Tx\|\leq\|x\|\) for \(x\in K\cap\partial\Omega_{2}\). Then T has a fixed point in \(K\cap(\overline{\Omega}_{2}\setminus\Omega_{1})\).
Main results
For convenience, firstly we introduce some notations:
Theorem 3.1 Proof
Since \(H_{1}f^{0}<1\), there exists \(\varepsilon_{1}>0\) such that
By the definition of \(f^{0}\) and the continuity of
f, there exists \(\rho _{1}>0\) such that, for \(t\in[0,1]\), \(x_{0}+x_{1}-x_{2}\in[0,\rho_{1}]\),
which implies that
On the other hand, in view of \(H_{2}f_{\infty}>1\), there exists \(\varepsilon_{2}>0\) such that
By the definition of \(f_{\infty}\), there exists \(\rho_{2}>\rho_{1}\) such that, for \(t\in[0,1]\), \(x_{0}+x_{1}-x_{2}\in[\rho_{2},+\infty)\),
which implies that
Therefore, it follows from (3.3), (3.6), and Lemma 2.5 that the operator
T has one fixed point \(x\in K\cap(\overline{\Omega}_{2}\setminus\Omega_{1})\), which is a concave and monotone positive solution of the BVP (1.1), (1.2). This completes the proof of the theorem. □
Corollary 3.1 Suppose that f is superlinear, i.e.,
Theorem 3.2 Proof
Since \(H_{2}f_{0}>1\), there exists \(\varepsilon_{1}>0\) such that
By the definition of \(f_{0}\), there exists \(\rho_{1}>0\) such that, for \(t\in[0,1]\), \(x_{0}+x_{1}-x_{2}\in[0,\rho_{1}]\),
which implies that
On the other hand, in view of \(H_{1}f^{\infty}<1\), there exists \(\varepsilon_{2}>0\) such that
By the definition of \(f^{\infty}\), there exists \(\rho^{*}>3\rho_{1}\) such that, for \(t\in[0,1]\), \(x_{0}+x_{1}-x_{2}\in[\rho^{*},+\infty)\),
Let
Then for \((t,x_{0},x_{1},x_{2})\in[0,1]\times[0,+\infty)^{2}\times(-\infty,0]\) one has
Now, we choose \(\rho_{2}>\frac{1}{3}\max\{\rho^{*},\frac{\beta H_{1}}{1-H_{1}(f^{\infty}+\varepsilon_{2})}\}\) and let
which implies that
Therefore, it follows from (3.9), (3.12), and Lemma 2.5 that the operator
T has one fixed point \(x\in K\cap(\overline{\Omega}_{2}\setminus\Omega_{1})\), which is a concave and monotone positive solution of the BVP (1.1), (1.2). This completes the proof of the theorem. □
Corollary 3.2 Suppose that f is sublinear, i.e.,
Theorem 3.3 Suppose that
Proof
and
Hence
which implies that
Theorem 3.4 Suppose that Proof
which is a contradiction. This completes the proof of the theorem. □
Finally, we give some examples to demonstrate applications of our results.
Example 3.1
Consider the fourth-order boundary value problem
Let
Then \(f\in C([0,1]\times[0,+\infty)^{2}\times(-\infty,0],[0,+\infty))\), \(g\in C([0,1],[0,+\infty))\), and \(\mu=\int_{0}^{1}g(s)\,\mathrm{d}s=\frac{1}{2}<1\). It is easy to compute that
and hence
Example 3.2
Consider the fourth-order boundary value problem
Let
Then \(f\in C([0,1]\times[0,+\infty)^{2}\times(-\infty,0],[0,+\infty))\), \(g\in C([0,1],[0,+\infty))\), and \(\mu=\int_{0}^{1}g(s)\,\mathrm{d}s=\frac{3}{4}<1\). It is easy to compute that
and hence
References
1. Li, H, Wang, L, Pei, M: Solvability of a fourth-order boundary value problem with integral boundary conditions. J. Appl. Math. 2013, Article ID 782363 (2013)
2. O’Regan, D: Fixed-point theory for the sum of two operators. Appl. Math. Lett. 9, 1-8 (1996)
3. Agarwal, RP, Chow, YM: Iterative methods for a fourth order boundary value problem. J. Comput. Appl. Math. 10, 203-217 (1984)
4. Bai, Z: The upper and lower solution method for some fourth-order boundary value problems. Nonlinear Anal. 67, 1704-1709 (2007)
5. Cabada, A, Minhós, FM: Fully nonlinear fourth-order equations with functional boundary conditions. J. Math. Anal. Appl. 340, 239-251 (2008)
6. Del Pino, MA, Manasevich, RF: Existence for a fourth-order boundary value problem under a two parameter nonresonance condition. Proc. Am. Math. Soc. 112, 81-86 (1991)
7. Du, J, Cui, M: Constructive proof of existence for a class of fourth-order nonlinear BVPs. Comput. Math. Appl. 59, 903-911 (2010)
8. Ehme, J, Eloe, PW, Henderson, J: Upper and lower solution methods for fully nonlinear boundary value problems. J. Differ. Equ. 180, 51-64 (2002)
9. Franco, D, O’Regan, D, Perán, J: Fourth-order problems with nonlinear boundary conditions. J. Comput. Appl. Math. 174, 315-327 (2005)
10. Feng, H, Ji, D, Ge, W: Existence and uniqueness of solutions for a fourth-order boundary value problem. Nonlinear Anal. 70, 3561-3566 (2009)
11. Graef, JR, Kong, L: A necessary and sufficient condition for existence of positive solutions of nonlinear boundary value problems. Nonlinear Anal. 66, 2389-2412 (2007)
12. Graef, JR, Qian, CX, Yang, B: A three point boundary value problem for nonlinear fourth order differential equations. J. Math. Anal. Appl. 287, 217-233 (2003)
13. Grossinho, MR, Tersian, SA: The dual variational principle and equilibria for a beam resting on a discontinuous nonlinear elastic foundation. Nonlinear Anal. 41, 417-431 (2000)
14. Gupta, CP: Existence and uniqueness theorems for a bending of an elastic beam equation. Appl. Anal. 26, 289-304 (1988)
15. Jankowski, T: Positive solutions for fourth-order differential equations with deviating arguments and integral boundary conditions. Nonlinear Anal. 73, 1289-1299 (2010)
16. Jiang, DQ, Gao, WJ, Wan, AY: A monotone method for constructing extremal solutions to fourth-order periodic boundary value problems. Appl. Math. Comput. 132, 411-421 (2002)
17. Kang, P, Wei, Z, Xu, J: Positive solutions to fourth-order singular boundary value problems with integral boundary conditions in abstract spaces. Appl. Math. Comput. 206, 245-256 (2008)
18. Korman, P: Computation of displacements for nonlinear elastic beam models using monotone iterations. Int. J. Math. Math. Sci. 11, 121-128 (1988)
19. Ma, TF: Positive solutions for a beam equation on a nonlinear elastic foundation. Math. Comput. Model. 39, 1195-1201 (2004)
20. Ma, R, Xu, J: Bifurcation from interval and positive solutions of a nonlinear fourth-order boundary value problem. Nonlinear Anal. 72, 113-122 (2010)
21. Minhós, F, Gyulov, T, Santos, AI: Lower and upper solutions for a fully nonlinear beam equation. Nonlinear Anal. 71, 281-292 (2009)
22. O’Regan, D: Solvability of some fourth (and higher) order singular boundary value problems. J. Math. Anal. Appl. 161, 78-116 (1991)
23. Pietramala, P: A note on a beam equation with nonlinear boundary conditions. Bound. Value Probl. 2011, Article ID 376782 (2011)
24. Pei, M, Chang, SK: Monotone iterative technique and symmetric positive solutions for a fourth-order boundary value problem. Math. Comput. Model. 51, 1260-1267 (2010)
25. Shanthi, V, Ramanujam, N: A numerical method for boundary value problems for singularly perturbed fourth-order ordinary differential equations. Appl. Math. Comput. 129, 269-294 (2002)
26. Sun, JP, Wang, XQ: Existence and iteration of monotone positive solution of BVP for an elastic beam equation. Math. Probl. Eng. 2011, Article ID 705740 (2011)
27. Webb, JRL, Infante, G: Positive solutions of nonlocal boundary value problems: a unified approach. J. Lond. Math. Soc. (2) 74, 673-693 (2006)
28. Webb, JRL, Infante, G, Franco, D: Positive solutions of nonlinear fourth-order boundary-value problems with local and non-local boundary conditions. Proc. R. Soc. Edinb., Sect. A 138, 427-446 (2008)
29. Wei, Z: A class of fourth order singular boundary value problems. Appl. Math. Comput. 153, 865-884 (2004)
30. Yao, QL: Positive solutions for eigenvalue problems of fourth-order elastic beam equations. Appl. Math. Lett. 17, 237-243 (2004)
31. Zhang, X, Ge, W: Positive solutions for a class of boundary value problems with integral boundary conditions. Comput. Math. Appl. 58, 203-215 (2009)
32. Guo, D, Lakshmikantham, V: Nonlinear Problems in Abstract Cones. Academic Press, Boston (1988)
33. Sun, JP, Li, HB: Monotone positive solution of nonlinear third-order BVP with integral boundary conditions. Bound. Value Probl. 2010, Article ID 874959 (2010)
Acknowledgements
This work was supported by the National Natural Science Foundation of China (11201008).
Additional information Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
THE BROKEN LINK BETWEEN SUPPLY AND DEMAND CREATES CHAOTIC TURBULENCE (+controls)
The existing global capitalistic growth paradigm is totally flawed
Growth in supply and productivity is a summation of variables, as is demand. When the link between them is broken by the catastrophic failure of a component, the creation of unpredictable chaotic turbulence puts the controls into a situation that will never return the system to its initial conditions, as it is a STIC (sensitive to initial conditions) system (Lorenz).
The chaotic turbulence is the result of the concept of infinite bigness. This has been the destructive influence on all empires, and is now shown up by Feigenbaum numbers and Dunbar numbers for neural networks.
See Guy Lakeman's Bubble Theory for more details on keeping systems within finite working containers (villages, communities).
The probability density function (PDF) of the normal distribution, or Bell Curve of the normal or Gaussian distribution, by Guy Lakeman. The parameter mu is the mean or expectation of the distribution (and also its median and mode). The parameter sigma is its standard deviation, and sigma^2 is its variance. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate. (However, those who enjoy upskirts are called deviants and have a variable distribution :) )
If mu = 0 and sigma = 1, this is the standard normal distribution.
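As a quick sketch, the density just described can be computed directly from the standard formula (plain Python; the function name is my own, not from the text):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal (Gaussian) distribution with mean mu
    and standard deviation sigma."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
```

With mu = 0 and sigma = 1 (the standard normal), the peak of the curve sits at x = 0, where the density equals 1/sqrt(2*pi) ≈ 0.3989.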
If higher-education numbers were increased, the group decision-making ability of society would be raised above that of a middle teenager, as it is now. BUT governments can control children by using bad parenting techniques, pandering to the pleasure principle, so they will make higher education more and more difficult, as they are doing.
85% of the population has a qualification level equal to or below a 12th grader (a 17-year-old)... the chance of finding someone with any sense is low (~1 in 6), and the outcome of them being chosen by those who are uneducated in the policies they are to decide is even rarer!
Experience means little if you don't have enough brain to analyse it
Democracy is only as good as the ability of the voters to FULLY understand the implications of the policies on which they vote, both the context and the various perspectives. National voting by unqualified voters on specific policy issues is the sign of corrupt manipulation.
Democracy: where a group allows the decision ability of a teenager to decide on a choice of mis-representatives who are unqualified to make judgements on social policies that affect the lives of millions. The kind of children who would vote for King Kong, who can hold a girl in one hand and swat fighter jets out of the sky off the tallest building, doesn't have a brain cell or thought to call his own, but has a nice smile and offers little girls sweets.
Chaotic Bistable Oscillator BATHTUB MEAN TIME BETWEEN FAILURE (MTBF) RISK
F(t) = 1 - e^(-λt), where:
• F(t) is the probability of failure
• λ is the failure rate in 1/time unit (1/h, for example)
• t is the observed service life (h, for example)
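A minimal sketch of this failure model in code (constant failure rate, i.e. the flat "useful life" region of the bathtub curve discussed below; function names are illustrative):

```python
import math

def failure_probability(t, lam):
    """Cumulative probability of failure by time t for a constant
    failure rate lam: F(t) = 1 - exp(-lam * t)."""
    return 1.0 - math.exp(-lam * t)

def mtbf(lam):
    """Mean time between failures for a constant failure rate lam."""
    return 1.0 / lam
```

For example, with λ = 0.01 failures/hour the MTBF is 100 hours, and the probability of at least one failure within those 100 hours is 1 - e^(-1) ≈ 0.632, not 1 — a common point of confusion about MTBF.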
The inverse curve is the trust time
On the right, the increase in failures brings its inverse, which is a loss of trust and a move into suspicion and lack of confidence.
This can be seen in strategic social applications with those who put economy before providing the priorities of the basic living infrastructures for all.
This applies to policies and strategic decisions as well as physical equipment.
A) Equipment wears out through friction and preventive maintenance can increase the useful lifetime,
B) Policies/working practices/guidelines have to be updated to reflect changes in the external environment and eventually be replaced when for instance a population rises too large (constitutional changes are required to keep pace with evolution, e.g. the concepts of the ancient Greeks, 3000 years ago, who based their thoughts on a small population cannot be applied in 2013 except where populations can be contained into productive working communities with balanced profit and loss centers to ensure sustainability)
Early Life
If we follow the slope from the leftmost start to where it begins to flatten out, this can be considered the first period. The first period is characterized by a decreasing failure rate. It is what occurs during the “early life” of a population of units. The weaker units fail, leaving a population that is more rigorous.
Useful Life
The next period is the flat bottom portion of the graph. It is called the “useful life” period. Failures occur more in a random sequence during this time. It is difficult to predict which failure mode will occur, but the rate of failures is predictable. Notice the constant slope.
Wearout
The third period begins at the point where the slope begins to increase and extends to the rightmost end of the graph. This is what happens when units become old and begin to fail at an increasing rate. It is called the “wearout” period.
FORCED GROWTH INTO TURBULENCE
Forced growth goes into turbulent chaotic destruction. BEWARE: pushing increased growth blows the system! (Governments are trying to push growth on already unstable systems!)
Wind Resistance Model Pendulum Lorenz Attractor OVERSHOOT GROWTH INTO TURBULENCE
Balancing an Inverted Pendulum PCT Model Rocket Model Velocity Uniform Circular Motion (Movimento Circular Uniforme)
A Ferris wheel of radius 14 m rotates about a horizontal axis. A passenger seated in a chair moves with linear speed v = 7 m/s. Determine:
a) the angular velocity of the motion.
b) the XY graph of the chair's motion.
c) how long the passenger takes to complete one full revolution.
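The numerical parts of the problem can be checked with the standard uniform-circular-motion relations ω = v/r and T = 2π/ω (a sketch, not part of the original exercise):

```python
import math

r = 14.0  # wheel radius (m)
v = 7.0   # linear speed of the passenger (m/s)

omega = v / r                    # angular velocity: 0.5 rad/s
period = 2.0 * math.pi / omega   # time for one full revolution (s)
```

This gives ω = 0.5 rad/s and a period of 4π ≈ 12.6 s per revolution.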
Lissajous curve
A Lissajous curve /ˈlɪsəʒuː/, also known as a Lissajous figure or Bowditch curve /ˈbaʊdɪtʃ/, is the graph of the system of parametric equations x = A sin(at + δ), y = B sin(bt).
Spring-Mass Model
Launched at an Angle
An object is projected with an initial velocity u at an angle to the horizontal direction.
We assume that there is no air resistance. Also, since the body first goes up and then comes down after reaching the highest point, we will use the Cartesian convention for the signs of the different physical quantities: the acceleration due to gravity g acts downwards and is therefore taken as negative. With v_0x and v_0y the horizontal and vertical components of the initial velocity, the height and horizontal distance at time t are
h = v_0y*t - g*t^2/2
l = v_0x*t
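A minimal sketch of these two equations in code (the function name is my own; g is taken as 9.8 m/s², and the launch angle is split into components with cos and sin):

```python
import math

g = 9.8  # acceleration due to gravity (m/s^2)

def projectile_position(u, angle_deg, t):
    """Position (l, h) at time t for launch speed u at angle_deg above
    the horizontal, no air resistance: l = v0x*t, h = v0y*t - g*t^2/2."""
    theta = math.radians(angle_deg)
    v0x = u * math.cos(theta)  # horizontal component of initial velocity
    v0y = u * math.sin(theta)  # vertical component of initial velocity
    return v0x * t, v0y * t - 0.5 * g * t * t
```

A quick sanity check: at the time of flight t = 2·v0y/g the height h returns to zero, since the body lands back at launch level.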
Kepler Ellipses (Kepler Ellipsen) Balancing an Inverted Pendulum Projectile Motion with Air Resistance (Schiefer Wurf mit Luftwiderstand) Fourier series Vector Velocity and Acceleration (Velocidade e Aceleração Vetorial)
A point particle travels a circular path of radius R = 20 m in uniformly varied motion with scalar (tangential) acceleration a = 5 m/s². Knowing that at instant t = 0 its scalar speed is zero, determine, at instant t = 2 s, the magnitudes of:
a) the velocity vector;
b) the tangential acceleration;
c) the centripetal acceleration;
d) the acceleration vector.
Source: (RAMALHO, NICOLAU and TOLEDO; Fundamentos da Física, Volume 1, 8th edition, pp. 12–169, 2003).
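The answers to the circular-motion problem above can be checked numerically, assuming the standard relations v = a·t (starting from rest), a_c = v²/R, and |a| = sqrt(a_t² + a_c²) for the total acceleration (this check is mine, not part of the textbook):

```python
import math

R = 20.0    # radius of the circular path (m)
a_t = 5.0   # tangential (scalar) acceleration (m/s^2)
t = 2.0     # instant of interest (s)

v = a_t * t                       # speed at t, starting from rest
a_c = v * v / R                   # centripetal acceleration
a_total = math.hypot(a_t, a_c)    # magnitude of the acceleration vector
```

This gives v = 10 m/s, a_t = 5 m/s², a_c = 5 m/s², and |a| = sqrt(50) ≈ 7.07 m/s².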
I am trying to derive the differential of the product of two processes, but I got stuck. This is what I have until now:
We have the following two stochastic processes: $dX_t= \mu_t dt +\sigma_t dW_t$ and $dY_t = \eta_t dt + \vartheta_t d \bar{W}_t$, with the correlation between the two Brownian motions equal to $\rho$, that is
$E[(W_t - W_s)(\bar{W}_t - \bar{W}_s) \mid \mathcal{F}_s] = \rho(t-s)$ for $s \leq t$.
Then I can get an expression for $d(X_t Y_t)$ with the following trick:
I start with decomposition: $(X_t+Y_t)^2=X_t^2+Y_t^2+ 2X_t Y_t$,
Which leads by differentiation to $d(X_t Y_t)= \frac{1}{2}[d(\{X_t +Y_t\}^2) - d(X_t^2) - d(Y_t^2)]$
Next I applied Ito's lemma to the first two parts separately, as follows: $d(X_t^2)=(2\mu_t X_t+\sigma_t^2)dt+ 2\sigma_t X_t dW_t= 2X_t dX_t + \sigma_t^2 dt$ and $d(Y_t^2)=(2\eta_t Y_t+\vartheta_t^2)dt+ 2\vartheta_t Y_t d\bar{W}_t= 2Y_t dY_t+\vartheta_t^2 dt$
Now I don't know how to apply Ito's lemma to the last part, i.e., $d(\{X_t +Y_t\}^2)$. In particular, I don't know how to account for the correlation between the two Brownian motions. Can someone help me with this last step?
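Not an answer to the question itself, but a numerical sanity check of the ingredient the last step needs: the quadratic covariation of the two correlated Brownian motions satisfies $d[W, \bar W]_t = \rho\, dt$, which is what produces the extra cross term $\rho\,\sigma_t\vartheta_t\,dt$ in the Itô product rule $d(X_tY_t) = X_t\,dY_t + Y_t\,dX_t + \rho\,\sigma_t\vartheta_t\,dt$. The sketch below (my own helper, not from the question) builds correlated increments and checks that the summed products of increments approximate $\rho T$:

```python
import math
import random

def cross_variation(rho, T=1.0, n=100_000, seed=1):
    """Simulate correlated Brownian increments dW, dWbar and sum
    dW * dWbar; the result should approximate [W, Wbar]_T = rho * T."""
    rng = random.Random(seed)
    dt = T / n
    s = math.sqrt(dt)
    total = 0.0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        dw = s * z1
        # Cholesky-style construction of an increment with correlation rho:
        dwb = s * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
        total += dw * dwb
    return total
```

With rho = 0.5 and T = 1 the sum concentrates near 0.5, confirming that the dW·dW̄ terms cannot be dropped when the motions are correlated.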
Presenting data (ESG3H)
Biological drawings and diagrams (ESG3J)
Drawings and diagrams are an essential part of communication in science, and especially Life Sciences. Remember it is not an artwork or sketch! But rather it is a clear representation of what you observe which can be used to interpret what you saw.
Some rules to follow
Drawings and diagrams must:
- Be drawn in a sharp pencil for clear, smooth lines.
- Be large so that all structures can be clearly seen (at least 10 lines of paper).
- Be drawn in the middle of the page.
- Be two dimensional (no shading)!
- Have a heading or caption.
- Specify the section in which the specimen was sliced, i.e. transverse section (T/S), cross section (C/S), or longitudinal section (L/S).
- State the source of the drawing or diagram, i.e. from a biological specimen, a micrograph or a slide.
- Indicate the magnification or scale of the drawing, either in the caption or in the corner of the drawing.

Label lines should be drawn and they must:
- be parallel to the top of the page and drawn with a ruler.
- not cross each other or have an arrow at the end.
- clearly indicate the structure which is being named.
- be aligned neatly, one below the other and preferably on one side of the page, unless there are many labels, in which case both sides can be used.

Identifying the key aspects of producing biological drawings
Instructions
Make a list of what makes the above drawings good and bad.
The top drawing is good because:
- Cells are drawn neatly, with continuous lines, not sketched.
- The nuclei are circular and not coloured in or shaded.
- Label lines end on the part indicated.
- Label lines are parallel to each other and all end in one vertical line.

The bottom drawing is bad because:
- Cell walls are roughly sketched.
- The nuclei are shaded or coloured in.
- The label line for 'cell wall' doesn’t touch any part of the drawing.
- Label lines cross each other.
- Lines do not end in one vertical line.

Two-dimensional (2-D) and three-dimensional (3-D) diagrams (ESG3K)
Diagrams of apparatus are generally drawn in two-dimensions so that the shape of each item of apparatus is simplified and looks similar to a section through the apparatus.
Tables (ESG3M)
What is a table?
- A table is a summary of data, using as few words as possible. It is a grid divided up into rows and columns.
- The heading is placed above the table. The heading should include both variables under investigation - the dependent and independent variables.
- The independent variable is placed in the first column.
- The column headings should mention the units that were used, e.g. grams, hours, km/hr, cm.

When to use a table?
- To summarise information.
- To compare related things or aspects.
- To record the results of an experiment.
- To illustrate patterns and trends.
- To record the data which will be used to construct a graph.

Types of Graphs (ESG3N)
One of the clearest and most concise ways to represent data is via graphs. Graphs can immediately provide a graphical display of trends and patterns that words and numbers in a table don't necessarily convey.
Line Graphs (ESG3P)

Line graphs are used when:

- The relationship between the dependent and independent variables is continuous.
- Both dependent and independent variables are measured in numbers.

Features of line graphs:

- An appropriate scale is used for each axis so that the plotted points use most of the axis/space (work out the range of the data and the highest and lowest points). The scale must remain the SAME along the entire axis and use easy intervals such as 10's, 20's, 50's, and not intervals such as 7's, 14's, etc., which make it difficult to read information off the graph.
- Each axis must be labelled with what is shown on the axis and must include the appropriate units in brackets, e.g. Temperature (°C), Time (days), Height (cm).
- Each point has an \(x\) and \(y\) co-ordinate and is plotted with a symbol which is big enough to see, e.g. a cross or circle.
- The points are then joined: with a ruler if the points lie in a straight line (see Figure 3), or you can draw a line of best fit where the points are distributed fairly evenly on each side of the line; freehand when the points appear to follow a curve (see Figure 4).
- DO NOT start the line at the origin unless there is a data point for 0. If there is no reading for 0, then start the line at the first plotted point.
- The graph must have a clear, descriptive title which outlines the relationship between the dependent and independent variable.
- If there is more than one set of data drawn on a graph, a different symbol must be used for each set and a key or legend must define the symbols.
Table headings are always written ABOVE the table. Graph headings are always written BELOW the graph.
Bar Graphs (ESG3Q)

Bar graphs are used when:

- The independent variable is discontinuous (i.e. the variables on the x-axis are each associated with something different).
- The independent variables are not numerical. For example, when examining the protein content of various food types, the order of the food types along the horizontal axis is irrelevant.

Bar graphs have the following features:

- The data are plotted as columns or bars that do not touch each other, as each deals with a different characteristic.
- The bars must be the same width and be the same distance apart from each other.
- A bar graph can be displayed vertically or horizontally.
- A bar graph must have a clear, descriptive title, which is written beneath the graph.

Histograms (ESG3R)

Histograms are used when:
the independent variable (\(x\)-axis) represents information which is continuous, such as numerical ranges, i.e. 0-9, 10-19, 20-29, etc.
Histograms have the following features:

- Unlike a bar graph, in a histogram the data are plotted as columns or bars that touch each other, as they are related to each other in some way.
- The numerical categories must not overlap (e.g. 0-10, 10-20, 20-30 is ambiguous at the boundaries). The ranges must be exclusive so that there is no doubt as to where to put a reading, for example 0-9, 10-19, 20-29, etc.
- The bars can be drawn vertically or horizontally.
- A histogram must have a descriptive heading which is written below the graph, and the axes must be labelled.

Pie charts (ESG3S)

Pie charts are used when:

- You want to give a visual representation of percentages as a relative proportion of the total of a circle.

Pie charts have the following features:

- They are a type of graph even though they do not have any axes.
- A pie chart is a circle divided into sectors (think of them as the slices of a cake).
- \(\text{100}\%\) represents the whole complete circle, \(\text{50}\%\) represents a half circle, \(\text{25}\%\) is a quarter circle, and so on.

Example: Count the number of each species and record it in a table. Work out the total number of species in the ecosystem. Calculate the percentage of each species. Use the following formula to work out the angle of each slice: \[a = \frac{v \times 360^{\circ}}{t}\]
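As a quick check of this formula, here is a short script (a sketch; the species counts below are the ones used in the worked example that follows):

```python
# Species counts -> percentages -> pie-slice angles, using a = v * 360 / t.
counts = {"Insects": 17, "Plants": 16, "Birds": 9, "Amphibians": 8}
total = sum(counts.values())   # t = 50 in the worked example

for species, v in counts.items():
    pct = v / total * 100      # percentage of the whole circle
    angle = v * 360 / total    # slice angle a = v * 360 / t
    print(f"{species}: {pct:.0f}% -> {angle:.1f} degrees")
```

The four angles sum to 360°, which is a useful sanity check before drawing the chart.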
Insects: \(\text{17}\); \(\dfrac{17 \times 100}{50} = \text{34}\%\); \(\dfrac{34 \times 360}{100} = \text{122,4}\text{°}\)
Plants: \(\text{16}\); \(\dfrac{16 \times 100}{50} = \text{32}\%\); \(\dfrac{32 \times 360}{100} = \text{115,2}\text{°}\)
Birds: \(\text{9}\); \(\dfrac{9 \times 100}{50} = \text{18}\%\); \(\dfrac{18 \times 360}{100} = \text{64,8}\text{°}\)
Amphibians: \(\text{8}\); \(\dfrac{8 \times 100}{50} = \text{16}\%\); \(\dfrac{16 \times 360}{100} = \text{57,6}\text{°}\)

Use a compass to draw the circle and a protractor to measure accurate angles for each slice. Start with the largest angle/percentage at 12 o'clock and measure in a clockwise direction. Shade each slice, write the percentage on the slice and provide a key.

Converting tables to graphs

Aim
It is very important to be able to convert tables to graphs, and vice versa. Below are some exercises to practise this.
Questions
1. Convert the data in the graphs below into tables. Remember to identify which is the independent variable in the graphs and to place this in the first column of the table.
2. Convert the data in the following tables into graphs. Look back at the features of each type of graph to decide which one you will use.
Favourite take away restaurant in a class of learners
Takeaway restaurant: Learners (\(\%\))
Kauai: 40
Anat Falafel: 15
Nandos: 25
Burger King: 20

(1) Table to show the average height of boys and girls at different ages:
Age: Height (cm) Boys, Height (cm) Girls
10: 140, 140
11: 145, 148
12: 149, 151
13: 153, 158
14: 163, 160
15: 168, 161
16: 172, 163
17: 174, 163
18: 174, 163

(2) Table to show the proportion of each blood group in a small population:
Blood group: Proportion of learners (\(\%\))
AB: 5
A: 40
B: 10
O: 45
|
New periodic solutions with a prescribed energy for a class of Hamiltonian systems. Boundary Value Problems, volume 2017, Article number: 30 (2017)
Abstract
We consider a class of second order Hamiltonian systems with a \(C^{2}\) potential function. The existence of new periodic solutions with a prescribed energy is established by the use of constrained variational methods.
Introduction
In this paper, we examine the existence of periodic solutions for second order Hamiltonian systems
with a fixed energy. The first major result in this direction that we would like to highlight can be derived from the work of Benci [1], Gluck-Ziller [2] and Hayashi [3], which builds on the earlier work of Seifert [4] in 1948 and follows the highly influential papers of Rabinowitz [5, 6] from 1978 and 1979. Utilizing the Jacobi metric and a very involved interplay between geodesic methods and algebraic topology, the following general theorem is established.
Theorem 1.1 Suppose \(V\in C^{1}(\mathbb{R}^{n},\mathbb{R})\). If the potential well
For the weakly attractive potential
V defined on an open subset Ω of \(\mathbb{R}^{n}\), Ambrosetti and Coti Zelati [9] (Theorem 16.7) proved the following. Theorem 1.2 Suppose \(V\in C^{2}(\Omega,\mathbb{R})\) satisfies \((V10)\) :
\(3\langle V^{\prime}(x),x\rangle+\langle V''(x)x,x\rangle\neq0\), \(\forall x\in\Omega\);
\((V11)\) :
\(\langle V^{\prime}(x),x\rangle>0\), \(\forall x\in \Omega\);
\((V12)\) :
\(\exists\alpha\in(0,2)\),
such that \(\langle V^{\prime}(x), x\rangle\geq-\alpha V(x)\), \(\forall x\in\Omega\); \((V13)\) :
\(\exists\beta\in(0,2)\)
and \(r>0\) such that \(\langle V^{\prime}(x), x\rangle\leq-\beta V(x)\), \(\forall\, 0<\vert x\vert <r\); \((V14)\) :
\(G_{\infty}\geq0\);
where \(G_{\infty}=\liminf_{\vert x\vert \rightarrow\infty} G(x)\), \(G(x)=V(x)+\frac{1}{2}\langle V^{\prime}(x),x\rangle\). Then \(\forall h<0\), the system (1.1)-(1.2) (referred to as \((P_{h})\)) has at least one non-constant weak periodic solution with the given energy h.
Using a simpler constrained variational minimizing method, we obtain the following result.
Theorem 1.3 Suppose \(V\in C^{2}(\mathbb{R}^{n},\mathbb{R})\) and \(h \in\mathbb{R}\) satisfy \((V_{1})\) :
\(V(-q)=V(q)\);
\((V_{2})\) :
\(\langle V^{\prime}(q),q\rangle>0\), \(\forall q\neq0\);
\((V_{3})\) :
\(3\langle V^{\prime}(q),q\rangle+\langle V''(q)q,q\rangle>0\), \(\forall q\neq0\);
\((V_{4})\) :
\(\exists\mu_{1}>0\), \(\mu_{2}\geq0\),
such that \(\langle V^{\prime}(q), q\rangle\geq\mu_{1} V(q)-\mu_{2}\); \((V_{5})\) :
\(\lim_{\vert q\vert \rightarrow\infty }\sup[V(q)+\frac{1}{2}\langle V^{\prime}(q),q\rangle]\leq A\);
\((V_{6})\) :
\(\frac{\mu_{2}}{\mu_{1}}< h< A\).
Then the system (1.1)-(1.2) has at least one non-constant periodic solution with the given energy h.

Remark 1.4
Comparing Theorem 16.7 of Ambrosetti and Coti Zelati [9] with our Theorem 1.3, we notice that our condition \((V_{2})\) corresponds to their \((V11)\), our condition \((V_{3})\) corresponds to their \((V10)\), our condition \((V_{4})\) corresponds to their \((V12)\) and \((V13)\), our conditions \((V_{5})\) and \((V_{6})\) correspond to their \((V14)\). Since the potential in Theorem 16.7 of Ambrosetti and Coti Zelati has a singularity, but the potential in Theorem 1.3 has no singularity, the two theorems are essentially different.
Remark 1.5
Take for \(V(x)\) the following \(C^{\infty}\) function:
Then \(V(x)\) satisfies (\(V_{1}\))-(\(V_{5}\)) in Theorem 1.3 if we take \(\mu_{1}=\mu_{2}>0\) and \(A=1\), but \((V_{6})\) does not hold.
Proof of Theorem 1.3
We verify (\(V_{1}\))-(\(V_{5}\)) by calculation:
(1) It is obvious for \((V_{1})\).
(2) For \((V_{2})\) and \((V_{3})\), we notice that
(3) For \((V_{4})\), we set
We will prove \(w(x)>-\mu_{1}\); in fact,
From \(w^{\prime}(x)=0\), we have \(x=-\frac{1}{1+\mu_{1}}\) or 0 or \(\frac{1}{1+\mu_{1}}\).
It is easy to see that \(w(x)\) is strictly increasing on \((-\infty ,-\frac{1}{1+\mu_{1}}]\) and \([0,\frac{1}{1+\mu_{1}}]\), but strictly decreasing on \([\frac {-1}{1+\mu_{1}},0]\) and \([\frac{1}{1+\mu_{1}},+\infty)\). We notice that
and
So
When we take \(\mu_{2}=\mu_{1}>0\), \((V_{4})\) holds.
(4) For \((V_{5})\), we have
□
Corollary 1.6 Given \(a>0\), \(n\in\mathbb{N}\), define \(V(x)=a\vert x\vert ^{2n}+e^{\frac{-1}{\vert x\vert }}\), \(x\neq0\); \(V(0)=0\). Then, for \(h>1\), the system (1.1)-(1.2) has at least one non-constant periodic solution with the given energy h.

Remark 1.7
The potential \(V(x)=e^{\frac{-1}{\vert x\vert }}\), \(\forall x\neq0\); \(V(0)=0\) in Remark 1.5 is noteworthy, since the potential function is non-convex and bounded and satisfies none of the conditions of Theorem 1.1, Offin's geometrical conditions [10], or Berg-Pasquotto-Vandervorst's complex topological assumptions [11]. For this potential, the potential well \(\{x\in\mathbb{R}^{n}:V(x)\leq h\}\) is a bounded set if \(h<1\), but for \(h\geq1\) it is \(\mathbb{R}^{n}\), an unbounded set. We also notice that the symmetry condition on the potential simplified our Theorem 1.3 and its proof. It would be interesting to obtain non-constant periodic solutions when the symmetry condition is dropped.
A few lemmas
Let
denote the space of periodic functions of period 1. Then the standard \(H^{1}\) norm is
Lemma 2.1
[12]
For \(u\in H^{1}\), define

For \(u,v\in H^{1}\) and \(s \in\mathbb{R}\), let

Then

Therefore, if \((V_{3})\) holds, then on M we have \(g'(u)\neq0\), which implies that M is a \(C^{1}\) manifold of codimension 1 in \(H^{1}\).
Let
and \(\widetilde{u}\in M\) such that \(f^{\prime}(\widetilde{u})=0\) and \(f(\widetilde{u})>0\). Set
Lemma 2.2
[12]
Let

and suppose (\(V_{1}\))-(\(V_{3}\)) hold. If \(\widetilde{u}\in F\) is such that \(f^{\prime}(\widetilde{u})=0\) and \(f(\widetilde{u})>0\), then \(\widetilde{q}(t)=\widetilde{u}(\frac{t}{T})\) is a non-constant T-periodic solution of (1.1)-(1.2); in addition, we have

Wirtinger's inequality [14] implies

from which it follows that \((\int^{1}_{0}\vert \dot{u}\vert ^{2}\,dt )^{1/2}\) is an equivalent norm for the space \(H^{1}\).

Lemma 2.3 Let X be a Banach space and \(F\subset X\) a weakly closed subset. Suppose Φ defined on F is Gateaux-differentiable, weakly lower semi-continuous and bounded from below on F. Suppose further that Φ satisfies the following \((\mathit{WPS})_{\inf\Phi,F}\) condition: if \(\{x_{n}\}\subset F\) is such that \(\Phi(x_{n}) \rightarrow c\) and \(\Vert \Phi'(x_{n})\Vert \rightarrow0\), then \(\{x_{n}\}\) has a weakly convergent subsequence. Then Φ attains its infimum on F.

Proof
Since Φ satisfies the \((\mathit{WPS})_{\inf\Phi,F}\) condition, \(\{x_{n}\}\) has a weakly convergent subsequence with weak limit x. Because \(F\subset X\) is a weakly closed subset, we have \(x\in F\). Finally, by the weak lower semi-continuity assumption on Φ, we conclude that Φ attains its infimum on F. □

The proof of Theorem 1.3

Lemma 3.1 If (\(V_{1}\))-(\(V_{6}\)) hold, then, for any given \(c>0\), f satisfies the \((\mathit{PS})_{c,F}\) condition; that is, if \(\{u_{n}\}\subset F\) satisfies

then \(\{u_{n}\}\) has a strongly convergent subsequence.

Proof
We first prove that under our assumptions the constrained set \(F\neq\emptyset\). For any given \(u\in H^{1}\) satisfying \(u(t)\neq0\), \(\forall t\in[0,1]\) and for \(a>0\), let
By the assumption \((V_{3})\), we have
and so \(g_{u}\) is strictly increasing. Since \(V\in C^{2}\), we know that, for any given \(a>0\),
is uniformly continuous on \([0,1]\).
Hence by \((V_{5})\), we have
By \((V_{4})\), we notice that
Since \(\frac{\mu_{2}}{\mu_{1}}< h< A\), we see that the equation \(g_{u}(a)=h\) has a unique solution \(a(u)\) with \({a(u)u\in M}\).
By \(f(u_{n})\rightarrow c\), we have
and by \((V_{4})\) we see that
Condition \((V_{6})\) provides \(h>\frac{\mu_{2}}{\mu_{1}}\). Then (3.6) and (3.8) imply \(\int^{1}_{0}\vert \dot{u_{n}}(t)\vert ^{2}\,dt\) is bounded and \(\Vert u_{n}\Vert =\Vert \dot{u}_{n}\Vert _{L^{2}}\) is bounded.
We know that \(H^{1}\) is a reflexive Banach space, so \(\{u_{n}\}\) has a weakly convergent subsequence; furthermore, by the embedding theorem the weakly convergent subsequence also converges uniformly to some \(u\in H^{1}\). A standard argument then shows that \(\{u_{n}\}\) has a subsequence which converges in the \(H^{1}\) norm; we omit the details. □
Lemma 3.2
\(f(u)\)
is weakly lower semi-continuous on F.

Proof
For any \(\{u_{n}\}\subset F\) with \(u_{n}\rightharpoonup u\), by Sobolev's embedding theorem we have the uniform convergence
Since \(V\in C^{1}(\mathbb{R}^{n},\mathbb{R})\), we have
By the weakly lower semi-continuity of the norm, we see that
and so
Then
□
Lemma 3.3 F is a weakly closed subset in \(H^{1}\). Proof
This follows easily from Sobolev’s embedding theorem and \(V\in C^{1}(\mathbb{R}^{n},\mathbb{R})\). □
Lemma 3.4 The functional \(f(u)\) has a positive lower bound on F. Proof
By the definitions of \(f(u)\),
F, and the assumption \((V_{2})\), we have
We claim further that
otherwise, \((V_{2})\) implies \(u(t)=\mathit{const}\), and by the symmetrical property \(u(t+1/2)=-u(t)\) we have \(u(t)=0\), \(\forall t\in\mathbb{R}\). But assumptions \((V_{4})\) and \((V_{6})\) imply
which contradicts the definition of
F since \(V(0)=h \) if we have \(0\in F\). Now by Lemmas 3.1-3.4 and Lemma 2.3, we see that \(f(u)\) attains the infimum on F and we know that the minimizer is non-constant. □ References 1.
Benci, V: Closed geodesics for the Jacobi metric and periodic solutions of prescribed energy of natural Hamiltonian systems. Ann. Inst. Henri Poincaré, Anal. Non Linéaire
1, 401-412 (1984) 2.
Gluck, H, Ziller, W: Existence of periodic motions of conservative systems. In: Bombieri, E (ed.) Seminar on Minimal Submanifolds. Princeton University Press, Princeton (1983)
3.
Hayashi, K: Periodic solutions of classical Hamiltonian systems. Tokyo J. Math.
6, 473-486 (1983) 4.
Seifert, H: Periodische Bewegungen mechanischer Systeme. Math. Z.
51, 197-216 (1948) 5.
Rabinowitz, PH: Periodic solutions of Hamiltonian systems. Commun. Pure Appl. Math.
31, 157-184 (1978) 6.
Rabinowitz, PH: Periodic solutions of a Hamiltonian systems on a prescribed energy surface. J. Differ. Equ.
33, 336-352 (1979) 7.
Van Groesen, EWC: Analytical mini-max methods for Hamiltonian break orbits with a prescribed energy. J. Math. Anal. Appl.
132, 1-12 (1988) 8.
Long, Y: Index Theory for Symplectic Paths with Applications. Birkhäuser, Basel (2002)
9.
Ambrosetti, A, Coti Zelati, V: Periodic Solutions of Singular Lagrangian Systems. Birkhäuser, Basel (1993)
10.
Offin, D: A class of periodic orbits in classical mechanics. J. Differ. Equ.
66, 90-117 (1987) 11.
Berg, J, Pasquotto, F, Vandervorst, R: Closed characteristics on non-compact hypersurfaces in \(\mathbb{R}^{2n}\). Math. Ann.
343, 247-284 (2009) 12.
Ambrosetti, A, Coti Zelati, V: Closed orbits of fixed energy for singular Hamiltonian systems. Arch. Ration. Mech. Anal.
112, 339-362 (1990) 13.
Palais, R: The principle of symmetric criticality. Commun. Math. Phys.
69, 19-30 (1979) 14.
Mawhin, J, Willem, M: Critical Point Theory and Applications. Springer, Berlin (1989)
15.
Ekeland, I: On the variational principle. J. Math. Anal. Appl.
47, 324-353 (1974) 16.
Ekeland, I: Nonconvex minimization problems. Bull. Am. Math. Soc. (N.S.)
1(3), 443-474 (1979) Acknowledgements
The authors sincerely thank the editor and the referees for their many valuable comments and suggestions. Shiqing Zhang and Fengying Li were partially supported by NSFC (11671278). Ying Lv was partially supported by NSFC (11601438).
Additional information Competing interests
The authors declare that no competing interests exist.
Authors’ contributions
The authors contributed equally to this paper. All authors read and approved the final manuscript. |
First, let's speak about perceptrons in general:
their
input $X_0$ is a $K$-dimensional vector. So if you want to use $(P_{bid}(t),P_{ask}(t), Q_{bid}(t),Q_{ask}(t))$, it would mean that, without any effort (but later we will see that it would be better to make some effort, as usual):$$X_0(t)=(P_{bid}(t),P_{ask}(t), Q_{bid}(t),Q_{ask}(t))'\in\mathbb{R}^4$$
then the
hidden layer is made of $N$ hidden neurons $(Z_n)_{1\leq n\leq N}$; each of them is associated with weights $(w^h_{k,n})_{1\leq k\leq K}$ and a bias $b^h_n$: the activation of one hidden unit $Z_n$ is$$Z_n(t)=\Phi\left( \sum_k w^h_{k,n}\cdot X_k(t) + b^h_n\right)$$ where $\Phi(\cdot)$ is the activation function of the perceptron; if you want to do something fancy, you can use different activation functions, but the regular one is a sigmoid (e.g. $\tanh$). Note that $Z=(Z_n)_{1\leq n\leq N}$ is in $\mathbb{R}^N$.
It means that each hidden unit $Z_n(t)$ will receive a combination of
all the inputs. In your example:$$Z_n(t)=\Phi\left( w^h_{1,n} \,P_{bid}(t) + w^h_{2,n} \,P_{ask}(t)+w^h_{3,n} \, Q_{bid}(t)+ w^h_{4,n} \,Q_{ask}(t) + b^h_n\right)$$ Last but not least, the output layer $Y$ is made of what you want to predict; let's say that you target $(\rho_{ask}(t+1),\rho_{bid}(t+1))$ (i.e. the pct of change in volume), then you take:$$Y(t)=(\rho_{ask}(t+1),\rho_{bid}(t+1))'\in\mathbb{R}^2$$You also need weights and biases for the output, so that ($U$ is the number of outputs, $1\leq u\leq U$):$$Y_u(t)=\Phi\left( \sum_{n=1}^N w^o_{n,u}\cdot Z_n(t) + b^o_u\right)$$Sometimes people do not take any activation function for the output, but I would not recommend that.
All this put together, you can express the outputs as a function of the inputs with:$$Y_u(t)=\Phi\left( \sum_{n=1}^N w^o_{n,u}\cdot \Phi\left( \sum_k w^h_{k,n}\cdot X_k(t) + b^h_n\right) + b^o_u\right)$$
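As a concrete toy illustration of these formulas, here is a minimal NumPy sketch of the forward pass with $K=4$ inputs, $N=8$ hidden units and $U=2$ outputs (the weights are random placeholders of mine; in practice they come from training):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, U = 4, 8, 2               # inputs, hidden units, outputs

W_h = rng.normal(size=(N, K))   # hidden weights w^h_{k,n}
b_h = np.zeros(N)               # hidden biases  b^h_n
W_o = rng.normal(size=(U, N))   # output weights w^o_{n,u}
b_o = np.zeros(U)               # output biases  b^o_u

def forward(x):
    """One-hidden-layer perceptron with tanh activations."""
    z = np.tanh(W_h @ x + b_h)      # hidden layer Z(t)
    return np.tanh(W_o @ z + b_o)   # output layer Y(t)

# A made-up quote snapshot (P_bid, P_ask, Q_bid, Q_ask):
x = np.array([101.2, 101.4, 500.0, 450.0])
print(forward(x))   # two numbers in (-1, 1)
```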
How does it work?
The perceptron has to be
trained on a database of a sample of $T$ associations of inputs and outputs $(X(t),Y(t))_{1\leq t\leq T}$. Training consists of finding the weights and biases minimizing the $L2$ distance between the expected outputs and the obtained ones:$$\left\vert\begin{array}{ll}\mbox{Minimize}& \mathbb{E}_t \left\| Y(t) - \widehat Y(t)\right\|^2, \quad \widehat Y_u(t)=\Phi\left( \sum_{n=1}^N w^o_{n,u}\cdot \Phi\left( \sum_k w^h_{k,n}\cdot X_k(t) + b^h_n\right) + b^o_u\right)\\\mbox{Variables}& (w^h_{k,n},b^h_n,w^o_{n,u},b^o_u)_{1\leq u\leq U, 1\leq n\leq N,1\leq k\leq K}\end{array}\right.$$
What people liked in perceptrons is that, as long as you use standard quadratic minimization methods, the training is quite fast and the associated computations are easy to do.
After the training, you obtain an estimate for $Y$ that is
optimal in the sense of your minimization program (i.e. in the $L2$-distance, statistical sense).
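To make the training step concrete, here is a toy sketch of the minimization above by batch gradient descent on synthetic data (all sizes, the learning rate and the data are arbitrary choices of mine, not from the answer; constant factors are absorbed into the learning rate):

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, U, T = 4, 8, 2, 256
X = rng.normal(size=(T, K))
Y = np.tanh(X @ rng.normal(size=(K, U)))   # synthetic, learnable targets

W_h = rng.normal(scale=0.5, size=(K, N)); b_h = np.zeros(N)
W_o = rng.normal(scale=0.5, size=(N, U)); b_o = np.zeros(U)
lr = 0.05

losses = []
for _ in range(500):
    Z = np.tanh(X @ W_h + b_h)        # hidden activations
    Y_hat = np.tanh(Z @ W_o + b_o)    # predictions
    E = Y_hat - Y                     # residuals
    losses.append(np.mean(E**2))      # the L2 objective
    # Backpropagate the squared-error gradient through both tanh layers.
    dY = E * (1 - Y_hat**2)
    dZ = (dY @ W_o.T) * (1 - Z**2)
    W_o -= lr * Z.T @ dY / T; b_o -= lr * dY.mean(axis=0)
    W_h -= lr * X.T @ dZ / T; b_h -= lr * dZ.mean(axis=0)

print(losses[0], "->", losses[-1])    # the loss should decrease
```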
How to use perceptrons?
I like your question because I think what you are really asking is "OK, but can it be that simple? Just throwing my inputs in $X$, asking for my outputs in $Y$, and it will all work in a few seconds?" Of course the answer is no.
First, note that you need to normalize your inputs a little (center and reduce them, for instance, i.e. $X_k(t)\rightarrow (X_k(t) - \mathbb{E}_t(X_k))/\sqrt{\mathbb{V}_t(X_k)}$); more than that, you can preprocess the inputs using your understanding of the modelling problem. For instance, in your case you could use:$$X(t)=\left(\frac{P_{ask}(t)-P_{bid}(t)}{P_{ask}(t)+P_{bid}(t)}, \frac{Q_{bid}(t)}{Q_{bid}(t)+Q_{ask}(t)}\right)'$$Your input will then be somewhat homogeneous with your outputs (I am not guaranteeing any good result; it is just an improvement on your toy example).
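Both preprocessing ideas can be sketched in a few lines (the quote series below are illustrative numbers only):

```python
import numpy as np

# Made-up quote series (P_bid, P_ask, Q_bid, Q_ask over four snapshots).
P_bid = np.array([101.0, 101.1, 101.0, 101.2])
P_ask = np.array([101.2, 101.3, 101.3, 101.4])
Q_bid = np.array([500.0, 420.0, 610.0, 380.0])
Q_ask = np.array([450.0, 515.0, 400.0, 560.0])

# Option 1: center and reduce each raw series.
raw = np.stack([P_bid, P_ask, Q_bid, Q_ask])
X_norm = (raw - raw.mean(axis=1, keepdims=True)) / raw.std(axis=1, keepdims=True)

# Option 2: domain-informed features (relative spread, queue imbalance).
spread = (P_ask - P_bid) / (P_ask + P_bid)
imbalance = Q_bid / (Q_bid + Q_ask)
X_feat = np.stack([spread, imbalance])

print(X_norm.shape, X_feat.shape)   # (4, 4) (2, 4)
```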
You also need to pay attention to the outputs you ask the perceptron to predict. Ask yourself the question "why not use $U$ perceptrons if I have $U$ outputs?". The only good reason is that you are convinced that sharing the same hidden units to predict the $U$ outputs simultaneously will stabilize the weights: the outputs should be deeply linked to the same underlying phenomenon.
What about the
time in all this? I wrote as if the association $(X,Y)$ were i.i.d. with respect to time. If that is not the case, you can try a more sliding approach like TDNNs (Time Delay Neural Networks). They are to the perceptron what GARCH models are to linear regressions (more or less)...
Of course you need to take care of
over-fitting and all the other VC-dimension-like topics of statistical learning.
Last remark: you seem to be trying to use perceptrons for intraday prediction. In the context of high frequency trading, never forget that you will interact with the order book dynamics, so you will be in a control-oriented framework rather than a prediction one. What I mean is that by sending orders into the LOB, you will change it, but you will continue to use the changed state of the order book for the next step of your prediction... This is called market impact, and it implies that you are trying to control the LOB.
Some special techniques have been developed to use perceptrons in the scope of control, like in
How piecewise affine neural networks can generate a stable nonlinear control, by Lehalle and Azencott, in Proceedings of the 1999 IEEE International Symposium on Intelligent Control/Intelligent Systems and Semiotics, 1999. |
Apparently, we can solve an MDP (that is, we can find the optimal policy for a given MDP) using a linear programming formulation. What's the basic idea behind this approach? I think you should start by explaining the basic idea behind a linear programming formulation and which algorithms can be used to solve such constrained optimisation problems.
This question seems to be addressed directly in slides 29-31 of the deck you linked to in the comment under your question.
The basic idea is:
Assume you have a complete model of the MDP (transitions, rewards, etc.).
For any given state, we have the assumption that the state's true value is reflected by:
$$V(s) = r + \gamma \max_{a \in A}\sum_{s' \in S} P(s' \mid s,a)\,V(s')$$
That is, the true value of the state is the reward we accrue for being in it, plus the expected future rewards of acting optimally from now until infinitely far into the future, discounted by the factor $\gamma$, which captures the idea that reward in the future is less good than reward now.
In Linear Programming, we find the minimum or maximum value of some linear function, subject to a set of linear constraints. We can do this efficiently if the variables can take on continuous values, but the problem becomes NP-hard if the variables are restricted to discrete values; that case is usually handled with something like the branch & bound algorithm. Solvers are widely available in fast implementations: GLPK is a decent free library, and IBM's CPLEX is faster, but expensive.
We can represent the problem of finding the value of a given state as: $$\text{minimize } V(s)$$ subject to the constraints: $$\forall a\in A,\quad V(s) \geq r + \gamma\sum_{s' \in S} P(s' \mid s,a)\,V(s')$$ It should be apparent that if we find the smallest value of $V(s)$ that satisfies these constraints, then that value will make at least one of the constraints tight.
If you formulate your linear program by writing a program like the one above for every state, and then minimize $\sum_{s\in S} V(s)$ subject to the union of all the constraints from all these sub-problems, you have reduced the problem of learning a value function to solving the LP.
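Here is a small sketch of that construction with `scipy.optimize.linprog`, on a made-up 2-state, 2-action MDP (the transition matrix, rewards and discount are arbitrary illustrative numbers, not from the question):

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
r = np.array([1.0, 0.0])            # state reward r(s)
# P[a, s, s'] = probability of moving from s to s' under action a.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
nS, nA = 2, 2

c = np.ones(nS)                     # objective: minimize sum_s V(s)

# One constraint per (s, a):  V(s) >= r(s) + gamma * sum_s' P(s'|s,a) V(s'),
# rewritten as (gamma * P[a, s, :] - e_s) @ V <= -r(s) for A_ub @ V <= b_ub.
A_ub, b_ub = [], []
for s in range(nS):
    for a in range(nA):
        row = gamma * P[a, s, :].copy()
        row[s] -= 1.0
        A_ub.append(row)
        b_ub.append(-r[s])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * nS, method="highs")
V = res.x
print(V)   # optimal state values V*(s)
```

The LP solution can be cross-checked against value iteration on the same model; the two should agree.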
Hope that helps some. |
$\int_0^\infty \frac{m^{x+1}e^{-2m}}{\Gamma(x+1)\Gamma(2)}dm =\frac{\Gamma(x+2)\left(\frac{1}{2}\right)^{x+2}}{\Gamma(x+1)\Gamma(2)}$
How does the left side equal the right side? I understand that the gamma function is $\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} dt$
and that $\Gamma(x+2) = \int_0^\infty m^{x+2-1} e^{-m} dm$
However, I am missing something to understand where the $\left(\frac{1}{2}\right)^{x+2}$ comes from. |
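A worked substitution makes the factor explicit: with $u = 2m$ (so $m = u/2$ and $dm = du/2$), the integral reduces to the Gamma integral:

```latex
\begin{align*}
\int_0^\infty m^{x+1} e^{-2m}\,dm
  &= \int_0^\infty \left(\frac{u}{2}\right)^{x+1} e^{-u}\,\frac{du}{2} \\
  &= \left(\frac{1}{2}\right)^{x+2} \int_0^\infty u^{x+1} e^{-u}\,du
   = \Gamma(x+2)\left(\frac{1}{2}\right)^{x+2}.
\end{align*}
```

Dividing by the constant $\Gamma(x+1)\Gamma(2)$ then gives exactly the right-hand side.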
Homework Statement: Figure (see below) shows a valve separating a reservoir from a water tank. If this valve is opened, what is the maximum height above point B attained by the water stream coming out of the right side of the tank? Assume h = 10.0 m, L = 2.00 m, and ## \Theta ## = 30.0°, and assume the cross-sectional area at A is very large compared with that at B.

Homework Equations:
$$ H = Vt - \frac{g t^2}{2} $$
$$ F = P A $$
$$ P = \rho g h $$
$$ H = \frac { V^2 - V_0^2 \sin^2 \Theta} {-2g} $$

$$ H = \frac {V_0^2 \sin^2 \Theta} {2g} $$
So, I need to calculate ## V_0 ##
I'm thinking about pressure.
$$ P = \rho g \Delta h $$
$$ \Delta h = h - L sin \Theta $$
$$ F_A = P S_A $$
$$ F_B = P S_B $$
Dead End here... |
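A possible way past this dead end (a sketch, not from the thread: it assumes Torricelli's law $v_0=\sqrt{2g\,\Delta h}$ with the effective head $\Delta h = h - L\sin\Theta$, and then treats the exit stream as a projectile launched at angle $\Theta$):

```python
import math

g = 9.8                                      # m/s^2
h, L, theta = 10.0, 2.00, math.radians(30.0)

dh = h - L * math.sin(theta)                 # head of water above the exit at B
v0_sq = 2 * g * dh                           # Torricelli: v0^2 = 2 g dh
H = v0_sq * math.sin(theta) ** 2 / (2 * g)   # max rise above B (vertical motion)
print(H)                                     # g cancels: H = dh * sin^2(theta)
```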
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time?
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"?Alternatively, where could I go in order to have such a question answered?
@tpg2114 To reduce the data points needed for calculating the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points.
@DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
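One way to recover that indexing (a sketch on synthetic data, not the strain/temperature series from the chat): `scipy.signal.correlate` with `mode="full"` returns $2n-1$ values, one per lag from $-(n-1)$ to $n-1$, so building the lag axis explicitly lets the `argmax` be translated into a lead/lag:

```python
import numpy as np
from scipy.signal import correlate

n = 200
t = np.arange(n)
lag_true = 15
x = np.sin(2 * np.pi * t / 50)   # reference signal (period 50 samples)
y = np.roll(x, lag_true)         # the same signal delayed by 15 samples

# Subtract the means first, otherwise offsets dominate the correlation.
x0, y0 = x - x.mean(), y - y.mean()

corr = correlate(y0, x0, mode="full")    # length 2n - 1 = 399
lags = np.arange(-(n - 1), n)            # lag axis matching corr's indexing
best = lags[np.argmax(corr)]
print(best)   # positive lag: y lags behind x by this many samples
```

With this convention, a positive `best` means the first argument (`y0`) is delayed relative to the second (`x0`); multiplying by the 1e-9 s sample spacing converts it to a time lag.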
Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is
Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because it's really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper) |
Is there a canonical regression approach for predicting the ranks of a response?
I'd like to fit a regression to a dataset where the response is highly non-normal with very large outliers. There are about 10 predictors. I haven't had much success with transformations (the best has been adding a constant and then logging the response twice, but this isn't very interpretable).
However, I only care about the ranks of the response. The response is really only a score that is used as an instrument for ranking observations. What I really want to know is which predictors explain the most variation in the ranks.
My approach has been the following:
1. Calculate the ranks of the response, i.e. for each observation $i$, calculate $R(Y_i)$.
2. Suppose $N$ is the number of observations. Then, approximately, $U_i =\frac{R(Y_i)}{N} \sim Unif(0, 1)$.
3. By the Probability Integral Transform, $Z_i = \Phi^{-1}(U_i) \sim N(0,1)$.
4. Use $Z$ as my response in a regression of $Z$ on the predictors.
Since the rank and inverse-CDF transformations are monotone and thus preserve rank, I reason that this regression approach will help me identify which covariates are most predictive of rank.
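Concretely, the steps above can be sketched as follows. This is my own illustration in Python (the question names no software); `scipy.stats.rankdata` computes $R(Y_i)$ and `norm.ppf` plays the role of $\Phi^{-1}$, with the small tweak of dividing by $N+1$ rather than $N$ so the largest rank does not map to $\Phi^{-1}(1) = \infty$:

```python
import numpy as np
from scipy.stats import norm, rankdata

def rank_to_normal(y):
    """Map a response to approximate N(0,1) scores via its ranks.

    Uses U_i = R(Y_i) / (N + 1) rather than R(Y_i) / N so that the
    largest observation does not map to Phi^{-1}(1) = +inf.
    """
    n = len(y)
    u = rankdata(y) / (n + 1)          # approximately Uniform(0, 1)
    return norm.ppf(u)                 # probability integral transform

rng = np.random.default_rng(0)
y = np.exp(3 * rng.normal(size=1000))  # heavy-tailed response with outliers
z = rank_to_normal(y)

# The transform is monotone, so ranks are preserved exactly:
assert np.array_equal(rankdata(y), rankdata(z))
# z can now be used as the response in an ordinary regression.
```

The resulting `z` preserves the ordering of `y` exactly, so a regression on `z` models rank information only.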
Does this approach work? Is there a better or more standard approach to predicting rank with a set of covariates? Googling around, I found this paper but I don't know how accepted or well known the approach is: https://journal.r-project.org/archive/2012-2/RJournal_2012-2_Kloke+McKean.pdf
Thanks! |
The Central Board of Secondary Education (CBSE) is responsible for conducting the examinations of schools affiliated to the central board. It is of prime importance for students to excel in their studies for a bright future. Unlike some other subjects, Maths requires proper understanding and skill rather than mere memorisation. Practicing important questions is a great help to students and boosts their confidence. We at BYJU'S provide Class 9 students with 2-mark important questions to practice. Practicing them will help students excel in their examination. Students preparing for the CBSE Class 9 Maths Board Examination are advised to practice the questions given below:
Question 1- Express the rational number \(0.\bar{45}\)
Question 2- Find the remainder, when the polynomial \(2x^{4}+ x^{3} + 4x^{2}- 3x -2\)
Question 3- Find the value of k if (x-1) is a factor of \(4x^{3} + 3x^{2}- 4x + k\)
Question 4- In which quadrant or on which axis does each of the given points lie?
(i) (-2,4)
(ii) (-8,0)
(iii) (1,-7)
(iv) (-7,-2)
Question 5- If a point C lies between two points A and B such that AC=BC, then prove that AC = 1/2 AB.
Question 6- If \(x^{\frac{a}{b}} = 1\)
Question 7- If the opposite sides of a parallelogram are \((63 – 3x)^{\circ}\)
Question 8- Construct a triangle in which the three sides are of length 6 cm, 4 cm and 2.8 cm.
Question 9- Represent \(\sqrt{5}\)
Question 10- The points scored by a Kabaddi team in a series of matches are as follows:
17,27,7,27,15,5,14,8,10,24,48,10,8,7,18,28
Find the median and mode of the data.
Question 11- Find the value of ‘a’ such that x = 1 and y = 1 is a solution of the linear equation
9ax + 12ay = 63
Question 12- Evaluate \((104)^{3}\)
Question 13- If the angles of a triangle are in the ratio 2:3:4, then find the angles of the triangle.
Question 14- In the given figure, if \(\angle POR\)
Question 15- Expand \(\left ( \frac{1}{x} + \frac{y}{3} \right )^{3}\)
Question 16- Find two irrational numbers between 0.5 and 0.55.
You are effectively talking about the Renormalization group in quantum field theory, the paradigmatic self-similar system underlying the whole of nature--with ultimate (subsequent) dramatic consequences in the strong interactions.
Specifically, in 1954, Gell-Mann and Low introduced their eponymous RG equation for QED, $$g(\mu)=G^{-1}\left(\left(\frac{\mu}{M}\right)^d G(g(M))\right),$$ for some function G (Wegener's function, the QFT version of Schroeder's function; see Appendix B of their paper, where T. D. Lee is thanked) and a constant d, in terms of the coupling g(M) at a reference scale M.
Gell-Mann and Low, ostensibly unaware of Schroeder's equation, nevertheless exploited the extraordinary functional composition properties of this result: they realized that the effective scale can be arbitrarily taken as μ, and can thus vary to define the theory at any other scale as well, $$g(\kappa)=G^{-1}\left(\left(\frac{\kappa}{\mu}\right)^d G(g(\mu))\right) = G^{-1}\left(\left(\frac{\kappa}{M}\right)^d G(g(M))\right).$$

This, in effect, amounts to arbitrary iteration of the map connecting the couplings at two disparate scales, with the iteration index t, in your language and the contemporary QFT parlance, being the continuous logarithm of the scale M. In effect, the g's are functionally conjugated through G to multiplications of scale ratios.
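The composition property can be checked numerically with a toy example (my own illustration; the particular $G$ below is an arbitrary invertible function with no physical significance): running $M \to \mu \to \kappa$ in two steps agrees with running $M \to \kappa$ directly, precisely because the couplings are conjugate through $G$ to pure multiplications of scale ratios.

```python
import math

# Toy Schroeder/Wegener-type function and its inverse (an arbitrary
# invertible choice; nothing physical is claimed for it).
G    = lambda g: g / (1.0 - g)
Ginv = lambda w: w / (1.0 + w)

d = 2.0  # the constant exponent in the Gell-Mann--Low relation

def run(g_ref, scale, ref_scale):
    """g(scale) = G^{-1}((scale/ref_scale)^d G(g(ref_scale)))."""
    return Ginv((scale / ref_scale) ** d * G(g_ref))

M, mu, kappa = 1.0, 3.0, 10.0
gM = 0.1  # coupling at the reference scale M

# Two-step running M -> mu -> kappa equals one-step running M -> kappa:
g_two_step = run(run(gM, mu, M), kappa, mu)
g_one_step = run(gM, kappa, M)
assert math.isclose(g_two_step, g_one_step, rel_tol=1e-12)
```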
This paradigm actually illustrates your point: physicists would rather integrate the infinitesimal differential version of this equation in perturbation theory (the β function) than solve the recondite Schroeder equation here ab initio. Nevertheless, my collaborator and I have, in fact, done just that, to indicate that it is possible: Curtright, T. L.; Zachos, C. K. (March 2011). "Renormalization Group Functional Equations". Physical Review D 83 (6): 065019.
The rather anticlimactic answer to "Does anyone know why this is?" is that simply nobody cares enough to implement a non-negative ridge regression routine. One of the main reasons is that people have already started implementing non-negative elastic net routines (for example here and here). Elastic net includes ridge regression as a special case (one essentially sets the LASSO part to zero weight). These works are relatively new so they have not yet been incorporated in scikit-learn or a similar general-use package. You might want to ask the authors of these papers for code.
EDIT:
As @amoeba and I discussed in the comments, the actual implementation of this is relatively simple. Say one has the following regression problem:
$y = 2 x_1 - x_2 + \epsilon, \qquad \epsilon \sim N(0,0.2^2)$
where $x_1$ and $x_2$ are both standard normals, such that $x_p \sim N(0,1)$. Notice I use standardised predictor variables so I do not have to normalise afterwards. For simplicity I do not include an intercept either. We can immediately solve this regression problem using standard linear regression. So in R it should be something like this:
rm(list = ls());
library(MASS);
set.seed(123);
N = 1e6;
x1 = rnorm(N)
x2 = rnorm(N)
y = 2 * x1 - 1 * x2 + rnorm(N,sd = 0.2)
simpleLR = lm(y ~ -1 + x1 + x2 )
matrixX = model.matrix(simpleLR); # This is close to standardised
vectorY = y
all.equal(coef(simpleLR), qr.solve(matrixX, vectorY), tolerance = 1e-7) # TRUE
Notice the last line. Almost all linear regression routines use the QR decomposition to estimate $\beta$. We would like to use the same for our ridge regression problem. At this point read this post by @whuber; we will be implementing exactly this procedure. In short, we will augment our original design matrix $X$ with a $\sqrt{\lambda}I_p$ diagonal matrix and our response vector $y$ with $p$ zeros. That way we can re-express the original ridge regression problem $(X^TX + \lambda I)^{-1} X^Ty$ as $(\bar{X}^T\bar{X})^{-1} \bar{X}^T\bar{y}$, where the $\bar{}$ symbolises the augmented version. Check slides 18-19 from these notes too for completeness; I found them quite straightforward. So in R we would do something like the following:
myLambda = 100;
simpleRR = lm.ridge(y ~ -1 + x1 + x2, lambda = myLambda)
newVecY = c(vectorY, rep(0, 2))
newMatX = rbind(matrixX, sqrt(myLambda) * diag(2))
all.equal(coef(simpleRR), qr.solve(newMatX, newVecY), tolerance = 1e-7) # TRUE
and it works. OK, so we got the ridge regression part. We could solve it in another way though: we could formulate it as an optimisation problem where the residual sum of squares is the cost function and then minimise it, i.e. $ \displaystyle \min_{\beta} || \bar{y} - \bar{X}\beta||_2^2$. Sure enough we can do that:
myRSS <- function(X,y,b){ return( sum( (y - X%*%b)^2 ) ) }
bfgsOptim = optim(myRSS, par = c(1,1), X = newMatX, y= newVecY,
method = 'L-BFGS-B')
all.equal(coef(simpleRR), bfgsOptim$par, check.attributes = FALSE,
tolerance = 1e-7) # TRUE
which as expected again works. So now we just want: $ \displaystyle \min_{\beta} || \bar{y} - \bar{X}\beta||_2^2$ where $\beta \geq 0$, which is simply the same optimisation problem but constrained so that the solution is non-negative.
bfgsOptimConst = optim(myRSS, par = c(1,1), X=newMatX, y= newVecY,
method = 'L-BFGS-B', lower = c(0,0))
all(bfgsOptimConst$par >=0) # TRUE
(bfgsOptimConst$par) # 2.000504 0.000000
which shows that the original non-negative ridge regression task can be solved by reformulating it as a simple constrained optimisation problem. Some caveats:
1. I used (practically) normalised predictor variables. You will need to account for the normalisation yourself.
2. The same goes for the absence of an intercept.
3. I used optim's L-BFGS-B argument. It is the most vanilla R solver that accepts bounds. I am sure that you will find dozens of better solvers.
4. In general, constrained linear least-squares problems are posed as quadratic optimisation tasks. This is overkill for this post, but keep in mind that you can get better speed if needed.
5. As mentioned in the comments, you could skip the ridge-regression-as-augmented-linear-regression part and directly encode the ridge cost function as an optimisation problem. This would be a lot simpler and this post significantly smaller. For the sake of argument I append this second solution too.
6. I am not fully conversant in Python, but essentially you can replicate this work by using NumPy's linalg.solve and SciPy's optimize functions. To pick the hyperparameter $\lambda$ etc. you just do the usual CV step you would do in any case; nothing changes.
Code for point 5:
myRidgeRSS <- function(X,y,b, lambda){
return( sum( (y - X%*%b)^2 ) + lambda * sum(b^2) )
}
bfgsOptimConst2 = optim(myRidgeRSS, par = c(1,1), X = matrixX, y = vectorY,
method = 'L-BFGS-B', lower = c(0,0), lambda = myLambda)
all(bfgsOptimConst2$par >= 0) # TRUE
(bfgsOptimConst2$par) # 2.000504 0.000000 |
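For what it's worth, the NumPy/SciPy replication route mentioned above might look like the following sketch (my own translation of the direct-cost-function approach, not code from the cited papers), with SciPy's L-BFGS-B bounds playing the role of R's `lower = c(0,0)`:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(123)
N = 100_000
X = rng.standard_normal((N, 2))    # standardised predictors, no intercept
y = 2 * X[:, 0] - 1 * X[:, 1] + rng.normal(scale=0.2, size=N)

lam = 100.0

def ridge_cost(b):
    """Ridge cost: residual sum of squares plus an L2 penalty."""
    resid = y - X @ b
    return resid @ resid + lam * (b @ b)

# Constrained so that beta >= 0, mirroring the R lower = c(0,0) call.
fit = minimize(ridge_cost, x0=np.ones(2), method="L-BFGS-B",
               bounds=[(0, None), (0, None)])

print(fit.x)  # roughly [2, 0]: the negative coefficient is clamped at zero
```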
I'm having trouble finding a way to express this fraction with the amsmath package.
The result I'm getting isn't quite nice. Are there alternative ways to write this fraction?
\documentclass[10pt,norsk, fleqn]{extarticle}
\usepackage[a4paper, margin=1.2cm,includeheadfoot]{geometry}
\usepackage{amsmath}
\begin{document}
$\displaystyle\frac{a}{\displaystyle\frac{b}{\displaystyle\frac{c}{\displaystyle\frac{d}{e}}}}
\to
\left(\displaystyle\frac{\left(\displaystyle\frac{\left(\displaystyle\frac{\left(\displaystyle\frac{a}{b}\right)}{c}\right)}{d}\right)}{e}\right)$
\end{document}
\usepackage{hyperref}
\usepackage[ocgcolorlinks]{ocgx2}[2017/03/30]
Package ocgx2, as of version 0.24, re-implements the ocgcolorlinks option of hyperref, based on B. Lerner's code with some refinements:

Besides OCG colour links that wrap around line breaks, wrapping at page breaks and nested hyperlinks are also supported (pdfLaTeX/LuaLaTeX only, as XeTeX does not support nested links at all), and the issue with empty link text has been resolved.
Example code (open on ShareLaTeX):
\documentclass[a6paper]{scrartcl}
\usepackage{hyperref}
\usepackage[ocgcolorlinks]{ocgx2}[2017/03/30]
\usepackage{mwe,graphicx}
\begin{document}\raggedright\huge
Empty links $\Rightarrow$\href{https://tex.stackexchange.com}{\qquad}$\Leftarrow$ don't flood the page with link colour.\\[2ex]
Visit \href{https://ctan.org/tex-archive/macros/latex/contrib/ocgx2}{ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 ocgx2 \hypersetup{urlcolor=red}\href{https://ctan.org}{on CTAN CTAN CTAN CTAN CTAN CTAN CTAN CTAN CTAN CTAN CTAN CTAN CTAN CTAN CTAN CTAN} ocgx2 ocgx2 ocgx2 ocgx2 ocgx2}!\\[2ex]
{
\normalsize Graphical content, if mixed with text inside the link text, must be protected by enclosing in \verb+\ocglinkprotect{...}+:\\[1ex]
}
Visit \href{https://ctan.org/tex-archive/macros/latex/contrib/mwe}{this fancy \ocglinkprotect{\includegraphics[width=1cm]{example-image}} on CTAN!}
\end{document} |
EDIT: Actually, one can indeed change the alignment of array in LyX by GUI means. Besides, I learnt that array is a bad way of doing what I describe in the "Background" below and that I should use alignat. However, I think this question is relevant nevertheless, because there might be cases where one wants to include one's own commands, for example an array with >{\cmd}l-style alignment specifiers, which is currently not possible to enter in the GUI.
Background:
I want to typeset a system of ordinary differential equations where I have similar terms in the same columns, left-aligned. I am not able to do that in LyX because all of the math environments I know center-align their contents.
The best I can come up with using LyX functions is to open a math equation (Ctrl-Shift-M) followed by typing \array and pressing the space key, which opens up a table-like environment where I can enter math:
However, I can not change the alignment of the columns like I can when creating a table outside the math environment. This results in code like the following:
\documentclass{article} % Added for MWE
\begin{document} % Added for MWE
\renewcommand\d{\mathrm{d}} % Added for MWE; \d is predefined, so \renewcommand
\[
\begin{array}{ccc}
\frac{\d V}{\d t} & =-\delta_{V} & -k_{inf}\cdot C\cdot V\\
\frac{\d C}{\d t} & =\lambda-\delta_{C} & -k_{inf}\cdot C\cdot V
\end{array}
\]
\end{document} % Added for MWE
Looking like this:
My problem is: I cannot change the {ccc} part of the array environment yet.
I want to stay with LyX, because the preview feature saves me from repeatedly compiling my document, which takes quite some time.
The best I could come up with so far is to embed raw LaTeX code inside a "Preview" box (code same as above, except lll):
So the best would be to define a command in LyX which looks like array in the LyX window and invokes a custom environment. This uses the correct alignment, which I define beforehand in a macro:
% Defined in LyX preamble
\newenvironment{myarray}{\begin{array}{\al}}{\end{array}}
% Defined as Evil Red Text (ERT) before the equation
\newcommand\al{llll}
% Created by LyX with the same appearance as if I had typed
% \array in LyX math mode
\[
\begin{myarray}
\frac{\d V}{\d t} & =-\delta_{V} & -k_{inf}\cdot C\cdot V\\
\frac{\d C}{\d t} & =\lambda-\delta_{C} & -k_{inf}\cdot C\cdot V
\end{myarray}
\]
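For comparison, the alignat route mentioned in the edit could look like the following sketch (my own, not from the question; note that alignat's argument counts pairs of alignment points, and I define \dd rather than redefining the predefined accent command \d):

```latex
\documentclass{article}
\usepackage{amsmath}
\newcommand{\dd}{\mathrm{d}} % avoid clobbering the accent command \d
\begin{document}
\begin{alignat*}{3}
\frac{\dd V}{\dd t} &= -\delta_{V}        &&             &&{} - k_{inf}\cdot C\cdot V\\
\frac{\dd C}{\dd t} &= \phantom{-}\lambda &&- \delta_{C} &&{} - k_{inf}\cdot C\cdot V
\end{alignat*}
\end{document}
```

Each && starts a new left-aligned column, so the similar terms line up as in the array versions above.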
Inverse scattering on conformally compact manifolds
1.
Department of Mathematics, Purdue University, 150 N. University Street, West Lafayette, IN 47907-2067, United States
…at the boundary and $V\in C^\infty(X)$ not vanishing at the boundary. We prove that the scattering matrices at two fixed energies $\lambda_1$, $\lambda_2$ in a suitable subset of $\mathbb{C}$ determine $\alpha$ and the Taylor series of both the potential and the metric at the boundary.

Mathematics Subject Classification: Primary: 35R30, 35P25; Secondary: 58J4.

Citation: Leonardo Marazzi. Inverse scattering on conformally compact manifolds. Inverse Problems & Imaging, 2009, 3 (3): 537-550. doi: 10.3934/ipi.2009.3.537
2018-12-08
We begin with Adam Tooze laying out the issues:
Adam Tooze: Italy: How Does the E.U. Think This Is Going to End?: "Over the past 10 years, Italy’s gross domestic product per capita has fallen... unique among large advanced economies...
...More than 32 percent of Italy’s young people are unemployed. The gloom, disappointment and frustration are undeniable. For the commission to declare that this is a time for austerity flies in the face of a reality that for many Italians is closer to a personal and national emergency....
The two parties that make up the current Italian government, the League and the Five Star Movement, were elected in March to address this crisis. The League is xenophobic; Five Star is erratic and zany. But the economic programs on which they campaigned are hardly outlandish.... The Italian government’s budget forecasts are optimistic. But others, including the Bank of Italy and the Peterson Institute of International Economics, warn that Italy is caught in a trap: Anxieties about debt sustainability mean that any stimulus has the perverse effect of driving up interest rates, squeezing bank lending and reducing growth...
What would have to be the case for a stimulus to have this perverse effect—to actually manage to not boost the economy but rather squeeze bank lending and reduce growth?
Our Filing System: The Basic IS Framework
Back in the late 1990s Paul Krugman concluded that the workings of the macroeconomy had changed: that we had started to see The Return of Depression Economics https://books.google.com/books?isbn=039304839X. He was right. This meant that the economic analytical tools that had been forged in order to understand the Great Depression of the 1930s had become the right place to start any analysis of what was going on in the business cycle. And so it has proven to be for the past twenty years.
Therefore we start with John Hicks's 1937 IS equation, from his article "Mr. Keynes and the 'Classics': A Suggested Interpretation" https://tinyurl.com/20181208a-delong. The variable we place on the left-hand side is aggregate demand AD. The variable we place on the right-hand side is the long-term risky real interest rate r. In between are a large host of parameters drawn from the macroeconomy's behavioral relationships and from salient features of the macroeconomic environment and macroeconomic policy. We identify aggregate demand AD with national income and product Y, arguing that the inventory-adjustment mechanism will make the two equal at the macroeconomy's short-run sticky-price Keynesian equilibrium within a few quarters of a year.
Then we have not so much a model of the macroeconomy as a filing system for factors that we can and need to model, thus:
To simplify notation, we will typically use "$\Delta$" to stand for changes in economic quantities generated by shifts in the economic policy and in the economic environment, and we will drop terms that are zero.
Applying Our Filing System to Italy Today
For the problem of understanding Italy today, the pieces of this equation that matter are:
The change in national income and product ${\Delta}Y$ equals the change in aggregate demand ${\Delta}AD$, which equals the sum of:
- the multiplier $\mu$ times the change in government purchases ${\Delta}G$
- the multiplier $\mu$ times the foreign propensity to purchase our exports $x_f$ times the change in exchange-speculator optimism or pessimism about the long-run soundness of the currency ${\Delta}{\epsilon}_o$
- the multiplier $\mu$ times the foreign propensity to purchase our exports $x_f$ times the sensitivity of the exchange rate to interest rates ${\epsilon}_r$ times the change in the interest rate in the rest of the eurozone ${\Delta}r^f$
- minus the multiplier $\mu$ times the sum of the interest sensitivity of investment $I_r$ plus the product of the foreign propensity to purchase our exports $x_f$ and the sensitivity of the exchange rate to interest rates ${\epsilon}_r$, all times the change in the interest rate ${\Delta}r$
Now we need an extra equation: a country with a freely-floating exchange rate $\epsilon$:
the change in the exchange rate ${\Delta}{\epsilon}$ is equal to the change in exchange-speculator optimism or pessimism about the long-run soundness of the currency ${\Delta}{\epsilon}_o$ minus the sensitivity of the exchange rate to interest rates ${\epsilon}_r$ times the change in the interest rate ${\Delta}r$. But Italy does not have a freely-floating exchange rate: Italy is in the eurozone. So
$ {\Delta}{\epsilon} = 0 $
therefore:

$ {\Delta}r = {\Delta}r^f + \frac{{\Delta}{\epsilon}_o}{{\epsilon}_r} $
And the rewritten relevant parts of the IS equation are:
$ {\Delta}Y = \mu{\Delta}G - \frac{{\mu}I_r}{\epsilon_r}\Delta\epsilon_o -{\mu}I_r{\Delta}r^f $
Thus the change ${\Delta}Y$ in national income and product that follows a fiscal expansion with higher government purchases ${\Delta}G$ will be positive as long as:
$ {\Delta}G > \frac{I_r}{\epsilon_r}\Delta\epsilon_o + I_r{\Delta}r^f $
The shift to more expansionary fiscal policy will indeed boost demand, production, and employment unless this equation fails to hold.
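As a numerical sanity check on that algebra (my own illustration with arbitrary, uncalibrated parameter values): euro membership forces ${\Delta}{\epsilon}=0$, so the domestic rate must move by ${\Delta}r = {\Delta}r^f + {\Delta}{\epsilon}_o/{\epsilon}_r$, and substituting this into the full IS expression reproduces the reduced form for ${\Delta}Y$:

```python
import math

# Arbitrary illustrative parameter values (not calibrated to Italy).
mu, I_r, x_f, eps_r = 1.5, 0.8, 0.3, 2.0
dG, deps_o, dr_f = 1.0, 0.4, 0.2

# Euro membership pins the exchange rate, which forces
# dr = dr_f + deps_o / eps_r.
dr = dr_f + deps_o / eps_r

# Full IS expression (the four items listed above):
dY_full = (mu * dG
           + mu * x_f * deps_o
           + mu * x_f * eps_r * dr_f
           - mu * (I_r + x_f * eps_r) * dr)

# Reduced form: dY = mu dG - (mu I_r / eps_r) deps_o - mu I_r dr_f
dY_reduced = mu * dG - (mu * I_r / eps_r) * deps_o - mu * I_r * dr_f

assert math.isclose(dY_full, dY_reduced)
```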
Conclusion
What conclusions can we draw from this equation?
First, we conclude that taking on unsustainable debt—or rather debt perceived as unsustainable—could indeed fail to boost demand, production, and employment if the reaction ${\Delta}{\epsilon}_o$ to ${\Delta}G$ is too large. The natural thing, therefore, would be for the IMF and the European Union to step in with short-term support and guarantees to hold down the market reaction ${\Delta}{\epsilon}_o$, coupled with a longer-term structural adjustment program to guarantee that debt repayment will in fact take place.
Second, that the European Union—which controls ${\Delta}r^f$—could assist by switching to an easier-money, tighter-fiscal policy mix itself, and so creating a negative value for ${\Delta}r^f$.
Why would the European Union want to assist in these ways? Well, it wants a prosperous Italy, doesn't it? And it wants an Italy that stays in the eurozone, doesn't it? Why would the IMF want to assist in these ways? Well, that is its job, isn't it?
If the past ten years ought to have taught the Great and Good of Europe anything, it is that ensuring prosperity, growth, and high employment is job #1. Figuring out how to dot the financial i's and cross the financial t's is distinctly secondary. But, as Adam Tooze writes, instead the Great and Good of Europe seem to wish to "hold the line on debt and deficits" without offering anything "positive in exchange, such as a common European investment and growth strategy or a more cooperative approach to the refugee question". He calls this, with great understatement, "a high-risk and negative strategy".
I cannot see how anyone can disagree.
#highlighted #globalization #eurozone #monetarypolicy
Github:
nbviewer: http://nbviewer.jupyter.org/github/braddelong/NOTEBOOK-Macro-and-Macro-Policy/blob/master/What%20Are%20Italy%27s%20Options%3F%20%282018-12-08%29.ipynb This File in html: https://www.bradford-delong.com/2018/12/why-doesnt-italy-have-better-options.html Edit This html file: https://www.typepad.com/site/blogs/6a00e551f08003883400e551f080068834/post/6a00e551f080038834022ad3c5cf3e200b/edit |
Let $X$ be a connected space. According to Getzler, BV-algebras and two-dimensional topological field theories, page 271, we have an isomorphism
$ H_*(\Omega^2\Sigma^2X) \cong {\cal G}( \widetilde{H}_* X ) $
where ${\cal G}( V)$ means the free Gerstenhaber (Getzler calls it "braid") algebra over the graded space $V$ and $\widetilde{H}$ is the reduced homology.
Getzler credits Cohen's results in The homology of iterated loop spaces for this isomorphism. The closest thing I can find there is Cohen's theorem 3.2, in his chapter "The homology of $C_{n+1}$-spaces, $n\geq 0$", which sounds like it, but I'm having some problems deducing Getzler's claim.
First of all, Getzler says to be working with complex coefficients, and Cohen with $\mathbb{Z}_p$ ones. Is it clear that the result should be true no matter which coefficients? Rational coefficients too?
Secondly, Cohen's result for $n=1$ would be, I guess, Getzler's case:
$ H_*(\Omega^2\Sigma^2X) \cong GW_1(H_*X) \ . $
But here the free algebra functor is this $GW_1$, which I'm having some trouble identifying with ${\cal G}$.
Any hints or other references will be greatly appreciated. |
<< ToK
ToK Warszawa meeting - Rough Notes
Thu 15 Feb 2007

These are just rough notes - feel free to correct them, add links, etc.

Hector Rubinstein - Stockholm - magnetic fields

Magnetic fields on kpc scales exist. They may exist on intergalactic scales - it's unclear whether or not their origin is primordial.

CMB - Planck - may be able to detect magnetic fields present at the epochs not long after nucleosynthesis and recombination
it is well known that the photon has a thermal mass - about 10^{-39} [units = eV?] - which is extremely small - related to electron loops
Maxwell eqns -> Proca eqns WikipediaEn:Proca_action
m_photon < 10^-26 eV
\exists galactic mag fields at z \approx 3, making it difficult for dynamo mechanisms to explain them
Boehm - LDM hypothesis 511 keV detection
Leventhal(sp?) 199x ApJ
OSSE 3 components Purcell et al 1997
candidates
stars - SNe, SNII, WR
compact sources - pulsars, BH, low mass binaries

- most excluded because they would imply 511keV from the disk
- SNIa - need large escape fraction and explosion rate to maintain a steady flux
- low mass X-ray binaries - need electrons to escape from the disk to the bulge
dm + dm -> e^- + e^+; e^+ loses energy -> positronium; positronium decays

para-positronium: 2 gamma - monochromatic with 511keV
ortho-positronium: 3 gamma - continuum
predictions
positron emission should be maximal with highest DM concentration (n^2 effect?)
cdm spectrum
does NOT produce CDM-like power spectrum???
- at 10^9 M_sun essentially CDM-like
- by 10^6 M_sun, the difference would be important
spectrum
Ascasibar et al 2005, 2006
model
* through F: 511keV
* through Z': relic density
link with neutrino mass
interaction/decay diagrams ->
link between neutrino mass and DM cross section:
$ m_\nu = \sqrt{ \frac{\sigma v}{128 \pi^3} }\; m_N^2 \ln\left( \Lambda^2/m_N^2 \right) $

$\sigma v$ well known from the relic density, $\sim 10^{-26}$ cm^3/s
to fit neutrino data
BBN: 1MeV < m_N
low energy Beyond SM MeV
DM has definitely escaped all previous low energy experiments due to lack of luminosity
BABAR/BES II ... ?
summary
...
explains low value of neutrino masses
detection at LHC may be possible but requires work
back to SUSY -> snu-neutralino-nu ?

Conlon - hierarchy problems in string theory: the power of large volume
planck scale 10^18 GeV
... cosm constant scale (10^-3eV)^4
- large-volume models can generate hierarchies through a stabilised exponentially large volume

- predicts cosmological constant (but about 50 orders of magnitude too large - solving this problem is left to the reader/audience)
G\"unther Stigl - high-energy c-rays, gamma-rays, neutrinos
HESS - correlation of observations at GC with molecular cloud distribution
KASCADE - has made observations
Southern Auger - 1500km^2 - in Argentina
Hillas plot
c-rays at highest energies could be protons, could be ions
- most interactions produce pions; pi^\pm decays to neutrinos pi^0 decays to photons (gamma-rays)
- origin of very high energy c rays remains one of the fundamental unsolved questions of astroparticle physics - even galactic c ray origin is unclear
- acceleration and sky distribution of c rays are strongly linked to the strength and distribution of cosmic magnetic fields - which are poorly known; sources probably lie in fields of \mu-Gauss
- HE c-rays, pion-production, gamma-rays/neutrinos - all three fields should be considered together; strong constraints arise from gamma-ray overproduction

Khalil - DM - SUSY - brane cosmology
(British University in Egypt = BUE)
- friedmann eqn modified in 5D (brane model)
- dark matter relic abundance
The primitive classes are the highest weight vectors.
Hard Lefschetz says that the operator $L$ (which algebraic geometers know as intersecting with a hyperplane) is the "lowering operator" $\rho(F)$ in a representation $\rho \colon \mathfrak{sl}_2(\mathbb{C})\to End (H^\ast(X;\mathbb{C}))$. The raising operator $\rho(E)$ is $\Lambda$, the restriction to the harmonic forms of the formal adjoint of $\omega \wedge \cdot$ acting on forms. The weight operator $\rho(H)$ has $H^{n-k}(X;\mathbb{C})$ as an eigenspace (= weight space), with eigenvalue (= weight) $k$.
The usual picture of an irreducible representation of $\mathfrak{sl}_2(\mathbb{C})$ is of a string of beads (weight spaces) with $\rho(F)$ moving you down the string and decreasing the weight by 2, and $\rho(E)$ going in the opposite direction. The highest weight is an integer $k$, the lowest weight $-k$.
From this picture, it's clear that the space of highest weight vectors in a (reducible) representation is $\ker \rho(E)$. It's also clear that, of the vectors of weight $k$, those which are highest weights are the ones in $\ker \rho(F)^{k+1}$. So the highest weight vectors in $H^{n-k}(X; \mathbb{C})$ are those in $\ker L^{k+1}$.
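For reference, the conventions above fit the standard $\mathfrak{sl}_2(\mathbb{C})$ relations; writing them out (a standard fact, not specific to this answer):

```latex
[\rho(H),\rho(E)] = 2\rho(E), \qquad
[\rho(H),\rho(F)] = -2\rho(F), \qquad
[\rho(E),\rho(F)] = \rho(H)
```

With $\rho(E)=\Lambda$ and $\rho(F)=L$, the last relation is the Kähler identity $[\Lambda, L] = \rho(H)$, and $\rho(H)$ indeed acts on $H^{n-k}(X;\mathbb{C})$ as multiplication by $k$ (the usual statement $[\Lambda,L]=(n-j)\,\mathrm{id}$ on $j$-forms, with $j=n-k$), matching the weight-space description above.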
Of course, all this ignores the rather subtle question of how to explain in an invariant way what this $\mathfrak{sl}_2(\mathbb{C})$, or its corresponding Lie group, really is.
Added, slipping Mariano an envelope. But here's what that group is. Algebraic geometers, brace yourselves. Fix $x\in X$, and let $O_x = O(T_x X\otimes \mathbb{C})\cong O(4n,\mathbb{C})$. Then $O_x$ acts projectively on $\Lambda^\bullet (T_x X\otimes \mathbb{C})$ via the spinor representation (which lives inside the Clifford action). The holonomy group $Hol_x\cong U(n)$ also acts on complex forms at $x$, and the "Lefschetz group" $\mathcal{L}$ is the centralizer of $Hol_x$ in $O_x$. One proves that $\mathcal{L}\cong GL(\mathbb{C}\oplus \mathbb{C})$. Not only is this the right group, but its Lie algebra comes with a standard basis, coming from the splitting $T_x X \otimes\mathbb{C} = T^{1,0} \oplus T^{0,1}$. Now, $\mathcal{L}$ acts on complex forms on $X$, by parallel transporting them from $y$ to $x$, acting, and transporting back to $y$. Check next that the action commutes with $d$ and $*$, hence with the Laplacian, and so descends to harmonic forms = cohomology. Finally, check that the action of $\mathcal{L}$ exponentiates the standard action of $\mathfrak{gl}_2$ where the centre acts by scaling. (This explanation is Graeme Segal's, via Ivan Smith.)
Recovering two Lamé kernels in a viscoelastic system
1.
Dipartimento di Matematica “F. Enriques”, Universitá di Milano, via C. Saldini 50, 20133 Milano, Italy
2.
Sobolev Institute of Mathematics, Siberian branch of Russian Academy of Sciences, Acad. Koptyug prosp., 4, Novosibirsk, 630090, Russian Federation
same temporal part, i.e. $\lambda_1(t,x)=k(t)p(x)$ and $\mu_1(t,x)=k(t)q(x)$. Furthermore, it is assumed that the spatial parts $p$ and $q$ of $\lambda_1$ and $\mu_1$ are unknown and the three additional measurements $\sum_{j=1}^3\sigma_{i,j}^0(t,x)\,n_j(x) = g_i(t,x)$, $i=1,2,3$, are available on $(0,T)\times \Gamma$ for some (sufficiently large) subset $\Gamma\subset \partial \Omega$.
The fundamental task of this paper is to show the uniqueness of the pair $(p,q)$ as well as its continuous dependence on the boundary conditions, the initial data being kept fixed and the initial velocity being suitably related to the initial displacement.
Keywords: linear viscoelastic materials, identification problems, hyperbolic second-order integrodifferential systems, recovering relaxation kernels, uniqueness, continuous dependence.
Mathematics Subject Classification: Primary: 45Q05, 45K05; Secondary: 35L20, 74H05, 74H45, 74J25.
Citation: Alfredo Lorenzi, Vladimir G. Romanov. Recovering two Lamé kernels in a viscoelastic system. Inverse Problems & Imaging, 2011, 5 (2): 431-464. doi: 10.3934/ipi.2011.5.431
Overview

A negative-temperature-coefficient (NTC) thermistor can be used as a temperature sensor. An NTC thermistor is a resistor which has a non-linear change in resistance in response to a change in temperature. It is a passive sensor.

NTCs vs RTDs
An NTC differs from a resistance temperature detector (RTD) in the material used to make the sensor. RTDs have a resistive element made with pure metals, while NTCs have a resistive element made from ceramics or polymers with semiconductor properties.
NTCs are used over narrower temperature ranges with better accuracy, such as measuring ambient temperature or fridge/freezer temperature, while RTDs are used over wider temperature ranges with less accuracy, such as measuring furnace temperature.
Temperature Accuracy
The temperature accuracy of a thermistor can be calculated (at the reference temperature) by dividing the percentage resistance tolerance at 25°C (or whatever the reference temperature is) by the thermistor's temperature coefficient, \(\alpha\).
For example, the Vishay NTCALUG03A103GC has a resistance tolerance of \(\pm 2\%\) and \(\alpha_{25} = \pm 4.39\%/^{\circ}C\). Therefore the temperature accuracy is \(2 / 4.39 \approx \pm 0.46^{\circ}C\).
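As a quick check of this calculation, the division above can be sketched in a few lines (the tolerance and \(\alpha\) values are the ones quoted above; everything else is plain arithmetic):

```python
# Temperature accuracy (at the reference temperature) = resistance tolerance (%)
# divided by the temperature coefficient alpha (%/degC).
resistance_tol_pct = 2.0       # +/-2 % quoted for the NTCALUG03A103GC
alpha_25_pct_per_degc = 4.39   # +/-4.39 %/degC at 25 degC
accuracy_degc = resistance_tol_pct / alpha_25_pct_per_degc
print(f"+/-{accuracy_degc:.2f} degC")  # → +/-0.46 degC
```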
Self Heating
A NTC thermistor, like any other resistor, dissipates energy as heat when current flows through it. The power dissipation \(P_{NTC}\) in a NTC thermistor is:

\( P_{NTC} = I^2 R \)
where:
\(I\) is the current going through the thermistor, in Amps
\(R\) is the resistance of the thermistor, at the present temperature, in Ohms
\(P_{NTC}\) is the power dissipation as heat in the NTC thermistor, in Watts
Because the resistance of the NTC changes as the temperature changes, so does the dissipated power. In a simple resistor divider circuit, the thermistor dissipates the most power when its resistance is equal to the fixed resistance.
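For a quick sense of scale, here is a sketch of the largest current compatible with a given self-heating budget; both the 1 mW budget and the 1 kΩ minimum resistance are assumed example numbers, not values from any datasheet:

```python
import math

# From P = I^2 * R, the self-heating worst case is at the NTC's lowest
# resistance over the measurement range (its hottest temperature).
p_limit = 1e-3   # W: an example self-heating budget
r_min = 1e3      # Ohm: an example minimum NTC resistance over the range
i_max = math.sqrt(p_limit / r_min)
print(f"max current = {i_max * 1e3:.2f} mA")  # → max current = 1.00 mA
```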
RULE OF THUMB: To make sure self-heating doesn't affect your temperature measurements, make sure that no more than 1mW of power is dissipated in the NTC thermistor at any temperature.

Beta Equation
The Beta equation or Beta formula is an empirical equation used to work out the temperature from the measured resistance of an NTC thermistor.
It uses a single material constant, \(\beta\), which is also known as the coefficient of temperature sensitivity. The equation is an exponential approximation of the relationship between resistance and temperature in the form:

\( R(T) = R(T_0)\, e^{\beta \left( \frac{1}{T} - \frac{1}{T_0} \right)} \)
where:
\(R(T)\) is the actual resistance, in Ohms, at the actual temperature \(T\)
\(R(T_0)\) is the reference resistance, in Ohms, at the reference temperature \(T_0\)
\(T\) is the actual temperature, in Kelvin
\(T_0\) is the reference temperature, in Kelvin
At best, the accuracy of the Beta equation approaches \(\pm 1\%\) between \(0-100^{\circ}C\), and no more than \(\pm 5\%\) over the NTC thermistor's entire temperature range.
\(\beta\) can be calculated when you have both the temperature and resistance of the thermistor at two different operating points, \((T_1, R_1)\) and \((T_2, R_2)\). \(\beta\) can be calculated as follows:

\( \beta = \frac{\ln(R_1/R_2)}{\frac{1}{T_1} - \frac{1}{T_2}} \)

Or, written another way:

\( \beta = \frac{T_1 T_2}{T_2 - T_1} \ln\frac{R_1}{R_2} \)
Re-arranged so that we can calculate a temperature from a measured resistance, and using the terminology \(R_0\) and \(T_0\) instead of \(R_2\) and \(T_2\), we get the following equation:

\( T = \left( \frac{1}{T_0} + \frac{1}{\beta} \ln\frac{R}{R_0} \right)^{-1} \)
The free embedded-engineering calculator app, NinjaCalc, features a calculator for working out the thermistor temperature (or any other variable) using the Beta equation.
Steinhart-Hart Equation
The Steinhart-Hart equation is a more complex but highly accurate way of modelling the relationship between temperature and resistance of an NTC thermistor.
The Steinhart-Hart equation is:

\( \frac{1}{T} = A + B \ln R + C (\ln R)^3 \)
where:
\(T\) is the temperature, in kelvin
\(R\) is the resistance at \(T\), in Ohms
\(A, B, C\) are the _Steinhart-Hart coefficients_, which vary depending on the type of thermistor and the temperature range of interest

CAREFUL: The \(B\) in the Steinhart-Hart equation above is not the same as the \(\beta\) in the Beta Equation.
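As a sketch, the equation is straightforward to evaluate; the \(A\), \(B\), \(C\) values below are illustrative numbers in the right ballpark for a generic 10 kΩ NTC, not coefficients for any particular part:

```python
import math

def steinhart_hart(r_ohms, a=1.125e-3, b=2.347e-4, c=8.566e-8):
    """Temperature in kelvin from resistance via 1/T = A + B*ln(R) + C*ln(R)^3.
    The default A, B, C are illustrative generic-10k values."""
    ln_r = math.log(r_ohms)
    return 1.0 / (a + b * ln_r + c * ln_r ** 3)

print(f"{steinhart_hart(10e3) - 273.15:.1f} degC")  # roughly 25 degC with these coefficients
```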
Linearising The NTC With Extra Resistors
By just adding a few extra resistors, the output of an NTC thermistor can be "linearised" enough that the equation \(y = ax + b\) can be used within the microcontroller over a limited temperature range.
Linearisation is also used in purely analogue circuits where there is no digital circuitry (that means no ADCs or processing logic), and the output of the NTC thermistor circuit goes directly to a voltage comparator (or similar) to control an output.
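A rough sketch of the idea: an NTC on top of a fixed resistor \(R_f\), with the output taken across \(R_f\). Choosing \(R_f\) equal to the NTC's resistance at the middle of the range makes the output nearly linear there. The Beta-model parameters below are assumed illustrative values, not a real part's:

```python
import math

vcc, r_f = 3.3, 10e3                 # supply and fixed divider resistor
r0, t0, beta = 10e3, 298.15, 3977.0  # assumed Beta-model parameters

def r_ntc(t_c):
    """NTC resistance (Ohm) at t_c degC, from the Beta equation."""
    t = t_c + 273.15
    return r0 * math.exp(beta * (1.0 / t - 1.0 / t0))

# Output voltage steps are nearly evenly spaced around mid-range (25 degC):
for t_c in (15, 20, 25, 30, 35):
    v_out = vcc * r_f / (r_f + r_ntc(t_c))
    print(t_c, round(v_out, 3))
```

Printing the divider output at 5 °C steps shows nearly uniform spacing around 25 °C, which is exactly the limited-range linearity the text describes.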
Part of the order contained two 10uF electrolytic capacitors but at different ratings, one at 1000V and one at 630V (for what it's worth, the voltage I'm working with here is ~440V).
If you're sure both have 440V on them, you can use 630V for both; it's likely to be cheaper. I wonder why the original was rated at 1kV.
Would it matter if I were to use just two 1000V ones? What are any downsides in using a higher rated capacitor than what is called for?
Downside is price and bulk. Although modern caps are likely to be smaller than the ones you are replacing, so this shouldn't be a problem.
Now, to add to the other answers...
High-K ceramics like X7R etc are crap dielectrics if you consider the dC/dV, the capacitance varies a lot, it also varies with temperature. Also they are piezoelectric. Never use those in the signal path, for filtering, or for decoupling a high impedance node like a VREF (you'd get a piezo microphone).
So you may wonder why people use them so much. The reason is that they are very good for decoupling power supplies. Since the voltage is constant, dC/dV distortion doesn't matter. And ceramics have many advantages:
They are
very cheap and give a high capacitance per volume. They withstand very high temperatures, so they can be surface-mounted directly on the board. This results in very low inductance, which is excellent for decoupling.
Note NP0 ceramics are another story, they are extremely linear and accurate.
EDIT
"High-K" means "high dielectric constant". \$ \kappa \$ is basically \$ \epsilon_r \$. Two plates with a bit of dielectric between them make a capacitor of value:
\$ \frac{Area * \epsilon_0 * \epsilon_r}{Thickness} \$
A material which is a good insulator can be thinner, so you get more capacitance per volume.
And a material with high dielectric constant \$ \kappa \$ or \$ \epsilon_r \$ also gives higher capacitance per volume.
Polypropylene has a dielectric constant of 2.2.
Barium Titanate (one of the High-K ceramics) has 7000. So, it packs a lot more capacitance into much less volume.
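Plugging rough numbers into the formula above shows why; the film thicknesses here are illustrative assumptions, not datasheet values:

```python
eps0 = 8.854e-12  # F/m, vacuum permittivity

def cap_per_area(eps_r, thickness_m):
    """Capacitance per square metre of electrode, from C = A*eps0*eps_r/d."""
    return eps0 * eps_r / thickness_m

c_pp = cap_per_area(2.2, 5e-6)       # ~5 um polypropylene film
c_bt = cap_per_area(7000.0, 0.5e-6)  # ~0.5 um barium titanate layer
print(f"{c_bt / c_pp:.0f}x more capacitance per layer")
```

With these assumed thicknesses the ceramic layer packs tens of thousands of times more capacitance per unit area than the film.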
Dielectric layer thickness can get down to 0.5µm these days.
Drawback of these materials is that the dielectric constant gets lower with higher electric field. A higher-voltage-rated X7R ceramic cap (say, 25V versus 6V) will have thicker dielectric layers, therefore the electric field is lower, therefore its capacitance drops less at the same voltage (say, 3.3V for both caps).
Same if you buy a larger part (1206 is physically larger than 0603, for example): you get thicker dielectric layers, and possibly the manufacturer can use a less "extreme high-K" material, so capacitance drops less.
This explains the curves you posted. Note 1812, 1206 etc are package sizes.
This is off topic relative to your electrolytic caps, but since you asked ;) |
Geometry is a branch of mathematics that is concerned with questions of shape, size, the relative position of figures, and the properties of space. Geometry formulas are used to calculate the length, perimeter, area and volume of different geometric figures and shapes. They are also used to calculate arc length, radius, etc.
The table below gives you few important geometry formulas for class 8. The formulas listed below are commonly required in class 8 geometry to calculate lengths, areas and volumes.
Geometry Shapes Formulas for Class 8

| Name of the Solid | Lateral / Curved Surface Area | Total Surface Area | Volume |
| --- | --- | --- | --- |
| Cuboid | \(2h(l+b)\) | \(2\left ( lb+bh+hl \right )\) | \(lbh\) |
| Cube | \(4a^{2}\) | \(6a^{2}\) | \(a^{3}\) |
| Right Prism | \(Perimeter \; of \; base \times height\) | \(Lateral \; Surface \; Area + 2(Area\; of \;One\; End)\) | \(Area\; of \; Base \times Height\) |
| Right Circular Cylinder | \(2\pi rh\) | \(2\pi r \left (r+h \right )\) | \(\pi r^{2} h\) |
| Right Pyramid | \(\frac{1}{2} Perimeter\; of\; Base \times Slant\; Height\) | \(Lateral\; Surface\; Area + Area\; of\; the\; Base\) | \(\frac{1}{3}(Area\; of\; the\; Base) \times height\) |
| Right Circular Cone | \(\pi rl\) | \(\pi r \left (l+r \right )\) | \(\frac{1}{3}\pi r^{2}h\) |
| Sphere | \(4\pi r^{2}\) | \(4\pi r^{2}\) | \(\frac{4}{3}\pi r^{3}\) |
| Hemisphere | \(2\pi r^{2}\) | \(3\pi r^{2}\) | \(\frac{2}{3}\pi r^{3}\) |
| Geometric Shape | Area Formula |
| --- | --- |
| Square | \(a^{2}\) |
| Rectangle | \(ab\) |
| Circle | \(\pi r^{2}\) |
| Ellipse | \(\pi r_{1} r_{2}\) |
| Triangle | \(\frac{1}{2}bh\) |
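A quick numerical spot-check of a few rows of the tables above, with arbitrarily chosen dimensions:

```python
import math

# Right circular cylinder with r = 3, h = 5.
r, h = 3.0, 5.0
csa = 2 * math.pi * r * h        # curved surface area: 2*pi*r*h
tsa = 2 * math.pi * r * (r + h)  # total surface area:  2*pi*r*(r+h)
vol = math.pi * r ** 2 * h       # volume:              pi*r^2*h
print(round(csa, 2), round(tsa, 2), round(vol, 2))  # → 94.25 150.8 141.37

# Cube with edge a = 4.
a = 4.0
print(6 * a ** 2, a ** 3)  # total surface area and volume → 96.0 64.0
```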
Maxwell's Equations are usually written as:
$$\vec{\nabla}\cdot\vec{E}=\rho/\epsilon_0,$$
$$\vec{\nabla}\cdot\vec{B}=0,$$
$$\vec{\nabla}\times \vec{E}=-\frac{\partial \vec{B}}{\partial t},$$
$$\vec{\nabla}\times \vec{B}=\mu_0\left(\vec{J}+\epsilon_0\frac{\partial \vec{E}}{\partial t}\right).$$
But the last two might be written more clearly as
$$\frac{\partial \vec{B}}{\partial t}=-\vec{\nabla}\times \vec{E}$$ and
$$\frac{\partial \vec{E}}{\partial t}=\frac{1}{\epsilon_0}\left(-\vec{J}+\frac{1}{\mu_0}\vec{\nabla}\times \vec{B}\right).$$
And then they clearly tell you how the fields change. And now we know what it takes for the fields to not change. An irrotational electric field causes the magnetic field to be steady in time. A $\vec{B}$ field whose curl is exactly balanced by current, produces a steady electric field. But this doesn't tell us what the fields are. For instance if a particle was at rest its electrostatic field could be irrotational, and there could be no magnetic field (or current), so all the fields are steady. But there could also be a wave travelling through space that hasn't yet reached the particle (so the particle stays at rest ... for now, until the wave gets to it). The locations of the particles and their motions don't by themselves tell us the fields.
For a particle at rest, there is a natural and very simple solution, an inverse square electric field, unchanging, and zero magnetic field. To another observer moving at a constant velocity, they see that charged particle moving at constant velocity. So the fields they see, are a very natural solution (of the many possible) to the equations for a charged particle moving at constant velocity.
That's the most natural solution, and you can compute it. If you try to use the causal equations I listed above, it turns into a chicken and egg problem, the fields now depend on what they were in the past. Which is reasonable, but where do you stop?
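To make the "evolution" reading concrete, here is a minimal 1-D vacuum sketch in the spirit of the Yee/FDTD scheme. Units are chosen so $c=1$; the grid size, time step and Gaussian source are all illustrative assumptions, and this is merely a discretisation of the evolution form above, nothing more:

```python
import math

n = 200
e = [0.0] * n    # samples of E_y
b = [0.0] * n    # samples of B_z, staggered half a cell to the right
dx = 1.0
dt = 0.5 * dx    # Courant-stable step for c = 1

for step in range(120):
    e[5] += math.exp(-((step - 30) / 10.0) ** 2)  # soft Gaussian source
    for i in range(n - 1):                        # dB/dt = -dE/dx
        b[i] -= dt / dx * (e[i + 1] - e[i])
    for i in range(1, n):                         # dE/dt = -dB/dx
        e[i] -= dt / dx * (b[i] - b[i - 1])

# Cells the pulse has not yet reached are still exactly zero -- the
# "wave hasn't reached the particle yet" situation described above.
print(max(abs(v) for v in e[150:]))  # → 0.0
```

The update stencil only couples neighbouring cells, so the disturbance spreads outward step by step: distant field values stay zero until it arrives.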
Edit for Sofia
In the frame where the charge is at rest at point $\vec{p}$, the charge density could be $\rho(\vec{r},t)=Q\delta^3(\vec{r}-\vec{p})$ and the current density definitely is $\vec{J}(\vec{r},t)=\vec{0}$. (If the charge is extended, $\rho(\vec{r},t)$ could equal $\frac{Q}{4\pi R^3/3}$ if $\left|\vec{r}-\vec{p}\right|<R$ and zero otherwise, as another example.) There are many possible electric fields, but $\vec{E}(\vec{r},t)=\frac{Q}{4\pi \epsilon_0 \left|\vec{r}-\vec{p}\right|^3}\left(\vec{r}-\vec{p}\right)$ (outside the charge) is a natural and simple one. And again there are many possible $\vec{B}$ fields, but $\vec{B}(\vec{r},t)=\vec{0}$ is a natural and simple one. Then the Lorentz-transformed versions of these $\vec{E}$, $\vec{B}$, $\rho$ and $\vec{J}$ are obviously natural and simple solutions, though there are still many. Just as there are many possible solutions for electric fields when there is no charge present. For example, you could have a plane wave in any direction, with any magnitude, as well as countless other solutions.
Edit for Agnivesh Singh
What I assume is that the fields obey Maxwell. The first two Maxwell (about the divergence) are constraints. At any time, those need to hold. The other two (with the time derivatives) are about how the fields evolve (so very relevant to your question). What's nice about all four, is that if the constraints hold at one time, and the fields evolve by the evolution equations, then the constraints will continue to hold. What's unfortunate is twofold. One, that they are about the total fields, not about the field due to this or the field due to that. Second, they don't include boundary conditions, so there is no unique solution until you specify boundary conditions.
If I didn't assume the fields obey Maxwell, then actually it would violate conservation of energy and momentum. But that's because we assign an energy and momentum density to the fields such that their flux through vacuum is conserved and that their flux at charges and currents is exactly the force and power exerted by the fields on the charges through the Lorentz Force Law. So it's a bit cheating, but if you'd rather take the energy density and momentum density of fields as given, then we need the fields to satisfy Maxwell to conserve energy and momentum.
I am more interested in the mechanism. My question was aimed to know how electric fields propagate?
The fields evolve according to the equations I provided. If by propagation you mean energy and momentum transport, I think that's a different question (meaning a new question is needed, including the research stage, not an edit to this question). The biggest problem is again that there is no unique solution to Maxwell, even with no charges, there are many possible fields. Throw in a charge and there are again still many possible fields, and the equations don't specify a field as being due to a charge, they are just about the total field due to everything.
There are other difficulties to propagation. If you moved your charge to make a large field nearby and then watched that large deviation propagate, it literally is a source free field (like radiation) and so even if it propagates at $c$, you might object because you wanted to know if fields due to charges propagate at $c$. For that you'd have to trace the disturbance all the way back to the charge. And if your charge is a point charge, the fields themselves blow up there, so now you have infinities to deal with. You can try to make that work, but how convincing is it going to be in the end? And if you have an extended charge, then my example of a simple and natural field (due to the charge) isn't really convincing either since it won't look like a sphere in all frames. If it was a sphere in its own rest frame it will look pancake like in a moving frame. If it is a sphere in the moving frame, it won't look like a sphere in its own rest frame. You can try a fluid model, but energy and momentum conservation for a nonpoint charge actually fail unless you have something that holds the extended charge together; charged fluids actually physically spread out.
There is also a completely different way to see causality, which is to use
Jefimenko's Equations:
$$ \vec{E}(\vec{P},t)=\iiint \frac{\left(c^2\rho(\vec{r},t_r)+c|\vec{P}-\vec{r}|\dot{\rho}(\vec{r},t_r)\right)(\vec{P}-\vec{r})-|\vec{P}-\vec{r}|^2\dot{\vec{J}}(\vec{r},t_r)}{c^2|\vec{P}-\vec{r}|^34\pi\epsilon_0}d^3\tau,$$
$$ \vec{B}(\vec{P},t)=\frac{\mu_0}{4\pi}\iiint \frac{\left(c\vec{J}(\vec{r},t_r)+|\vec{P}-\vec{r}|\dot{\vec{J}}(\vec{r},t_r)\right)\times (\vec{P}-\vec{r})}{c|\vec{P}-\vec{r}|^3}d^3\tau.$$
However if you took those as a given, instead of Maxwell, you not only lose some solutions to Maxwell (like a primordial radiation field) but it also begs the question since the fields are explicitly calculated from the charge and currents in the past (at $t_r$) where $t_r=t-\frac{1}{c}\left|\vec{P}-\vec{r}\right|$.
I'm not expecting anything I wrote to make you completely happy. But maybe learning these things allows you to ask (or find and understand) new and more detailed questions to address your concerns. So hopefully you learned as much as possible based on how you phrased your question.
Research, Open Access
Blow-up and nonexistence of solutions of some semilinear degenerate parabolic equations
Boundary Value Problems, volume 2015, Article number: 157 (2015)
Abstract
In this paper we study a class of semilinear degenerate parabolic equations arising in mathematical finance and in the theory of diffusion processes. We show that blow-up of spatial derivatives of smooth solutions in finite time occurs to initial boundary value problems for a class of degenerate parabolic equations. Furthermore, nonexistence of nontrivial global weak solutions to initial value problems is studied by choosing a special test function. Finally, the phenomenon of blow-up is verified by a numerical experiment.
Introduction
In this paper, we consider the equation
$$ u_{xx} + uu_{y} - u_{t} = f(z), \qquad (1.1) $$
where \(z= (x,y,t) \) denotes the point in \(\mathbb{R} ^{3}\). This equation arises in mathematical finance [1] and in physical phenomena such as diffusion and convection of matter. One of the main features of equation (1.1) is the strong degeneracy due to the lack of diffusion in the y-direction. We restrict our consideration to two cases: the initial boundary value problems of (1.1) and the initial value problems of (1.1).
Regarding the theoretical analysis of (1.1), much work has been devoted to the study of well-posedness and regularity of solutions [2–5]. Antonelli and Pascucci [2] proved that there exists a unique viscosity solution to the initial value problem for (1.1) in a small time. The existence and uniqueness of a global solution in an unbounded domain was studied by Vol'pert and Hudjaev [5]. On the regularity of solutions, Citti et al. [3] proved that the viscosity solution of (1.1) is a classical solution in the sense that \(u_{xx} \), \(uu_{y}-u_{t} \) are continuous and the equation is pointwise satisfied. Furthermore, they obtained the smooth solution of (1.1) when \(f(z) \in C^{\infty}(\Omega)\) and \(\partial_{x}u \neq0 \), in an open set \(\Omega\subset\mathbb{R}^{3} \) in [4].
Blow-up and nonexistence of solutions for (1.1) are as important aspects of properties of partial differential equations. In [6], Fujita described the initial problem of a semi-linear parabolic equation, which takes place blowing up even when the initial data is very nice. Ever since then, results about blow-up and nonexistence have been generalized to deal with some more general semilinear, quasilinear and fully nonlinear parabolic equations and systems. Without being exhaustive with the amount of references concerned with this topic, let us mention the works [7–11]. For a more extensive list of references, we refer to the book by Quittner and Souplet [12].
has no nontrivial nonnegative solutions in [14]. Interestingly, replacing the right-hand term \(u^{1+\alpha}\) by \(u|u|^{\alpha}\) in the first equation of the above problem, Haraux and Weissler [17] obtained global solutions.
In this paper, we will mainly deal with the following problems:
and
It is known that the local solutions are obtained for (1.2) and (1.3) in [2]. Our interest is the blow-up of spatial derivatives of solutions in finite time to the initial boundary value problem (1.2) and the nonexistence of the weak solutions to the initial value problem (1.3).
Our main results are the following theorems.
Firstly, we define energy functionals
Theorem 1.1 Let \(a_{0}(x)\) have compact support such that \(E(a_{0})<0 \). Assume that the initial value \(g(x,y) \) takes the form \(g=yb_{0}(x,y) \), \(b_{0}(x,0)=a_{0}(x) \). Then spatial derivatives of smooth solutions of (1.2) blow up in finite time. More precisely, there exists \(T=\frac{F(a_{0})}{6(1-\beta)E(a_{0})} \), \(\beta\in(1,\frac{3}{2}) \), such that either
This is our first result. A smooth solution u of (1.2) means \(u \in C^{1}([0, T_{0}), C^{2}(\mathbb{R}^{+} \times\mathbb{R})) \) for \(T_{0}>0\). It is remarkable that Theorem 1.1 remains valid if we replace \(-u|u|^{\alpha}\) by \(u|u|^{\alpha}\) or 0.
Next, we consider the more general case
Continuing with the description of our results, let us introduce the precise assumptions on our f:

(H) \(f(0)=0\) and there exists an increasing continuous function ϕ on \([0,+\infty)\) such that
$$ \bigl| f(r_{1})-f(r_{2})\bigr|\leq\phi\bigl(| r_{1}-r_{2}|\bigr), $$
and \(\frac{1}{\phi(r)} \) is not integrable near \(r = +0 \), that is,
$$ \int_{0}^{\delta} \frac{dr}{\phi(r)} = + \infty, $$
where δ is a positive constant.
Then Theorem 1.1 can be extended to the following theorem.
Theorem 1.2
For initial value problems, we derive two theorems.
The following theorem considers blow-up of solutions to the initial value problem
Theorem 1.3 Assume that u is the bounded classical solution of (1.6) in \(\overline{Q}_{T_{\varepsilon}} \), \(Q_{T_{\varepsilon}}=\mathbb{R}^{2} \times(0,T-\varepsilon) \), for any given \(\varepsilon\in(0,T) \). If \(f\leq0 \) and \(g(x, c_{0}) \geq\frac{c_{0}}{T} \), \(c_{0}>0 \), then u blows up in time T at \(y=c_{0}\).
This improves the result of Example 1.1 in [2].
Definition 1
A function \(u \in L_{\mathrm{loc}}^{2}(Q) \) is called a weak solution of (1.3) with the initial data \(g(x,y) \in L_{\mathrm{loc}}^{1}(\mathbb{R}^{2}) \) in \(Q=\mathbb{R}^{2} \times( 0,\infty)\) if \(t^{k}|x|^{-\gamma}|u|^{\alpha+1}\in L_{\mathrm{loc}}^{1}(Q) \) and
hold for any nonnegative \(\phi\in C_{0}^{2}(\mathbb{R}^{2}\times [0,\infty))\).
Now, we address our result.
Theorem 1.4 Let \(\alpha>1\), \(k-\frac{\gamma}{2}>0\). Assume that \(\int_{\mathbb{R}^{2}}g(x,y)\,dx\,dy \geq0\). If \(\alpha\leq k-\frac{\gamma}{2}+1\), then there exists no nontrivial weak solution of (1.3).
The rest of the paper is organized as follows. Section 2 is devoted to initial boundary value problems (1.2) and (1.5) through energy methods. In Section 3, we investigate initial value problems (1.3) and (1.6) by a comparison principle and choosing a special test function. Finally, we describe a numerical result about the blow-up of solutions in Theorem 1.1 in Section 4.
Initial boundary value problems

Proof of Theorem 1.1
Suppose that a smooth solution u of (1.2) exists locally and the initial value \(g(x,y) \) satisfies the form \(g=yb_{0}(x,y) \). If we restrict (1.2) to the half line \(l=\lbrace x>0,y=0 \rbrace\) and let \(v(x,t)=u(x,0,t) \), v obviously satisfies an equation of the form
where \(w(x,t)=u_{y}(x,0,t) \) is smooth, with the initial data \(v(x,0)=0 \) and the boundary data \(v(0,t)=0 \). By the maximum principle, we conclude that \(u(x,0,t)=v(x,t)=0 \) as long as u stays smooth. Any smooth function that vanishes at \(y=0 \) can be written in the form \(u=yb(x,y,t)\).
Let \(a(x,t)=b(x,0,t) \) and \(a_{0}(x)=b(x,0,0) \). Then
a satisfies
with the initial boundary value conditions
The proof of Theorem 1.1 is based on the following lemma.
Lemma 2.1 If \(a_{0}(x) \) has compact support such that \(E( a_{0})<0 \) (E is defined as in (1.4)), then there exists a finite time T such that either

Proof
Assume that \(\max_{x\in\mathbb{R}^{+}}a \) stays bounded. Since a satisfies equation (2.2), the standard result shows that a decays exponentially fast at infinity as long as its maximum norm stays bounded.
Next, we will show that \(F(a) \) (\(F\) is defined as in (1.4)) blows up in finite time, assuming that \(a_{x}(0,t) \) stays finite. We will use the following integral identities, which are valid for the smooth solutions of (2.2)-(2.3):
Thus, we have \(E(a)<0 \) for \(t>0 \) under the condition \(E(a_{0})<0 \).
Finally, we compute the time derivative of \(H(a)=-\frac{E(a)}{F(a)^{\beta}} \). First, we have
Furthermore,
If we choose \(\beta\in(1,\frac{3}{2}) \), then
By the definition of \(H(a)\), we get \(-E(a)\geq H(a_{0})F(a)^{\beta} \), where \(H(a)|_{t=0}=H(a_{0})\).
Since
we deduce
Hence there exists a finite time \(T=\frac{F(a_{0})}{6(1-\beta)E(a_{0})} \), \(\beta\in(1,\frac{3}{2}) \) such that
Due to the condition
we get
This completes the proof of Lemma 2.1. □
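For the reader's convenience, here is a sketch of how the explicit blow-up time arises, assuming (as in the parallel displayed inequality in the proof of Lemma 2.2) that \(\frac{dF}{dt}\geq-6E\geq6H(a_{0})F^{\beta}\). Integrating this differential inequality gives

$$\frac{d}{dt}F^{1-\beta}=(1-\beta)F^{-\beta}\frac{dF}{dt}\leq6(1-\beta)H(a_{0}),$$

so that \(F(t)^{1-\beta}\leq F(a_{0})^{1-\beta}+6(1-\beta)H(a_{0})t\). Since \(\beta>1\) and \(H(a_{0})=-\frac{E(a_{0})}{F(a_{0})^{\beta}}>0\) when \(E(a_{0})<0\), the right-hand side decreases to zero at

$$T=\frac{F(a_{0})^{1-\beta}}{6(\beta-1)H(a_{0})}=\frac{F(a_{0})}{6(1-\beta)E(a_{0})},$$

which forces \(F(t)\rightarrow\infty\) as \(t\rightarrow T\).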
Proof of Theorem 1.1
This implies that either
□
Proof of Theorem 1.2
if \(g=yb_{0}(x,y) \) and \(f\) satisfies hypothesis (H). Substituting \(u(x,y,t)=yI(x,y,t) \) into the first equation of (1.5), we get
Let \(s(x,t)=I(x,0,t) \) and \(s_{0}(x)=I(x,0,0) \). Using \(f(0)=0 \) and \(f'(0)\leq0 \), we multiply (2.7) by \(\frac{1}{y} \) and take the limit as \(y\rightarrow0 \) to get
with the initial boundary value conditions
Setting \(\psi=\exp(f'(0)t)s\), \(\psi\) satisfies
Lemma 2.2 Define If the initial value \(\psi_{0}=\psi(x,0) \) has compact support such that \(E_{1}(\psi_{0})<0 \) and \(f'(0)\leq0 \), then there exists a finite time T such that either
Proof
We proceed by contradiction, in analogy with the proof of Lemma 2.1: we assume that \(\psi_{x}(0,t) \) stays finite and show that \(F(\psi) \) (\(F\) is defined as in (1.4)) blows up in finite time. The following integral identities are valid for smooth solutions of (2.8):
Due to \(f'(0)\leq0 \), we get
Since
we have
If we define \(H(a)=-\frac{E_{1}(a)}{F(a)^{\beta}} \) and choose \(\beta\in(1,\frac{3}{2}) \), then
We have \(-E_{1}(\psi)\geq H(\psi_{0})F^{\beta} \) and \(\frac{dF}{dt}\geq-6E_{1}\geq6H(\psi_{0})F^{\beta}\), where \(H(\psi)|_{t=0}=H(\psi_{0})\). Hence there exists a finite time \(T=\frac{F(\psi_{0})}{6(1-\beta)E_{1}(\psi_{0})} \), \(\beta\in(1,\frac{3}{2}) \) such that
This completes the proof of Lemma 2.2. □
Theorem 1.2 then follows.
Remark 1
Replacing the semilinear term \(uu_{y}\) of (1.5) by \(h(u)u_{y}\), if \(h(u) \) satisfies hypothesis (H) and \(f'(0)h'(0) \leq0 \), then the smooth solutions of (1.5) satisfy the same conclusion as Theorem 1.2.
Remark 2
where \(\Delta_{x} \) is the Laplace operator acting in the variable \(x=(x_{1}, x_{2}, \ldots, x_{N} ) \in \mathbb {R}^{N}_{+} \).
Initial value problems
For the convenience of description, we set
Next, we establish a comparison principle for the initial value problem (1.6).
Lemma 3.1 Assume that there are two solutions \(u_{i}\) of (1.6) satisfying \(u_{i} \in C^{2,1}(Q_{T}) \cap C(\overline{Q}_{T}) \) and \(u_{i} , (u_{2})_{y} \in L^{\infty}(Q_{T}) \), \(i=1,2 \). If \(f(u_{1}) \leq f(u_{2}) \) and \(g(u_{1}) \geq g(u_{2}) \), then \(u_{1} \geq u_{2} \). Proof
Set \(w=u_{1}-u_{2} \),
We suppose that \(r_{0} >0\), \(\alpha>0 \), \(N>0 \), and
and set
for \(r^{2}=x^{2}+y^{2} \).
Defining \(\overline{L}\) by
we have
Choosing
we get \(\overline{L}v \leq0 \).
In \(\Omega_{r_{0}}=\{(x,y,t)|x^{2}+y^{2}\leq r_{0}^{2},0\leq t\leq T\} \), due to \(v|_{t=0} \geq0 \), \(v|_{r=r_{0}} \geq0 \), by the maximum principle, we obtain \(v\geq0 \).
For any \(p \in Q_{T} \), if we choose \(r_{0} \) sufficiently large such that \(p \in\Omega_{r_{0}} \), then \(v |_{p}\geq0 \).
Letting \(r_{0}\rightarrow\infty\), we get \(w |_{p}=(u_{1}-u_{2}) |_{p}\geq0 \). □
Proof of Theorem 1.3
Taking \(u_{1}=\frac{y}{T-t} \), we see that
Fixing \(y=c_{0}>0 \), we have \(Lu_{1}(x,c_{0},t)\geq Lu(x,c_{0},t) \). When \(g(x,c_{0}) \geq\frac{c_{0}}{T} \), we get \(u\geq \frac{c_{0}}{T-t} \) by Lemma 3.1.
At \(y=c_{0} \),
□
Finally, we give the proof of Theorem 1.4.
Proof of Theorem 1.4
Let \(u\) be such a weak solution of (1.3) and let \(\phi\in C_{0}^{2}(\mathbb {R}^{2}\times [0,\infty))\) be a nonnegative test function. Applying the first equation of (1.3) and Young’s inequality, we obtain
where \(\alpha>1\).
We define
where \(\psi\in C_{0}^{\infty}(\mathbb {R}^{+})\) satisfies \(0\leq\psi\leq1 \) and
Then
This implies that \(u\equiv0\) in \(Q\).
In the case where \(\gamma+2\alpha-2k-2=0\), we get from (3.1) that
Set \(\Omega_{r}=\{(x, y, t)\in\mathbb{R}^{2}\times(0,\infty):r^{2} \leq t+x^{2}+y^{2} \leq2r^{2}\}\). Since \(\psi(s)\) is constant for \(s\in[0,1]\cup[2,\infty)\), we have
It follows from the integrability of \(t^{k}|x|^{-\gamma}|u|^{\alpha+1} \) in \(Q\) that
This implies that \(u\equiv0\). □
A numerical experiment
Next, we present a numerical experiment. Our goal is to show that the result of Theorem 1.1 can be observed in numerical computations. For the numerical experiment, we choose an adaptive bounded space for problem (1.2).
At \(y=0\), (1.2) on a bounded domain can be rewritten as the following problem:
Figure 1 shows the evolution of the numerical solution of (4.1) with space step size 0.01, whose blow-up time turns out to be \(T = 0.56\). For any \(T \geq0.56 \), Matlab fails to render the figure on \((0,T) \) since the function values increase too rapidly. In Figure 2, we display the profile of Figure 1 at \(t=0.55\).
References
1. Antonelli, F, Barucci, E, Pascucci, A: A comparison result for FBSDE with applications to decisions theory. Math. Methods Oper. Res. 54, 407-423 (2001)
2. Antonelli, F, Pascucci, A: On the viscosity solutions of a stochastic differential utility problem. J. Differ. Equ. 186, 69-87 (2002)
3. Citti, G, Pascucci, A, Polidoro, S: Regularity properties of viscosity solutions of a non-Hörmander degenerate equation. J. Math. Pures Appl. 80, 901-918 (2001)
4. Citti, G, Pascucci, A, Polidoro, S: On the regularity of solutions to a nonlinear ultraparabolic equation arising in mathematical finance. Differ. Integral Equ. 14, 701-738 (2001)
5. Vol’pert, AI, Hudjaev, SI: Cauchy’s problem for degenerate second order quasilinear parabolic equations. Math. USSR Sb. 7, 365-387 (1969)
6. Fujita, H: On the blowing up of solutions of the Cauchy problem for \(u_{t}=\Delta u+u^{1+\alpha}\). J. Fac. Sci., Univ. Tokyo, Sect. IA, Math. 13, 109-124 (1966)
7. Chipot, M, Weissler, FB: Some blowup results for a nonlinear parabolic equation with a gradient term. SIAM J. Math. Anal. 20, 886-907 (1989)
8. Giga, Y, Matsui, S, Sasayama, S: Blow up rate for semilinear heat equations with subcritical nonlinearity. Indiana Univ. Math. J. 53, 483-514 (2004)
9. Quittner, P, Souplet, P, Winkler, M: Initial blow-up rates and universal bounds for nonlinear heat equations. J. Differ. Equ. 196, 316-339 (2004)
10. Armstrong, SN, Sirakov, B: Nonexistence of positive supersolutions of elliptic equations via the maximum principle. Commun. Partial Differ. Equ. 36, 2011-2047 (2011)
11. Liu, GW, Zhang, HW: Blow up at infinity of solutions for integro-differential equation. Appl. Math. Comput. 230, 303-314 (2014)
12. Quittner, P, Souplet, P: Superlinear Parabolic Problems. Blow-up, Global Existence and Steady States. Birkhäuser Advanced Texts (2007)
13. Weinan, E, Engquist, B: Blowup of solutions of the unsteady Prandtl’s equation. Commun. Pure Appl. Math. 50, 1287-1293 (1997)
14. Pascucci, A: Fujita type results for a class of degenerate parabolic operators. Adv. Differ. Equ. 4, 755-776 (1999)
15. Caristi, G: Existence and nonexistence of global solutions of degenerate and singular parabolic systems. Abstr. Appl. Anal. 5, 265-284 (2000)
16. Földes, J: Liouville theorems, a priori estimates, and blow-up rates for solutions of indefinite superlinear parabolic problems. Czechoslov. Math. J. 61, 169-198 (2011)
17. Haraux, A, Weissler, FB: Non-uniqueness for a semilinear initial value problem. Indiana Univ. Math. J. 31, 167-189 (1982)
Acknowledgements
This work was done when the author was visiting the Institute of Mathematical Sciences, the Chinese University of Hong Kong. The author would like to express her sincere thanks to Professor Zhouping Xin for his helpful references and fruitful comments. The author also would like to express her deep gratitude to the anonymous referee for careful reading and valuable suggestions. The author is supported by the Research Innovative Program of Jiangsu Province (No. CXLX13-188) and the Excellent Ph.D Student Foundation of NUST.
Additional information Competing interests
The author declares that they have no competing interests.
Global existence and blow-up to the solutions of a singular porous medium equation with critical initial energy. Boundary Value Problems, volume 2016, Article number: 80 (2016)
Abstract
This paper is devoted to the study of a singular porous medium equation, which has been studied extensively in recent years. We obtain global existence and blow-up conditions at the critical initial energy \(E(u_{0})=d\), while previous papers only considered the case \(E(u_{0})< d\), where \(d\) is a positive constant which will be given in the main part of this paper.
Introduction
Suppose a compressible fluid flows in a homogeneous isotropic rigid porous medium. Then the volumetric moisture content \(\theta(x)\), the macroscopic velocity \(\vec{V}\), and the density of the fluid \(\rho\) are governed by the following equation [1, 2]:
where \(f(u)\) is the source. From Darcy’s law, one has the following relation:
where \(\rho\vec{V}\) and \(P\) denote the momentum velocity and pressure, respectively, and \(\lambda >0\) is some physical constant.
If the fluid considered is a polytropic gas, then the pressure and density satisfy the following equation of state:
In this paper, we consider (1.4) with \(\theta(x)=\vert x\vert ^{-\delta}\) and \(f(\rho)=\rho^{\sigma}\). Furthermore, we impose a zero boundary condition on this problem. Then, after a change of variables and notation, we get the following initial-boundary value problem:
where \(u_{0} \in H^{1}_{0}(\Omega)\) is a nonnegative and nontrivial function, \(T \in(0,\infty] \), Ω is a bounded domain in \(\mathbb {R}^{N}\) (\(N\geq3\)) with smooth boundary \(\partial\Omega\), \(m\geq1\), \(0\leq s\leq1+1/m\leq2\), \(m< p-1\leq\frac{(N+2)m}{N-2}\).
A function \(u\) is called a solution of (1.5) if$$u^{m}\in L^{\infty}\bigl(0,T;H_{0}^{1}( \Omega ) \bigr),\quad \int_{0}^{T} \bigl\Vert \vert x\vert ^{-\frac{s}{2}} \bigl(u^{\frac{m+1}{2}} \bigr)_{t} \bigr\Vert _{2}^{2}\,dt< +\infty, $$
and \(u\) satisfies (1.5) in the distribution sense.
The energy functional related to the stationary equation$$ E(u)=\frac{1}{2m} \int_{\Omega }\bigl\vert \nabla u^{m} \bigr\vert ^{2}\,dx-\frac {1}{m+p-1} \int_{\Omega }\vert u\vert ^{m+p-1}\,dx, \quad u^{m}\in H_{0}^{1}(\Omega ). $$(1.6)
The Nehari functional$$ H(u)= \int_{\Omega }\bigl\vert \nabla u^{m} \bigr\vert ^{2}\,dx- \int_{\Omega }\vert u\vert ^{m+p-1}\,dx, \quad u^{m}\in H_{0}^{1}(\Omega ). $$(1.7)
The Nehari manifold$$ K= \bigl\{ u:u^{m}\in H_{0}^{1}( \Omega ), H(u)=0, u\neq0 \bigr\} . $$(1.8)
The potential depth$$\begin{aligned} d =&\inf \Bigl\{ \sup_{\lambda \geq0}E(\lambda u): u^{m} \in H_{0}^{1}(\Omega ), u\neq0 \Bigr\} \\ =&\inf_{u\in K}E(u)= \frac{p-1-m}{2m(m+p-1)}C^{\frac {-2(m+p-1)}{p-1-m}}, \end{aligned}$$(1.9)
where \(C\) is the optimal constant of the Sobolev embedding \(H_{0}^{1}(\Omega )\subset L^{\frac{m+p-1}{m}}(\Omega )\). In particular, we have$$ \bigl\Vert u^{m} \bigr\Vert _{\frac{m+p-1}{m}}\leq C \bigl\Vert \nabla u^{m} \bigr\Vert _{2} $$(1.10)
for \(u^{m}\in H_{0}^{1}(\Omega )\) since \(m< p-1\leq\frac{(N+2)m}{N-2}\), where \(\Vert \cdot \Vert _{r}\) denotes the norm of \(L^{r}(\Omega )\).
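As a consistency check, the explicit constant in (1.9) can be recovered by a routine computation using only (1.6) and (1.10); we write \(X=\Vert \nabla u^{m}\Vert _{2}^{2}\) and \(Y=\int_{\Omega}\vert u\vert ^{m+p-1}\,dx\) for brevity. For fixed \(u\neq0\),

$$E(\lambda u)=\frac{\lambda^{2m}}{2m}X-\frac{\lambda^{m+p-1}}{m+p-1}Y, \qquad \sup_{\lambda\geq0}E(\lambda u)=\frac{p-1-m}{2m(m+p-1)} \biggl(\frac{X^{\frac{m+p-1}{2m}}}{Y} \biggr)^{\frac{2m}{p-1-m}},$$

the supremum being attained at \(\lambda^{p-1-m}=X/Y\). By (1.10), \(X^{\frac{m+p-1}{2m}}/Y\geq C^{-\frac{m+p-1}{m}}\), and this lower bound is sharp by the optimality of \(C\); taking the infimum over \(u\) therefore gives \(d=\frac{p-1-m}{2m(m+p-1)}C^{\frac{-2(m+p-1)}{p-1-m}}\), in agreement with (1.9).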
The sets related to global existence and blow-up$$ \begin{aligned} &\Sigma_{1}= \bigl\{ u:u^{m}\in H_{0}^{1}(\Omega ), E(u)< d, H(u)>0 \bigr\} \cup \{0\}, \\ &\Sigma_{2}= \bigl\{ u:u^{m}\in H_{0}^{1}( \Omega ), E(u)< d, H(u)< 0 \bigr\} . \end{aligned} $$(1.11)
The solution \(u(x,t)\) of problem (1.5) is said to blow up at a finite time \(T\) if \(\Vert u\Vert _{L^{\infty}(\Omega)}\rightarrow +\infty\) as \(t\rightarrow T_{-}\). Otherwise, we say \(u(x,t)\) exists globally. The following is the main result of [5]. Theorem 1.1 If \(u_{0}\in\Sigma_{1}\), then the solution u of problem (1.5) exists globally; if \(u_{0}\in\Sigma_{2}\), then u blows up at finite time.
In view of the above results, we may ask whether the solution \(u\) of problem (1.5) blows up or exists globally when \(E(u_{0})\geq d\). The main task of this paper is to answer this question for \(E(u_{0})=d\). In order to state the main results of the present paper, we introduce two sets as follows:
Then
The main result of this paper is the following theorem.
Theorem 1.2 Assume \(E(u_{0})=d\). Then we have: 1. if \(u_{0}\in\mathcal{S}\), then problem (1.5) admits a global solution u such that \(u^{m}(t)\in L^{\infty}(0,+\infty; H_{0}^{1}(\Omega))\) and \(u(t)\in\bar{\mathcal{S}}=\mathcal{S}\cup \partial \mathcal{S}\) for \(0\leq t<+\infty\); 2. if \(u_{0}\in\mathcal{B}\), then the solution of problem (1.5) blows up at finite time.
Proof of Theorem 1.2
In this section, we will prove Theorem 1.2. First of all, we will introduce some useful lemmas.
Lemma 2.1 Assume that the function \(u\not\equiv0\) satisfies \(u^{m}\in H_{0}^{1}(\Omega )\). Then there exists a unique positive value \(\mu_{*}\) defined as such that \(E(\mu u)\) is strictly increasing for \(0<\mu<\mu_{*}\) and strictly decreasing for \(\mu_{*}<\mu<\infty\). Proof
From
and \(p>m+1\) we get \(\lim_{\mu\rightarrow0}E(\mu u)=0\), \(\lim_{\mu \rightarrow+\infty}E(\mu u)=-\infty\). Furthermore, since \(\mu=\mu _{*}\) is the unique positive root of the equation \(\frac{dE(\mu u)}{d\mu }=0\), the conclusion follows. □
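Explicitly (a short computation from the definition (1.6) of E), differentiating gives

$$\frac{dE(\mu u)}{d\mu}=\mu^{2m-1} \bigl\Vert \nabla u^{m} \bigr\Vert _{2}^{2}-\mu^{m+p-2} \int_{\Omega}\vert u\vert ^{m+p-1}\,dx, \qquad \mu_{*}= \biggl(\frac{\Vert \nabla u^{m}\Vert _{2}^{2}}{\int_{\Omega}\vert u\vert ^{m+p-1}\,dx} \biggr)^{\frac{1}{p-1-m}};$$

since \(2m-1<m+p-2\) (as \(p>m+1\)), the derivative is positive for \(0<\mu<\mu_{*}\) and negative for \(\mu>\mu_{*}\).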
Lemma 2.2 (i) If \(u \in\mathcal{S}\) and \(\Vert \nabla u^{m}\Vert _{2}\neq0\), then \(\Vert \nabla u^{m}\Vert _{2}^{2}>\Vert u^{m}\Vert _{\frac{m+p-1}{m}}^{\frac{m+p-1}{m}}\). (ii) If \(u\in \partial \mathcal{S}\), then \(\Vert \nabla u^{m}\Vert _{2}^{2} \geq \Vert u^{m}\Vert _{\frac {m+p-1}{m}}^{\frac{m+p-1}{m}} \). (iii) If \(\Vert \nabla u^{m}\Vert _{2}^{2} < \Vert u^{m}\Vert _{\frac{m+p-1}{m}}^{\frac{m+p-1}{m}} \), then \(u\in\mathcal{B}\). (iv) If \(\Vert \nabla u^{m}\Vert _{2}^{2} \leq \Vert u^{m}\Vert _{\frac{m+p-1}{m}}^{\frac{m+p-1}{m}}\) and \(\Vert \nabla u^{m}\Vert _{2}\neq0 \), then \(u\in\mathcal {B}\cup \partial \mathcal{B}\). Proof
which implies \(\Vert \nabla u^{m}\Vert _{2}^{2}> \Vert u^{m}\Vert _{\frac{m+p-1}{m}}^{\frac{m+p-1}{m}}\).
(ii) From \(u\in \partial \mathcal{S}\) we get
Then in the same way as the proof of (i), \(\Vert \nabla u^{m}\Vert _{2}^{2}\geq \Vert u^{m}\Vert _{\frac {m+p-1}{m}}^{\frac{m+p-1}{m}}\) holds.
(iii) By (1.10) and \(\Vert \nabla u^{m}\Vert _{2}^{2}<\Vert u^{m}\Vert _{\frac{m+p-1}{m}}^{\frac{m+p-1}{m}}\), we have
which is equivalent to \(\Vert \nabla u^{m}\Vert _{2}>C^{\frac {-(m+p-1)}{p-1-m}}\). So \(u\in \mathcal {B}\).
(iv) In the same way as the proof of (iii), we have
which implies \(u\in\mathcal{B}\cup \partial \mathcal{B}\). □
Lemma 2.3
Proof
Lemma 2.4 Let u be the solution of (1.5) with initial value \(u_{0}\) such that \(u_{0}^{m}\in H_{0}^{1}(\Omega )\) and \(E(u_{0})\leq d\). Then (i)
\(\Vert \nabla u^{m}\Vert _{2}^{2}> \Vert u^{m}\Vert _{\frac{m+p-1}{m}}^{\frac{m+p-1}{m}}\)
if and only if \(0<\Vert \nabla u^{m}\Vert _{2}< (\frac {2m(m+p-1)}{p-1-m}d )^{\frac{1}{2}} \); (ii)
\(\Vert \nabla u^{m}\Vert _{2}^{2}<\Vert u^{m}\Vert _{\frac{m+p-1}{m}}^{\frac{m+p-1}{m}}\)
if and only if \(\Vert \nabla u^{m}\Vert _{2}> (\frac {2m(m+p-1)}{p-1-m}d )^{\frac{1}{2}}\).
Proof
Lemma 2.5 Let u be the solution of (1.5) with initial value \(u_{0}\) such that \(u_{0}^{m}\in H_{0}^{1}(\Omega )\) and \(E(u_{0})\leq d\). Then: (i)
\(u(t)\in \mathcal {S}\)
for \(t\in[0,T)\) if \(u_{0}\in \mathcal {S}\); (ii)
\(u(t)\in \mathcal {B}\)
for \(t\in[0,T)\) if \(u_{0}\in \mathcal {B}\); where \(\mathcal {S}\) and \(\mathcal {B}\) are the sets defined in (1.12). Proof
(i) If the conclusion (i) is false, there must exist a time \(t_{0}\in(0,T)\) such that \(u(t_{0})\in \partial \mathcal {S}\) and \(u(t)\in \mathcal {S}\) for \(0\leq t< t_{0}\). Hence
and
By (2.4) and (2.5) we know that \(\int_{0}^{t_{0}}\Vert \vert x\vert ^{-\frac{s}{2}} (u^{\frac{m+1}{2}} )_{t}\Vert _{2}^{2}\,dt>0\). Then it follows from (2.2) and (2.6) that \(E(u_{0})>E(u(t_{0}))\geq d\), which contradicts \(E(u_{0})\leq d\).
(ii) The conclusion can be proved in the same way as (i). □
Based on the above preparations, we are ready to prove Theorem 1.2.
Proof of Theorem 1.2 (global existence part)
Let \(\lambda_{n}=1-\frac{1}{n}\) and \(u_{0n}=\lambda_{n}u_{0}\) for \(n=2,3,\ldots \) . Then it follows from (2.7), \(\lambda _{n}<1\), and \(m-p+1<0\) that
Furthermore, by Lemma 2.1, there exists an integer \(n_{*}\) such that \(E(\lambda _{n}u_{0})\) is strictly increasing for \(n\leq n_{*}\), which means
Equations (2.8)-(2.10) imply \(u_{0n}\in\Sigma_{1}\), where \(\Sigma_{1}\) is defined as (1.11). Let \(u_{n}\) be the solution of (1.5) with initial value \(u_{0n}\), then Theorem 1.1 implies \(u_{n}\) exists globally such that
Similar to (2.3), for \(0\leq t<+\infty\), \(n=n_{*},n_{*}+1,\ldots\) , we get
Next, we will prove \(\Vert \nabla u_{n}^{m}(t)\Vert _{2}^{2}>\Vert u_{n}^{m}(t)\Vert _{\frac{m+p-1}{m}}^{\frac {m+p-1}{m}}\) for \(0\leq t<+\infty\). If not, it follows from (2.8) that there exists \(t_{*}>0\) such that \(\Vert \nabla u_{n}^{m}(t_{*})\Vert _{2}^{2}=\Vert u_{n}^{m}(t_{*})\Vert _{\frac {m+p-1}{m}}^{\frac{m+p-1}{m}}\). Then it follows from (1.9) that \(E(u_{n}(t_{*}))\geq d\), which contradicts \(E(u_{n}(t_{*}))< d\) by (2.12). Then from (2.12), we obtain
1. \(u\in L^{\infty}(0,T;H_{0}^{1}(\Omega ) )\) and \(\int _{0}^{T}\Vert \vert x\vert ^{-\frac{s}{2}} (u^{\frac{m+1}{2}}(x,t) )_{t}\Vert _{2}^{2}\,dt\leq\frac{d(m+1)^{2}}{4}\),
2. \(u_{k}\rightarrow u\) a.e. on \(\Omega \times(0,T)\),
3. \(u_{k}^{m}\rightarrow u^{m}\) weakly star in \(L^{\infty}(0,T;H_{0}^{1}(\Omega ) )\),
4. \(u_{k}\rightarrow u\) weakly star in \(L^{\infty}(0,T; L^{m+p-1}(\Omega ) )\),
5. \(\vert x\vert ^{-\frac{s}{2}} (u_{k}^{\frac{1+m}{2}} )_{t}\rightarrow \vert x\vert ^{-\frac{s}{2}} (u^{\frac{1+m}{2}} )_{t}\) weakly in \(L^{2}(0,T;L^{2}(\Omega))\).
Then it follows from the construction of \(u_{n}\) that \(u\) is a global solution of (1.5) and \(u(t)\in\bar{\mathcal{S}}\) for \(0\leq t<\infty\). □
Proof of Theorem 1.2 (blow-up part)
Let \(u(t)\) be the solution of problem (1.5) with initial value \(u_{0}\) satisfying \(E(u_{0})=d\) and \(u_{0}\in\mathcal{B}\). We need to show that the maximal existence time \(T\) of \(u\) is finite. We assume \(T=+\infty\) and prove the conclusion by contradiction. Let
Then
we get
By \(u_{0}\in\mathcal{B}\) and Lemma 2.5, we obtain \(u(t)\in \mathcal {B}\) for \(0\leq t<+\infty\), i.e.,
From (2.17), (2.18) and \(E(u_{0})=d\) we obtain \(f''(t)> \frac{4(m+p-1)}{(m+1)^{2}}\int_{0}^{t}\Vert \vert x\vert ^{-\frac {s}{2}} (u^{\frac{m+1}{2}}(x,\tau) )_{\tau} \Vert _{2}^{2}\,d\tau\). The remaining part of the proof is the same as that in [5]. □
Conclusion
In this paper, we study a singular porous medium equation considered in [5], where global existence and blow-up conditions were obtained for the case of subcritical initial energy \(E(u_{0})< d\). We complete these results by studying global existence and blow-up conditions for the case of critical initial energy \(E(u_{0})=d\).
References
1. Vazquez, JL: The Porous Medium Equation. Oxford Mathematical Monographs. Clarendon, Oxford (2010)
2. Wu, Z, Zhao, J, Yin, J, Li, H: Nonlinear Diffusion Equations. World Scientific, River Edge, NJ (2001)
3. Zhong, T: Non-Newton filtration equation with special medium void. Acta Math. Sci. 24B(1), 118-128 (2004)
4. Wang, Y: The existence of global solution and the blowup problem for some p-Laplace equations. Acta Math. Sci. 27(2), 274-282 (2007)
5. Zhou, J: A multi-dimension blow-up problem to a porous medium diffusion equation with special medium void. Appl. Math. Lett. 30, 6-11 (2014)
6. Zhou, J, Yang, D: Upper bound estimate for the blow-up time of an evolution m-Laplace equation involving variable source and positive initial energy. Comput. Math. Appl. 69(2), 1463-1469 (2015)
7. Zhou, S, Zheng, S: Global blow-up in a degenerate and strongly coupled parabolic system with localized sources. Comput. Math. Appl. 60(9), 2564-2571 (2010)
8. Hu, Y, Li, J, Wang, L: Blow-up phenomena porous medium equation with nonlinear flux on the boundary. J. Appl. Math. 2013(8), 1-5 (2013)
9. Lions, JL: Quelques méthodes de résolution des problèmes aux limites non linéaires. Dunod, Paris (1969)
Acknowledgements
This work is partially supported by the Fundamental Research Funds for the Central Universities grant XDJK2015A16, NSFC grant 11201380, Project funded by China Postdoctoral Science Foundation grant 2014M550453 and the Second Foundation for Young Teachers in Universities of Chongqing.
Additional information Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Thanks to WillJagy for the helpful comments. The Wolfram article on this is nice. In particular, formula (37) of that article is pretty impressive in light of how complicated this gets in the case $n=3$. For this case, check out the MO post for a summary and then this Bateman paper for the whole story.
The piece by Ono is pretty rewarding and has a lot of good leads. I found that we don't have an explicit formula for $\phi(n,r)$, but we do have a formula for a specific class of hyperspheres. Namely, when the dimension $n$ is given by $(2s)^2$ or $(2s+1)^2-1$, we know how many solutions there are for all $r$. These appear as Corollary 2 of that paper, but they are not easily recreated in a contained way.
Here are some more easily contained formulae:
$$ \phi(6,r)=16\sum_{d|r} \bigg( \frac{ -4}{r/d} \bigg)d^2-4\sum_{d|r}\bigg( \frac{ -4}{d} \bigg)d^2 $$
where the parenthesized symbols above are Legendre-Jacobi-Kronecker symbols.
$$\phi(8,r)=16\sum_{d|r}(-1)^{d+r}d^3$$
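These divisor-sum formulae are easy to sanity-check numerically. Below is a short sketch (naive trial-division divisors plus a brute-force lattice-point count; the function names are mine, not standard) that verifies $\phi(6,r)$ and $\phi(8,r)$ for small $r$:

```python
from itertools import product

def chi(n):
    # Kronecker symbol (-4/n): 0 for even n, +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
    return 0 if n % 2 == 0 else (1 if n % 4 == 1 else -1)

def divisors(r):
    return [d for d in range(1, r + 1) if r % d == 0]

def phi6(r):
    # phi(6,r) = 16*sum chi(r/d)*d^2 - 4*sum chi(d)*d^2 over divisors d of r
    return (16 * sum(chi(r // d) * d * d for d in divisors(r))
            - 4 * sum(chi(d) * d * d for d in divisors(r)))

def phi8(r):
    # phi(8,r) = 16*sum (-1)^(d+r)*d^3 over divisors d of r
    return 16 * sum((-1) ** (d + r) * d ** 3 for d in divisors(r))

def brute(n, r):
    # count integer solutions of x_1^2 + ... + x_n^2 = r by exhaustion
    m = int(r ** 0.5)
    return sum(1 for x in product(range(-m, m + 1), repeat=n)
               if sum(t * t for t in x) == r)

for r in range(1, 5):
    assert phi6(r) == brute(6, r)
    assert phi8(r) == brute(8, r)
print(phi6(2), phi8(2))  # 60 112
```

For instance, $\phi(8,2)=112$ matches the $\binom{8}{2}\cdot 2^2=112$ ways of placing two $\pm 1$ entries among eight coordinates.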
The following quote also appears:
"Therefore, the problem of computing non-trivial formulas for $r(s; n)$ [ which is $\phi(n,s)$ in the notation of this SE post] remains since the coefficients of cusp forms, although small, rarely have simple descriptions." [Ono]
Speaking of computability, I will shamelessly plug a generalization of this S.E. question. Instead of asking "how many solutions are there to $\sum_{i=1}^n{x_i^2}=r$?", we can ask "how many solutions are there to $\sum_{i=1}^n{x_i^{y_i}}=r$?"
Homework Statement
Two bodies of equal mass are connected by an ideal rope and are inside a room which can accelerate vertically. The second body is attached to a spring of constant ##200 \frac{N}{m}##. Find the deformation of the spring when A) the room moves with constant velocity B) the room is accelerating upwards with ##2 \frac{m}{s^2}##. C) the room is accelerating downwards with ##2 \frac{m}{s^2}##.
Homework Equations
Newton's equations
I've solved all the cases in the non-inertial system.
A) For ##m_1## we have
##x) P_{1x} -T=m.a_x##
##y) N_1 -P_{1y}=m.a_y##
For ##m_2## we have
##y) T+F_e -P_2=m.a_y##
As it moves with constant velocity, I solve it by setting ##a_x=0##. So for ##m_1##, ##mg\sin(\alpha)=T##; then I substitute it into the tension equation of ##m_2## and find ##\Delta x##
B) For ##m_1##
##x) mgsin(\alpha)-f*.sin(\alpha)-T=0##
For ##m_2##
##y) Fe + T -mg-f*=0##
where ##f^*=ma=2m## is the pseudo-force in the room's frame (with ##a=2 \frac{m}{s^2}##)
Then, I replaced the tension from ##m_1## in ##m_2## and found the deformation. (As you see, I considered the elastic force pointing upwards in this case)
C) The same as ##B##, just changing the sign of ##f^*## and the elastic force.
Is this right? Because I had some doubts about the sign of the elastic force.
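To sanity-check the sign conventions, here is a small numerical sketch. The mass ##m## and the incline angle ##\alpha## are not given in the excerpt, so the values below (##m=1## kg, ##\alpha=30^\circ##) are assumptions purely for illustration; the structure follows the equations above (##T=mg_{eff}\sin\alpha## from ##m_1##, then ##F_e=mg_{eff}-T## for ##m_2##, with ##g_{eff}=g\pm a## absorbing the pseudo-force):

```python
import math

g = 9.8                     # m/s^2
k = 200.0                   # N/m, spring constant given in the problem
m = 1.0                     # kg, assumed -- not given in the excerpt
alpha = math.radians(30.0)  # incline angle, assumed

def deformation(a_room):
    # In the room's (non-inertial) frame the pseudo-force just shifts g:
    # g_eff = g + a_room (a_room > 0 when the room accelerates upward).
    g_eff = g + a_room
    T = m * g_eff * math.sin(alpha)   # tension from the m1 equation
    Fe = m * g_eff - T                # spring force balancing m2
    return Fe / k                     # deformation, in metres

for label, a in (("A", 0.0), ("B", 2.0), ("C", -2.0)):
    print(label, round(deformation(a) * 100, 2), "cm")
```

With these assumed numbers the three cases give about 2.45 cm, 2.95 cm and 1.95 cm respectively; the point is only that ##g_{eff}## (and hence ##\Delta x##) grows for upward acceleration and shrinks for downward, matching the sign choices above.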
I'm learning (or at least trying to learn) about electrochemistry, but a major obstacle is that different books I refer to use different terms for the same symbols. So in a last-ditch attempt to clear stuff up, I've resorted to Chem.SE.
So here's what I intend to do: I'll list out everything I think I've understood, as well as pose a couple of questions regarding some of it. I'd really appreciate it if someone would take the time to go through what I've listed, checking it for errors and then clearing up the queries I've got. So here I go:
Symbols used:
Resistance ($R$), Resistivity or specific resistance ($\rho$), Conductance ($C$), Conductivity or specific conductance ($\kappa$), Area of cross-section of the electrode ($A$), distance between the electrodes ($L$), and the REALLY confusing bit: Molar conductance according to some books, Molar conductivity according to others (one book uses both terms), both represented by $\mathrm{\Lambda_{m}}$
Now, $$ R = \rho \frac{L}{A}\\ R = \frac1C\\ $$ Therefore $$\frac1\rho = \frac1R\cdot\frac{L}A = C\cdot\frac{L}A = \kappa$$
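To make these relations concrete, here is a quick worked example with made-up (hypothetical) numbers - a cell of resistance $100\ \Omega$, electrode spacing $1\ \mathrm{cm}$, electrode area $1\ \mathrm{cm^2}$, holding a $0.1\ \mathrm{mol/L}$ solution - using the common convention $\Lambda_m = \kappa \cdot 1000/c$ when $\kappa$ is in $\mathrm{S/cm}$ and $c$ in $\mathrm{mol/L}$:

```python
R = 100.0   # ohm   -- measured cell resistance (made-up value)
L = 1.0     # cm    -- distance between the electrodes
A = 1.0     # cm^2  -- electrode cross-section area
c = 0.1     # mol/L -- electrolyte concentration

C = 1.0 / R            # conductance in siemens: C = 1/R
kappa = C * L / A      # conductivity in S/cm: kappa = C * L/A
# molar conductivity in S cm^2 / mol (1 L = 1000 cm^3):
Lambda_m = kappa * 1000.0 / c

print(kappa)     # ~0.01 S/cm
print(Lambda_m)  # ~100 S cm^2/mol
```

Note how diluting the same cell (smaller $c$ with roughly proportionally smaller $\kappa$) leaves $\Lambda_m$ to grow or shrink depending on how dissociation changes - which is exactly the tension the questions below are about.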
Now if I've got this right, then,
Conductance is the degree to which the solution conducts electricity. Conductivity is the conductance per unit volume of the solution; it may also be considered as the concentration of ions per unit volume of solution. Molar Conductivity is the conductance of the entire solution having 1 mole of electrolyte dissolved in it. Q1. So what's Molar Conductance? Q2. Is there a difference between Molar Conductivity and Molar Conductance?
Also, according to Ostwald's Dilution Law, the greater the dilution, the greater the dissociation of the electrolyte in solution.
Regarding dilution of an electrolyte solution, this is what I've understood
As dilution increases, Conductivity (ion concentration per unit volume) DECREASES. As dilution increases, Molar conductivity (Conductance of 1 mole of electrolyte in the total solution) should INCREASE in accordance with Ostwald's Law Q3. How does dilution affect Molar Conductance? Q4. How is Conductance affected upon dilution?
I suppose if the above statements are proof-read and the queries answered, I might get a fairly good idea about this...
Also, if you feel there are any additional points worth mentioning, by all means go ahead and put them in the answer.
And finally, if anyone could recommend a decent site that deals with the above-mentioned terms and concepts in a fairly lucid manner, it'd be appreciated.
Abstract (English):
In this thesis, we investigate the Cauchy problem for the quasilinear stochastic evolution equation
\begin{equation*}
\begin{cases}
du(t)=[{-}A(u(t))u(t)+f(t)]\operatorname{dt}+B(u(t))\,dW(t),\quad t\in [0,T],\\
u(0)=u_0
\end{cases}
\end{equation*}
in a Banach space $ X. $
In the first part of the thesis, we concentrate on the parabolic situation, i.e. we assume that $ {-}A(u(t)) $ is for every $ t $ a generator of an analytic semigroup and that $ A(u(t)) $ has a bounded $ H^{\infty} $-calculus. Under a local Lipschitz assumption on $ u\mapsto A(u) $ we prove existence and uniqueness of a local strong solution up to a maximal stopping time that can be characterised by a blow-up alternative. We apply our local well-posedness result to a second order parabolic partial differential equation on $ \mathbb{R} ^d $, to a generalised Navier-Stokes equation describing non-Newtonian fluids and to a convection-diffusion equation on a bounded domain with Dirichlet, Neumann or mixed boundary conditions. In the last situation, we can even show that the solution exists globally.
In the second part of the thesis, we turn to a special hyperbolic situation. We look at a Maxwell equation on a domain $ D $ with a perfect conductor boundary condition in chiral media with a nonlinear retarded material law, i.e. we consider
$$ A(u)u(t)=-Mu(t)+|u(t)|^qu(t)-\int_{0}^{t}G(t-s)u(s)\operatorname{ds}. $$
Here, $ M(u_1,u_2)=(\operatorname{curl} u_2,-\operatorname{curl} u_1)^T $ is the Maxwell operator on $ L^{2}(D)^3\times L^{2}(D)^3 $. To solve this equation we apply a refined version of the monotonicity approach using the spectral multipliers of the Hodge-Laplace operator, which is a componentwise Laplace operator with boundary conditions comparable to those of $ M^2. $ We show existence and uniqueness of a weak solution $ u $ in the sense of partial differential equations and under stronger assumptions we prove that $ u $ is a strong solution, i.e. $ Mu(t,x) $ exists almost surely for almost all $ t\in[0,T] $ and $ x\in D $.
Luckily for us, a really smart guy named Josiah Willard Gibbs figured out how these two tendencies work together back in the late 1800's. As a result, in chemical thermodynamics we now have a state function called the Gibbs Free Energy that describes what is called the thermodynamic potential of a system at constant pressure, temperature, and number of molecules.
Since most of the systems we work with under lab conditions are at constant pressure and temperature and are typically closed to mass transfer (abbreviated as NPT), Gibbs free energy is a very convenient way of determining what will happen to a given chemical system under normal lab conditions.
The reason it works is that it is a potential energy function - it describes how a system's potential energy will change as different state variables change, and therefore lets us predict where the system will "try" to go. An analogy that works pretty well is to imagine it like the surface of a planet, where gravity is the potential energy. If the derivative or slope is negative in a given direction, then things will tend to "roll downhill" at that point. On the other hand, to make things go uphill, you need to add energy.
The other thing we can get from this analogy is that given the opportunity, things will eventually move from a state of higher potential energy to a state of lower potential energy. So we can compare the energy at two different states (the reactants and products, for example), and decide based on the difference in potential energy whether the reaction would be spontaneous under those conditions.
The equation for Gibbs free energy is:
$G = H - TS$
Where $H$ is enthalpy (a.k.a. heat of reaction), $T$ is absolute temperature (usually measured in Kelvin), and $S$ is entropy.
If you take the derivative at constant temperature and pressure, you get:
$\Delta G = \Delta H - T\Delta S$
(This is for large changes; for small changes, replace $\Delta$ with $\delta$.) What this tells us is that the change in Gibbs Free Energy is a function of the change in enthalpy, the change in entropy, and the absolute temperature.
To illustrate how this works, take a look at the following diagram.
Here the line represents the Gibbs Free energy "surface" - the thermodynamic potential as a function of some variable (a reaction coordinate, for example). The ball represents the system at some point along that coordinate axis. If this were regular potential energy, it would be easy to see what happens - the ball will roll downhill if it gets the chance, and it will stop at the lower energy state. To get back up the hill, someone would have to put in that amount of energy. For thermodynamic potentials, it works the same way - the system will move along the coordinate axis in the direction of decreasing thermodynamic potential energy ($G$ in this case).
This means that under constant NPT conditions, any process that involves an overall decrease in Gibbs Free Energy will be spontaneous. It will also tend to move in the "direction" that has the most negative slope in G at any given time. In mathematical terms,
$\Delta G < 0$ - overall process is spontaneous
$dG < 0$ - process will be moving in that direction
$dG = 0$ - process is at equilibrium
So how does this all fit in with what you described about systems trying to reach minimum energy and maximum entropy? The answer is: those are both thermodynamic potentials, for different types of systems. For a system with constant number of particles, entropy, and volume (NSV), the
internal energy (or total energy) is the thermodynamic potential. For a system at constant number of particles, entropy, and pressure (NSP), enthalpy is the thermodynamic potential. And for a system at constant number of particles, volume, and internal energy (NVE), the negative of the entropy is the thermodynamic potential.
In other words, all of these thermodynamic variables are interconnected, and a change in one affects all of the others. You can hold three constant at any given time and still allow the system to "move" through phase-space. Which thermodynamic potential you need to describe how the system will move depends on which variables you choose.
Let's look at the equation for $\Delta G$ and see how changes in enthalpy and entropy affect it.
$\downarrow \Delta G = \space \downarrow \Delta H - T \Delta S$
If $\Delta H$ decreases, it will make $\Delta G$ decrease as well. This makes sense, since we know that exothermic processes tend to be spontaneous, because they are releasing energy and therefore the final system energy is lower than the initial.
$\downarrow \Delta G = \Delta H - T \space \uparrow \Delta S$
On the other hand, $\Delta G$ tends to decrease as $\Delta S$
increases - this is because the change in entropy is subtracted in the equation. This also matches up with what you know - an increase in entropy indicates a spontaneous process.
For your last question:
I am really having a hard time in figuring out a reaction where energy decreases and entropy increases. Can this happen?
Yes, it can! In fact, under these conditions, the process is
guaranteed to be spontaneous - if $\Delta H$ is negative, and $\Delta S$ is positive, then $\Delta G$ has to be negative - the reaction would be spontaneous at any temperature under these conditions.
Let's look at the other possibilities:
$\Delta H > 0; \Delta S < 0$
In this case, $\Delta H$ is positive and $-T\Delta S$ is also positive (since $\Delta S$ is negative), so $\Delta G$ is positive at every temperature; the reaction can never be spontaneous.
$\Delta H > 0; \Delta S > 0$
$\Delta H < 0; \Delta S < 0$
In these two cases, the reaction could be spontaneous or non-spontaneous: it depends on the relative
magnitudes of the enthalpy and entropy terms as well as the temperature at which the process occurs.
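To make the four sign cases concrete, here is a small Python sketch that evaluates $\Delta G = \Delta H - T\Delta S$ and classifies the result (the function name and unit choices are mine, not from any standard library):

```python
def gibbs_spontaneity(dH, dS, T):
    """Classify a process by the sign of dG = dH - T*dS.

    dH in J/mol, dS in J/(mol*K), T in kelvins.
    """
    dG = dH - T * dS
    if dG < 0:
        verdict = "spontaneous"
    elif dG > 0:
        verdict = "non-spontaneous"
    else:
        verdict = "at equilibrium"
    return dG, verdict

# Exothermic, entropy-increasing: spontaneous at any temperature
print(gibbs_spontaneity(-50e3, +100.0, 298.15))   # dG < 0
# Endothermic, entropy-increasing: spontaneous only at high enough T
print(gibbs_spontaneity(+50e3, +100.0, 298.15))   # dG > 0 at room temperature
print(gibbs_spontaneity(+50e3, +100.0, 600.0))    # dG < 0 at 600 K
```

For the endothermic, entropy-increasing case, the crossover temperature is $\Delta H/\Delta S$ (500 K in this example), above which the process becomes spontaneous.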
To summarize - all thermodynamic variables are related to each other in fairly complicated ways. When you hold three of them constant (two if you don't count number of molecules), you can derive thermodynamic potential energy functions that describe the behavior of the system in terms of a single quantity. For most cases in chemistry, Gibbs Free Energy is the thermodynamic potential that we use. It gives us the relationship between enthalpy, entropy and temperature under constant pressure conditions. Since it's the thermodynamic potential, it also lets us predict how the system will behave - how it will move through phase space. |
I'm trying to rotate the \vDash symbol from amssymb:
\documentclass{article}
\usepackage{graphicx}
\usepackage{amssymb}
\newcommand{\vDashR}{\rotatebox[origin=c]{180}{\ensuremath\vDash}}
\begin{document}
$\Sigma\vDash\vDashR\Gamma$
\end{document}
The rotation comes from here.
With Detexify, I find no predefined symbol.
My problem is that the whole thing is not correctly spaced, and if my eyes are correct, the two symbols are not even perfectly aligned. |
What's the exact value of $\lim\limits_{n\rightarrow \infty}\frac{e^n}{\sum\limits_{i=0}^{i=n}\frac{n^i}{i!}}$?
p.s. I suppose it may be 2, but I cannot prove it.
Yes, it is 2. The reciprocal of the fraction is the probability that a Poisson random variable with mean $n$ takes a value at most $n$. By the central limit theorem, this probability approaches $1/2$ as $n$ grows large.
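As a quick numerical sanity check (this snippet is mine, not part of the probabilistic argument), one can evaluate the reciprocal $1/P(\mathrm{Poisson}(n) \le n)$ directly, using the Poisson pmf recurrence to stay in floating-point range:

```python
import math

def ratio(n):
    """e^n / sum_{i=0}^n n^i/i!, i.e. 1 / P(Poisson(n) <= n)."""
    t = math.exp(-n)          # Poisson pmf at i = 0
    cdf = t
    for i in range(1, n + 1):
        t *= n / i            # pmf recurrence: t_i = t_{i-1} * n / i
        cdf += t
    return 1.0 / cdf

for n in (10, 100, 500):
    print(n, ratio(n))        # the values approach 2 from below
```

(The recurrence avoids computing $n^i$ or $i!$ directly; $e^{-n}$ still underflows for $n$ beyond roughly 700, so very large $n$ would need a log-space version.)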
Cooling tower models
As mentioned in another thread, here's some research I've been doing on cooling towers which might come in handy when working out evaporation rates. The method below uses the effectiveness of a counterflow heat exchanger; for a more straightforward scenario, try using the effectiveness for a flat plate heat exchanger instead.
The NTU-effectiveness method was originally developed by the Environmental Protection Agency (EPA) and is described in Reddy et al. (2016). It is a good introduction to cooling tower modelling and is presented here for pedagogical reasons: it highlights how cooling towers behave and how their performance is calculated.
There are some common definitions and considerations which are important for all cooling tower models. Firstly, there are different kinds of cooling towers. They are usually categorised in terms of the source of the air-flow (mechanical-draft, induced-draft or natural-draft) and the direction the air flow interacts with the water flow (counterflow or crossflow). Here we focus on models describing counterflow, mechanical-draft cooling towers.
Secondly, there are different cooling tower parameters. These are aspects of the cooling tower which depend only on its design and do not change for a given cooling tower. They are important because off-design performance can be calculated by considering deviations away from design conditions, and they also inform the cooling tower size (for designers). These parameters are:
1. The design entering air condition: various parameters related to the entering air condition can be calculated using psychrometric equations, so it is sufficient to specify the design entering air wet-bulb temperature.
2. The design heat rejection rate: this is set by the peak (condenser) load at design cooling load conditions. The size of the cooling tower increases linearly with load.
3. The water flow rate: this is determined by the chiller condenser specifications or the district cooling return flow rate. Generally, the water flow rate is considered constant and is not modulated at part-load conditions (for numerous reasons).
4. The range: this is the difference between the entering and leaving water temperatures. Tower size decreases with increasing range.
5. The approach: this is the difference between the water leaving temperature and the entering air wet-bulb temperature. The lower the approach, the higher the coefficient of performance (CoP).
The NTU-effectiveness method starts by equating the enthalpy increase in the air stream with the enthalpy decrease in the water stream, including an evaporative losses term. The following differential equation can be derived from this enthalpy balance
$\displaystyle \dot{m}_a dh_a=-\dot{m}_w dh_w+\dot{m}_a dW \cdot h_{liq-vap}.$
See Table 1 for a list of symbols and their meanings. In general, a cooling tower model can solve this differential equation for any geometry and flow conditions. However, for a bulk model, it reduces to
$\displaystyle \dot{m}_a (h_{a,o} - h_{a,i})=\dot{m}_w (h_{w,i} - h_{w,o})+\dot{m}_a (W_o - W_i) \cdot h_{liq-vap}.$
If the cooling tower is treated as a heat exchanger, the heat exchanger effectiveness can be described as the ratio of the actual heat transfer to the maximum rate permitted by the second law of thermodynamics,
$\displaystyle \epsilon_{tower} = \dot{Q}/(\dot{m}_a (h_{a,sat,i}-h_{a,i}) ).$
By analogy with the counterflow heat exchanger,
$\displaystyle \epsilon_{tower}=(1 - e^{(-NTU(1-R))})/(1 - R e^{(-NTU(1-R))} ),$
where
$\displaystyle R=(\dot{m}_a c_{p,a,sat})/(\dot{m}_w c_{p,w} ),$
$\displaystyle c_{p,a,sat}=(h_{a,sat,i}-h_{a,sat,o})/(T_{w,i}-T_{w,o} ).$
The number of thermal transfer units (NTU) can be estimated using the empirical correlation
$\displaystyle NTU=a\left(\dot{m}_w/\dot{m}_a \right)^n,$
where $a$ has a value between 1 and 3 and $n$ has a value between 0.2 and 0.6, depending on the cooling tower. Once the effectiveness is known, the enthalpy of the outlet air can be calculated using
$\displaystyle h_{a,o}=h_{a,i}+\epsilon_{tower} (h_{a,sat,i}-h_{a,i} )$
and, if water evaporation (mass) loss is ignored,
$\displaystyle T_{w,o}=T_{w,i}-(\dot{m}_a (h_{a,o}-h_{a,i} ))/(\dot{m}_w c_{p,w} ).$
The value of $\displaystyle c_{p,a,sat}$ is a weak function of temperature, so iteration through the above equations multiple times might be required to settle on correct values. The initial value for $\displaystyle c_{p,a,sat}$ can be estimated using the above equation for $\displaystyle c_{p,a,sat}$, but by substituting for the outside air wet-bulb temperature:
$\displaystyle c_{p,a,sat}=(h_{a,sat,i}-h_{a,sat,o})/(T_{w,i}-T_{wb,o} ).$
To summarise, the method proceeds as follows:
Calculate $\displaystyle c_{p,a,sat}$;
Calculate R and NTU;
Calculate $\displaystyle \epsilon_{tower}$;
Calculate $\displaystyle h_{a,o}$;
Calculate $\displaystyle T_{w,o}$;
Repeat steps 1-5 above with the new value of $\displaystyle T_{w,o}$ until the calculation settles on a converged result.
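The iteration above can be sketched in Python. This is a hedged illustration rather than production code: the saturated-air enthalpy curve `h_sat` is a rough cubic curve fit (an assumption on my part; use proper psychrometric routines for real work), $h_{a,sat,i}$ is interpreted as the enthalpy of saturated air at the inlet water temperature, and all function and variable names are my own:

```python
import math

def h_sat(T):
    """Approximate specific enthalpy of saturated air at T degC, in J/kg
    (cubic curve fit, roughly valid for 10-40 degC; an assumption here)."""
    return 1000.0 * (9.3625 + 1.7861 * T + 0.01135 * T**2 + 0.00098855 * T**3)

def cooling_tower(m_a, m_w, T_wi, T_wb_i, h_ai, a=2.0, n_exp=0.6,
                  cp_w=4186.0, tol=1e-4, max_iter=50):
    """Outlet water temperature via the NTU-effectiveness method (steps 1-5)."""
    NTU = a * (m_w / m_a) ** n_exp
    T_wo = T_wb_i + 3.0                       # initial guess: a small approach
    for _ in range(max_iter):
        # Step 1: effective saturated-air specific heat over the water range
        cp_a_sat = (h_sat(T_wi) - h_sat(T_wo)) / (T_wi - T_wo)
        # Step 2: capacity ratio (NTU is fixed by the tower design)
        R = (m_a * cp_a_sat) / (m_w * cp_w)
        # Step 3: counterflow heat-exchanger effectiveness (assumes R != 1)
        e = math.exp(-NTU * (1.0 - R))
        eff = (1.0 - e) / (1.0 - R * e)
        # Step 4: outlet air enthalpy
        h_ao = h_ai + eff * (h_sat(T_wi) - h_ai)
        # Step 5: outlet water temperature (evaporative mass loss ignored)
        T_wo_new = T_wi - m_a * (h_ao - h_ai) / (m_w * cp_w)
        if abs(T_wo_new - T_wo) < tol:
            return T_wo_new
        T_wo = T_wo_new
    return T_wo

# Example: 12 kg/s of water entering at 35 degC, 10 kg/s of air at 25 degC
# wet-bulb (inlet air taken as saturated, so h_ai = h_sat(25))
print(cooling_tower(m_a=10.0, m_w=12.0, T_wi=35.0, T_wb_i=25.0,
                    h_ai=h_sat(25.0)))
```

Note that the effectiveness formula degenerates at $R = 1$; a robust implementation would handle that limit separately ($\epsilon = NTU/(1+NTU)$ for a counterflow exchanger).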
Table 1: List of mathematical symbols used when describing the NTU-Effectiveness method.
Symbol Description Units
a Dimensionless fitting constant (typically between 1 and 3) -
$\displaystyle c_{p,a,sat}$ Specific heat capacity of saturated air J/(kg.K)
$\displaystyle c_{p,w}$ Specific heat capacity of water J/(kg.K)
$\displaystyle \epsilon_{tower}$ Cooling tower heat exchange effectiveness -
$\displaystyle h_{a,i}$ Specific enthalpy of inlet air J/kg
$\displaystyle h_{a,o}$ Specific enthalpy of outlet air J/kg
$\displaystyle h_{liq-vap}$ Specific enthalpy of vaporisation J/kg
$\displaystyle h_{a,sat,i}$ Specific enthalpy of saturated inlet air J/kg
$\displaystyle h_{w,i}$ Specific enthalpy of inlet water J/kg
$\displaystyle h_{w,o}$ Specific enthalpy of outlet water J/kg
n Dimensionless fitting constant (typically between 0.2 and 0.6) -
$\displaystyle \dot{m}_a$ Air mass flow rate kg/s
$\displaystyle \dot{m}_w$ Water mass flow rate kg/s
NTU Number of Transfer Units -
$\displaystyle \dot{Q}$ Heat rejection rate W
R Capacity ratio -
$\displaystyle T_{a,i}$ Dry-bulb temperature of inlet air °C
$\displaystyle T_{a,o}$ Dry-bulb temperature of outlet air °C
$\displaystyle T_{w,i}$ Temperature of inlet water °C
$\displaystyle T_{w,o}$ Temperature of outlet water °C
$\displaystyle T_{wb,i}$ Wet-bulb temperature of inlet air °C
$\displaystyle W_i$ Humidity ratio of inlet air kg/kg
$\displaystyle W_o$ Humidity ratio of outlet air kg/kg
References:
Reddy, T. A., Kreider, J. F., Curtiss, P. S., and Rabl, A., 2016, “Heating and cooling of buildings: Principles and practice of energy efficient design”, 3rd Ed., CRC Press, ISBN-10: 1439899894, ISBN-13: 978-1439899892.
Measurement of $P_T$-weighted Sivers asymmetries in leptoproduction of hadrons / COMPASS Collaboration. The transverse spin asymmetries measured in semi-inclusive leptoproduction of hadrons, when weighted with the hadron transverse momentum $P_T$, allow for the extraction of important transverse-momentum-dependent distribution functions. In particular, the weighted Sivers asymmetries provide direct information on the Sivers function, which is a leading-twist distribution that arises from a correlation between the transverse momentum of an unpolarised quark in a transversely polarised nucleon and the spin of the nucleon. [...] arXiv:1809.02936; CERN-EP-2018-242. Geneva: CERN, 2019. 20 p. Published in: Nucl. Phys. B 940 (2019) 34-53.
Light isovector resonances in $\pi^- p \to \pi^-\pi^-\pi^+ p$ at 190 GeV/${\it c}$ / COMPASS Collaboration. We have performed the most comprehensive resonance-model fit of $\pi^-\pi^-\pi^+$ states using the results of our previously published partial-wave analysis (PWA) of a large data set of diffractive-dissociation events from the reaction $\pi^- + p \to \pi^-\pi^-\pi^+ + p_\text{recoil}$ with a 190 GeV/$c$ pion beam. The PWA results, which were obtained in 100 bins of three-pion mass, $0.5 < m_{3\pi} < 2.5$ GeV/$c^2$, and simultaneously in 11 bins of the reduced four-momentum transfer squared, $0.1 < t' < 1.0$ $($GeV$/c)^2$, are subjected to a resonance-model fit using Breit-Wigner amplitudes to simultaneously describe a subset of 14 selected waves using 11 isovector light-meson states with $J^{PC} = 0^{-+}$, $1^{++}$, $2^{++}$, $2^{-+}$, $4^{++}$, and spin-exotic $1^{-+}$ quantum numbers. [...] arXiv:1802.05913; CERN-EP-2018-021. Geneva: CERN, 2018. 72 p. Published in: Phys. Rev. D 98 (2018) 092003.
Transverse Extension of Partons in the Proton probed by Deeply Virtual Compton Scattering / Akhunzyanov, R. (Dubna, JINR) ; Alexeev, M.G. (Turin U.) ; Alexeev, G.D. (Dubna, JINR) ; Amoroso, A. (Turin U. ; INFN, Turin) ; Andrieux, V. (Illinois U., Urbana ; IRFU, Saclay) ; Anfimov, N.V. (Dubna, JINR) ; Anosov, V. (Dubna, JINR) ; Antoshkin, A. (Dubna, JINR) ; Augsten, K. (Dubna, JINR ; CTU, Prague) ; Augustyniak, W. (NCBJ, Swierk) et al. We report on the first measurement of exclusive single-photon muoproduction on the proton by COMPASS using 160 GeV/$c$ polarized $\mu^+$ and $\mu^-$ beams of the CERN SPS impinging on a liquid hydrogen target. [...] CERN-EP-2018-016; arXiv:1802.02739. 2018. 13 p.
Transverse-momentum-dependent Multiplicities of Charged Hadrons in Muon-Deuteron Deep Inelastic Scattering / COMPASS Collaboration. A semi-inclusive measurement of charged hadron multiplicities in deep inelastic muon scattering off an isoscalar target was performed using data collected by the COMPASS Collaboration at CERN. The following kinematic domain is covered by the data: photon virtuality $Q^{2}>1$ (GeV/$c$)$^2$, invariant mass of the hadronic system $W > 5$ GeV/$c^2$, Bjorken scaling variable in the range $0.003 < x < 0.4$, fraction of the virtual photon energy carried by the hadron in the range $0.2 < z < 0.8$, square of the hadron transverse momentum with respect to the virtual photon direction in the range $0.02~({\rm GeV}/c)^2 < P_{\rm hT}^{2} < 3~({\rm GeV}/c)^2$. [...] CERN-EP-2017-253; arXiv:1709.07374. Geneva: CERN, 2018. 23 p. Published in: Phys. Rev. D 97 (2018) 032006.
New analysis of $\eta\pi$ tensor resonances measured at the COMPASS experiment / JPAC Collaboration. We present a new amplitude analysis of the $\eta\pi$ $D$-wave in $\pi^- p\to \eta\pi^- p$ measured by COMPASS. Employing an analytical model based on the principles of the relativistic $S$-matrix, we find two resonances that can be identified with the $a_2(1320)$ and the excited $a_2^\prime(1700)$, and perform a comprehensive analysis of their pole positions. [...] CERN-EP-2017-169; JLAB-THY-17-2468; arXiv:1707.02848. Geneva: CERN, 2018. 9 p. Published in: Phys. Lett. B 779 (2018) 464-472.
First measurement of the Sivers asymmetry for gluons from SIDIS data / COMPASS Collaboration. The Sivers function describes the correlation between the transverse spin of a nucleon and the transverse motion of its partons. It was extracted from measurements of the azimuthal asymmetry of hadrons produced in semi-inclusive deep inelastic scattering of leptons off transversely polarised nucleon targets, and it turned out to be non-zero for quarks. [...] CERN-EP-2017-003; arXiv:1701.02453. Geneva: CERN, 2017. 11 p. Published in: Phys. Lett. B 772 (2017) 854-864.
For a matroid M of rank r on n elements, let b(M) denote the fraction of bases of M among the subsets of the ground set with cardinality r. We show that $$\Omega(1/n) \leq 1 - b(M) \leq O\!\left(\log(n)^3/n\right) \quad \text{as } n \to \infty$$ for asymptotically almost all matroids M on n elements. We derive that asymptotically almost all matroids on n elements (1) have a $U_{k,2k}$-minor, whenever $k \leq O(\log(n))$, (2) have girth $\geq \Omega(\log(n))$, (3) have Tutte connectivity $\geq \Omega(\sqrt{\log(n)})$, and (4) do not arise as the truncation of another matroid.
Combinatorica – Springer Journals
Published: Jun 5, 2018
Here is a beautiful result from numerical analysis. Given any nonsingular $n\times n$ system of linear equations $Ax=b$, an optimal Krylov subspace method like GMRES must necessarily terminate with the exact solution $x=A^{-1}b$ in no more than $n$ iterations (assuming exact arithmetic).
The Cayley-Hamilton theorem provides a simple, elegant proof of this statement. To begin, recall that at the $k$-th iteration, minimum residual methods like GMRES solve the least-squares problem$$\underset{x_k\in\mathbb{R}^n}{\text{minimize }} \|Ax_k-b\|$$by picking a solution from the $k$-th Krylov subspace$$\text{subject to } x_k \in \mathrm{span}\{b,Ab,A^2b,\ldots,A^{k-1}b\}.$$If the objective $ \|Ax_k-b\|$ goes to zero, then we have found the exact solution at the $k$-th iteration (we have assumed that $A$ is full-rank).
Next, observe that $x_k=(c_0 + c_1 A + \cdots + c_{k-1}A^{k-1})b=p(A)b$, where $p(\cdot)$ is a polynomial of order $k-1$. Similarly, $\|Ax_k-b\|=\|q(A)b\|$, where $q(\cdot)$ is a polynomial of order $k$ satisfying $q(0)=-1$. So the least-squares problem from above for each fixed $k$ can be equivalently posed as a polynomial optimization problem with the same optimal objective
$$\text{minimize } \|q_k(A)b\| \text{ subject to } q_k(0)=-1,\; q_k(\cdot) \text{ is an order-} k \text{ polynomial.}$$Again, if the objective $\|q_k(A)b\|$ goes to zero, then GMRES has found the exact solution at the $k$-th iteration.
Finally, we ask: what bound on $k$ guarantees that the objective goes to zero? With $k=n$, a feasible choice for $q_n(\cdot)$ is the characteristic polynomial of $A$, rescaled so that $q_n(0)=-1$ (possible because $A$ is nonsingular, so its characteristic polynomial is nonzero at the origin). By Cayley-Hamilton, $q_n(A)=0$, so $\|q_n(A)b\|=0$. Hence we conclude that GMRES always terminates with the exact solution by the $n$-th iteration.
This same argument can be repeated (with very minor modifications) for other optimal Krylov methods like conjugate gradients, conjugate residual / MINRES, etc. In each case, the Cayley-Hamilton theorem forms the crux of the argument.
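To see this numerically, here is a small NumPy sketch (all names are mine) that solves the GMRES least-squares problem directly over an explicitly built Krylov basis. In floating point the residual at $k=n$ is not exactly zero, but for a well-conditioned $A$ it drops to round-off level, as Cayley-Hamilton predicts:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)   # nonsingular, well-conditioned
b = rng.standard_normal(n)

def krylov_basis(A, b, k):
    """Orthonormal basis of span{b, Ab, ..., A^{k-1} b} via Arnoldi-style
    modified Gram-Schmidt (assumes no breakdown, which holds generically)."""
    Q = np.zeros((len(b), k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        v = A @ Q[:, j - 1]
        for i in range(j):                        # orthogonalise against basis
            v -= (Q[:, i] @ v) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def min_residual(A, b, k):
    """min ||A x - b|| over x in the k-th Krylov subspace (what GMRES solves)."""
    Q = krylov_basis(A, b, k)
    c, *_ = np.linalg.lstsq(A @ Q, b, rcond=None)
    return np.linalg.norm(A @ (Q @ c) - b)

print([float(min_residual(A, b, k)) for k in range(1, n + 1)])
# the residual at k = n is zero up to round-off
```

This deliberately avoids a production GMRES implementation (which would use the Arnoldi relation and Givens rotations instead of an explicit least-squares solve); the point is only that the minimum over the $n$-th Krylov subspace vanishes.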