We know the Schur basis, and many more, for the ring of symmetric functions over a field \( F \). The next step of generalization is to consider the field \( F(t) \) and to twist the inner product a little. In contrast with the Macdonald polynomials, we can give a closed expression for the Hall-Littlewood polynomials.
Definition and first properties
First we need the following \( t \)-analogues:
\[ [k]_t := \dfrac{1-t^k}{1-t}= 1+t+t^2+\cdots+t^{k-1} \]
\[ [k]_t! := [k]_t[k-1]_t\cdots[1]_t\]
Then the Hall-Littlewood polynomial \( P_{\lambda}(x;t) \) in \( n \) variables is given by the following formula:
\[ P_{\lambda}(x;t) = \dfrac{1}{\prod_{i\geq 0}[\alpha_i]_t!} \sum_{w\in S_n}w\left(x^{\lambda}\dfrac{\prod_{i<j}(1-tx_j/x_i)}{\prod_{i<j}(1-x_j/x_i)}\right) \]
where \( \alpha_i \) is the multiplicity of \( i \) as a part of \( \lambda = (1^{\alpha_1},2^{\alpha_2},\cdots) \), and \( \alpha_0 \) counts the zero parts, so that \( \sum_{i\geq 0} \alpha_i = n \)
Note that when \( t=0 \) the denominator \( \prod_{i\geq 0}[\alpha_i]_t! \) becomes \( 1 \) and we get precisely the Weyl character formula for the Schur functions, so
\[ P_{\lambda}(x;0) = s_{\lambda}(x) \]
At \( t=1 \) the products inside cancel and we get the usual monomial functions:
\[ P_{\lambda}(x;1) = m_{\lambda}(x) \]
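To make the symmetrization formula concrete, here is a minimal computational sketch (pure Python with sympy; the function names and the small test case are mine, not from the original notes). It implements the definition verbatim and lets us check the \( t=0 \) and \( t=1 \) specializations for small \( \lambda \) and \( n \):

```python
from itertools import permutations
from sympy import symbols, cancel, expand

def t_factorial(k, t):
    # [k]_t! = [k]_t [k-1]_t ... [1]_t, where [j]_t = 1 + t + ... + t^(j-1)
    result = 1
    for j in range(1, k + 1):
        result *= sum(t**i for i in range(j))
    return result

def hall_littlewood_P(lam, n, t):
    lam = list(lam) + [0] * (n - len(lam))   # pad with zero parts
    x = symbols(f'x1:{n + 1}')
    total = 0
    for w in permutations(range(n)):
        xs = [x[i] for i in w]
        term = 1
        for i in range(n):
            term *= xs[i] ** lam[i]          # w(x^lambda)
        for i in range(n):
            for j in range(i + 1, n):        # w applied to the i < j product
                term *= (1 - t * xs[j] / xs[i]) / (1 - xs[j] / xs[i])
        total += term
    denom = 1                                # prod over i >= 0 of [alpha_i]_t!
    for part in set(lam):
        denom *= t_factorial(lam.count(part), t)
    return expand(cancel(total / denom))

t = symbols('t')
P = hall_littlewood_P([2, 1], 3, t)
print(P.subs(t, 0))  # the Schur polynomial s_{(2,1)}(x1, x2, x3)
print(P.subs(t, 1))  # the monomial symmetric polynomial m_{(2,1)}(x1, x2, x3)
```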
The Hall-Littlewood polynomials form a basis, so we can expand the Schur functions in this new basis. The "Kostka-Foulkes polynomials" \( K_{\lambda\mu}(t) \) are defined by
\[ s_{\lambda}(x) = \sum_{\mu} K_{\lambda\mu}(t) P_{\mu}(x;t) \]
They don't deserve the name polynomials yet, because so far we just know that they are rational functions in \( t \). But we will see why they are actual polynomials.
Definition with raising operators
Define the Jing operators as \( t \)-deformations of the Bernstein operators in the following way:
\[ S^t_m f = [u^m]f[X+(t-1)u^{-1}]\Omega[uX] \]
and their modified version
\[ \tilde{S}^t_m f = [u^m]f[X-u^{-1}]\Omega[(1-t)uX] \]
which are related by
\[ \tilde{S}^t_m = \Pi_{(1-t)}S^t_m\Pi^{-1}_{(1-t)} \]
where \( \Pi_{(1-t)} \) is the operator given by the plethystic substitution \( f\to f[X(1-t)] \), and \( \Pi^{-1}_{(1-t)} \) is its inverse, namely \( f\to f[X/(1-t)] \).
Analogously to the Schur functions, we now define the
transformed Hall-Littlewood polynomials as
\[ H_{\mu}(x;t) = S^t_{\mu_1}S^t_{\mu_2}\cdots S^t_{\mu_l}(1) \]
And if we set \( Q_{\mu}(x;t) = H_{\mu}((1-t)X;t) \) we get
\[ Q_{\mu}(x;t) = \tilde{S}^t_{\mu_1}\tilde{S}^t_{\mu_2}\cdots \tilde{S}^t_{\mu_l}(1) \]
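As a quick sanity check (a worked example I am adding, not in the original notes), take \( \mu = (1) \): with \( f = 1 \) we have \( f[X - u^{-1}] = 1 \), so
\[ Q_{(1)}(x;t) = [u^1]\,\Omega[(1-t)uX] = h_1[(1-t)X] = (1-t)\,s_1(x), \]
which matches the diagonal coefficient for \( \mu=(1) \) in the triangularity statements below.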
Recall that the Bernstein operators added one part to a partition. These new operators behave in a more complicated way, but in a similar spirit.
Theorem: Jing Operators
If \( m\geq \mu_1 \) and \( \lambda\geq \mu \) then
\[ S^t_m s_\lambda \in \mathbb{Z}[t] \{ s_{\gamma} : \gamma \geq (m,\mu) \} \]
Moreover, \( s_{(m,\lambda)} \) appears with coefficient 1
The last part says something similar to the previous situation: we will get the Schur function with the additional part \( m \) added, but the theorem says that we also get polynomial combinations of other Schur functions.
By repeated use of the theorem we can conclude that
\[ H_{\mu}(x;t) = \sum_{\lambda\geq \mu} C_{\lambda \mu}(t) s_{\lambda}(x) \]
where \(C_{\lambda \mu}(t)\) are polynomials with \(C_{\mu \mu}(t) = 1\)
That means that we have upper unitriangularity with respect to the Schur basis.
We have analogous statements for \( Q \) (although with a different proof!)
Theorem: Modified Jing Operators
If \( m\geq \lambda_1 \) then
\[ \tilde{S}^t_m s_\lambda \in \mathbb{Z}[t] \{ s_{\gamma} : \gamma \leq (m,\lambda) \} \]
Moreover, \( s_{(m,\lambda)} \) appears with coefficient \(1-t^{\alpha}\) where \( \alpha \) is the multiplicity of m as a part of \( (m,\lambda) \)
Again by repeated use of the theorem we can conclude that
\[ Q_{\mu}(x;t) = \sum_{\lambda\leq \mu} B_{\lambda \mu}(t) s_{\lambda}(x) \]
where \(B_{\lambda \mu}(t)\) are polynomials with \(B_{\mu \mu}(t) = (1-t)^{l(\mu)}\prod_{i\geq 1}[\alpha_i]_t!\)
This means that we have lower triangularity (but with messier diagonal elements) with respect to the Schur basis.
The operator \( \Pi_{(1-t)} \) is self-adjoint for the inner product, i.e. we have
\[\langle f,g[(1-t)X] \rangle = \langle f[(1-t)X],g \rangle\]
By the opposite triangularities of \( H \) and \( Q=H[(1-t)X] \), if \( \langle H_\mu, H_\nu[(1-t)X] \rangle \neq 0 \) then \( \mu \leq \nu \). Passing the \( (1-t) \) to the other side, we obtain the opposite conclusion \( \mu \geq \nu \), and hence \( \mu = \nu \). This implies the following claim.
The transformed Hall-Littlewood polynomials are orthogonal with respect to the inner product \( \langle f,g[(1-t)X]\rangle \) and their self inner products are given by
\[\langle H_{\mu},H_{\mu}[(1-t)X] \rangle=(1-t)^{l(\mu)}\prod_{i\geq 1}[\alpha_i]_t! \]
Now everything fits together smoothly. First, from the definition of \( Q \) one can get the following formula by induction:
\[ Q_{\lambda}(x;t)=\dfrac{(1-t)^{l(\lambda)}}{[n-l(\lambda)]_t!} \sum_{w\in S_n} w\left(x^{\lambda} \dfrac{\prod_{i<j}(1-tx_j/x_i)}{\prod_{i<j}(1-x_j/x_i)} \right) \]
The relation with the original Hall-Littlewood polynomials is
\[ P_{\lambda}(x;t) = \dfrac{ Q_{\lambda}(x;t)}{(1-t)^{l(\lambda)}\prod_{i\geq 1}[\alpha_i]_t!} \]
Note that the denominator is precisely the self inner product of \( H_\lambda \) in the inner product \( \langle f,g[(1-t)X]\rangle \). Classically something a bit different is defined:
\[ \langle f,g\rangle_t = \langle f,g[X/(1-t)]\rangle \]
In this product, the bases \( \{P_{\lambda}\} \) and \( \{Q_{\lambda}\} \) are orthogonal and, furthermore, they are dual! So recall that we defined the Kostka-Foulkes polynomials as
\[ s_{\lambda}(x) = \sum_{\mu} K_{\lambda\mu}(t) P_{\mu}(x;t) \]
By taking inner products, and using the duality just mentioned we arrive at
\[ K_{\lambda\mu}(t) = \langle s_{\lambda},Q_{\mu}\rangle_t = \langle s_{\lambda},H_{\mu}\rangle \]
But that last coefficient is equal to our previously defined polynomial \( C_{\lambda \mu}(t) \), showing that the Kostka-Foulkes polynomials are in fact polynomials.
Positivity of Kostka-Foulkes polynomials
It turns out that they are not just integer polynomials: their coefficients are positive. It may not sound very interesting to show that a quantity is positive, but usually the question is implicitly asking for an interpretation. There are many different approaches here, all far from trivial. Let's briefly review them.
Representation theory
The work of Hotta, Lusztig, and Springer showed deep connections with representation theory. I cannot say more than a few words: they relate the Kostka-Foulkes polynomials, and a variation of them called ''cocharge'' Kostka-Foulkes polynomials, to some hardcore math where the keywords are ''unipotent characters, local intersection homology, Springer fibers and perverse sheaves''.
The important point is that they found a ring, the cohomology ring of the Springer fiber, whose Frobenius series is given by the cocharge transformed Hall-Littlewood polynomials, implying that these expand Schur-positively.
Combinatorics of Tableaux
Lascoux and Schützenberger proved the following simple and elegant formula, which gives a concrete meaning to each coefficient:
\[ K_{\lambda\mu}(t) = \sum_T t^{c(T)} \]
where the sum is over all SSYT of shape \( \lambda \) and content \( \mu \). The new ingredient is the ''charge'' \( c(T) \), which is easier to define in terms of the cocharge \( cc(T) \), an invariant characterized by the following properties:
(1) Cocharge is invariant under jeu-de-taquin slides. (2) Suppose the shape of \( T \) is disconnected, say \( T = X \cup Y \) with \( X \) above and left of \( Y \), and no entry of \( X \) is equal to 1. Then \( S = Y \cup X \), obtained by swapping, has \( cc(S) = |X| - cc(X) \). (3) If \( T \) is a single row, then \( cc(T) = 0 \).
Then \( c(T) = n(\mu) - cc(T) \). The existence of such an invariant requires proof. There is a process to compute the cocharge called ''catabolism''.
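In the simplest case of standard content \( \mu = (1^n) \), one common formulation of the charge is direct: in the reading word, the letter 1 gets index 0, and \( r+1 \) gets the index of \( r \), plus 1 exactly when \( r+1 \) sits to the right of \( r \); the charge is the sum of the indices. A small sketch of mine (reading words list rows bottom to top):

```python
def charge(word):
    # word: a permutation of 1..n; index(1) = 0, and index(r+1) equals
    # index(r) + 1 if r+1 appears to the right of r, else index(r).
    pos = {v: i for i, v in enumerate(word)}
    total, idx = 0, 0
    for r in range(1, len(word)):
        if pos[r + 1] > pos[r]:
            idx += 1
        total += idx
    return total

# The two SSYT of shape (2,1) and content (1,1,1) have reading words
# (3,1,2) and (2,1,3), giving K_{(2,1),(1,1,1)}(t) = t + t^2.
print(sorted(charge(w) for w in [(3, 1, 2), (2, 1, 3)]))  # [1, 2]
```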
Alternative description using tableaux
Kirillov and Reshetikhin gave the following formula
(Formula missing: somehow it would not compile.)
where the sum is over all \( (\lambda,\mu) \)-admissible configurations \( \nu \).
Complicated as it seems, this expression clearly has positive coefficients. The formula originates from a technique in mathematical physics known as the ''Bethe ansatz'', which is used to produce highest weight vectors for some tensor products. The theorem relates \( K_{\lambda\mu}(t) \) to the enumeration of highest weight vectors in \( V_{\mu_1}\otimes\cdots\otimes V_{\mu_r} \) by a quantum number. For more info, stay tuned; probably Anne has something to say about it in class.
Commutative Algebra
This may be the least technical. Garsia and Procesi simplified the first proof by giving a down-to-earth interpretation of the cohomology ring of the Springer fiber, \( R_{\mu} \). Now the action happens inside the polynomial ring \( \mathbb{C}[x] = \mathbb{C}[x_1,x_2,\cdots,x_n] \), and
\[ R_{\mu} = \mathbb{C}[x]/I_{\mu} \]
for an ideal \( I_{\mu} \) with a relatively explicit description. They manage to give generators, and finally they prove with more elementary methods that the Frobenius series is given by the cocharge Kostka-Foulkes polynomials:
\[ F_{R_{\mu}}(x;t) = t^{n(\mu)}H_{\mu}(x;t^{-1}) = \sum_{\lambda} \tilde{K}_{\lambda\mu}(t)s_{\lambda} \]
where \( \tilde{K}_{\lambda\mu}(t) = t^{n(\mu)}K_{\lambda\mu}(t^{-1}) \) is the cocharge Kostka-Foulkes polynomial.
|
NOT A DUPLICATE: Homeomorphism in the definition of a manifold for example is slightly different.
A manifold according to Wikipedia, a book by Spivak, and several other books has the following in common:
$\forall x\in M\exists n\in\mathbb{N}_{\ge 0}\exists U$ neighbourhood of $x$ such that $U$ is homeomorphic to (an open subset) of $\mathbb{R}^n$
I don't like the "neighbourhood" part as:
A set $U\subseteq M$ is a neighbourhood of $x$ if $\exists V$ open in $M$ with $[x\in V\wedge V\subseteq U]$
There is no requirement for $U$ to be open. It could be closed!
Problem:
Suppose that $U$ is homeomorphic to $\mathbb{R}^n$ by a function $f:U\rightarrow\mathbb{R}^n$; we know that $f$ is bijective and continuous by definition. In particular it is surjective, thus $f^{-1}(\mathbb{R}^n)=U$
By continuity of $f$ this means that $U$ is open in $M$
This is a contradiction, as $U$ need not be open.
I would be much happier if the definition was "there exists an open set containing $x$ that is homeomorphic to $\mathbb{R}^n$".
The open subset of $\mathbb{R}^n$ part
The definition requires there exists a neighbourhood (not all neighbourhoods) homeomorphic to $\mathbb{R}^n$ is this the same as requiring the neighbourhood be homeomorphic to an open subset of $\mathbb{R}^n$?
Thoughts:
I understand that any open interval (in $\mathbb{R}$) is homeomorphic to all of $\mathbb{R}$; however, the union of two disjoint open intervals is open but not homeomorphic to all of $\mathbb{R}$. Using this sort of logic suggests that:
I require (probably through the Hausdorff property) the ability to find a small enough connected (I suspect) open set. Then the two would be equivalent.
I could prove this if I assume the manifold has a countable topological basis (because then it is metrizable and I can use open balls) but I'd like to prove it for all manifolds.
|
While teaching a largely student-discovery style elementary number theory course to high schoolers at the Summer@Brown program, we were looking for instructive but interesting problems to challenge our students. By we, I mean Alex Walker, my academic little brother, and me. After a bit of experimentation with generators and orders, we stumbled across a proof of Wilson’s Theorem, different than the standard proof.
Wilson’s theorem is a classic result of elementary number theory, and is used in some elementary texts to prove Fermat’s Little Theorem, or to introduce primality testing algorithms that give no hint of the factorization.
Theorem 1 (Wilson’s Theorem) For a prime number $latex {p}$, we have $$ (p-1)! \equiv -1 \pmod p. \tag{1}$$
The theorem is clear for $latex {p = 2}$, so we only consider proofs for “odd primes $latex {p}$.”
The standard proof of Wilson's Theorem, included in almost every elementary number theory text, starts with the factorial $latex {(p-1)!}$, the product of all the units mod $latex {p}$. Then, as the only elements which are their own inverses are $latex {\pm 1}$ (since $latex {x^2 \equiv 1 \pmod p \iff p \mid (x^2 - 1) \iff p\mid x+1}$ or $latex {p \mid x-1}$), every element in the factorial pairs with its inverse to give $latex {1}$, except for $latex {-1}$. Thus $latex {(p-1)! \equiv -1 \pmod p.} \diamondsuit$
Now we present a different proof.
Take a primitive root $latex {g}$ of the unit group $latex {(\mathbb{Z}/p\mathbb{Z})^\times}$, so that each number $latex {1, \ldots, p-1}$ appears exactly once in $latex {g, g^2, \ldots, g^{p-1}}$. Recalling that $latex {1 + 2 + \ldots + n = \frac{n(n+1)}{2}}$ (a great example of classical pattern recognition in an elementary number theory class), we see that multiplying these together gives $latex {(p-1)!}$ on the one hand, and $latex {g^{(p-1)p/2}}$ on the other.
Now $latex {g^{(p-1)/2}}$ is a solution to $latex {x^2 \equiv 1 \pmod p}$, and it is not $latex {1}$ since $latex {g}$ is a generator and thus has order $latex {p-1}$. So $latex {g^{(p-1)/2} \equiv -1 \pmod p}$, and raising $latex {-1}$ to an odd power yields $latex {-1}$, completing the proof $\diamondsuit$.
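For the skeptical reader, here is a quick computational check of the identity $latex {(p-1)! \equiv g^{(p-1)p/2} \equiv -1 \pmod p}$ (a small script of my own, not part of the original post):

```python
from math import factorial

def is_primitive_root(g, p):
    # g generates (Z/pZ)^x iff its powers hit all p - 1 nonzero residues
    return len({pow(g, k, p) for k in range(1, p)}) == p - 1

for p in [3, 5, 7, 11, 13, 17]:
    g = next(g for g in range(2, p) if is_primitive_root(g, p))
    assert pow(g, (p - 1) * p // 2, p) == factorial(p - 1) % p == p - 1
print("Wilson's theorem verified for these primes")
```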
After posting this, we have since seen that this proof is suggested in a problem in Ireland and Rosen’s extremely good number theory book. But it was pleasant to see it come up naturally, and it’s nice to suggest to our students that you can stumble across proofs.
It may be interesting to question why $latex {x^2 \equiv 1 \pmod p \iff x \equiv \pm 1 \pmod p}$ appears in a fundamental way in both proofs.
This post appears on the author’s personal website davidlowryduda.com and on the Math.Stackexchange Community Blog math.blogoverflow.com. It is also available in pdf note form. It was typeset in \TeX, hosted on WordPress sites, converted using the utility github.com/davidlowryduda/mse2wp, and displayed with MathJax.
|
In this series so far, we have looked at how we use decibels to put numbers on the loudness of sounds (Part 1), how we can compensate for humans hearing some sound frequencies better than others (Part 2), and how we can determine decibel numbers for sounds that vary significantly in time (Part 3). We can consider what we have looked at so far as being building blocks. With these blocks, we can construct the acoustic quantities used in noise regulations throughout the world.
In this fourth and final part, we will look more closely at these more advanced quantities. In Norway, for example, noise regulations use such quantities to define concepts such as red and yellow noise zones. (I will use Norwegian noise regulations as an example throughout this post; these are the ones that I have particular experience with. However, noise regulations in many other countries will be similar.)
A real-world noise regulation
To find some examples of quantities that are in use, let’s look up Norway’s perhaps most important noise regulation: T-1442/2016: Retningslinje for behandling av støy i arealplanlegging. Its title translates to “Guideline for treatment of noise in area planning”, and it is published by the Norwegian Ministry of Climate and Environment.
Red and yellow zones
Let me translate its definition of the two noise zones this noise regulation uses:
red zone: closest to the noise source, specifies an area that is not suitable for noise-sensitive usages, and establishing new buildings with noise-sensitive usages shall be avoided
yellow zone: an assessment zone, where buildings with noise-sensitive usages can be constructed if mitigating measures provide satisfactory noise conditions
In short, you cannot build e.g. houses or offices in the red zone. In the yellow zone, you can do so if you take special measures. These could for example be adding extra noise insulation or noise screening.
For each type of noise source, the regulation specifies one or more acoustic quantities as thresholds for the two zones. If the noise in an area exceeds one of the red zone thresholds, that area is part of the red zone. If it does not, but it exceeds one of the yellow zone thresholds, the area is part of the yellow zone. Otherwise, the regulations do not put any restrictions on what kind of buildings can be constructed in the area.
Acoustic quantities
Let us look at the acoustic quantities that the regulation uses:
Please don’t mind that it’s in Norwegian; what matters is which international acoustic quantities it uses. We already know one of them from earlier in this series, namely \(L_\text{AFmax}\). We can find this one by:
1. A-weighting in frequency,
2. Fast-weighting in time, and
3. finding the maximum value over time, i.e. the largest value of \(L_\text{AF}(t)\).
We have, however, not seen the rest of the quantities. Let us look more closely at them now.
Day, evening, and night
It is often useful to distinguish between noise at different hours. Say that the factory next to your home is loud during the day while you are out working. Not that important, right? However, it’s worse if it’s loud in the evening when you want to enjoy reading in your garden. And it’s even worse if it’s loud during the night when you are trying to sleep with your window open.
For that reason, three time-of-day quantities have been defined. These cover noise during the daytime, during the evening, and during the night. These are all A-weighted equivalent levels. From Part 3, we know that an equivalent level collects all the sound in a given period of time and represents it as one level. That level corresponds to the sound pressure level of a steady sound that would give you the same noise dose over the period as the sound that you were actually exposed to.
Calculation
We generally calculate A-weighted equivalent levels as:
\( p_{\text{eq},T} = \sqrt{ \frac{1}{T} \int_{T_1}^{T_2} p_\text{A}^2(t) \, \mathrm{d}t } , \qquad L_{\text{eq},T} = 20 \log \left( \frac{p_{\text{eq},T}}{p_\text{ref}} \right) \) ,
where \(p_\text{A}(t)\) is A-weighted sound pressure as function of time and \(T\) is a time period from the time \(T_1\) to the time \(T_2\).
The three equivalent levels for daytime, evening, and nighttime are:
\(L_\text{day}\): A-weighted equivalent level in the period 07–19 (12 hours),
\(L_\text{evening}\): A-weighted equivalent level in the period 19–23 (4 hours), and
\(L_\text{night}\): A-weighted equivalent level in the period 23–07 (8 hours).
If you live next to a noise source generating the same amount of noise throughout the day, these three levels will be equal. If, however, the noise source varies throughout the day, this variation makes the day, evening, and night levels different. For example, road traffic typically has a much lower \(L_\text{night}\) level; people don't drive so much at night.
\(L_\text{den}\): Overall day-evening-night level
In principle, we could make noise regulations by defining limits in \(L_\text{day}\), \(L_\text{evening}\), and \(L_\text{night}\) individually. However, it’s more convenient to use one single acoustic quantity as a threshold. Such a quantity must take into account the total noise over the entire day-night period. It must also take into account that noise during the evening and the night is more annoying.
For this reason, the EU has defined a quantity called day-evening-night level. We can calculate this by combining the different levels for day, evening, and night, with penalties. Evening noise gets a penalty of 5 dB, while night-time noise gets a penalty of 10 dB. Thus, we get a single acoustic quantity that can almost always predict the average noise annoyance of a large group of people fairly well.
We typically denote the day-evening-night level as \(L_\text{den}\), and calculate it as
\( L_\text{den} = 10 \log \left( \frac{12}{24} 10^{L_\text{day}/10} + \frac{4}{24} 10^{(L_\text{evening}+5)/10} + \frac{8}{24} 10^{(L_\text{night}+10)/10} \right) \).
While this formula might look a little complex, it basically just averages \(L_\text{day}\), \(L_\text{evening}\), and \(L_\text{night}\) over 24 hours. It takes into account that the daytime period is 12 hours, the evening period is 4 hours, and the night period is 8 hours. It also adds a 5 dB penalty to the evening noise and a 10 dB penalty to the night-time noise.
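The formula translates directly into code. In this little sketch (mine, with made-up example levels), we can also reproduce the 6.4 dB figure discussed in the next section for noise that is constant around the clock:

```python
import math

def l_den(l_day: float, l_evening: float, l_night: float) -> float:
    """Day-evening-night level in dB, with 5 dB evening and 10 dB night penalties."""
    return 10 * math.log10(
        (12 / 24) * 10 ** (l_day / 10)
        + (4 / 24) * 10 ** ((l_evening + 5) / 10)
        + (8 / 24) * 10 ** ((l_night + 10) / 10)
    )

# Constant noise all day: the penalties alone lift L_den about 6.4 dB
# above the plain 24-hour equivalent level.
print(round(l_den(60.0, 60.0, 60.0) - 60.0, 1))  # 6.4
```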
Comparing \(L_\text{den}\) and \(L_\text{Aeq,24h}\)
To further clarify \(L_\text{den}\), we can compare it with \(L_\text{Aeq,24h}\), the A-weighted 24-hour equivalent level that we looked at in Part 3. The difference between the two is really just the penalties applied in \(L_\text{den}\). If we have a noise situation where all the noise occurs during the day, the noise triggers no penalties. In that case, the two quantities are equal. If all the noise occurs during the evening, the evening penalty makes \(L_\text{den}\) 5 dB higher than \(L_\text{Aeq,24h}\). And if all the noise occurs during the night, \(L_\text{den}\) becomes 10 dB higher due to the night-time penalty. Finally, if the noise is the same throughout the day, evening, and night, we can calculate that the total penalties make \(L_\text{den}\) 6.4 dB higher than \(L_\text{Aeq,24h}\).
Statistical levels
In the table above, we can also see two statistical quantities, \(L_\text{5AF}\) and \(L_\text{5AS}\). The Norwegian regulation T-1442 uses these to limit noise from roads, railways, airports, and motor sports. The two quantities are based on events, which in this case means pass-bys of individual road vehicles, trains, or aircraft.
Let us look at this with an example. When a single aircraft passes by, the sound level will increase as the aircraft approaches, peak when the aircraft is at its closest, and then decrease as the aircraft moves away. This pass-by represents one single event. If we measure the sound level with A-weighting in frequency and Slow weighting in time, i.e. \(L_\text{AS}(t)\), we can find the maximum level \(L_\text{ASmax}\) for the event. Let us then expand our perspective from a single pass-by to all pass-bys over a longer period, such as a year. In principle, we could measure \(L_\text{ASmax}\) for all of them. (In practice, these values are calculated through simulations instead; actually measuring all pass-bys over a year would be a hopeless task.) From these values of \(L_\text{ASmax}\), we can find \(L_\text{5AS}\), which is defined as the level exceeded only by the 5% most noisy events.
Even though the Norwegian regulations define \(L_\text{5AF}\) and \(L_\text{5AS}\) from the 5% noisiest events, there are also similar acoustic quantities defined from the level exceeded 5% of the time. And of course, you can also define quantities from other percentages than 5%.
How statistical levels can break down
Using \(L_\text{5AF}\) and \(L_\text{5AS}\) in noise regulations thus limits the share of events that can exceed a certain noise level. It does not, however, limit the level of the strongest events. Let's look at an illustrative (though deeply unrealistic) example of how this can go horribly wrong. Imagine that 4% of the aircraft events at your home are low-flying fighter jets, while the remaining 96% are high-altitude aircraft that you hardly hear. Even though this is an intense noise situation, it would not exceed a typical \(L_\text{5AS}\) limit. Therefore, we can see in the table from T-1442 that \(L_\text{5AF}\) and \(L_\text{5AS}\) appear together with \(L_\text{den}\), which would definitely be exceeded in this example.
These limits also have other strange side effects. Let's say that the previous example had 6% extremely noisy events and 94% almost quiet events. In that case, we could paradoxically comply with an \(L_\text{5AS}\) limit by increasing the amount of nearly quiet events so that the distribution becomes e.g. 4.9% vs. 95.1%. This shows how difficult it is to make good acoustic quantities and set good noise limits!
Summary
This series has brought us through a fundamental description of sound levels, how we can calculate them according to how we humans hear, and how we can take into account that most noise varies in time. In this final part, we saw how we can put all of this together. This gives us the acoustic quantities that real-world noise regulations use.
If you have read this series from the beginning, I hope you have been able to follow it and have gained a better understanding of how we can use numbers to describe sound loudness. If you should come across other acoustic quantities not described in this series, I hope that it will have prepared you to understand them. But of course, please leave a comment if you would like to have something explained!
This post is a translation of a Norwegian-language blog post that I originally wrote for acousticsresearchcentre.no. I would like to thank Rolf Tore Randeberg for proofreading and fact-checking the Norwegian original.
|
If $a_1,a_2,\dots, a_n$ are positive reals in Arithmetic Progression, prove that $a_1a_2\dots a_n>(a_1a_n)^{n/2}$.
$a_2-a_1=a_3-a_2=\dots=a_n-a_{n-1}=d$ say, then $a_n-a_1=(n-1)d$
Is this approach correct? Any hint?
Consider $$(a_1a_2\ldots a_n)^2>(a_1a_n)^n,$$ which is equivalent to $$(a_1a_n)^2(a_2a_{n-1})^2\ldots>(a_1a_n)^n.$$
So we must show that for all $i\in\{2,\ldots, \lfloor\frac{n+1}{2}\rfloor\}$, $$a_ia_{n-i+1}>a_1a_n.$$
$$a_ia_{n-i+1}=(a_1+(i-1)d)(a_1+(n-i)d)=a_1^2+a_1(i-1)d+a_1(n-i)d+(i-1)(n-i)d^2$$
We have $(i-1)(n-i)d^2=C>0$, which implies that
$$a_ia_{n-i+1}=a_1^2+a_1(n-1)d+C>a_1^2+a_1(n-1)d=a_1(a_1+(n-1)d)=a_1a_n,$$ so since this inequality holds for each such $i$, $(a_1a_2\ldots a_n)^2>(a_1a_n)^n$ holds, and by taking the square root of both sides we get the desired result.
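As a numeric sanity check of the claimed inequality (an illustration I am adding; note the inequality is strict only for $d>0$ and $n\geq 3$, since $n=2$ gives equality):

```python
from math import prod

def check(a1, d, n):
    a = [a1 + k * d for k in range(n)]          # the arithmetic progression
    return prod(a) > (a[0] * a[-1]) ** (n / 2)  # strict inequality

print(all(check(a1, d, n)
          for a1 in (0.5, 1.0, 3.0)
          for d in (0.1, 1.0, 2.0)
          for n in (3, 4, 7)))  # True
```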
Let $$ X = \frac12( a_1 + a_n) $$then for every $k$, $$ X = \frac12( a_{1+k} + a_{n-k}) $$
Now the square of the left hand side is \begin{align} (a_1a_2\dots)^2 &= (a_1a_2\dots)(a_na_{n-1}\dots) = a_1a_n\times a_2a_{n-1}\dots \\&= \left[X - \frac12( a_1 - a_n)\right]\left[X + \frac12( a_1 - a_n)\right]\times \left[X - \frac12( a_2 - a_{n-1})\right]\left[X + \frac12( a_2 - a_{n-1})\right] \dots \\&= \left[X^2 - \frac14( a_1 - a_n)^2\right]\times \left[X^2 - \frac14( a_2 - a_{n-1})^2\right]\dots \end{align} where there are $2n$ factors in each equality until the last one, and $n$ in the last one.
For every $k$, $( a_{1+k} - a_{n-k})^2$ is maximal when $k =0$. So:
$$ (a_1a_2\dots)^2 \ge \left[X^2 - \frac14( a_1 - a_n)^2\right]^n $$which is your result.
Let $m=(a_1+a_n)/2$ and $p=(a_n-a_1)/2$. Take any $t$ with $0\le t\le p$ and notice that $(m-t)(m+t)=m^2-t^2\ge m^2-p^2=(m-p)(m+p)=a_1a_n$. Consider the case when $n$ is even, and list the arithmetic progression as $a_1,a_2,a_3,...,a_{k-2},a_{k-1},a_{k},a_{k+1},...,a_{n-2},a_{n-1},a_n$, where $a_{k-1},a_{k}$ are the two terms "in the middle" of the sequence. Group $a_1$ with $a_n$, group $a_2$ with $a_{n-1}$, group $a_3$ with $a_{n-2}$, ..., group $a_{k-2}$ with $a_{k+1}$, and group $a_{k-1}$ with $a_k$. We have that $a_2a_{n-1}\ge a_1a_n$, $a_3a_{n-2}\ge a_1a_n$, ..., $a_{k-2}a_{k+1}\ge a_1a_n$, $a_{k-1}a_k\ge a_1a_n$, and trivially $a_1a_n\ge a_1a_n$. Since there are exactly $n/2$ such groups, we obtain $a_1a_2a_3...a_{k-2}a_{k-1}a_{k}a_{k+1}...a_{n-2}a_{n-1}a_n \ge (a_1a_n)^{n/2}$. The case when $n$ is odd is similar, and left to you as an exercise.
|
Let's say I have a contour integral on a non-closed contour with starting point $z_0$ and ending point $z_1$. Am I allowed to do a substitution like this? And under what assumptions?
$$\displaystyle \int_{z_0}^{z_1} f (z) \, \mathrm d z = \int_{u^{-1}(z_0)}^{u^{-1}(z_1)} f (u (z))u'(z) \, \mathrm d u $$
For example,
$$\int_\epsilon^T t^{1/2 + i}e^{(-4-i)t}\, \mathrm dt \stackrel{?}{\stackrel{u \leftrightarrow (4+i)t}{\longleftrightarrow} }\int_{4\epsilon+i\epsilon}^{4T+iT}\left({\frac{u}{4+i}}\right)^{1/2 +i}e^{-u}\left({\frac{1}{4+i}}\right)\, \mathrm du$$
where $0 < \epsilon < T \in \mathbb R$.
And the substituted path is a contour: it's a smooth map from a real interval to the plane, so the integral is a contour integral with the parameterization $t \in [\epsilon, T]$ by definition. At least, I'm pretty sure it matches the definition of a contour integral with a parameterization. I even graphed it:
My thoughts:
Since I'd want the substitution to move the contour somewhere else without changing the value of the integral, $u$ would have to give a homotopy; thus a necessary condition would be that the old and new contours should each lie in a simply connected domain. I'd guess $u$ might have to be analytic and bijective as well.
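A numeric experiment (my own sketch, using the straight-line parameterization $u = (4+i)t$ of the new contour) suggests the substitution is indeed value-preserving here:

```python
import numpy as np

eps, T = 0.1, 3.0
t = np.linspace(eps, T, 200001)

# Left-hand side: the original integral over the real segment [eps, T].
lhs = np.trapz(t ** (0.5 + 1j) * np.exp((-4 - 1j) * t), t)

# Right-hand side: the contour integral, parameterized by u(t) = (4+i)t,
# so du = (4+i) dt along the segment from (4+i)eps to (4+i)T.
w = 4 + 1j
u = w * t
rhs = np.trapz((u / w) ** (0.5 + 1j) * np.exp(-u) / w * w, t)

print(np.allclose(lhs, rhs))  # True
```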
|
Given a union of open intervals $\bigcup_i (A_i,B_i)$ such that $\bigcup_i (A_i,B_i) \supset [A,B]$, we can take a finite number of them, $\bigcup_{i\le N} (A_i,B_i)$, such that $\bigcup_{i\le N} (A_i,B_i) \supset [A,B]$.
Proof:
Step 1: Cover $A$ with an open interval $(A_i,B_i)$
Step 2: Choose an open interval $(A_{i_1},B_{i_1})$ that intersects $(A_i,B_i)$ from the left and for which
case 1: the length $L=\mu((A_{i_1},B_{i_1})\bigcap (A_i,B_i)^c)$ is maximal, if the maximum exists. If $B$ has not been covered, repeat step 2 with $(A_{i_1},B_{i_1})$ and $(A_i,B_i)$ replaced with $(A_{i_2},B_{i_2})$ and $(A_{i_1},B_{i_1})$ respectively.
case 2: if the maximum of $L$ doesn't exist, that means there is a subsequence of intervals {$(A_{i_m},B_{i_m})$} intersecting $(A_i,B_i)$ for which {$B_{i_m}$} is increasing with limit $k$. Since $k$ is also covered by a certain interval $(A_{i_{2}},B_{i_{2}})$, we can find an interval $(A_{i_1},B_{i_1})$ from the subsequence {$(A_{i_m},B_{i_m})$} that intersects both $(A_{i_{2}},B_{i_{2}})$ and $(A_i,B_i)$. If $B$ has not been covered, repeat step 2 with $(A_{i_1},B_{i_1})$ and $(A_i,B_i)$ replaced with $(A_{i_3},B_{i_3})$ and $(A_{i_2},B_{i_2})$ respectively.
It must be that $\lim_{n \to\infty} B_{i_{n}} > B$; if not, that would imply that $B$ is not covered by any open interval. This means there is an $N$ such that $B_{N}>B$ and $\bigcup_{i\le N} (A_i,B_i) \supset [A,B]$.
|
How many positive integer $n$ are there such that $2n+1$ , $3n+1$ are both perfect squares ?
$n=40$ is a solution . Is this the only solution ? Is it possible to tell whether finitely many or infinitely many solutions exist ?
The quick version is $n_0 = 0, \; \; n_1 = 40,$ then $$ \color{magenta}{ n_{k+2} = 98 n_{k+1} - n_k + 40}. $$
Given an $(x,y)$ pair with $3x^2 - 2 y^2 = 1$ we then take $n = (x^2-1)/ 2 = (y^2 - 1)/ 3. $
The first few $x,y$ pairs are $$ x=1, \; y= 1 , \; n=0 $$ $$ x=9, \; y=11, \; n=40 $$ $$ x= 89, \; y=109, \; n=3960 $$ $$ x=881, \; y=1079, \; n= 388080 $$ $$ x=8721, \; y=10681, \; n= 38027920 $$ $$ x=86329, \; y=105731, \; n= 3726348120 $$ and these continue forever with $$ x_{k+2} = 10 x_{k+1} - x_k, $$ $$ y_{k+2} = 10 y_{k+1} - y_k. $$ $$ n_{k+2} = 98 n_{k+1} - n_k + 40. $$
People seem to like these recurrences in one variable. The underlying two-variable recurrence in the pair $(x,y)$ can be abbreviated as $$ (x,y) \; \; \rightarrow \; \; (5x+4y,6x+5y) $$ beginning with $$ (x,y) = (1,1) $$ The two-term recurrences for $x$ and $y$ are just Cayley-Hamilton applied to the matrix $$ A \; = \; \left( \begin{array}{rr} 5 & 4 \\ 6 & 5 \end{array} \right) , $$ that being $$ A^2 - 10 A + I = 0. $$
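A quick numeric check (a script of my own, not part of the answer) confirms the one-variable recurrence keeps producing valid solutions:

```python
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r * r == m

ns = [0, 40]
for _ in range(4):
    ns.append(98 * ns[-1] - ns[-2] + 40)
print(ns)  # [0, 40, 3960, 388080, 38027920, 3726348120]
assert all(is_square(2 * n + 1) and is_square(3 * n + 1) for n in ns)
```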
If $2n+1=x^2$ and $3n+1=y^2$ then $$3x^2-2y^2=1\ .$$ Multiplying by $-2$ and substituting $X=2y$, $Y=x$, this can be written as a Pell-type equation $$X^2-6Y^2=-2\ .$$ This has infinitely many solutions, some of which are given by $X=X_n$, $Y=Y_n$ where $$X_n+Y_n\sqrt6=(2+\sqrt6)(5+2\sqrt6)^n\ .\tag{$*$}$$ For example, taking $n=1$ gives $$X=22,\ Y=9,\ x=9,\ y=11$$ and hence $n=40$, the solution you have already. Equation $(*)$ gives the recurrences $$X_{n+1}=5X_n+12Y_n\ ,\quad Y_{n+1}=2x_n+5y_n\ ,$$ and it is then possible to eliminate the $Y$ terms to get $$X_{n+2}=10X_{n+1}-X_n$$ and similar relations for $x_n$ and $y_n$.
For a more detailed explanation of the method (applied to a slightly different equation), see my answer to this question.
If $2n+1$ is a square then it is (obviously?) of the form $4m^2+4m+1$ and thus $3n+1=6m^2+6m+1$ and so the question can be rephrased:
When is $6m^2+6m+1$ a square for integer $m$?
Which is trivially rephrased:
What are the integer solutions of $6x^2-y^2+6x+1=0$?
which can be answered at
(sorry for the cop-out but it's better than nothing).
I don't think it is possible to explicitly find all such $n$'s. This condition of both $2n+1, 3n+1$ being squares has appeared in a lot of contests such as the Putnam, but the question asked is always to prove some implication of this condition. For example, one can prove that if $2n+1, 3n+1$ are squares, then $5n+3$ cannot be a prime and $40\mid n$.
We can do this modulo $4$. Since $n \equiv 0,1,2,3 \pmod{4}$, therefore $$2n+1 \equiv 1,3,1,3 \pmod{4}$$ and $$3n+1 \equiv 1,0,3,2 \pmod{4}.$$ However a square of an integer is only $0,1 \pmod{4}$. This means for both $2n+1$ and $3n+1$ to be squares $n \equiv 0 \pmod{4}$.
So let $n=4k$. Then we want $2n+1=8k+1$ and $3n+1=12k+1$ to be perfect squares. Let $8k+1=a^2$ and $12k+1=b^2$. Then $$4k=b^2-a^2=(b-a)(b+a).$$ Can you proceed from here?
Let $2n+1=x^2$ and $3n+1=y^2$. Since $x$ is odd, let $x=2m+1$. Then $2n+1=4m^2+4m+1$, so $n=2m(m+1)$ ... (eqn 1). This means $4\mid n$. Next, $y^2= 6m(m+1)+1$, and since $y$ is odd, let $y=2t+1$. From the last two equations, $3m(m+1)=2t(t+1)$. Since $2\mid t(t+1)$, this implies $4\mid m(m+1)$, and this implies $8\mid n$ (from eqn 1). Now it remains to prove that $5\mid n$. The quadratic residues mod $5$ are $\{0,1,4\}$, so $x^2 \equiv 0, 1$ or $4 \pmod 5$, and likewise $y^2 \equiv 0, 1$ or $4 \pmod 5$. The only possibility (according to the first two equations) is $n \equiv 0 \pmod 5$.
|
I am trying to find the generator(s) of the Symmetric Group $S_3$ and I have attempted this via brute force by listing the permutations of $S_3$ and composing and repeating them but I have not found any generators. - thanks
$S_3$ can be generated by a 2-cycle and a 3-cycle, for example $(1\,2)$ and $(1\,2\,3)$.
As James noted in his comment, generating sets are not unique, since if $A$ is a set that generates the group, then any set containing $A$ will also be a generating set. However, I assume you are trying to find a smallest set of generators.
If you allow an element $g$ to be a generator, then everything in the cyclic group $\langle g \rangle = \{1,g,g^2,\ldots \}$ will be taken care of. In particular, if you have one $3$-cycle, then you get the other one ($(1\,2\,3)^2=(1\,3\,2)$ and $(1\,3\,2)^2=(1\,2\,3)$). So if you have one $3$-cycle as a generator, you only need to get the three transpositions. By inspection, if you take any one of them as a generator, you can get the other two transpositions by multiplying it with the $3$-cycles.
In general, for the symmetric group $S_n$, the following are generating sets. $$\{(1\,2), (2\, 3), \ldots, (n-1,n)\}$$ $$\{(1\,2), (1\,3), \ldots, (1\,n)\}$$ This also implies that the transpositions generate $S_n$.
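A brute-force closure computation (an illustration of mine, representing a permutation as a tuple where p[i] is the image of i) confirms that one transposition and one 3-cycle already generate all of $S_3$:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

transposition = (1, 0, 2)  # (1 2) acting on {0, 1, 2}
three_cycle = (1, 2, 0)    # (1 2 3)

generated = {(0, 1, 2), transposition, three_cycle}
while True:
    new = {compose(a, b) for a in generated for b in generated} - generated
    if not new:
        break
    generated |= new

print(generated == set(permutations(range(3))))  # True: all 6 elements
```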
You can generate $S_3$ with a rotation $(1\:2\:3)$ and a flip $(1\:2)$, think geometrically.
There is a generalization to $S_n$: take generators $\alpha_1,\cdots,\alpha_{n-1}$ such that
$\alpha_i^2 = 1$, $\alpha_i\alpha_j = \alpha_j\alpha_i$ if $j \neq i\pm 1$, $\alpha_i\alpha_{i+1}\alpha_i = \alpha_{i+1}\alpha_i\alpha_{i+1}.$
$\alpha_i$ ``swaps the $i$th and $(i + 1)$-th position''.
See Wikipedia, which says the following:
Other popular generating sets include the set of transpositions that swap $1$ and $i$ for $2 ≤ i ≤ n$ and a set containing any $n$-cycle and a $2$-cycle of adjacent elements in the $n$-cycle.
So for $S_3$, just let $n=3$.
Any permutation in any $S_n$ can be expressed as a product of transpositions (2-cycles), so they constitute a generating set.
Following up on @Sanath's post, and given that you already know the structure of $S_3$, we may treat the group via the following presentation:
$$\langle a,b\mid a^2=b^3=(ab)^2=1\rangle$$ It is good to know that $S_3\cong D_6$, the dihedral group of order $6$.
|
I have some problems to determine the eigenvectors of a given matrix:
The matrix is:
$$ A = \left( \begin{array}{ccc} 1 & 0 &0 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{array} \right) $$
I calculated the eigenvalues first and got $$ \lambda_1 = 1, \lambda_2 = 2, \lambda_3 = 1.$$ There was no problem for me so far. But I do not know how to determine the eigenvectors. The formula I have to use is $$ (A-\lambda_i E)u=0, \quad i \in \{1,2,3\},\ u\ \text{an eigenvector}.$$ When I determined the eigenvector with $ \lambda_2=2$ there was no problem. I got the result that $x_3$ is free and $x_2 = x_3$, so: $$ EV_2= \left( \begin{array}{ccc} 0 \\ \beta \\ \beta \end{array} \right),\ \beta\ \text{free, so}\ EV = span\{\left( \begin{array}{ccc} 0 \\ 1 \\ 1 \end{array} \right)\} $$
But when I used $ \lambda_1 = \lambda_3 = 1 $, I had to calculate: $$ \left( \begin{array}{ccc} 0 & 0 &0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array} \right) * \left( \begin{array}{ccc} x_1 \\ x_2 \\ x_3 \end{array} \right) =0 $$
which in my opinion means that $x_3 = 0$ while $x_1$ and $x_2$ are free, but not necessarily equal as in the case above, so $ EV_{1,3} = \left( \begin{array}{ccc} \alpha \\ \beta \\ 0 \end{array} \right) $
What does that mean for my solution? is it $$ EV_{1,3} = span\{\left( \begin{array}{ccc} 1 \\ 0 \\ 0 \end{array} \right), \left( \begin{array}{ccc} 0 \\ 1 \\ 0 \end{array} \right), \left( \begin{array}{ccc} 1 \\ 1 \\ 0 \end{array} \right)\} $$
What exactly is now my solution in this case for the eigenvectors $ \lambda_1, \lambda_3 $? In university we just had one variable value in the matrix so I don't know how to handle two of them being different.
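(An added illustration, not part of the original question: a quick sympy check of the eigenspace structure. The eigenvalue $1$ has a two-dimensional eigenspace, so any two linearly independent vectors in it, e.g. $(1,0,0)^T$ and $(0,1,0)^T$, form a basis; a third vector such as $(1,1,0)^T$ is redundant.)

```python
from sympy import Matrix

A = Matrix([[1, 0, 0],
            [0, 1, 1],
            [0, 0, 2]])
for eigenvalue, multiplicity, basis in A.eigenvects():
    print(eigenvalue, multiplicity, [list(v) for v in basis])
# 1 2 [[1, 0, 0], [0, 1, 0]]  <- eigenspace of lambda = 1 is 2-dimensional
# 2 1 [[0, 1, 1]]             <- eigenspace of lambda = 2
```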
|
I think there is some vagueness inherent in the "similarly define ...". How is one to assign the consistency statement $Con(ZF_\lambda)$ for computable $\lambda$? This looks trivial but it is not.
I think also ZF is something of a red herring here. The question arises in PA (since we are looking at $\Pi_1$ sentences quantifying over natural numbers.)
Feferman has shown ("Transfinite Recursive Progression of Theories" JSL 1962) that it is possible to assign for every n in an effective manner a $\Sigma_1$-formula $\varphi_n(v_0)$ where each of the latter is to be thought of as enumerating (integer codes of) axiom sets (which I'll call "theories."). This is done in such a fashion so that if $a,b$ are integers with $b = 2^a$ that $T_b$ is $T_a$ together with the statement
$$\forall \psi \in \Sigma_1\forall x [ Prov_{T_a}\psi(x) \longrightarrow \psi(x)]$$
(This is thus a "1-Reflection Principle" - for $\psi\in\Sigma_1$ here). He does this with a view to considering those integers $a$ that are notations for recursive ordinals (in the sense of the notation system devised by Kleene - "Kleene's $O$").
(There are clauses for $a$ representing a notation for a limit ordinal, when $a = 3^e$).
He proves that there are linear paths through the system of notations of computable ordinals, going through all recursive ordinals $\alpha$, so that
every true $\Pi_2$ sentence in arithmetic is proven by one of the theories along the path.
The starting theory $T_0$ here can be PA (or ZFC if you want). Such a path gives a definite meaning to $ZF_0, \ldots, ZF_\alpha, \ldots$ etc. for recursive $\alpha$.
Moreover, for such a particular progression of theories one would construe the answer to the question to be "No".
Feferman's starting point was the 1939 paper of Turing ("On Systems of Logic Based on Ordinals"). Turing also considered such paths through Kleene's $O$, but could only prove a theorem for $\Pi_1$ sentences (using simpler "Consistency" statements). Feferman shows that if one takes "$n$-Reflection" statements for every $n$ each time one extends the theory, then there are paths along which
every true statement of arithmetic is proven.
The moral of the story is that there are very complex ways of simply defining sequences of theories (because there are infinitely many ways, or Turing programs, of representing a recursive ordinal) which can hide/disguise all sorts of information.
A very readable survey is Franzen: "On Transfinite Progressions" BSL 2004.
Update (This is an answer to Scott Aaronson's Update.)
He asks:
given a positive integer k, can we say something concrete about which iterated consistency statements suffice to prove the halting or non-halting of every k-state Turing machine?
Let $M_0, \ldots ,M_{n-1}$ enumerate the $k$-state TMs. Let $P$ be the subset of $\{0,\ldots,n-1\}$ consisting of those indices of TMs in the list that halt.
The statement
$\forall i (i \in P \rightarrow M_i$ halts $ \wedge \, i \notin P \rightarrow M_i $ does not halt $)$
is a $\Pi_2$ statement. In Feferman's paper (op. cit.) he shows that every true $\Pi_2$ statement is proven by a theory $T_a$ in a 1-Reflection sequence, where $a$ is a notation for an ordinal of rank equal to $\omega^2 + \omega + 1 $.
So in terms of the question, we do not need to vary the $\alpha$ depending on what ordinals a $k$-state machine can produce. (Just fix $\alpha$ as given above.) Of course it gives us zero practical information: there are infinitely many such notations of that rank, and we may not know which one to look at.
|
There is a question I would like to ask at MO, but it seems somewhat unorthodox compared most others, so I want to get some support (or discouragement) here first. What I want to do is to give a new definition, justify why it is a natural extension of a well known concept, and ask if someone has seen it or has something to say about it. I am trying to extend a theory that I understand to a setting outside my main expertise, and I therefore risk making a trivial question. I have put effort in trying to find research or notes about this new object but to no avail. I do have genuine research interest in this thing, so it is not only a matter of curiosity.
Would something like this make a good question? (I don't see why not, but I have not encountered such questions here yet, so I'm slightly worried.) Is there something that I should be particularly careful about when formulating a question of this kind? Are there examples of good and bad questions like this to learn from?
I chose to make this meta question quite general in order to focus discussion on the idea of definitions as questions rather than the particulars of the question I had in mind. But if you want a short description of my question, I can give one. The full question is lengthy, so I will not produce it here.
A short version of the question I had in mind:
Periodic geodesics on compact Lie groups can be described algebraically, without any reference to minimizing arc length: a periodic geodesic is a mapping $S^1\ni t\mapsto x\phi(t)\in G$ where $\phi:S^1\to G$ is a nontrivial homomorphism and $x\in G$. By analogue, we can define a geodesic on a finite group by replacing $S^1$ with a finite cyclic group. The geodesic flow on finite groups can be seen as a discrete time dynamical system.
1. Have geodesics on finite groups been studied before, perhaps under another name? Does this structure look familiar to anyone?
2. A problem in the field of inverse problems asks whether a function on a closed manifold is uniquely determined by its integrals over all closed geodesics. This problem has been studied quite a lot, also on Lie groups. A natural generalization of the question now asks whether a function on a finite group is uniquely determined by its sums over all geodesics. Has this problem been studied before? Is it known to have applications, abstract or concrete?
3. I have obtained some results on this finite generalization. I am fairly confident that this problem is new to the inverse problems community, but I am not sure if it is well known in another field. It would be great if I could motivate the question or relate it to existing literature. Any ideas, references or analogues could be helpful in understanding and solving the problem.
|
This is a secondary supplemental note on the Gaussian integers, written for my Spring 2016 Elementary Number Theory Class at Brown University. This note is also available as a pdf document.
In this note, we cover the following topics.
1. Assumed prerequisites from other lectures.
2. Which regular integer primes are sums of squares?
3. How can we classify all Gaussian primes?
1. Assumed Prerequisites
Although this note comes shortly after the previous note on the Gaussian integers, we covered some material from the book in the middle. In particular, we will assume and use the results from chapters 20 and 21 of the textbook.
Most importantly, for $latex {p}$ a prime and $latex {a}$ an integer not divisible by $latex {p}$, recall the Legendre symbol $latex {\left(\frac{a}{p}\right)}$, which is defined to be $latex {1}$ if $latex {a}$ is a square mod $latex {p}$ and $latex {-1}$ if $latex {a}$ is not a square mod $latex {p}$. Then we have shown Euler’s Criterion, which states that
$$ a^{\frac{p-1}{2}} \equiv \left(\frac{a}{p}\right) \pmod p, \tag{1}$$
and which gives a very efficient way of determining whether a given number $latex {a}$ is a square mod $latex {p}$.
We used Euler’s Criterion to find out exactly when $latex {-1}$ is a square mod $latex {p}$. In particular, we concluded that for each odd prime $latex {p}$, we have
$$ \left(\frac{-1}{p}\right) = \begin{cases} 1 & \text{ if } p \equiv 1 \pmod 4 \\ -1 & \text{ if } p \equiv 3 \pmod 4 \end{cases}. \tag{2}$$
Finally, we assume familiarity with the notation and ideas from the previous note on the Gaussian integers.
2. Understanding When $latex {p = a^2 + b^2}$
Throughout this section, $latex {p}$ will be a normal odd prime. The case $latex {p = 2}$ is a bit different, and we will need to handle it separately. When used, the letters $latex {a}$ and $latex {b}$ will denote normal integers, and $latex {q_1,q_2}$ will denote Gaussian integers.
We will be looking at the following four statements.
1. $latex {p \equiv 1 \pmod 4}$
2. $latex {\left(\frac{-1}{p}\right) = 1}$
3. $latex {p}$ is not a Gaussian prime
4. $latex {p = a^2 + b^2}$
Our goal will be to show that each of these statements are equivalent. In order to show this, we will show that
$$ (1) \implies (2) \implies (3) \implies (4) \implies (1). \tag{3}$$
Do you see why this means that they are all equivalent?
This naturally breaks down into four lemmas.
We have actually already shown one.
Lemma 1 $latex {(1) \implies (2)}$. Proof: We have already proved this claim! This is exactly what we get from Euler’s Criterion applied to $latex {-1}$, as mentioned in the first section. $latex \Box$
There is one more that is somewhat straightforward, and which does not rely on going up to the Gaussian integers.
Lemma 2 $latex {(4) \implies (1)}$. Proof: We have an odd prime $latex {p}$ which is a sum of squares $latex {p = a^2 + b^2}$. If we look mod $latex {4}$, we are led to consider $$ p = a^2 + b^2 \pmod 4. \tag{4}$$ What are the possible values of $latex {a^2 \pmod 4}$? A quick check shows that the only possibilites are $latex {a^2 \equiv 0, 1 \pmod 4}$.
So what are the possible values of $latex {a^2 + b^2 \pmod 4}$? We must have one of $latex {p \equiv 0, 1, 2 \pmod 4}$. Clearly, we cannot have $latex {p \equiv 0 \pmod 4}$, as then $latex {4 \mid p}$. Similarly, we cannot have $latex {p \equiv 2 \pmod 4}$, as then $latex {2 \mid p}$. So we necessarily have $latex {p \equiv 1 \pmod 4}$, which is what we were trying to prove. $latex \Box$
For the remaining two pieces, we will dive into the Gaussian integers.
Lemma 3 $latex {(2) \implies (3)}$. Proof: As $latex {\left(\frac{-1}{p}\right) = 1}$, we know there is some $latex {a}$ so that $latex {a^2 \equiv -1 \pmod p}$. Rearranging, this becomes $latex {a^2 + 1 \equiv 0 \pmod p}$.
Over the normal integers, we are at an impasse, as all this tells us is that $latex {p \mid (a^2 + 1)}$. But if we suddenly view this within the Gaussian integers, then $latex {a^2 + 1}$ factors as $latex {a^2 + 1 = (a + i)(a - i)}$.
So we have that $latex {p \mid (a+i)(a-i)}$. If $latex {p}$ were a Gaussian prime, then we would necessarily have $latex {p \mid (a+i)}$ or $latex {p \mid (a-i)}$. (Do you see why?)
But is it true that $latex {p}$ divides $latex {a + i}$ or $latex {a - i}$? For instance, does $latex {p}$ divide $latex {a + i}$? No! If so, then $latex {\frac{a}{p} + \frac{i}{p}}$ would be a Gaussian integer, which is clearly not true.
So $latex {p}$ does not divide $latex {a + i}$ or $latex {a-i}$, and we must therefore conclude that $latex {p}$ is not a Gaussian prime. $latex \Box$
Lemma 4 $latex {(3) \implies (4)}$. Proof: We now know that $latex {p}$ is not a Gaussian prime. In particular, this means that $latex {p}$ is not irreducible, and so it has a nontrivial factorization in the Gaussian integers. (For example, $latex {5}$ is a regular prime, but it is not a Gaussian prime. It factors as $latex {5 = (1 + 2i)(1 - 2i)}$ in the Gaussian integers.)
Let’s denote this nontrivial factorization as $latex {p = q_1 q_2}$. By nontrivial, we mean that neither $latex {q_1}$ nor $latex {q_2}$ are units, i.e. $latex {N(q_1), N(q_2) > 1}$. Taking norms, we see that $latex {N(p) = N(q_1) N(q_2)}$.
We can evaluate $latex {N(p) = p^2}$, so we have that $latex {p^2 = N(q_1) N(q_2)}$. Both $latex {N(q_1)}$ and $latex {N(q_2)}$ are integers, and their product is $latex {p^2}$. Yet $latex {p^2}$ has exactly two different factorizations: $latex {p^2 = 1 \cdot p^2 = p \cdot p}$. Since $latex {N(q_1), N(q_2) > 1}$, we must have the latter.
So we see that $latex {N(q_1) = N(q_2) = p}$. As $latex {q_1, q_2}$ are Gaussian integers, we can write $latex {q_1 = a + bi}$ for some $latex {a, b}$. Then since $latex {N(q_1) = p}$, we see that $latex {N(q_1) = a^2 + b^2}$. And so $latex {p}$ is a sum of squares, ending the proof. $latex \Box$
Notice that $latex {2 = 1 + 1}$ is also a sum of squares. Then all together, we can say the following theorem.
Theorem 5 A regular prime $latex {p}$ can be written as a sum of two squares, $$ p = a^2 + b^2, \tag{5}$$ exactly when $latex {p = 2}$ or $latex {p \equiv 1 \pmod 4}$.
A remarkable aspect of this theorem is that it is entirely a statement about the behaviour of the regular integers. Yet in our proof, we used the Gaussian integers in a very fundamental way. Isn’t that strange?
You might notice that in the textbook, Dr. Silverman presents a proof that does not rely on the Gaussian integers. While interesting and clever, I find that the proof using the Gaussian integers better illustrates the deep connections between and around the structures we have been studying in this course so far. Everything connects!
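Since the proof is not constructive, in practice one can simply search for the two squares; the brute-force sketch below (my own illustration; faster constructive methods exist, but are not needed for small primes) reproduces the examples that follow.

```python
from math import isqrt

def two_squares(p):
    # Find (a, b) with p = a^2 + b^2, or return None if no such pair exists.
    for a in range(isqrt(p) + 1):
        b2 = p - a * a
        b = isqrt(b2)
        if b * b == b2:
            return (a, b)
    return None

for p in [2, 5, 43, 97, 101]:
    print(p, two_squares(p))
# 2 (1, 1) | 5 (1, 2) | 43 None | 97 (4, 9) | 101 (1, 10)
```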
Example 1 The prime $latex {5}$ is $latex {1 \pmod 4}$, and so $latex {5}$ is a sum of squares. In particular, $latex {5 = 1^2 + 2^2}$. Example 2 The prime $latex {101}$ is $latex {1 \pmod 4}$, and so is a sum of squares. Our proof is not constructive, so a priori we do not know what squares sum to $latex {101}$. But in this case, we see that $latex {101 = 1^2 + 10^2}$. Example 3 The prime $latex {97}$ is $latex {1 \pmod 4}$, and so it also a sum of squares. It’s less obvious what the squares are in this case. It turns out that $latex {97 = 4^2 + 9^2}$. Example 4 The prime $latex {43}$ is $latex {3 \pmod 4}$, and so is not a sum of squares. 3. Classification of Gaussian Primes
In the previous section, we showed that each integer prime $latex {p \equiv 1 \pmod 4}$ actually splits into a product of two Gaussian numbers $latex {q_1}$ and $latex {q_2}$. In fact, since $latex {N(q_1) = p}$ is a regular prime, $latex {q_1}$ is a Gaussian irreducible and therefore a Gaussian prime (can you prove this? This is a nice midterm question.)
So in fact, $latex {p \equiv 1 \pmod 4}$ splits in to the product of two Gaussian primes $latex {q_1}$ and $latex {q_2}$.
In this way, we've found infinitely many Gaussian primes. Take a regular prime congruent to $latex {1 \pmod 4}$. Then we know that it splits into two Gaussian primes. Further, if we know how to write $latex {p = a^2 + b^2}$, then we know that $latex {q_1 = a + bi}$ and $latex {q_2 = a - bi}$ are those two Gaussian primes.
In general, we will find all Gaussian primes by determining their interaction with regular primes.
Suppose $latex {q}$ is a Gaussian prime. Then on the one hand, $latex {N(q) = q \overline{q}}$. On the other hand, $latex {N(q) = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}}$ is some regular integer. Since $latex {q}$ is a Gaussian prime (and so $latex {q \mid w_1 w_2}$ means that $latex {q \mid w_1}$ or $latex {q \mid w_2}$), we know that $latex {q \mid p_j}$ for some regular integer prime $latex {p_j}$.
So one way to classify Gaussian primes is to look at every regular integer prime and see which Gaussian primes divide it. We have figured this out for all primes $latex {p \equiv 1 \pmod 4}$. We can handle $latex {2}$ by noticing that $latex {2 = (1 + i) (1-i)}$. Both $latex {(1+i)}$ and $latex {(1-i)}$ are Gaussian primes.
The only primes left are those regular primes with $latex {p \equiv 3 \pmod 4}$. We actually already covered the key idea in the previous section.
Lemma 6 If $latex {p \equiv 3 \pmod 4}$ is a regular prime, then $latex {p}$ is also a Gaussian prime. Proof: In the previous section, we showed that if $latex {p}$ is not a Gaussian prime, then $latex {p = a^2 + b^2}$ for some integers $latex {a,b}$, and then $latex { p \equiv 1 \pmod 4}$. Since $latex {p \not \equiv 1 \pmod 4}$, we see that $latex {p}$ is a Gaussian prime. $latex \Box$
In total, we have classified all Gaussian primes.
Theorem 7 The Gaussian primes are given by:
1. $latex {(1+i)}$ and $latex {(1-i)}$,
2. the regular primes $latex {p \equiv 3 \pmod 4}$, and
3. the factors $latex {q_1, q_2}$ of a regular prime $latex {p \equiv 1 \pmod 4}$. Further, these primes are given by $latex {a \pm bi}$, where $latex {p = a^2 + b^2}$.
4. Concluding Remarks
I hope that it’s clear that the regular integers and the Gaussian integers are deeply connected and intertwined. Number theoretic questions in one constantly lead us to investigate the other. As one dives deeper into number theory, more and different integer-like rings appear, all deeply connected.
Each time I teach the Gaussian integers, I cannot help but feel the sense that this is a hint at a deep structural understanding of what is really going on. The interplay between the Gaussian integers and the regular integers is one of my favorite aspects of elementary number theory, which is one reason why I deviated so strongly from the textbook to include it. I hope you enjoyed it too.
|
Consider $n$ processes sharing the CPU in a round-robin fashion. Assuming that each process switch takes $s$ seconds, what must be the quantum size $q$ such that the overhead resulting from process switching is minimized but at the same time each process is guaranteed to get its turn at the CPU at least every $t$ seconds?
(A) $q \leq \frac{t-ns}{n-1}$
(B) $q \geq \frac{t-ns}{n-1}$
(C) $q \leq \frac{t-ns}{n+1}$
(D) $q \geq \frac{t-ns}{n+1}$
What does this line mean?
at the same time each process is guaranteed to get its turn at the CPU at least every t seconds?
@ set2018
Let's consider this scenario: process P1 runs for a time quantum q and is then preempted; after at most t time units, P1 gets a chance to run on the CPU again.
During these t time units, P1 is waiting in the ready queue.
P1 | P2 | .............. | Pn-1 | P1
|<------------------ t ------------------>|
Answer: (A)
Each process runs for a period q. If there are n processes $p_{1}$, $p_{2}$, $p_{3}$, ....., $p_{n}$, then $p_1$'s turn comes again once the remaining processes $p_2$ to $p_n$ have each used their time quantum, i.e., after at most $(n-1)q$ time. So each process in round robin gets its turn after $(n-1)q$ time when we don't consider overheads; if we count the n context switches, it becomes $ns + (n-1)q$. So we need $ns + (n-1)q \leq t$.
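The bound is easy to play with numerically (a tiny sketch of mine with made-up numbers):

```python
def max_quantum(n: int, s: float, t: float) -> float:
    # Largest q with n*s + (n-1)*q <= t, i.e. the boundary value in option (A).
    return (t - n * s) / (n - 1)

# Example: n = 5 processes, switch cost s = 0.1 s, guarantee t = 5 s.
print(max_quantum(5, 0.1, 5.0))  # 1.125 -> any q <= 1.125 s satisfies the bound
```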
#for easy understanding attached diagram.
@ AnilGoudar
If the sum q(n-1) + ns = 4 and t is 5,
then the first process is guaranteed to run again within t = 5 seconds; in this case the first process comes back after a minimum of 4 seconds and a maximum of 5 seconds.
The question asks that the "overhead resulting from process switching is minimized", so in this scenario the first process must come back after 4 seconds.
@Bikram Sir,
Sorry for the late response,
I am unable to follow the statement "overhead resulting from process switching is minimized".
Please explain if possible. I am not getting the logic; when I gave some numbers to the variables n, t and s, I got option A, but from the statement it is still unclear.
please clarify this Sir.
overhead resulting from process switching is minimized
This line tells us
q(n-1) + ns ≤ t
where the overhead is (n-1)q + ns.
With the numbers above we get q(n-1) + ns = 4 and t = 5.
As the overhead is to be minimized, we take q(n-1) + ns = 4 only;
ns + (n-1)q = 5 would also satisfy t = 5, but we choose the value 4 just because of the above-mentioned line.
I had the same confusion between >= t and <= t. But this is the way I made myself understand it:
each process is guaranteed to get its turn at the CPU at least every t seconds
So each process must get its turn within t secs, i.e. if it gets its turn after 1 sec then good; if 2 secs then also good; in t secs still okay; but in (t+1) secs not at all okay.
So whatever time we calculated, [(n-1)q+ns], can be either 1 or 2 or anything less than or equal to t.
So, (n-1)q+ns <= t.
Hope it helps.. :)
@MiNiPanda
each process is guaranteed to get its turn at the CPU at least every t seconds
How is this the same as "each process is guaranteed to get its turn at the CPU at most every t seconds"? Since t >= (n-1)q + ns implies the time is at most t.
ANS is A
Let us take a simple example of 4 processes P1, P2, P3 and P4. Here n = 4.
P1 || P2 || P3 || P4 || P1 || P2 || ..... will be the round-robin scheduling order.
Now according to the question, the context-switch time is S (a context switch is shown by "||"), the time quantum is Q, and T is the time within which a process must get the CPU again after being scheduled once.
If we look at the scheduling pattern P1 || P2 || P3 || P4 || P1 || P2 ||, P1 gets the CPU again after 4 (= n) context switches and 3 (= n-1) time quanta. So 4S + 3Q <= T.
In general, where n is the process count, this becomes
nS + (n-1)Q <= T
(n-1)Q <= T - nS
=> Q <= (T - nS) / (n-1)
Answer is A
Refer - https://gateoverflow.in/119288/scheduling
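A tiny Python sketch of the final formula (the function name and the values of n, s, t are made up, just for illustration):

def max_quantum(n, s, t):
    # Largest q satisfying n*s + (n-1)*q <= t, i.e. option (A).
    return (t - n * s) / (n - 1)

n, s, t = 4, 0.25, 5.0
q = max_quantum(n, s, t)          # 4/3 here
print(q, n * s + (n - 1) * q)     # the cycle length equals t at the maximal q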
|
On existence, uniform decay rates and blow up for solutions of systems of nonlinear wave equations with damping and source terms
1. Department of Mathematics and Statistics, Federal University of Campina Grande, 58109-970, Campina Grande, PB, Brazil
2. Department of Mathematics, State University of Maringá, 87020-900 Maringá, PR, Brazil
3. Department of Mathematics, State University of Maringá, 87020-900 Maringá, PR, Brazil
4. Department of Mathematics, University of Nebraska-Lincoln, Lincoln, NE, 68588-0130, United States
5. Department of Mathematics, University of Nebraska-Lincoln, Lincoln, NE 68588, United States
$u_{tt} - \Delta u + |u_t|^{m-1}u_t = F_u(u,v) \text{ in }\Omega\times ( 0,\infty )$,
$v_{tt} - \Delta v + |v_t|^{r-1}v_t = F_v(u,v) \text{ in }\Omega\times( 0,\infty )$,
where $\Omega$ is a bounded domain in $\mathbb{R}^n$, $n=1,2,3$ with a smooth boundary $\partial\Omega=\Gamma$ and $F$ is a $C^1$ function given by
$ F(u,v)=\alpha|u+v|^{p+1}+ 2\beta |uv|^{\frac{p+1}{2}}. $
Under some conditions on the parameters in the system and with careful analysis involving the Nehari Manifold, we obtain several results on the global existence, uniform decay rates, and blow up of solutions in finite time when the initial energy is nonnegative.
Mathematics Subject Classification: Primary: 35L55, 35L05; Secondary: 35B40, 74H3. Citation: Claudianor O. Alves, M. M. Cavalcanti, Valeria N. Domingos Cavalcanti, Mohammad A. Rammaha, Daniel Toundykov. On existence, uniform decay rates and blow up for solutions of systems of nonlinear wave equations with damping and source terms. Discrete & Continuous Dynamical Systems - S, 2009, 2 (3) : 583-608. doi: 10.3934/dcdss.2009.2.583
|
Hi guys I have an equation like this: $$2n-n^8<0$$
How can I simplify this equation and get the value of $n$?
$2n-n^8\lt 0$ is equivalent to $n^8 - 2n \gt 0$. Since $n^8 -2n = n(n^7-2)$, you are looking at $$ n(n^7-2)\gt 0.$$
A product is positive if both factors are positive, or if both factors are negative. So:
Case 1. Both factors positive.
We need $n\gt 0$ and $n^7-2\gt 0$; $n^7-2\gt 0$ means $n^7\gt 2$; taking $7$th roots (which we can do because $7$ is odd, so raising to the $7$th power respects inequalities) we get $n\gt 2^{1/7}$. So we need $n\gt 0$
and $n\gt 2^{1/7}$. The latter condition implies the first, so we get that this case will happen exactly when $n\gt 2^{1/7}$.
Case 2. Both factors negative.
I'll let you do this one.
First of all what you have is an inequality, not an equation, as you claim.
So, $$\begin{align}2n-n^8 &\lt 0 \\ n^8 & \gt2n \end{align}$$
To determine all the solutions to this inequality, we go over every value of $n$ and ask if this $n$ satisfies our inequality. We break this process into three cases: $n \lt 0$, $n \gt 0$, and $n=0$.
Case 1: $n \lt 0$
Note that if $n \lt 0$, we have that $2n \lt 0$ but however, $n^8 \gt 0$. This means, $$n^8 \gt 0 \gt 2n ~~~~\mbox{for all $n \lt 0$}$$
Case 2: $n \gt 0$
If $n \gt 0$, we can cancel out the $n$ without having to reverse the inequality. So, we get, $$n^7 \gt 2 \implies n \gt \sqrt[7] 2$$
Case 3: $n=0$
It is also clear that $n=0$ is not a solution as $0 \not \gt 0$.
So, the set of all $n$ that satisfies this inequality is $$n \in (-\infty, 0) \cup (\sqrt[7]{2}, \infty)$$
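For a quick numerical sanity check of this solution set, one could run the following sketch (assuming NumPy is available):

import numpy as np

ns = np.linspace(-3, 3, 10001)
holds = 2*ns - ns**8 < 0                 # the original inequality
predicted = (ns < 0) | (ns > 2**(1/7))   # the solution set found above
print(np.array_equal(holds, predicted))  # True on this grid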
|
For $p \in \mathbb{R}$, consider the following problem: \begin{equation} \label{1} \begin{cases} \operatorname{div}(a \nabla u ) = p\delta_{x_0} \quad \text{in } \Omega \\ u=0 \quad \text{on } \partial \Omega ; \end{cases} \end{equation} under the assumption that $a \in L^\infty$ is constant in some neighbourhood of $x_0$, i.e. $a(\mathbf{x})= a_0 \text{Id}$ for $\mathbf{x} \in B=B_{r_0}(x_0)$, $a_0 \in \mathbb{R}$, we can look for a solution in the form $$ u(x) = \psi(x) + K(x-x_0), $$ where $K(\cdot)$ is the fundamental solution (up to the constants $a_0,p$) of the Laplace operator and $\psi \in H^1(\Omega)$ satisfies a classical, well-posed, Neumann problem with data depending on $K|_{\Omega \setminus B}$. Note that the solution $u$ is not quite regular globally since it reads the singularity of $K$ at $x_0$.
Nevertheless, we can set up a control problem "away" from $x_0$ with the number $p$ as control and the quadratic tracking cost functional $$ \min_{p} \left( \frac{1}{2} \| u(p) - u_{d} \|_{0, \Omega \setminus B}^2 + \frac{1}{2} |p|^2 \right), $$ for some desired state $u_d \in L^2$, $u(p)$ being the solution of the above problem (in the above sense!) corresponding to the control $p$.
I see some problems arising while trying to formulate go-to results like necessary optimality conditions: it is not clear what should be a suitable adjoint problem, since a weak formulation is only available for $\psi=\psi_p$, but the state $u$ also depends on $K=K_p$, making $u(p)$ not a trivial translation of $\psi$. Moreover, the choice of the $L^2(\Omega \setminus B)$ norm in the optimization was made to somehow regularize $u$, on the other hand:
Is the control problem still meaningful, as we are trying - in principle - to approximate a global, a priori chosen desired state while taking into account only the behavior away from a fixed point? Moreover, working with integrals in $\Omega \setminus B$ rather than $\Omega$ gives rise to unwanted boundary terms in integrations by parts.
Are there any references for optimal control problems of this kind?
Note: I know that it is possible to set up a global weak formulation for this type of Dirac-source problems (see reference) using sharp functional analysis results on weighted spaces, but this is not known to be possible for larger classes of operators, like those I have to deal with in my research. Therefore, this is a model example and the "split" solution is most likely the only option. Reference: Allendes, Alejandro, et al. "An a posteriori error analysis for an optimal control problem with point sources." ESAIM: Mathematical Modelling and Numerical Analysis 52.5 (2018): 1617-1650.
|
Consider the equation
\[ f(x,y) = C. \]
Taking the gradient we get
\[ f_x(x,y)\hat{\textbf{i}} + f_y(x,y)\hat{\textbf{j}} = 0.\]
We can write this equation in differential form as
\[ f_x(x,y)\, dx+ f_y(x,y)\, dy = 0.\]
Now divide by \( dx \) (we are not pretending to be rigorous here) to get
\[ f_x(x,y)+ f_y(x,y) \dfrac{dy}{dx} = 0.\]
This is a first order differential equation. The goal of this section is to go backward. That is, if a differential equation is of the form above, we seek the original function \(f(x,y)\) (called a
potential function). A differential equation with a potential function is called exact. If you have had vector calculus, this is the same as finding the potential function and using the fundamental theorem of line integrals.
Example \(\PageIndex{1}\)
Solve
\[ 4xy + 1 + (2x^2 + \cos y)y' = 0. \]
Solution
We seek a function \(f(x,y)\) with
\[ f_x(x,y) = 4xy + 1 \]
and
\[ f_y(x,y) = 2x^2 + \cos y. \]
Integrate the first equation with respect to \(x\) to get
\[ f(x,y) = 2x^2y + x + C(y) . \]
Notice since \(y\) is treated as a constant, we write \(C(y)\). Now take the partial derivative with respect to \(y\) to get
\[ f_y(x,y) = 2x^2 + C'(y) .\]
We have two formulae for \( f_y(x,y) \) so we can set them equal to each other.
\[ 2x^2 + \cos y = 2x^2 + C'(y) \]
That is
\[ C'(y) = \cos\, y \]
or
\[ C(y) = \sin \, y .\]
Hence
\[ f(x,y) = 2x^2y + x + \sin \, y. \]
The solution to the differential equation is
\[ 2x^2y + x + \sin \, y = C. \]
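As an aside (not part of the original example), the computation can be checked with a few lines of sympy:

import sympy as sp

x, y = sp.symbols('x y')
M = 4*x*y + 1               # f_x
N = 2*x**2 + sp.cos(y)      # f_y

# Exactness test: M_y must equal N_x.
print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))    # 0

f = sp.integrate(M, x)                               # 2*x**2*y + x, up to C(y)
C = sp.integrate(sp.simplify(N - sp.diff(f, y)), y)  # recover C(y) = sin(y)
print(f + C)                                         # 2*x**2*y + x + sin(y)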
Does this method always work? The answer is no. We can tell if the method works by remembering that for a function with continuous partial derivatives, the mixed partials are order independent. That is
\[ f_{xy} = f_{yx} .\]
If we have the differential equation
\[ M(x,y) + N(x,y)y' = 0 \]
then we say it is an
exact differential equation if
\[ M_y(x,y) = N_x(x,y) . \]
Theorem (Solutions to Exact Differential Equations)
Let \(M\), \(N\), \(M_y\), and \(N_x\) be continuous with
\[ M_y = N_x.\]
Then there is a function \(f(x,y)\) with
\( f_x = M \) and \( f_y = N \)
such that
\[ f(x,y) = C \]
is a solution to the differential equation
\[ M(x,y) + N(x,y)y' = 0 .\]
Example \(\PageIndex{2}\)
Solve the differential equation
\[ y + (2xy - e^{-2y})y' = 0 . \]
Solution
We have
\[ M(x,y) = y\]
and
\[N(x,y) = 2xy - e^{-2y}. \]
Now calculate
\[ M_y = 1 \;\;\; \text{and} \;\;\; N_x = 2y. \]
Since they are not equal, finding a potential function \(f\) is hopeless. However there is a glimmer of hope if we remember how we solved first order linear differential equations. We multiplied both sides by an integrating factor \(m\). We do that here to get
\[ mM + mNy' = 0 .\]
For this to be exact we must have
\[ (mM)_y = (mN)_x . \]
Using the product rule gives
\[ m_yM + mM_y = m_xN + mN_x . \]
We now have a new differential equation that is unfortunately more difficult to solve than the original differential equation. We simplify the equation by assuming that either m is a function of only \(x\) or only \(y\). If it is a function of only \(x\), then \( m_y = 0 \) and
\[ mM_y = m_xN + mN_x .\]
Solving for \(m_x\), we get
\[ m_x = \dfrac{M_y-N_x}{N}\, m. \]
If this is a function of \(x\) only, then we will be able to find an integrating factor that involves \(x\) only. If it is a function of only \(y\), then \( m_x = 0\) and
\[ m_yM + mM_y = mN_x . \]
Solving for \(m_y\), we get
\[ m_y = \dfrac{N_x-M_y}{M} m .\]
If this is a function of \(y\) only, then we will be able to find an integrating factor that involves \(y\) only.
For our example
\[ m_y = \dfrac{N_x - M_y }{M} m = \dfrac{2y-1}{y} m = (2-\frac{1}{y})m .\]
Separating gives
\[ \dfrac{dm}{m} = (2-\frac{1}{y}) \,dy. \]
Integrating gives
\[ \ln m = 2y - \ln y. \]
\[ m = e^{2y - \ln y} = y ^{-1}e^{2y}. \]
Multiplying both sides of the original differential equation by \(m\) gives
\[ y(y ^{-1}e^{2y}) + (y ^{-1}e^{2y})(2xy - e^{-2y})y' = 0 \]
\[ \implies e^{2y} + (2xe^{2y} - \frac{1}{y})y' = 0 . \]
Now we see that
\[ M_y = 2e^{2y} = N_x. \]
Which tells us that the differential equation is exact. We therefore have
\[ f_x (x,y) = e^{2y}. \]
Integrating with respect to \(x\) gives
\[ f(x,y) = xe^{2y} + C(y). \]
Now taking the partial derivative with respect to \(y\) gives
\[ f_y(x,y) = 2xe^{2y} + C'(y) = 2xe^{2y} - \frac{1}{y} .\]
So that
\[ C'(y) = -\frac{1}{y}. \]
Integrating gives
\[ C(y) = -\ln y. \]
The final solution is
\[ xe^{2y} - \ln y = C. \]
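Again as an aside, sympy can confirm both the exactness after multiplying by the integrating factor and the resulting potential function:

import sympy as sp

x, y = sp.symbols('x y', positive=True)
m = sp.exp(2*y) / y                  # the integrating factor found above
M = m * y                            # e^{2y}
N = m * (2*x*y - sp.exp(-2*y))       # 2x e^{2y} - 1/y

print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))    # 0: now exact

f = sp.integrate(M, x)                               # x*exp(2*y)
C = sp.integrate(sp.simplify(N - sp.diff(f, y)), y)  # -log(y)
print(sp.simplify(f + C))                            # x*exp(2*y) - log(y)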
Contributors Larry Green (Lake Tahoe Community College)
Integrated by Justin Marshall.
|
A simple, but important and useful, type of separable equation is the
first order homogeneous linear equation:
Definition: first order homogeneous linear differential equation
A first order homogeneous linear differential equation is one of the form
\[\dot y + p(t)y=0\]
or equivalently
\[\dot y = -p(t)y.\]
"Linear'' in this definition indicates that both \(\dot y\) and \(y\) occur to the first power; "homogeneous'' refers to the zero on the right hand side of the first form of the equation.
Example \(\PageIndex{2}\)
The equation \(\dot y = 2t(25-y)\) can be written \(\dot y + 2ty= 50t\). This is linear, but not homogeneous. The equation \(\dot y=ky\), or \(\dot y-ky=0\) is linear and homogeneous, with a particularly simple \(p(t)=-k\).
Because first order homogeneous linear equations are separable, we can solve them in the usual way:
$$\eqalign{ \dot y &= -p(t)y\cr \int {1\over y}\,dy &= \int -p(t)\,dt\cr \ln|y| &= P(t)+C\cr y&=\pm\,e^{P(t)+C}\cr y&=Ae^{P(t)},\cr} $$
where \(P(t)\) is an anti-derivative of \(-p(t)\). As in previous examples, if we allow \(A=0\) we get the constant solution \(y=0\).
Example \(\PageIndex{3}\)
Solve the initial value problems \(\dot y + y\cos t =0\), \(y(0)=1/2\) and \(y(2)=1/2\).
Solution
We start with
$$P(t)=\int -\cos t\,dt = -\sin t,$$
so the general solution to the differential equation is
$$y=Ae^{-\sin t}.$$
To compute \(A\) we substitute:
$$ {1\over 2} = Ae^{-\sin 0} = A,$$
so the solution is
$$ y = {1\over 2} e^{-\sin t}.$$
For the second problem,
$$ \eqalign{{1\over 2} &= Ae^{-\sin 2}\cr A &= {1\over 2}e^{\sin 2}\cr}$$
so the solution is
$$ y = {1\over 2}e^{\sin 2}e^{-\sin t}.$$
Example \(\PageIndex{4}\)
Solve the initial value problem \(t\dot y+3y=0\), \(y(1)=2\), assuming \(t>0\).
Solution
We write the equation in standard form: \(\dot y+3y/t=0\). Then
$$P(t)=\int -{3\over t}\,dt=-3\ln t$$
and
$$ y=Ae^{-3\ln t}=At^{-3}.$$
Substituting to find \(A\): \(2=A(1)^{-3}=A\), so the solution is \(y=2t^{-3}\).
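As a quick check (not part of the text), sympy's ODE solver reproduces this solution:

import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

# Example 4: t*y' + 3y = 0 with y(1) = 2.
print(sp.dsolve(sp.Eq(t*y(t).diff(t) + 3*y(t), 0), y(t), ics={y(1): 2}))
# Eq(y(t), 2/t**3)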
|
Homework Statement: A yo-yo is placed on a conveyor belt accelerating at ##a_C = 1 m/s^2## to the left. The end of the rope of the yo-yo is fixed to a wall on the right. The moment of inertia is ##I = 200 kg \cdot m^2##. Its mass is ##m = 100kg##. The radius of the outer circle is ##R = 2m## and the radius of the inner circle is ##r = 1m##. The coefficient of static friction is ##0.4## and the coefficient of kinetic friction is ##0.3##. Find the initial tension in the rope and the angular acceleration of the yo-yo.
Homework Equations: ##T - f = ma##, ##\tau_P = -fr##, ##\tau_G = Tr##, ##I_P = I + mr^2##, ##I_G = I + mR^2##, ##a = \alpha R##
First off, I was wondering if the acceleration of the conveyor belt can be considered a force. And I'm not exactly sure how to use Newton's second law if the object of the forces is itself on an accelerating surface.
Also, I don't know whether it rolls with or without slipping. I thought I could use ##a_C = \alpha R## for the angular acceleration, but the acceleration of the conveyor belt is not the only source of acceleration, since the friction and the tension also play a role. I can't find a way to combine these equations to get the
|
I was given this innocent looking homework question.
Given two nonempty sets $A,B \subseteq X$ where $(X,d)$ is a metric space.
Show that $\mathrm{dist}(A,B) = \inf \{d(x,y) \mid x \in A, y \in B \}$ is well-defined. Suppose $A \cap B = \emptyset$. Suppose $A$ is closed and $B$ is compact. Show that $\mathrm{dist}(A, B) > 0$.
Aren't both (1) and (2) properties of the fact that $S = \{ d(x,y) \mid x \in A, y \in B \}$ is a subset of $P = \{ x \mid x \ge 0 \}$, which is bounded below?
for (1): $S \subseteq P$ and $P$ bounded below implies $S$ is bounded below. Hence $\mathrm{inf} S$ exists. Since $\mathrm{inf} S$ is unique, we conclude that $\mathrm{dist}(A, B)$ is well defined.
for (2): Since $\mathrm{inf} P \ge 0$ and $S \subseteq P$, $\mathrm{dist}(A, B) \ge 0$. Since $A \cap B = \emptyset$, $x \ne y$ $\forall x \in A, y \in B$. Then $d(x,y) \ne 0$. Hence $\mathrm{dist}(A,B) > 0$.
Am I missing something very very obvious? Where does compactness come into play?
|
Global existence of weak solution in a chemotaxis-fluid system with nonlinear diffusion and rotational flux
1. School of Mathematics, Southeast University, Nanjing 210096, China
2. Institute for Applied Mathematics, School of Mathematics, Southeast University, Nanjing 211189, China
$\begin{eqnarray*} \left\{\begin{array}{lll}n_{t}+u\cdot\nabla n=\Delta n^m-\nabla\cdot(nS(x,n,c)\nabla c),&x\in\Omega,\ \ t>0,\\[1mm]c_t+u\cdot\nabla c=\Delta c-c+n,&x\in\Omega,\ \ t>0,\\[1mm]u_t+k(u\cdot\nabla)u=\Delta u+\nabla P+n\nabla\phi,&x\in\Omega,\ \ t>0,\\[1mm]\nabla\cdot u=0,&x\in\Omega,\ \ t>0, \end{array}\right.\end{eqnarray*}$
where $\Omega\subset\mathbb{R}^3$ is a bounded domain, $k\in\mathbb{R}$, $\phi\in W^{2,\infty}(\Omega)$, and $S$ is a matrix-valued sensitivity $\overline\Omega\times[0,\infty)^2\rightarrow\mathbb{R}^{3\times 3}$ satisfying
$|S(x,n,c)|\leq S_0(n+1)^{-\alpha}\ \ {\rm for\ all}\ x\in\mathbb{R}^3,\ n\geq0,\ c\geq0.$
Global existence of a weak solution is obtained under the assumptions $m+\alpha>\frac{4}{3}$ and $m>\frac{1}{3}$.
Mathematics Subject Classification: 35K65, 35Q35, 35Q51, 92C17. Citation: Feng Li, Yuxiang Li. Global existence of weak solution in a chemotaxis-fluid system with nonlinear diffusion and rotational flux. Discrete & Continuous Dynamical Systems - B, 2019, 24 (10) : 5409-5436. doi: 10.3934/dcdsb.2019064
|
I’m back! Croatia, Greece, Turkey… all behind me. In the meantime, I’ve fallen even more in love with math.stackexchange and have ended up as a temporary moderator for philsophy.stackexchange (check them out). To announce my return, a little fun:
from Loers Hey everyone thanx for the amazing effort that u provide us with , over here . just gotta a simple questions why cannot we differentiate |x|when x = 0 ? or let’s say |x+2| when x = -2 this is really annoying me I cannot see a proper reason for it thanx again
We strive to develop our humor. The Chaz (quite the internet sensation, if you haven’t run across him) writes:
Yes, the absolute value looks like a “V”. My point (pun intended) is that at the tip of the “V”, you can place a line that only touches once, but there are infinitely many such lines.
Or you can go a little overboard. I wrote:
It’s all pointy-like there, and derivatives don’t like sharp objects. They’re timid creatures, tortured even longer than students on this forum (I know – hard to believe!) by math teachers and old enough to remember not only when teachers were allowed to hit or throw chalk at their students, but were also allowed to stab them.
To be honest (in the sense that completely false stories can still convey a sense of truth), derivatives once thought to themselves, “Why shouldn’t we differentiate absolute values?” This, of course, made absolute value very happy, because the differentiables were one of the most exclusive clubs in all functiondon (not counting those smooth functions – they’re pricks… albeit very smooth ones). So they let him in. But to their dismay, they realized that a new function arrived in functiondom, calling himself absolute value’s derivative. And he was bipolar! Sometimes he was as positive as can be, and everyone loved him. He was very consistent. But then, without any notice or change, he was suddenly incredibly negative! It was terrible! Worse, there was a moment in the middle where they couldn’t tell what he was! So they quickly tossed absolute value out of the club, and he’s felt very ostracized ever since – you see, he’s even continuous, and these guys always feel like they deserve more than they do.
I end by noting that it is completely ordinary to anthropomorphize derivatives.
But I bow to the deadpan over-extended answer. For example:
Prioryofxd writes: … * … * … = 27 How do i find out whats behind the empty spaces they must be the same number?
To this, the great CRGreathouse responds:
Uselessly elaborate answers:
(algebra)
You’re trying to solve x * x * x = 27, that is, x^3 = 27. Start by transposing to get x^3 - 27 = 0 and then factor as (x - 3)(x^2 + 3x + 9) = 0. The roots are the zeros of the first factor together with those of the second factor. For the first x = 3 works by inspection, and there are no further solutions by the Fundamental Theorem of Algebra. The second can be factored with the quadratic formula, giving $latex \dfrac{-3 \pm \sqrt{3^2 - 4 \cdot 1 \cdot 9} } {2} = \frac{-3}{2} \pm 3 i \sqrt{3/4}$
(complex analysis)
Alternately, notice that the roots are evenly spaced about the circle about the origin with magnitude 3. In particular the solutions are each 3 times a third root of unity: $latex 3e^{2 \pi i k / 3}$ for k = 0, 1, or 2. You can expand this using Euler’s formula to get $latex 3(\cos{ 2 \pi k / 3} + i \sin{ 2 \pi k /3})$, which are equivalent to the above three solutions. You can equate these with the relevant terms from the quadratic formula to derive trig identities if desired…
(geometry)
You might reasonably despair, knowing that constructing the general solution to this problem is equivalent to the Delian problem of doubling the cube, long known to be impossible with ruler and straightedge. But never fear! A linkage or marked ruler provide solutions to cubic equations like this. Another powerful approach: use origami, following Huzita’s axioms. These are related to the Pierpont primes, Sloane’s A005109.
(numerical analysis)
But perhaps you aren’t willing to just guess answers until you find one. You might be tempted to use Newton’s method on x^3 - 27 using its derivative 3x^2, but it’s well-known that if you choose a bad starting point you may never converge on the answer. Bisection is slow, and further you need to be able to bracket the interval, which puts you back in the position of guessing again! I recommend the Schonhage-Gourdon method, as implemented (e.g.) in gp (see my .sig). Try:
polroots(x^3-27)
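(In the same spirit, a NumPy one-liner recovers all three roots numerically, if gp is not at hand:)

import numpy as np
print(np.roots([1, 0, 0, -27]))   # -1.5 +/- 2.598...j and 3.0 (order may vary)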
(Diophantine equations)
Like geometry, the outlook seems bleak at first: I don’t even know if cubic Diophantine equations are known to be decidable in general. Quartic Diophantine equations in sufficiently many (58) variables are known to be unsolvable in the general case, as they can encode universal problems. But fortunately univariate cubics are solvable due to a method of del Ferro, Tartaglia, and Cardano. Noteworthy: this method typically requires working with imaginary numbers, even if all roots are real!
(algebraic number theory)
It is well-known that the Eisenstein integers are a unique factorization domain, and it’s not hard to check that $latex 27 = -(1 + 2\omega)^6$ with $latex \omega = e^{2 \pi i /3}$. Thus the equation is of the form $latex x^3 + y^6 = 0$ and so the solutions can be found through usual factorization techniques.
(Galois theory)
Fortunately the equation is not of degree five or higher, or by Abel’s theorem there would be no general closed-form solution using root extraction, multiplication, and addition. For lower degrees such methods do exist: in particular Cardano’s method, mentioned above. But even degree five is attackable: the Glashan-Runge-Young criteria show if a particular equation is solvable with radicals, and if not Hermite showed a method using elliptic integrals. Sixth degree is expected to be harder…
That’s really quite magical in a way. Along the same lines, but not as artfully presented is my answer to the question of how to find the line between two points, but that’s for another forum. But for now, I’m back to blogging!
|
Emergence of large densities and simultaneous blow-up in a two-species chemotaxis system with competitive kinetics
School of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
$ \begin{eqnarray*} \left\{\begin{array}{lll} u_t = \Delta u-\chi_1\nabla\cdot(u\nabla w)+\mu_1u(1-u-a_1v),\ \ \ &x\in \Omega,\ t>0,\\ v_t = \Delta v-\chi_2\nabla\cdot(v\nabla w)+\mu_2v(1-v-a_2u),\ \ &x\in \Omega,\ t>0,\\ w_t = \Delta w-w+u+v,\ \ &x\in \Omega,\ t>0, \end{array}\right. \end{eqnarray*} $
where $ \Omega\subset\mathbb{R}^n $, $ n\geq3 $, is a bounded domain and $ \chi_i, \mu_i, a_i>0 $ for $ i = 1, 2 $.
Mathematics Subject Classification: Primary: 35B44; Secondary: 92C17, 35K55. Citation: Yan Li. Emergence of large densities and simultaneous blow-up in a two-species chemotaxis system with competitive kinetics. Discrete & Continuous Dynamical Systems - B, 2019, 24 (10) : 5461-5480. doi: 10.3934/dcdsb.2019066
|
This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alex Walker.
We have just uploaded a paper to the arXiv on the second moment of sums of Fourier coefficients of cusp forms. This is the first in a trio of papers that we will be uploading and submitting in the near future.
Suppose $latex {f(z)}$ and $latex {g(z)}$ are weight $latex {k}$ holomorphic cusp forms on $latex {\text{GL}_2}$ with Fourier expansions
$$\begin{align} f(z) &= \sum_{n \geq 1} a(n) e(nz) \\
g(z) &= \sum_{n \geq 1} b(n) e(nz). \end{align}$$
Denote the sum of the first $latex {n}$ coefficients of a cusp form $latex {f}$ by $$ S_f(n) := \sum_{m \leq n} a(m). \tag{1}$$
We consider upper bounds for the second moment of $latex {S_f(n)}$.
The famous Ramanujan-Petersson conjecture gives us that $latex {a(n)\ll n^{\frac{k-1}{2} + \epsilon}}$. So one might assume $latex {S_f(X) \ll X^{\frac{k-1}{2} + 1 + \epsilon}}$. However, we expect the better bound $$ S_f(X) \ll X^{\frac{k-1}{2} + \frac{1}{4} + \epsilon}, \tag{2}$$
which we refer to as the “Classical Conjecture,” echoing Hafner and Ivić [HI].
A classical result of Chandrasekharan and Narasimhan [CN] gives the second moment asymptotic $$ \sum_{n \leq X} |S_f(n)|^2 = CX^{k + \frac{1}{2}} + B(X), \tag{3}$$ where $latex {B(X)}$ is an error term, $$ B(X) = \begin{cases} O(X^{k}\log^2(X)) \\ \Omega\left(X^{k - \frac{1}{4}}\frac{(\log \log \log X)^3}{\log X}\right). \end{cases} \tag{4}$$
An application of the Cauchy-Schwarz inequality to~(3) leads to the on-average statement that $$ \frac{1}{X} \sum_{n \leq X} |S_f(n)| \ll X^{\frac{k-1}{2} + \frac{1}{4}}. \tag{6}$$
Better lower bounds are known for $latex {B(X)}$. In the same work, Hafner and Ivić [HI] improved the lower bound of [CN] for full-integral weight forms of level one and showed that $$ B(X) = \Omega\left(X^{k - \frac{1}{4}}\exp\left(D \tfrac{(\log \log x )^{1/4}}{(\log \log \log x)^{3/4}}\right)\right), \tag{8}$$
for a particular constant $latex {D}$.
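As a toy numerical illustration (mine, not from the paper or [CN]), one can compare the average size of $latex {|S_f(n)|}$ against the prediction $latex {X^{\frac{k-1}{2} + \frac{1}{4}}}$ for the weight $latex {k = 12}$ cusp form $latex {\Delta}$, computing $latex {\tau(n)}$ from $latex {\Delta = q\prod_{m \geq 1}(1 - q^m)^{24}}$ by truncated series multiplication:

N, k = 300, 12
c = [0] * (N + 1)
c[0] = 1
for m in range(1, N + 1):
    for _ in range(24):              # multiply the series by (1 - q^m)
        for j in range(N, m - 1, -1):
            c[j] -= c[j - m]
tau = [0] + c[:N]                    # tau(n) is the coefficient of q^n in Delta

S, total = 0, 0.0
for n in range(1, N + 1):
    S += tau[n]                      # S_f(n)
    total += abs(S)
print(total / N, N ** ((k - 1) / 2 + 0.25))   # average of |S_f(n)| vs X^{23/4}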
The question of better understanding $latex {B(X)}$ is analogous to understanding the error term in the circle problem or divisor problem. In our paper, we introduce the Dirichlet series $$\begin{aligned} D(s, S_f \times S_g) &:= \sum_{n \geq 1} \frac{S_f(n) \overline{S_g(n)}}{n^{s + k - 1}} \\ D(s, S_f \times \overline{S_g}) &:= \sum_{n \geq 1} \frac{S_f(n)S_g(n)}{n^{s + k - 1}} \end{aligned}$$ and provide their meromorphic continuations. From our review of the literature, these Dirichlet series and their meromorphic continuations are new and provide new approaches to the classical problems related to $latex {S_f(n)}$.
Our primary result is the meromorphic continuation of $latex {D(s, S_f \times S_g)}$. As a first application, we prove a smoothed generalization to~(3).
Theorem 1 Suppose either that $latex {f = g}$ is a Hecke eigenform or that $latex {f}$ and $latex {g}$ have real coefficients. \begin{equation*} \frac{1}{X} \sum_{n \geq 1}\frac{S_f(n)\overline{S_g(n)}}{n^{k – 1}}e^{-n/X} = CX^{\frac{1}{2}} + O_{f,g,\epsilon}(X^{-\frac{1}{2} + \theta + \epsilon}) \end{equation*} where \begin{equation*} C = \frac{\Gamma(\tfrac{3}{2}) }{4\pi^2} \frac{L(\frac{3}{2}, f\times g)}{\zeta(3)}= \frac{\Gamma(\tfrac{3}{2})}{4\pi ^2} \sum_{n \geq 1} \frac{a(n)\overline{b(n)}}{n^{k + \frac{1}{2}}}, \end{equation*} and $latex {\theta}$ denotes progress towards Selberg’s Eigenvalue Conjecture. Similarly, \begin{equation*} \frac{1}{X} \sum_{n \geq 1}\frac{S_f(n)S_g(n)}{n^{k – 1}}e^{-n/X} = C’X^{\frac{1}{2}} + O_{f,g,\epsilon}(X^{-\frac{1}{2} + \theta + \epsilon}), \end{equation*} where \begin{equation*} C’ = \frac{\Gamma(\tfrac{3}{2})}{4\pi^2} \frac{L(\frac{3}{2}, f\times \overline{g})}{\zeta(3)} = \frac{\Gamma(\tfrac{3}{2})}{4\pi ^2} \sum_{n \geq 1} \frac{a(n)b(n)}{n^{k + \frac{1}{2}}}.\end{equation*}
We have a complete meromorphic continuation, and it would not be hard to give additional terms in the asymptotic. But the next terms come from zeroes of the zeta function and are complicated to nail down exactly.
Choosing $latex {f = g}$, we recover a proof of the Classical Conjecture on Average. More interestingly, we show that the secondary growth terms do not arise from a pole, nor are there prescribed polar reasons for growth. The secondary growth in the classical result comes from choosing a sharp cutoff instead of the nicely behaving and natural smooth cutoffs.
We prove analogous results for sums of normalized Fourier coefficients $$ S_f^\alpha(n) := \sum_{m \leq n} \frac{a(m)}{m^\alpha} \tag{9}$$
for $latex {0 \leq \alpha < k}$.
In the path to proving these results, we explicitly demonstrate remarkable cancellation between Rankin-Selberg convolution $latex {L}$-functions $latex {L(s, f\times g)}$ and shifted convolution sums $$ Z(s, 0; f,g) := \sum_{n, h} \frac{a(n)\overline{b(n-h)}}{n^{s + k - 1}}. \tag{10}$$
Comparing our results and methodologies with the main results of [CN] guarantees similar cancellation for general level and general weight, including half-integral weight forms.
We provide additional applications of the meromorphic continuation of $latex {D(s, S_f \times S_g)}$ in forthcoming works, which will be uploaded to the arXiv and described briefly here soon.
For exact references, see the paper.
|
Here is a general continuity type result on perturbations of the spectral radius:
Theorem. Let $A,B \in \mathbb{C}^{n \times n}$ be matrices and let $r(A)$ denote the spectral radius of $A$. For each $r > r(A)$ we set\begin{align*} \alpha(r) := \sup_{\lvert\lambda\rvert \ge r} \lVert (\lambda - A)^{-1} \rVert\end{align*}(note that $\alpha(r) < \infty$). If $\lVert B \rVert < 1/\alpha(r)$, then $r(A+B) < r$.
Proof. This is a simple consequence of the Neumann series: Let $\mu$ be a complex number of modulus $\lvert \mu \rvert \ge r$. Then $\lVert (\mu - A)^{-1} B \rVert \le \alpha(r) \lVert B \rVert < 1$, so the matrix\begin{align*} \mu - (A+B) = (\mu - A) (I - (\mu - A)^{-1} B)\end{align*}is invertible, since $(\mu - A)$ and $I - (\mu - A)^{-1}B$ are invertible (the latter due to the Neumann series). Hence no $\mu$ with $\lvert \mu \rvert \ge r$ belongs to the spectrum of $A+B$, i.e. $r(A+B) < r$. $\square$
The following formulation of the theorem is probably a bit easier to read:
Corollary. Let $A \in \mathbb{C}^{n \times n}$. For every $r > r(A)$ there exists $\delta(r) > 0$ such that $r(A+B) < r$ for every matrix $B \in \mathbb{C}^{n \times n}$ which fulfils $\lVert B \rVert < \delta(r)$.
Proof. Take $\delta := 1/\alpha(r)$.
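Here is a small numerical illustration of the corollary (a sketch only; it estimates $\alpha(r)$ by sampling the circle $\lvert\lambda\rvert = r$, where the supremum over $\lvert\lambda\rvert \ge r$ is attained):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
r = 1.1 * max(abs(np.linalg.eigvals(A)))       # any r > r(A)

# Estimate alpha(r) on the boundary circle |lambda| = r.
lams = r * np.exp(2j * np.pi * np.linspace(0, 1, 2000, endpoint=False))
alpha = max(np.linalg.norm(np.linalg.inv(lam * np.eye(4) - A), 2) for lam in lams)

B = rng.standard_normal((4, 4))
B *= 0.9 / (alpha * np.linalg.norm(B, 2))      # enforce ||B|| < 1/alpha(r)
print(max(abs(np.linalg.eigvals(A + B))) < r)  # True, as the theorem predicts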
Remark 1. If $r(A) = 1$ and if $A$ is in addition power-bounded, i.e. $M := \sup_{n \in \mathbb{N}_0} \lVert A^n \rVert < \infty$, then we have $\alpha(r) \le M/(r - 1)$ for each $r > 1$ (this is again a simple consequence of the Neumann series representation of the resolvent). Hence, in this special case we can choose $\delta(r) = (r-1)/M$ in the corollary.
Remark 2. The above results are very rough (as indicated by their elementary proofs). There exist more precise (and quantitative) results, as for instance indicated by Federico Poloni in the comments.
|
Uniqueness of Meromorphic Functions Concerning Sharing Two Small Functions with Their Derivatives
Ma Linke,Liu Dan,Fang Mingliang
Keywords: Meromorphic functions, shared small functions, derivatives.
Abstract:
In this paper, we study the uniqueness of meromorphic functions that share two small functions with their derivatives. We prove the following result: Let $f$ be a nonconstant meromorphic function such that $\mathop {\overline{\lim}}\limits_{r\to\infty} \frac{\bar{N}(r,f)}{T(r,f)}<\frac{3}{128}$, and let $a$, $b$ be two distinct small functions of $f$ with $a\not\equiv\infty$ and $b\not\equiv\infty$. If $f$ and $f'$ share $a$ and $b$ IM, then $f\equiv f'$.
|
Many authors pull the definitions of the raising and lowering (or ladder) operators out of their butt with no attempt at motivation. This is pointed out nicely in [1] by Eli along with one justification based on factoring the Hamiltonian.
In [2] is a small exception to the usual presentation. In that text, these operators are defined as usual with no motivation. However, after the utility of these operators has been shown, the raising and lowering operators show up in a context that does provide that missing motivation as a side effect.
It doesn’t look like the author was trying to provide a motivation, but it can be interpreted that way.
When seeking the time evolution of Heisenberg-picture position and momentum operators, we will see that those solutions can be trivially expressed using the raising and lowering operators. No special tools nor black magic is required to find the structure of these operators. Unfortunately, we must first switch to the Heisenberg picture representation of the position and momentum operators, and also employ the Heisenberg equations of motion. Neither of these fits into the standard narrative of most introductory quantum mechanics treatments. We will also see that these raising and lowering “operators” could also be introduced in classical mechanics, provided we were attempting to solve the SHO system using the Hamiltonian equations of motion.
I’ll outline this route to finding the structure of the ladder operators below. Because these are encountered trying to solve the time evolution problem, I’ll first show a simpler way to solve that problem. Because that simpler method depends a bit on lucky observation and is somewhat unstructured, I’ll then outline a more structured procedure that leads to the ladder operators directly, also providing the solution to the time evolution problem as a side effect.
The starting point is the Heisenberg equations of motion. For a time independent Hamiltonian \( H \), and a Heisenberg operator \( A^{(H)} \), those equations are
\begin{equation}\label{eqn:harmonicOscDiagonalize:20}
\ddt{A^{(H)}} = \inv{i \Hbar} \antisymmetric{A^{(H)}}{H}. \end{equation}
Here the Heisenberg operator \( A^{(H)} \) is related to the Schrodinger operator \( A^{(S)} \) by
\begin{equation}\label{eqn:harmonicOscDiagonalize:60}
A^{(H)} = U^\dagger A^{(S)} U, \end{equation}
where \( U \) is the time evolution operator. For this discussion, we need only know that \( U \) commutes with \( H \), and do not need to know the specific structure of that operator. In particular, the Heisenberg equations of motion take the form
\begin{equation}\label{eqn:harmonicOscDiagonalize:80}
\begin{aligned} \ddt{A^{(H)}} &= \inv{i \Hbar} \antisymmetric{A^{(H)}}{H} \\ &= \inv{i \Hbar} \antisymmetric{U^\dagger A^{(S)} U}{H} \\ &= \inv{i \Hbar} \lr{ U^\dagger A^{(S)} U H - H U^\dagger A^{(S)} U } \\ &= \inv{i \Hbar} \lr{ U^\dagger A^{(S)} H U - U^\dagger H A^{(S)} U } \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{A^{(S)}}{H} U. \end{aligned} \end{equation}
The Hamiltonian for the harmonic oscillator, with Schrodinger-picture position and momentum operators \( x, p \) is
\begin{equation}\label{eqn:harmonicOscDiagonalize:40}
H = \frac{p^2}{2m} + \inv{2} m \omega^2 x^2, \end{equation}
so the equations of motions are
\begin{equation}\label{eqn:harmonicOscDiagonalize:100}
\begin{aligned} \ddt{x^{(H)}} &= \inv{i \Hbar} U^\dagger \antisymmetric{x}{H} U \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{x}{\frac{p^2}{2m}} U \\ &= \inv{2 m i \Hbar} U^\dagger \lr{ i \Hbar \PD{p}{p^2} } U \\ &= \inv{m } U^\dagger p U \\ &= \inv{m } p^{(H)}, \end{aligned} \end{equation}
and
\begin{equation}\label{eqn:harmonicOscDiagonalize:120} \begin{aligned} \ddt{p^{(H)}} &= \inv{i \Hbar} U^\dagger \antisymmetric{p}{H} U \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{p}{\inv{2} m \omega^2 x^2 } U \\ &= \frac{m \omega^2}{2 i \Hbar} U^\dagger \lr{ -i \Hbar \PD{x}{x^2} } U \\ &= -m \omega^2 U^\dagger x U \\ &= -m \omega^2 x^{(H)}. \end{aligned} \end{equation}
In the Heisenberg picture the equations of motion are precisely those of classical Hamiltonian mechanics, except that we are dealing with operators instead of scalars
\begin{equation}\label{eqn:harmonicOscDiagonalize:140}
\begin{aligned} \ddt{p^{(H)}} &= -m \omega^2 x^{(H)} \\ \ddt{x^{(H)}} &= \inv{m } p^{(H)}. \end{aligned} \end{equation}
In the text the ladder operators are used to simplify the solution of these coupled equations, since they can decouple them. That’s not really required since we can solve them directly in matrix form with little work
\begin{equation}\label{eqn:harmonicOscDiagonalize:160}
\ddt{} \begin{bmatrix} p^{(H)} \\ x^{(H)} \end{bmatrix} = \begin{bmatrix} 0 & -m \omega^2 \\ \inv{m} & 0 \end{bmatrix} \begin{bmatrix} p^{(H)} \\ x^{(H)} \end{bmatrix}, \end{equation}
or, with length scaled variables
\begin{equation}\label{eqn:harmonicOscDiagonalize:180}
\begin{aligned} \ddt{} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} &= \begin{bmatrix} 0 & -\omega \\ \omega & 0 \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \\ &= -i \omega \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \\ &= -i \omega \sigma_y \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix}. \end{aligned} \end{equation}
Writing \( y = \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \), the solution can then be written immediately as
\begin{equation}\label{eqn:harmonicOscDiagonalize:200}
\begin{aligned} y(t) &= \exp\lr{ -i \omega \sigma_y t } y(0) \\ &= \lr{ \cos \lr{ \omega t } I - i \sigma_y \sin\lr{ \omega t } } y(0) \\ &= \begin{bmatrix} \cos\lr{ \omega t } & -\sin\lr{ \omega t } \\ \sin\lr{ \omega t } & \cos\lr{ \omega t } \end{bmatrix} y(0), \end{aligned} \end{equation}
or
\begin{equation}\label{eqn:harmonicOscDiagonalize:220}
\begin{aligned} \frac{p^{(H)}(t)}{m \omega} &= \cos\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} - \sin\lr{ \omega t } x^{(H)}(0) \\ x^{(H)}(t) &= \sin\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} + \cos\lr{ \omega t } x^{(H)}(0). \end{aligned} \end{equation}
This solution depends on being lucky enough to recognize that the matrix has a Pauli matrix as a factor (which squares to unity, and allows the exponential to be evaluated easily).
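As a quick numerical aside (an illustration, not part of the original derivation), we can check the matrix exponential above with a few lines of Python; numpy and scipy are assumed to be available:

```python
# Verify that exp(-i*omega*sigma_y*t) is the rotation matrix claimed above,
# at an arbitrary test value of omega*t.
import numpy as np
from scipy.linalg import expm

wt = 0.3  # omega*t, arbitrary
sigma_y = np.array([[0, -1j], [1j, 0]])

U = expm(-1j * wt * sigma_y)
R = np.array([[np.cos(wt), -np.sin(wt)],
              [np.sin(wt),  np.cos(wt)]])

assert np.allclose(U, R)  # cos(wt) I - i sigma_y sin(wt)
```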
If we hadn’t been that observant, then the first tool we’d have used instead would have been to diagonalize the matrix. For such diagonalization, it’s natural to work in completely dimensionless variables. Such a non-dimensionalisation can be had by defining
\begin{equation}\label{eqn:harmonicOscDiagonalize:240}
x_0 = \sqrt{\frac{\Hbar}{m \omega}}, \end{equation}
and dividing the working (operator) variables through by those values. Let \( z = \inv{x_0} y \), and \( \tau = \omega t \) so that the equations of motion are
\begin{equation}\label{eqn:harmonicOscDiagonalize:260}
\frac{dz}{d\tau} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} z. \end{equation}
This matrix can be diagonalized as
\begin{equation}\label{eqn:harmonicOscDiagonalize:280}
A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = V \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix} V^{-1}, \end{equation}
where
\begin{equation}\label{eqn:harmonicOscDiagonalize:300}
V = \inv{\sqrt{2}} \begin{bmatrix} i & -i \\ 1 & 1 \end{bmatrix}. \end{equation}
The equations of motion can now be written
\begin{equation}\label{eqn:harmonicOscDiagonalize:320}
\frac{d}{d\tau} \lr{ V^{-1} z } = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix} \lr{ V^{-1} z }. \end{equation}
This final change of variables \( V^{-1} z \) decouples the system as desired. Expanding that gives
\begin{equation}\label{eqn:harmonicOscDiagonalize:340}
\begin{aligned} V^{-1} z &= \inv{\sqrt{2}} \begin{bmatrix} -i & 1 \\ i & 1 \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{x_0 m \omega} \\ \frac{x^{(H)}}{x_0} \end{bmatrix} \\ &= \inv{\sqrt{2} x_0} \begin{bmatrix} -i \frac{p^{(H)}}{m \omega} + x^{(H)} \\ i \frac{p^{(H)}}{m \omega} + x^{(H)} \end{bmatrix} \\ &= \begin{bmatrix} a^\dagger \\ a \end{bmatrix}, \end{aligned} \end{equation}
where
\begin{equation}\label{eqn:harmonicOscDiagonalize:n} \begin{aligned} a^\dagger &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ -i \frac{p^{(H)}}{m \omega} + x^{(H)} } \\ a &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ i \frac{p^{(H)}}{m \omega} + x^{(H)} }. \end{aligned} \end{equation}
Lo and behold, we have the standard form of the raising and lowering operators, and can write the system equations as
\begin{equation}\label{eqn:harmonicOscDiagonalize:360}
\begin{aligned} \ddt{a^\dagger} &= i \omega a^\dagger \\ \ddt{a} &= -i \omega a. \end{aligned} \end{equation}
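Again as an illustrative numerical aside (numpy assumed), we can confirm that this \( V \) really diagonalizes the system matrix with eigenvalues \( \pm i \):

```python
# Check that V^{-1} A V = diag(i, -i) for A = [[0, -1], [1, 0]].
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])
V = np.array([[1j, -1j], [1.0, 1.0]]) / np.sqrt(2)

D = np.linalg.inv(V) @ A @ V
assert np.allclose(D, np.diag([1j, -1j]))
```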
It is actually a bit fluky that this matched exactly, since we could have chosen eigenvectors that differ by constant phase factors, like
\begin{equation}\label{eqn:harmonicOscDiagonalize:380}
V = \inv{\sqrt{2}} \begin{bmatrix} i e^{i\phi} & -i e^{i \psi} \\ 1 e^{i\phi} & e^{i \psi} \end{bmatrix}, \end{equation}
so
\begin{equation}\label{eqn:harmonicOscDiagonalize:341}
\begin{aligned} V^{-1} z &= \frac{e^{-i(\phi + \psi)}}{\sqrt{2}} \begin{bmatrix} -i e^{i\psi} & e^{i \psi} \\ i e^{i\phi} & e^{i \phi} \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{x_0 m \omega} \\ \frac{x^{(H)}}{x_0} \end{bmatrix} \\ &= \inv{\sqrt{2} x_0} \begin{bmatrix} -i e^{-i\phi} \frac{p^{(H)}}{m \omega} + e^{-i\phi} x^{(H)} \\ i e^{-i\psi} \frac{p^{(H)}}{m \omega} + e^{-i\psi} x^{(H)} \end{bmatrix} \\ &= \begin{bmatrix} e^{-i\phi} a^\dagger \\ e^{-i\psi} a \end{bmatrix}. \end{aligned} \end{equation}
To make the resulting pairs of operators Hermitian conjugates, we’d want to constrain those constant phase factors by setting \( \phi = -\psi \). If we were only interested in solving the time evolution problem no such additional constraints are required.
The raising and lowering operators are seen to naturally occur when seeking the solution of the Heisenberg equations of motion. This is found using the standard technique of non-dimensionalisation and then seeking a change of basis that diagonalizes the system matrix. Because the Heisenberg equations of motion are identical to the classical Hamiltonian equations of motion in this case, what we call the raising and lowering operators in quantum mechanics could also be utilized in the classical simple harmonic oscillator problem. However, in a classical context we wouldn’t have a justification to call this more than a change of basis.
References
[1] Eli Lansey. The Quantum Harmonic Oscillator Ladder Operators, 2009. URL http://behindtheguesses.blogspot.ca/2009/03/quantum-harmonic-oscillator-ladder.html. [Online; accessed 18-August-2015].
[2] Jun John Sakurai and Jim J Napolitano. Modern Quantum Mechanics, chapter "Time Development of the Oscillator". Pearson Higher Ed, 2014.
|
Question:
Watson is filling a pool with water. The flow rate {eq}r(t) {/eq} gives the gallons per second at which the water is flowing into the pool at time {eq}t {/eq} seconds after he turns on the faucet. We are going to make a new function, {eq}\displaystyle f (\hat t) = \int_{t = 5}^{\hat t} r(t)dt {/eq}. First, let's try to figure out why this is even a function.
(a) What does the function {eq}f {/eq} input?
(b) What does the function {eq}f {/eq} output?
(c) Suppose {eq}f(10) = 26 {/eq} and {eq}f(18) = 26 {/eq}. What would account for this?
Integration:
In this question, the function f integrates the rate of change over the desired time interval. The integration sums the rate of change, which gives the amount of water being added to the pool at each moment, over the entire interval we are interested in.
Answer and Explanation:
a) The function f inputs a time {eq}\hat t {/eq}, measured in seconds after the faucet is turned on; inside the integral, the variable t runs from 5 to {eq}\hat t {/eq} in the function r(t).
b) The function f outputs the amount of water, in gallons, that has flowed into the pool between 5 seconds and {eq}\hat t {/eq} seconds.
c) The value of the function will remain the same if no water flows into the pool between the 10th and the 18th second. This also means that the flow rate r is zero between 10 and 18, and therefore adds no value to the function f.
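To make this concrete, here is a small numerical sketch of the function f; the flow rate r(t) below is purely hypothetical, chosen so that nothing flows after t = 10, mirroring part (c):

```python
# f(t_hat) = integral of r(t) from t = 5 to t = t_hat.
from scipy.integrate import quad

def r(t):
    # hypothetical flow rate in gallons/second; tapers to zero at t = 10
    return max(0.0, 10.0 - t)

def f(t_hat):
    value, _ = quad(r, 5, t_hat)
    return value

print(f(10), f(18))  # both 12.5: no water is added between t = 10 and t = 18
```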
|
FtYou writes
Hello everyone! There is a concept I have a hard time getting my head wrapped around. If you have a vector space V and a subspace W, I understand that you can find the least squares vector approximation from any vector in V to a vector in W. And this corresponds to the projection of V onto the subspace W. Now, for data fitting … Let's suppose you have a bunch of points (xi, yi) where you want to fit a set of regressors so you can approximate yi by a linear combination of the regressors, let's say (1, x, x2, …). What vector space are we talking about? If we consider the vector space of functions R -> R, into what subspace are we trying to map these vectors?
I have a hard time merging these two concepts of projecting to a vector space and fitting the data. In the latter case, what vectors are we using? The functions? If so, I understand the choice of regressors (which constitute a basis for the vector space), but what's the role of the (xi, yi)?
I want to point out that I understand completely how to build the matrices to get Y = AX and solve using least squares approximation. What I miss is the big picture. The linear algebra picture. Thanks for any help!
We’ll go over this by closely examining and understanding an example. Suppose we have the data points $latex {(x_i, y_i)}$
$latex \displaystyle \begin{cases} (x_1, y_1) = (-1,8) \\ (x_2, y_2) = (0,8) \\ (x_3, y_3) = (1,4) \\ (x_4, y_4) = (2,16) \end{cases}, $
and we have decided to try to find the best fitting quadratic function. What do we mean by best-fitting? We mean that we want the one that approximates these data points the best. What exactly does that mean? We’ll see that before the end of this note – but in linear algebra terms, we are projecting on to some sort of vector space – we claim that projection is the ”best-fit” possible.
So what do we do? A generic quadratic function is $latex {f(t) = a + bt + ct^2}$. Intuitively, we apply what we know. Then the points above become
$latex \displaystyle \begin{cases} f(-1) = a - b + c = 8 \\ f(0) = a = 8 \\ f(1) = a + b + c = 4 \\ f(2) = a + 2b + 4c = 16 \end{cases}, $
and we want to find the best $latex {[a b c]}$ we can find that ''solves'' this. Of course, this is a matrix equation:
$latex \displaystyle \begin{pmatrix} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 2 & 4 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 8 \\ 8 \\ 4 \\ 16 \end{pmatrix}. $
And so you see how the algorithm would complete this. But now let’s get down the ”linear algebra picture,” as you say.
We know that quadratic polynomials $latex {f(t) = a + bt + ct^2}$ are a three dimensional vector space (which I denote by $latex {P_2}$) spanned by $latex {1, t, t^2}$. We know we have four data points, so we will define a linear transformation $latex {A}$ to be the transformation taking a quadratic polynomial $latex {f}$ to $latex {\mathbb{R}^4}$ by evaluating $latex {f}$ at $latex {-1, 0, 1, 2}$ (i.e. the $latex {x_i}$). In other words,
$latex \displaystyle A : P_2 \longrightarrow \mathbb{R}^4 $
where
$latex \displaystyle A(f) = \begin{pmatrix} f(-1) \\ f(0) \\ f(1) \\ f(2) \end{pmatrix}. $
We interpret $latex {f}$ as being given by three coordinates, $latex {(a, b, c) \in \mathbb{R}^3}$, so we can think of $latex {A}$ as a linear transformation from $latex {\mathbb{R}^3 \longrightarrow \mathbb{R}^4}$. In fact, $latex {A}$ is nothing more than the matrix we wrote above.
Then a solution to
$latex \displaystyle A^T A \begin{pmatrix} a \\ b \\ c \end{pmatrix} = A^T \begin{pmatrix} 8 \\ 8 \\ 4 \\ 16 \end{pmatrix} $
is the quadratic polynomial whose evaluation vector is the projection of the data onto the image of $latex {A}$ in $latex {\mathbb{R}^4}$ (which in this case is the space of evaluations of quadratic polynomials at four different points). If $latex {f^*}$ is the polynomial found this way, and I denote the $latex {y_i}$ coordinate vector as $latex {y^*}$, then this projection minimizes
$latex \displaystyle || y^* - Af^*||^2 = (y_1 - f^*(x_1))^2 + \ldots + (y_4 - f^*(x_4))^2, $
and it is in this sense that we mean we have the ”best-fit.” (This is roughly interpreted as the distances between the $latex {y_i}$ and $latex {f^*(x_i)}$ are minimized; really, it’s the sum of the squares of the distances – hence ”Least-Squares”).
So in short: $latex {A}$ is a matrix evaluating quadratic polynomials at different points. The columns vectors correspond to a basis for the space of quadratic polynomials, $latex {1, t, t^2}$. The codomain is $latex {\mathbb{R}^4}$, coming from the evaluation of the input polynomial at the four different $latex {x_i}$. The projection of the set of quadratic polynomials onto their evaluation space minimizes the sum of the squares of the distances between $latex {f(x_i)}$ and $latex {y_i}$.
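If you want to see the numbers, here is a minimal numpy version of the example (an illustration of the normal equations above, not part of the original discussion):

```python
import numpy as np

# Evaluation matrix A and data vector y from the example.
A = np.array([[1, -1, 1],
              [1,  0, 0],
              [1,  1, 1],
              [1,  2, 4]], dtype=float)
y = np.array([8, 8, 4, 16], dtype=float)

# Solve the normal equations A^T A c = A^T y.
coeffs = np.linalg.solve(A.T @ A, A.T @ y)
print(coeffs)      # [a, b, c] of the best-fit quadratic a + b t + c t^2
print(A @ coeffs)  # the evaluations f*(x_i), i.e. the projection of y
```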
Does that make sense?
This is also available as a pdf.
|
Simple question. When are we allowed to exchange limits and integrals? I'm talking about situations like$$\lim_{\varepsilon\to0^+} \int_{-\infty}^\infty dk f(k,\varepsilon) \overset{?}{=} \int_{-\infty}^\infty dk\lim_{\varepsilon\to0^+} f(k,\varepsilon).$$Everyone refers to either
dominated convergence theorem or monotone convergence theorem but I'm not sure if I understand how exactly one should go about applying it. Both theorems are about sequences and I don't see how that relates to integration in practice. Help a physicist out :)
The statement of the dominated convergence theorem (DCT) is as follows:
"Discrete" DCT.Suppose $\{f_n\}_{n=1}^\infty$ is a sequence of (measurable) functions such that $|f_n| \le g$ for some integrable function $g$ and all $n$, and $\lim_{n\to\infty}f_n = f$ pointwise almost everywhere. Then, $f$ is an integrable function and $\int |f-f_n| \to 0$. In particular, $\lim_{n\to\infty}\int f_n = \int f$ (by the triangle inequality). This can be written as $$ \lim_{n\to\infty}\int f_n = \int \lim_{n\to\infty} f_n.$$
(The statement and conclusion of the monotone convergence theorem are similar, but it has a somewhat different set of hypotheses.)
As you note, the statements of these theorems involve
sequences of functions, i.e., a $1$-discrete-parameter family of functions $\{f_n\}_{n=1}^\infty$. To apply these theorems to a $1$-continuous-parameter family of functions, say $\{f_\epsilon\}_{0<\epsilon<\epsilon_0}$, one typically uses a characterization of limits involving a continuous parameter in terms of sequences:
Proposition. If $f$ is a function, then $$\lim_{\epsilon\to0^+}f(\epsilon) = L \iff \lim_{n\to\infty}f(a_n) = L\quad \text{for $\mathbf{all}$ sequences $a_n\to 0^+$.}$$
With this characterization, we can formulate a version of the dominated convergence theorem involving continuous-parameter families of functions (note that I use quotations to title these versions of the DCT because these names are not standard as far as I know):
"Continuous" DCT.Suppose $\{f_\epsilon\}_{0<\epsilon<\epsilon_0}$ is a $1$-continuous-parameter family of (measurable) functions such that $|f_\epsilon| \le g$ for some integrable function $g$ and all $0<\epsilon<\epsilon_0$, and $\lim_{\epsilon\to0^+}f_\epsilon=f$ pointwise almost everywhere. Then, $f$ is an integrable function and $\lim_{\epsilon\to 0^+}\int f_\epsilon = \int f$. This can be written as $$ \lim_{\epsilon\to0^+}\int f_\epsilon = \int \lim_{\epsilon\to0^+} f_\epsilon.$$
The way we use the continuous DCT in practice is by picking an
arbitrary sequence $\pmb{a_n\to 0^+}$ and showing that the hypotheses of the "discrete" DCT are satisfied for this arbitrary sequence $a_n$, using only the assumption that $a_n\to 0^+$ and properties of the family $\{f_\epsilon\}$ that are known to us.
Let's look at it in a sample case. We want to prove by DCT that $$\lim_{\varepsilon\to0^+} \int_0^\infty e^{-y/\varepsilon}\,dy=0$$
This is the case if and only if for all sequences $\varepsilon_n\to 0^+$ it holds $$\lim_{n\to\infty}\int_0^\infty e^{-y/\varepsilon_n}\,dy=0$$
And now you can use the DCT on each of these sequences. Of course, the limiting function will always be the zero function, and you may take the dominating function to be $e^{-y}$ (which dominates $e^{-y/\varepsilon_n}$ for $y \ge 0$ once $\varepsilon_n \le 1$).
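For what it's worth, a quick numerical check of this example (illustration only; scipy assumed):

```python
import numpy as np
from scipy.integrate import quad

# The integral of exp(-y/eps) over [0, inf) equals eps, which tends to 0.
for eps in [1.0, 0.1, 0.01, 0.001]:
    val, _ = quad(lambda y: np.exp(-y / eps), 0, np.inf)
    print(eps, val)  # val is approximately eps
```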
|
Could two moons that orbit the same terrestrial planet never see each other, even though they orbit the planet at the same time?
The moons have different masses and gravities.
In theory if the two moons were in the exact same orbit on opposite sides of the planet then yes. Having the moons closer to the planet and smaller also makes that easier. For example geostationary satellites over opposite sides of earth will never have direct line of sight to each other.
In practice though that would be a very unstable arrangement (even if there were no other moons to disrupt things) and would also be very unlikely to form naturally.
So it would be very unlikely to form naturally and if it did form it would be unstable ... so realistically the answer is "no" but if you can explain away the improbabilities somehow then "yes".
The moons having different masses doesn't change their behavior in this case. If they are in the same orbit they are in the same orbit.
Yes, this is possible.
A large moon and a smaller moon can share the same orbit if one is 60 degrees ahead of the other. In such an orbit, the smaller moon would be at one of the stable Lagrangian points L4 and L5. If the orbital radius is less than $\frac{R_M}{\cos (30^{\circ})} = \frac{2}{\sqrt{3}}R_M \approx 1.15 R_M$ (where $R_M$ is the radius of the planet), then the planet will block the line of sight between the two moons. That is, each moon will be beyond the horizon as seen from the other moon.
Of course, such orbits would be very close to the planet. Would the moons break apart due to tidal forces? The answer to that is given by the Roche limit, which for a rigid satellite is
$$ d = R_M \left( 2\frac{\rho_M}{\rho_m} \right)^{1/3} $$
where $\rho_M$ and $\rho_m$ are the densities of the planet and the moon respectively. If the moons orbit outside this radius, they will survive. If they are inside the radius, they will break apart. For our scenario, we need the Roche limit to be less than $1.15 R_M$, so the density of the moons must be at least 30% larger (a factor of $\frac{3^{3/2}}{2^2} \approx 1.30$) than the density of the planet.
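A short back-of-the-envelope sketch of that density condition (the planet density below is an arbitrary, roughly Earth-like example value):

```python
# Roche limit d = R * (2 * rho_planet / rho_moon)^(1/3) must stay below
# the sight-blocking radius (2/sqrt(3)) * R ~ 1.15 R.
rho_planet = 5500.0  # kg/m^3, illustrative value

blocking_radius_factor = 2.0 / 3.0**0.5              # ~ 1.1547
min_density_ratio = 2.0 / blocking_radius_factor**3  # ~ 1.30

print(min_density_ratio)               # moons ~30% denser than the planet
print(min_density_ratio * rho_planet)  # ~ 7150 kg/m^3 in this example
```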
Orbits are elliptical, normally quite eccentric - our moon's almost circular orbit is unusual. For two moons not to see each other, both their orbits would have to be extremely circular and almost exactly in the same plane.
The system would be unstable. If one moon led the other by a tiny fraction, it would be accelerated toward the lagging moon, and the lagging moon would be dragged toward the leading one. This would rapidly cause the system to collapse.
However, it is not impossible.
|
Modeling of Materials in Wave Electromagnetics Problems
Whenever we are solving a wave electromagnetics problem in COMSOL Multiphysics, we build a model that is composed of domains and boundary conditions. Within the domains, we use various material models to represent a wide range of substances. However, from a mathematical point of view, all of these different materials end up being handled identically within the governing equation. Let’s take a look at these various material models and discuss when to use them.
What Equations Are We Solving?
Here, we will speak about the frequency-domain form of Maxwell’s equations in the
Electromagnetic Waves, Frequency Domain interface available in the RF Module and the Wave Optics Module. The information presented here also applies to the Electromagnetic Waves, Beam Envelopes formulation in the Wave Optics Module.
Under the assumption that material response is linear with field strength, we formulate Maxwell's equations in the frequency domain, so the governing equation can be written as:
\nabla \times \left( \mu_r^{-1} \nabla \times \mathbf{E} \right) - \frac{\omega^2}{c_0^2} \left( \epsilon_r - \frac{j \sigma}{\omega \epsilon_0} \right) \mathbf{E} = 0
This equation solves for the electric field, \mathbf{E}, at the operating (angular) frequency \omega = 2 \pi f (c_0 is the speed of light in vacuum). The other inputs are the material properties \mu_r, the relative permeability; \epsilon_r, the relative permittivity; and \sigma, the electrical conductivity. All of these material inputs can be positive or negative, real or complex-valued numbers, and they can be scalar or tensor quantities. These material properties can vary as a function of frequency as well, though it is not always necessary to consider this variation if we are only looking at a relatively narrow frequency range.
Let us now explore each of these material properties in detail.
Electrical Conductivity
The
electrical conductivity quantifies how well a material conducts current — it is the inverse of the electrical resistivity. The material conductivity is measured under steady-state (DC) conditions, and we can see from the above equation that as the frequency increases, the effective resistivity of the material increases. We typically assume that the conductivity is constant with frequency, and later on we will examine different models for handling materials with frequency-dependent conductivity.
Any material with non-zero conductivity will conduct current in an applied electric field and dissipate energy as a resistive loss, also called
Joule heating. This will often lead to a measurable rise in temperature, which will alter the conductivity. You can enter any function or tabular data for variation of conductivity with temperature, and there is also a built-in model for linearized resistivity. Linearized Resistivity is a commonly used model for the variation of conductivity with temperature, given by:
\sigma = \frac{1}{\rho_0 \left( 1 + \alpha \left( T - T_{ref} \right) \right)}
where \rho_0 is the reference resistivity, T_{ref} is the reference temperature, and \alpha is the resistivity temperature coefficient. The spatially-varying temperature field, T, can either be specified or computed.
Conductivity is entered as a real-valued number, but it can be anisotropic, meaning that the material’s conductivity varies in different coordinate directions. This is an appropriate approach if you have, for example, a laminated material in which you do not want to explicitly model the individual layers. You can enter a homogenized conductivity for the composite material, which would be either experimentally determined or computed from a separate analysis.
Within the RF Module, there are two other options for computing a homogenized conductivity: Archie’s Law for computing effective conductivity of non-conductive porous media filled with conductive liquid and a Porous Media model for mixtures of materials.
Archie’s Law is a model typically used for the modeling of soils saturated with seawater or crude oil, fluids with relatively higher conductivity compared to the soil. Porous Media refers to a model that has three different options for computing an effective conductivity for a mixture of up to five materials. First, the Volume Average, Conductivity formulation is:
\sum \theta_i \sigma_i
where \theta is the volume fraction of each material. This model is appropriate if the material conductivities are similar. If the conductivities are quite different, the
Volume Average, Resistivity formulation is more appropriate:
\sum\frac{\theta_i}{ \sigma_i}
Lastly, the
Power Law formulation will give a conductivity lying between the other two formulations:
\prod\sigma_i^{\theta_i }
These models are all only appropriate to use if the length scale over which the material properties’ change is much smaller than the wavelength.
Relative Permittivity
The
relative permittivity quantifies how well a material is polarized in response to an applied electric field. It is typical to call any material with \epsilon_r>1 a dielectric material, though even vacuum (\epsilon_r=1) can be called a dielectric. It is also common to use the term dielectric constant to refer to a material’s relative permittivity.
A material’s relative permittivity is often given as a complex-valued number, where the negative imaginary component represents the loss in the material as the electric field changes direction over time. Any material experiencing a time-varying electric field will dissipate some of the electrical energy as heat. Known as
dielectric loss, this results from the change in shape of the electron clouds around the atoms as the electric fields change. Dielectric loss is conceptually distinct from the resistive loss discussed earlier; however, from a mathematical point of view, they are actually handled identically, as a complex-valued term in the governing equation. Keep in mind that COMSOL Multiphysics follows the convention that a negative imaginary component (a positive-valued electrical conductivity) will lead to loss, while a positive imaginary component (a negative-valued electrical conductivity) will lead to gain within the material.
There are seven different material models for the relative permittivity. Let’s take a look at each of these models.
Relative Permittivity is the default option for the RF Module. A real- or complex-valued scalar or tensor value can be entered. The same Porous Media models described above for the electrical conductivity can be used for the relative permittivity. Refractive Index is the default option for the Wave Optics Module. You separately enter the real and imaginary part of the refractive index, called n and k, and the relative permittivity is \epsilon_r=(n-jk)^2. This material model assumes zero conductivity and unit relative permeability. Loss Tangent involves entering a real-valued relative permittivity, \epsilon_r', and a scalar loss tangent, \delta. The relative permittivity is computed via \epsilon_r=\epsilon_r'(1-j \tan \delta), and the material conductivity is zero. Dielectric Loss is the option for entering the real and imaginary components of the relative permittivity \epsilon_r=\epsilon_r'-j \epsilon_r''. Be careful to note the sign: Entering a positive-valued real number for the imaginary component \epsilon_r'' when using this interface will lead to loss, since the multiplication by -j is done within the software. For an example of the appropriate usage of this material model, please see the Optical Scattering off of a Gold Nanosphere tutorial.
The
Drude-Lorentz Dispersion model is a material model that was developed based upon the Drude free electron model and the Lorentz oscillator model. The Drude model (when \omega_0=0) is used for metals and doped semiconductors, while the Lorentz model describes resonant phenomena such as phonon modes and interband transitions. With the sum term, the combination of these two models can accurately describe a wide array of solid materials. It predicts the frequency-dependent variation of complex relative permittivity as:
\epsilon_r = \epsilon_{\infty} + \sum_k \frac{f_k\omega_p^2}{\omega_{0k}^2-\omega^2+i\Gamma_k \omega}
where \epsilon_{\infty} is the high-frequency contribution to the relative permittivity, \omega_p is the plasma frequency, f_k is the oscillator strength, \omega_{0k} is the resonance frequency, and \Gamma_k is the damping coefficient. Since this model computes a complex-valued permittivity, the conductivity inside of COMSOL Multiphysics is set to zero. This approach is one way of modeling frequency-dependent conductivity.
The
Debye Dispersion model is a material model that was developed by Peter Debye and is based on polarization relaxation times. The model is primarily used for polar liquids. It predicts the frequency-dependent variation of complex relative permittivity as:
\epsilon_r = \epsilon_{\infty} + \sum_k \frac{\Delta \epsilon_k}{1+i\omega \tau_k}
where \epsilon_{\infty} is the high-frequency contribution to the relative permittivity, \Delta \epsilon_k is the contribution to the relative permittivity, and \tau_k is the relaxation time. Since this model computes a complex-valued permittivity, the conductivity is assumed to be zero. This is an alternate way to model frequency-dependent conductivity.
The
Sellmeier Dispersion model is available in the Wave Optics Module and is typically used for optical materials. It assumes zero conductivity and unit relative permeability and defines the relative permittivity in terms of the operating wavelength, \lambda, rather than frequency:
\epsilon_r = n^2 = 1 + \sum_k \frac{B_k \lambda^2}{\lambda^2-C_k}
where the coefficients B_k and C_k determine the relative permittivity.
The choice between these seven models will be dictated by the way the material properties are available to you in the technical literature. Keep in mind that, mathematically speaking, they enter the governing equation identically.
Relative Permeability
The
relative permeability quantifies how a material responds to a magnetic field. Any material with \mu_r>1 is typically referred to as a magnetic material. The most common magnetic material on Earth is iron, but pure iron is rarely used for RF or optical applications. It is more typical to work with materials that are ferrimagnetic. Such materials exhibit strong magnetic properties with an anisotropy that can be controlled by an applied DC magnetic field. Unlike iron, ferrimagnetic materials have a very low conductivity, so that high-frequency electromagnetic fields are able to penetrate into and interact with the bulk material. This tutorial demonstrates how to model ferrimagnetic materials.
There are two options available for specifying relative permeability: The
Relative Permeability model, which is the default for the RF Module, and the Magnetic Losses model. The Relative Permeability model allows you to enter a real- or complex-valued scalar or tensor value. The same Porous Media models described above for the electrical conductivity can be used for the relative permeability. The Magnetic Losses model is analogous to the Dielectric Loss model described above in that you enter the real and imaginary components of the relative permeability as real-valued numbers. An imaginary-valued permeability will lead to a magnetic loss in the material.
Modeling and Meshing Notes
In any electromagnetic modeling, one of the most important things to keep in mind is the concept of
skin depth, the distance into a material over which the fields fall off to 1/e of their value at the surface. Skin depth is defined as:
\delta = \left| \text{Im} \left\{ \frac{\omega}{c_0} \sqrt{ \mu_r \left( \epsilon_r - \frac{j \sigma}{\omega \epsilon_0} \right) } \right\} \right|^{-1}
where we have seen that relative permittivity and permeability can be complex-valued.
You should always check the skin depth and compare it to the characteristic size of the domains in your model. If the skin depth is much smaller than the object, you should instead model the domain as a boundary condition as described here: “Modeling Metallic Objects in Wave Electromagnetics Problems“. If the skin depth is comparable to or larger than the object size, then the electromagnetic fields will penetrate into the object and interact significantly within the domain.
A plane wave incident upon objects of different conductivities and hence different skin depths. When the skin depth is smaller than the wavelength, a boundary layer mesh is used (right). The electric field is plotted.
If the skin depth is smaller than the object, it is advised to use boundary layer meshing to resolve the strong variations in the fields in the direction normal to the boundary, with a minimum of one element per skin depth and a minimum of three boundary layer elements. If the skin depth is larger than the effective wavelength in the medium, it is sufficient to resolve the wavelength in the medium itself with five elements per wavelength, as shown in the left figure above.
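As a rough illustration of this check (my sketch, not from the original post), the following snippet evaluates the familiar good-conductor limit of the skin depth, \delta \approx \sqrt{2/(\omega \mu \sigma)}, for copper at 1 GHz; the numbers are textbook values, not COMSOL inputs:

```python
import math

mu0 = 4e-7 * math.pi   # vacuum permeability, H/m
sigma = 5.8e7          # conductivity of copper, S/m
f = 1e9                # operating frequency, Hz
omega = 2 * math.pi * f

delta = math.sqrt(2.0 / (omega * mu0 * sigma))
print(delta)  # ~2.1e-6 m, far smaller than most parts, so a boundary
              # condition is preferable to meshing the metal domain
```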
Summary
In this blog post, we have looked at the various options available for defining the material properties within your electromagnetic wave models in COMSOL Multiphysics. We have seen that the material models for defining the relative permittivity are appropriate even for metals over a certain frequency range. On the other hand, we can also define metal domains via boundary conditions, as previously highlighted on the blog. Along with earlier blog posts on modeling open boundary conditions and modeling ports, we have now covered almost all of the fundamentals of modeling electromagnetic waves. There are, however, a few more points that remain. Stay tuned!
|
Now we know that some functions can be expressed as power series, which look like infinite polynomials. Since calculus, that is, computation of derivatives and antiderivatives, is easy for polynomials, the obvious question is whether the same is true for infinite series. The answer is yes:
Theorem 13.9.1 Suppose the power series $f(x)=\ds\sum_{n=0}^\infty a_n(x-a)^n$ has radius of convergence $R$. Then $$\eqalign{ f'(x)&=\sum_{n=0}^\infty na_n(x-a)^{n-1},\cr \int f(x)\,dx &= C+\sum_{n=0}^\infty {a_n\over n+1}(x-a)^{n+1},\cr }$$ and these two series have radius of convergence $R$ as well.
Example 13.9.2 Starting with the geometric series: $$\eqalign{ {1\over 1-x} &= \sum_{n=0}^\infty x^n\cr \int{1\over 1-x}\,dx &= -\ln|1-x| = \sum_{n=0}^\infty {1\over n+1}x^{n+1}\cr \ln|1-x| &= \sum_{n=0}^\infty -{1\over n+1}x^{n+1}\cr }$$ when $|x|< 1$. The series does not converge when $x=1$ but does converge when $x=-1$ or $1-x=2$. The interval of convergence is $[-1,1)$, or $0< 1-x\le2$, so we can use the series to represent $\ln(x)$ when $0< x\le2$. For example $$ \ln(3/2)=\ln(1-(-1/2))= \sum_{n=0}^\infty (-1)^n{1\over n+1}{1\over 2^{n+1}} $$ and so $$ \ln(3/2)\approx {1\over 2}-{1\over 8}+{1\over 24}-{1\over 64} +{1\over 160}-{1\over 384}+{1\over 896} ={909\over 2240}\approx 0.406.$$ Because this is an alternating series with decreasing terms, we know that the true value is between $909/2240$ and $909/2240-1/2048=29053/71680\approx .4053$, so correct to two decimal places the value is $0.41$.
What about $\ln(9/4)$? Since $9/4$ is larger than 2 we cannot use the series directly, but $$\ln(9/4)=\ln((3/2)^2)=2\ln(3/2)\approx 0.82,$$ so in fact we get a lot more from this one calculation than first meets the eye. To estimate the true value accurately we actually need to be a bit more careful. When we multiply by two we know that the true value is between $0.8106$ and $0.812$, so rounded to two decimal places the true value is $0.81$.
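A quick numerical check of these estimates (an aside, not part of the text):

```python
import math

# Partial sum of the alternating series for ln(3/2), seven terms as above.
s = sum((-1) ** n / ((n + 1) * 2 ** (n + 1)) for n in range(7))
print(s, 909 / 2240)              # 0.405803..., matching 909/2240
print(math.log(3 / 2))            # 0.405465..., so 0.41 to two decimals
print(2 * s, math.log(9 / 4))     # 0.8116... vs 0.8109...: 0.81 to two decimals
```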
Exercises 13.9
Ex 13.9.1Find a series representation for $\ln 2$.(answer)
Ex 13.9.2Find a power series representation for $\ds 1/(1-x)^2$.(answer)
Ex 13.9.3Find a power series representation for $\ds 2/(1-x)^3$.(answer)
Ex 13.9.4Find a power series representation for $\ds 1/(1-x)^3$.What is the radius of convergence?(answer)
Ex 13.9.5Find a power series representation for $\ds\int\ln(1-x)\,dx$.(answer)
|
The previous section showed that, in some ways, derivatives behave nicely. The Constant Multiple and Sum/Difference Rules established that the derivative of \(f(x) = 5x^2+\sin x \) was not complicated. We neglected computing the derivative of things like \(g(x) = 5x^2\sin x\) and \(h(x) = \frac{5x^2}{\sin x}\) on purpose; their derivatives are
not as straightforward. (If you had to guess what their respective derivatives are, you would probably guess wrong.) For these, we need the Product and Quotient Rules, respectively, which are defined in this section.
We begin with the Product Rule.
Theorem 14: Product Rule
Let \(f\) and \(g\) be differentiable functions on an open interval \(I\). Then \(fg\) is a differentiable function on \(I\), and \[\frac{d}{dx}\Big(f(x)g(x)\Big) = f(x)g^\prime(x) + f^\prime(x)g(x).\]
Important: \(\frac{d}{dx}\Big(f(x)g(x)\Big) \neq f^\prime(x)g^\prime(x)\)! While this answer is simpler than the Product Rule, it is wrong.
We practice using this new rule in an example, followed by an example that demonstrates why this theorem is true.
Example 49: Using the Product Rule
Use the Product Rule to compute the derivative of \(y=5x^2\sin x\). Evaluate the derivative at \(x=\pi/2\).
Solution:
To make our use of the Product Rule explicit, let's set \(f(x) = 5x^2\) and \(g(x) = \sin x\). We easily compute/recall that \(f^\prime(x) = 10x\) and \(g^\prime (x) = \cos x\). Employing the rule, we have \[\frac{d}{dx}\Big(5x^2\sin x\Big) = 5x^2\cos x + 10x\sin x.\]
At \(x=\pi/2\), we have \[y^\prime (\pi/2) = 5\left(\frac{\pi}{2}\right)^2\cos \left(\frac{\pi}2\right) + 10\frac{\pi}2 \sin\left(\frac{\pi}{2}\right) = 5\pi.\]We graph \(y\) and its tangent line at \(x=\pi/2\), which has a slope of \(5\pi\), in Figure 2.15. While this does not
prove that the Product Rule is the correct way to handle derivatives of products, it helps validate its truth.
Figure 2.15: A graph of \(y=5x^2\sin x\) and its tangent line at \(x=\pi /2\).
We now investigate why the Product Rule is true.
Example 50: A proof of the Product Rule
Use the definition of the derivative to prove Theorem 14.
Solution:
By the limit definition, we have
\[\frac{d}{dx}\Big(f(x)g(x)\Big) =\lim_{h\to0} \frac{f(x+h)g(x+h)-f(x)g(x)}{h}.\]
We now do something a bit unexpected; add 0 to the numerator (so that nothing is changed) in the form of \(-f(x+h)g(x)+f(x+h)g(x)\), then do some regrouping as shown.
\[\begin{align*}\frac{d}{dx}\Big(f(x)g(x)\Big) &=\lim_{h\to0} \frac{f(x+h)g(x+h)-f(x)g(x)}{h} \quad \text{(now add 0 to the numerator)}\\ &= \lim_{h\to0} \frac{f(x+h)g(x+h)-f(x+h)g(x)+f(x+h)g(x)-f(x)g(x)}{h} \quad \text{(regroup)} \\ &= \lim_{h\to0} \frac{\Big(f(x+h)g(x+h)-f(x+h)g(x)\Big)+\Big(f(x+h)g(x)-f(x)g(x)\Big)}{h}\\ &= \lim_{h\to0} \frac{f(x+h)g(x+h)-f(x+h)g(x)}{h}+\lim_{h\to0}\frac{f(x+h)g(x)-f(x)g(x)}{h}\quad\text{(factor)}\\ &=\lim_{h\to0} f(x+h)\frac{g(x+h)-g(x)}{h}+\lim_{h\to0}\frac{f(x+h)-f(x)}{h}g(x)\quad \text{(apply limits)}\\ &=f(x)g^\prime(x) + f^\prime(x)g(x) \end{align*}\]
It is often true that we can recognize that a theorem is true through its proof yet somehow doubt its applicability to real problems. In the following example, we compute the derivative of a product of functions in two ways to verify that the Product Rule is indeed "right.''
Example 51: Exploring alternate derivative methods
Let \(y = (x^2+3x+1)(2x^2-3x+1)\). Find \(y^\prime\) two ways: first, by expanding the given product and then taking the derivative, and second, by applying the Product Rule. Verify that both methods give the same answer.
Solution:
We first expand the expression for \(y\); a little algebra shows that \(y = 2x^4+3x^3-6x^2+1\). It is easy to compute \(y^\prime\); \[y^\prime = 8x^3+9x^2-12x.\]
Now apply the Product Rule.
\[\begin{align*}y^\prime &= (x^2+3x+1)(4x-3)+(2x+3)(2x^2-3x+1) \\ &= \big(4x^3+9x^2-5x-3\big) + \big(4x^3-7x+3\big)\\ & = 8x^3+9x^2-12x. \end{align*}\]
The uninformed usually assume that "the derivative of the product is the product of the derivatives.'' Thus we are tempted to say that \(y^\prime = (2x+3)(4x-3) = 8x^2+6x-9\). Obviously this is not correct.
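For readers who like to double-check symbolically, here is a short sympy verification of Example 51 (an illustration; sympy is assumed to be available):

```python
import sympy as sp

x = sp.symbols('x')
y = (x**2 + 3*x + 1) * (2*x**2 - 3*x + 1)

expand_first = sp.diff(sp.expand(y), x)   # expand, then differentiate
direct = sp.expand(sp.diff(y, x))         # differentiate the product directly

print(expand_first)                        # 8*x**3 + 9*x**2 - 12*x
assert sp.simplify(expand_first - direct) == 0
```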
Example 52: Using the Product Rule with a product of these three functions
Let \(y = x^3\ln x\cos x\). Find \(y^\prime\).
Solution:
We have a product of three functions while the Product Rule only specifies how to handle a product of two functions. Our method of handling this problem is to simply group the latter two functions together, and consider \(y = x^3\big(\ln x\cos x\big)\). Following the Product Rule, we have
\[\begin{align*} y^\prime &= (x^3)\big(\ln x\cos x\big)' + 3x^2\big(\ln x\cos x\big) \\ &\text{To evaluate \(\big(\ln x\cos x\big)^\prime\), we apply the Product Rule again:}\\ &= (x^3)\big(\ln x(-\sin x) + \frac1x\cos x\big)+ 3x^2\big(\ln x\cos x\big)\\ &= x^3\ln x(-\sin x) + x^3\frac1x\cos x+ 3x^2\ln x\cos x \end{align*}\]
Recognize the pattern in our answer above: when applying the Product Rule to a product of three functions, there are three terms added together in the final derivative. Each term contains only one derivative of one of the original functions, and each function's derivative shows up in only one term. It is straightforward to extend this pattern to finding the derivative of a product of 4 or more functions.
We consider one more example before discussing another derivative rule.
Example 53: Using the Product Rule
Find the derivatives of the following functions.
\(f(x) = x\ln x\)
\(g(x) = x\ln x - x\)
Solution:
Recalling that the derivative of \(\ln x\) is \(1/x\), we use the Product Rule to find our answers.
\( \frac{d}{dx}\Big(x\ln x\Big) = x\cdot 1/x + 1\cdot \ln x = 1+\ln x\). Using the result from above, we compute \[ \frac{d}{dx}\Big(x\ln x-x\Big) = 1+\ln x-1 = \ln x.\]
This seems significant; if the natural log function \(\ln x\) is an important function (it is), it seems worthwhile to know a function whose derivative is \(\ln x\). We have found one. (We leave it to the reader to find another; a correct answer will be
very similar to this one.)
We have learned how to compute the derivatives of sums, differences, and products of functions. We now learn how to find the derivative of a quotient of functions.
Theorem 15: Quotient Rule
Let \(f\) and \(g\) be functions defined on an open interval \(I\), where \(g(x) \neq 0\) on \(I\). Then \(f/g\) is differentiable on \(I\), and
\[\frac{d}{dx}\left(\frac{f(x)}{g(x)}\right) = \frac{g(x)f^\prime(x) - f(x)g^\prime(x)}{g(x)^2}.\]
The Quotient Rule is not hard to use, although it might be a bit tricky to remember. A useful mnemonic works as follows. Consider a fraction's numerator and denominator as "HI'' and "LO'', respectively. Then \[\frac{d}{dx}\left(\frac{\text{HI}}{\text{LO}}\right) = \frac{\text{LO}\cdot \text{dHI} - \text{HI} \cdot \text{dLO}}{\text{LOLO}},\]read "low dee high minus high dee low, over low low.'' Said fast, that phrase can roll off the tongue, making it easy to memorize. The "dee high'' and "dee low'' parts refer to the derivatives of the numerator and denominator, respectively.
Let's practice using the Quotient Rule.
Example 54: Using the Quotient Rule
Let \( f(x) = \frac{5x^2}{\sin x}\). Find \(f^\prime(x)\).
Solution:
Directly applying the Quotient Rule gives:
\[\begin{align*} \frac{d}{dx}\left(\frac{5x^2}{\sin x}\right) &= \frac{\sin x\cdot 10x - 5x^2\cdot \cos x}{\sin^2x} \\ &= \frac{10x\sin x - 5x^2\cos x}{\sin^2 x}. \end{align*}\]
The Quotient Rule allows us to fill in holes in our understanding of derivatives of the common trigonometric functions. We start with finding the derivative of the tangent function.
Example 55
Using the Quotient Rule to find \(\frac{d}{dx}\big(\tan x\big)\).
Find the derivative of \(y=\tan x\).
Solution:
At first, one might feel unequipped to answer this question. But recall that \(\tan x = \sin x/\cos x\), so we can apply the Quotient Rule.
\[\begin{align*} \frac{d}{dx}\Big(\tan x\Big) &= \frac{d}{dx}\left(\frac{\sin x}{\cos x}\right) \\ &= \frac{\cos x \cos x - \sin x (-\sin x)}{\cos^2 x} \\ &= \frac{\cos^2x+\sin^2x}{\cos^2x}\\ &= \frac{1}{\cos^2x} \\ &= \sec ^2 x. \end{align*}\]
This is a beautiful result. To confirm its truth, we can find the equation of the tangent line to \(y=\tan x\) at \(x=\pi/4\). The slope is \(\sec^2(\pi/4) = 2\); \(y=\tan x\), along with its tangent line, is graphed in Figure 2.16.
Figure 2.16: A graph of \(y=\tan x\) along with its tangent line at \(x=\pi /4\).
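The same result can be confirmed symbolically (illustration only; sympy assumed):

```python
import sympy as sp

x = sp.symbols('x')
dtan = sp.diff(sp.tan(x), x)
print(dtan)                                   # tan(x)**2 + 1
assert sp.simplify(dtan - sp.sec(x)**2) == 0  # i.e. it equals sec(x)**2
```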
We include this result in the following theorem about the derivatives of the trigonometric functions. Recall we found the derivative of \(y=\sin x\) in Example 38 and stated the derivative of the cosine function in Theorem 12. The derivatives of the cotangent, cosecant and secant functions can all be computed directly using Theorem 12 and the Quotient Rule.
Theorem 16: Derivatives of Trigonometric Functions
\(\frac{d}{dx}\big(\sin x\big) = \cos x \qquad\qquad \frac{d}{dx}\big(\csc x\big) = -\csc x\cot x\)
\(\frac{d}{dx}\big(\cos x\big) = -\sin x \qquad\qquad \frac{d}{dx}\big(\sec x\big) = \sec x\tan x\)
\(\frac{d}{dx}\big(\tan x\big) = \sec^2 x \qquad\qquad \frac{d}{dx}\big(\cot x\big) = -\csc^2 x\)
To remember the above, it may be helpful to keep in mind that the derivatives of the trigonometric functions that start with "c'' have a minus sign in them.
Example 56: Exploring alternative derivative methods
In Example 54 the derivative of \( f(x) = \frac{5x^2}{\sin x}\) was found using the Quotient Rule. Rewriting \(f\) as \(f(x) = 5x^2\csc x\), find \(f^\prime\) using Theorem 16 and verify the two answers are the same.
Solution:
We found in Example 54 that the \( f^\prime(x) = \frac{10x\sin x - 5x^2\cos x}{\sin^2 x}\). We now find \(f^\prime\) using the Product Rule, considering \(f\) as \(f(x) = 5x^2\csc x\).
\[\begin{align*} f^\prime(x) &= \frac{d}{dx}\Big(5x^2\csc x\Big) \\ &= 5x^2(-\csc x\cot x) + 10x\csc x \quad \text{(now rewrite trig functions)}\\ &= 5x^2\cdot \frac{-1}{\sin x}\cdot \frac{\cos x}{\sin x} + \frac{10x}{\sin x}\\ &= \frac{-5x^2\cos x}{\sin ^2x}+\frac{10x}{\sin x}\quad \text{(get common denominator)}\\ &= \frac{10x\sin x - 5x^2\cos x}{\sin^2x} \end{align*}\]
Finding \(f^\prime\) using either method returned the same result. At first, the answers looked different, but some algebra verified they are the same. In general, there is not one final form that we seek; the immediate result from the Product Rule is fine. Work to "simplify'' your results into a form that is most readable and useful to you.
The Quotient Rule gives other useful results, as show in the next example.
Example 57: Using the Quotient Rule to expand the Power Rule
Find the derivatives of the following functions.
\(f(x) = \frac{1}{x}\)
\( f(x)= \frac{1}{x^n}\), where \(n>0\) is an integer.
Solution:
We employ the Quotient Rule.
\( f^\prime(x) = \frac{x\cdot 0 - 1\cdot 1}{x^2} = -\frac{1}{x^2}\). \(f^\prime(x) = \frac{x^n\cdot 0 - 1\cdot nx^{n-1}}{(x^n)^2} = -\frac{nx^{n-1}}{x^{2n}} = -\frac{n}{x^{n+1}}.\)
The derivative of \( y=\frac{1}{x^n}\) turned out to be rather nice. It gets better. Consider:
\[\begin{align*} \frac{d}{dx}\left(\frac{1}{x^n}\right) &= \frac{d}{dx}\Big(x^{-n}\Big)\quad \text{(rewrite algebraically)}\\ &= -\frac{n}{x^{n+1}}\quad\text{(apply result from Example 57)} \\ &= -nx^{-(n+1)} \\ &= -nx^{-n-1}. \end{align*}\]
This is reminiscent of the Power Rule: multiply by the power, then subtract 1 from the power. We now add to our previous Power Rule, which had the restriction of \(n>0\).
Theorem 17: Power Rule with Integer Exponents
Let \(f(x) = x^n\), where \(n\neq 0\) is an integer. Then \[f^\prime(x) = n\cdot x^{n-1}.\]
Taking the derivative of many functions is relatively straightforward. It is clear (with practice) what rules apply and in what order they should be applied. Other functions present multiple paths; different rules may be applied depending on how the function is treated. One of the beautiful things about calculus is that there is not "the'' right way; each path, when applied correctly, leads to the same result, the derivative. We demonstrate this concept in an example.
Example 58: Exploring alternate derivative methods
Let \(f(x) = \frac{x^2-3x+1}{x}\). Find \(f^\prime(x)\) in each of the following ways:
By applying the Quotient Rule, by viewing \(f\) as \(f(x) = \big(x^2-3x+1\big)\cdot x^{-1}\) and applying the Product and Power Rules, and by "simplifying'' first through division.
Verify that all three methods give the same result.
Solution: Applying the Quotient Rule gives: \[ f^\prime(x) = \frac{x\cdot\big(2x-3\big)-\big(x^2-3x+1\big)\cdot 1}{x^2} = \frac{x^2-1}{x^2} = 1-\frac{1}{x^2}.\] By rewriting \(f\), we can apply the Product and Power Rules as follows:\[\begin{align*} f^\prime(x) &= \big(x^2-3x+1\big)\cdot (-1)x^{-2} + \big(2x-3\big)\cdot x^{-1} \\ &= -\frac{x^2-3x+1}{x^2}+\frac{2x-3}{x} \\ &= -\frac{x^2-3x+1}{x^2}+\frac{2x^2-3x}{x^2}\\ &= \frac{x^2-1}{x^2} = 1-\frac{1}{x^2}, \end{align*}\]the same result as above. As \(x\neq 0\), we can divide through by \(x\) first, giving \( f(x) = x-3+\frac{1}x\). Now apply the Power Rule. \[f^\prime(x) = 1-\frac{1}{x^2},\]the same result as before.
Example 58 demonstrates three methods of finding \(f^\prime\). One is hard pressed to argue for a "best method'' as all three gave the same result without too much difficulty, although it is clear that using the Product Rule required more steps. Ultimately, the important principle to take away from this is: reduce the answer to a form that seems "simple'' and easy to interpret. In that example, we saw different expressions for \(f^\prime\), including:
\[1-\frac{1}{x^2} = \frac{x\cdot\big(2x-3\big)-\big(x^2-3x+1\big)\cdot 1}{x^2} = \big(x^2-3x+1\big)\cdot (-1)x^{-2} + \big(2x-3\big)\cdot x^{-1}.\]
They are equal; they are all correct; only the first is "clear.'' Work to make answers clear.
In the next section we continue to learn rules that allow us to more easily compute derivatives than using the limit definition directly. We have to memorize the derivatives of a certain set of functions, such as "the derivative of \(\sin x\) is \(\cos x\).'' The Sum/Difference, Constant Multiple, Power, Product and Quotient Rules show us how to find the derivatives of certain combinations of these functions. The next section shows how to find the derivatives when we
compose these functions together.
|
Since log is such a basic element in secondary education, such questions are accessible to most students. In fact, they appear frequently in one of Taiwan's university qualifying exams, the AST. You are given the values of log 2 and log 3, and then you are required to approximate some
logged numbers, without a calculator of course.
Ironically, when you are asked to approximate something that is not a multiple of 2, 3 and 5, the best approach is to go back to the natural log...
Wait. The base e is irrational, right? How can we approximate the natural log without using a calculator?
Let us define our rule before proceeding.
(1) The approximations log 2 ~= 0.30103 and log 3 ~= 0.47712 can be used. Here log always means log base 10 and ln is the natural log.
(2) Arithmetic and integral powers may be used.
(3) Calculating the log function is not allowed, unless it is an approximation already made.
*
Consider the simplest non-trivial log-number.
Question: Approximate log 7. (A. 0.8450980...)
Solution 1. Approximate by the average of log 6 and log 8, which are products of 2s and 3s.
That gives 0.84062 with an error of about 0.5% and is only correct to 1dp. Can we do better?
Observe that 7^2 = 49 is also sandwiched by two nice numbers, 48 and 50.
Solution 2.
$\log 7 \approx \frac{1}{4}(2+3\log 2+\log 3) \approx 0.84505$
That gives an error of 0.00005 and is accurate up to 4dp. That should be enough for most question.
*
But some are not satisfied, because there is no error analysis. We get the right answer because we calculated accurately enough, but we have no idea why this is accurate enough. To do this we can go back to our good ol' partner: the linear approximation. The derivative says
$\frac{d \log x}{dx} = \frac{1}{x \ln 10}$
so linear approximation says
$\log (x+a) \approx \log x + \frac{a}{x\ln 10}$
If you are using a calculator then it gives 0.84514... (both from 48 and 50), which is of the same accuracy as Solution 2. The problem is...how to calculate ln 10?
The only way to do it is to compare the powers. We know it is a bit more than 2, so we can prove something like
$\ln 10 > 2.2 \Leftrightarrow e < 10^5 e^{-10}$
Using the approximation $e\approx 2.71828$ we have $1.33 < 10e^{-2} < 1.36$, so $10^5e^{-10} > 1.33^5 > 4 > e$.
Similarly, we can prove $\frac{9}{4} < \ln 10 < \frac{7}{3}$. In fact, $\ln 10 \approx 2.302$, but it would be too hard to compare $e^{2.3}$ and $10$. Going back to our linear approximation. For the sake of killing off the denominator we multiply all terms by 50:
$50 \log 50 - \frac{50}{49 \ln 10} < 100 \log 7 < 50 \log 50 - \frac{1}{ \ln 10}$
Apply $\ln 10 > 2.25$ and $(\ln 10)^{-1} > 0.42857$ here:
$84.9485 - \frac{50}{49 \times 2.25} < 100 \log 7 < 84.9485 - 0.42857$
$0.8449499 < \log 7 < 0.8451993$
with a maximum error of 0.0001247. This is really accurate just by hand.
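If you do allow yourself a calculator afterwards, the claims above check out (illustration only):

```python
import math

log7 = math.log10(7)
sol2 = (2 + 3 * math.log10(2) + math.log10(3)) / 4  # (log 48 + log 50) / 4

print(log7)                          # 0.8450980...
print(sol2, abs(log7 - sol2))        # 0.845052..., error ~ 5e-5
print(0.8449499 < log7 < 0.8451993)  # True: the hand-derived bounds hold
```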
In a more general set up, one may argue that even calculating Taylor series by hand would be more accurate, but if you sense something nice about the number and its neighbors there is nothing bad on taking a shortcut.
*
Knowing the approximate value of ln 10, we can actually bound the error terms from the two solutions, still without calculator but rather painlessly.
Exercise: Prove that in solution 2, the error is less than 0.0001. That is, to prove that
$| \log 7 - \frac{1}{4} (\log 48 + \log 50)| < 10^{-4}$.
Hint: when you apply linear approximation to get log 49 from either log 48 or log 50,
both estimates are larger than the actual value. From there it suffices to find a quadratic error term that is accurate enough. Oh of course you may want to use calculus here :)
Solution
|
Let $S=\prod_{i=1}^{n}{R_i}$ where each $R_i$ is a commutative ring with identity. The prime ideals of $S$ are of the form $\prod_{i=1}^{n}{P_i}$ where for some $j$, $P_j$ is a prime ideal of $R_j$ and for $i\neq j$, $P_i=R_i$.
It is clear that any ideal of $S$ of the form stated above is prime. Let $P$ be a prime ideal of $S$. For $1\leq k\leq n$, let $e_k$ be the element of $S$ whose $k$th coordinate is $1$ and all other coordinates are $0$. $P$ is proper, so some $e_j$ (say $e_1$) is not in $P$. For $k\neq 1$ we have $e_{1}e_k=0\in P$, so $e_k\in P$. Thus $0\times \prod_{i=2}^{n}{R_i}\subseteq P$. Let $\pi_1\colon S\to R_1$ be the canonical projection. Then $\pi_1(P)$ is a prime ideal of $R_1$ and $P=\pi_1(P)\times \prod_{i=2}^{n}{R_i}.$
Let $R_{1}$ and $R_{2}$ be two commutative rings with unity.
Let $P$ be a prime ideal in $S=R_{1} \times R_{2}$.
Let $ P= P_{1} \times P_{2}$ where $P_{1}$ is an ideal in $R_{1}$ and $P_{2}$ is an ideal in $R_{2}$.
Then $S/P \simeq R_{1}/P_{1} \times R_{2}/P_{2} $.
Since the product of two integral domains is not an integral domain,
only one of $P_{1}$ or $P_{2}$ can be a prime ideal, and the other must be the corresponding whole ring.
|
Convolution & Cross-correlation
The primary implementation of convolution and cross-correlation.
Notes
You can refer to the complete notes from 04 Notes
Linear Systems (Filter)
Denote the input function by:
$$f[m,n]$$
A filter is used to convert the input function to an output (or response) function:
$$g[m,n]$$
$\mathcal{S}$ is referred to as the
system operator, which maps a member of the set of possible
inputs $f[m,n]$ to a member of the set of possible outputs $g[m,n]$. When using notation involving $\mathcal{S}$, we can write that
$$\begin{aligned}
\mathcal{S}[f]&=g \\ \mathcal{S}\left\{f[m,n]\right\}&=g[m,n] \\ f[m,n]&\mapsto{g[m,n]} \end{aligned}$$
Examples of Filters
Moving Average
If we want to smooth or blur the image, we can set the value of a pixel to be the average of its neighboring pixels:
$$g[m,n]=\frac{1}{9}\sum_{i=-1}^{1}\sum_{j=-1}^1f[m-i,n-j]$$
Image Segmentation
The value of a pixel can be set:
$$g[m,n]=\begin{cases}
255 & f[m,n]\geq{t}\\ 0&\text{otherwise} \end{cases}$$
Properties of Systems
Not all systems will have the following (or any) of these properties in general. You can refer to the notes or slides for other properties.
Amplitude Properties
Superposition (叠加原理)
$$\mathcal{S}[\alpha{f_i[n,m]}+\beta{f_j[n,m]}]=\alpha\mathcal{S}[f_i[n,m]]+\beta\mathcal{S}[f_j[n,m]]$$
Shift Invariance (平移不变性)
$$f[m-m_0,n-n_0]\xrightarrow{\mathcal{S}}g[m-m_0,n-n_0]$$
Linear Systems
A linear system is a system that satisfies the property of superposition.
Linear systems are often characterized by the
impulse response (脉冲响应) of the system $\mathcal{S}$. Consider a function $\delta_2[m,n]$:
$$\delta_2[m,n]=\begin{cases}
1&m=0\text{ and }n=0\\ 0&\text{otherwise} \end{cases}$$
The impulse response $r$ is:
$$r=\mathcal{S}[\delta_2]$$
A simple linear shift-invariant system is a system that shifts the pixels of an image, based on the shifting property of the delta function.
$$f[m,n]=\sum_{i=-\infty}^\infty\sum_{j=-\infty}^\infty{f[i,j]\delta_2[m-i,n-j]}$$
Then we can use the superposition property to write
any linear shift-invariant system as a weighted sum of such shifting systems:
$$\alpha_1\sum_{i=-\infty}^\infty\sum_{j=-\infty}^\infty{f[i,j]\delta_{2,1}[m-i,n-j]}+\alpha_2\sum_{i=-\infty}^\infty\sum_{j=-\infty}^\infty{f[i,j]\delta_{2,2}[m-i,n-j]}+\cdots$$
We can define the filter $h$ of a linear shift-invariant system as
$$h[m,n]=\alpha_1\delta_{2,1}[m-i,n-j]+\alpha_2\delta_{2,2}[m-i,n-j]+\cdots$$
Linear shift invariant systems (LSI):
Systems that satisfy the superposition property
Have an impulse response: $\mathcal{S}[\delta_2[n,m]]=h[n,m]$
Discrete convolution: $f[n,m]*h[n,m]$ (multiplication of shifted versions of the impulse response by the original function)
Convolution and Correlation
Convolution
The impulse function $\delta[n]$ is defined to be 1 at $n=0$ and 0 elsewhere. An arbitrary input signal $x$ can be written as $x[n]=\sum_{k=-\infty}^\infty{x[k]\delta[n-k]}$. If we pass this equation into a linear, shift-invariant system, the output is $y[n]=\sum_{k=-\infty}^\infty{x[k]h[n-k]}$, i.e. the convolution of the signal $x$ with the impulse response $h$.
However, an image is written in the form of a matrix. For a linear, shift-invariant system, the output is $y[n,m]=\sum_{i=-\infty}^\infty\sum_{j=-\infty}^\infty{x[i,j]h[n-i,m-j]}$, i.e. the convolution of the signal $x$ with the impulse response $h$ in 2 dimensions.
Correlation (互相关)
Cross-correlation is often used to measure the similarity of two images. Cross-correlation is the same as convolution, except that the filter kernel is not flipped. Two-dimensional cross-correlation is:
$$r[k,l]=\sum_{m=-\infty}^\infty\sum_{n=-\infty}^\infty{f[m+k,n+l]}g[m,n]$$
Summary
The steps for discrete convolution are:
- Fold $h[k,l]$ about the origin to form $h[-k,-l]$
- Shift the folded result by $n$ to form $h[n-k]$
- Multiply $h[n-k]$ by $f[k]$
- Sum over all $k$
- Repeat for every $n$
Cross correlation, however, does not fold the filter. Convolution aims to calculate a weighted sum of $f[m,n]$, weighted by the folded and shifted filter $h[k,l]$.
In this picture, we assume the function $f$ has height 1.0. The value of the result at 5 different points is indicated by the shaded area below each point. Also, the symmetry of $f$ is the reason $f*g$ and $f\star{g}$ are identical in this example.
Cross correlation does not satisfy the commutative property (commutativity):
$$(f\star g)(t)=(g\star f)(-t)$$
However, convolution satisfies the commutative property:
$$f*g=g*f$$
How to run
Download the files from my Github repo.
hw1.ipynb is the interface for users to debug/visualize the code in
filter.py
|
Edit: I am seeking a solution that uses only calculus and real analysis methods -- not complex analysis. This is an old advanced calculus exam question, and I think we are not allowed to use any complex analysis that could make the problem statement a triviality.
Show that the series
$$\sum_{n=2}^{\infty} \frac{\sin(n)}{\log(n)}$$
converges.
Any hints or suggestions are welcome.
Some thoughts:
The integral test is not applicable here, since the summands are not positive.
The Dirichlet test does not seem applicable either: if I let 1/log(n) be the decreasing sequence, I would need the partial sums of sin(n) to be bounded, and it is not obvious to me that they are.
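A quick numerical check (a sketch, not a proof) suggests the partial sums $\sum_{n\le N}\sin(n)$ do stay bounded, consistent with the closed form $\sum_{n=1}^{N}\sin(n)=\frac{\sin(N/2)\sin((N+1)/2)}{\sin(1/2)}$, which is bounded by $1/\sin(1/2)\approx 2.09$:

```python
import math

# Track the running partial sums of sin(n) and their largest magnitude.
s, worst = 0.0, 0.0
for n in range(1, 10**6 + 1):
    s += math.sin(n)
    worst = max(worst, abs(s))

print(worst)               # never exceeds the bound below
print(1 / math.sin(0.5))   # theoretical bound, ~ 2.0858
```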
Thanks,
|
I'm learning to integrate and I'd like to hear what your favorite integration tricks are.
I can't contribute much to this thread, but I like the fact that:
$$\int_{-a}^{a}f(x)\,dx=0 \quad\text{if } f(x) \text{ is odd}$$
There is even a separate thread on stack-exchange on integration by parts.
The discrete analog of integration by parts, i.e. summation by parts, is also an important tool, especially in analytic number theory when we want to find asymptotics. For instance, my recent post here uses this to get an estimate of $\displaystyle \sum_{n=N+1}^{\infty} \dfrac1{n^s}$.
One I really like is this one:
If $f$ is a continuous function for which $f(a+b-t)=f(t)$ then $$\int_a^b t\cdot f(t) \mathrm{d}t=\frac{a+b}{2}\int_a^bf(t) \mathrm{d}t$$
Example :
$$\begin{align} \int_0^{\pi} \frac{x\sin(x)}{1+\cos^2 (x)}\mathrm{d}x &=\frac{\pi}{2}\int_0^{\pi} \frac{\sin(x)}{1+\cos^2 (x)}\mathrm{d}x\\ &=\frac{\pi}{2} \left[-\arctan(\cos(x))\right]_0^{\pi} \\ &=\frac{\pi^2}{4}\end{align}$$
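For the skeptical, here is a quick numerical check of this example (a sketch using SciPy); the answer should be $\pi^2/4 \approx 2.4674$:

```python
import math
from scipy.integrate import quad

# Numerically integrate x*sin(x)/(1+cos^2(x)) over [0, pi].
val, err = quad(lambda x: x * math.sin(x) / (1 + math.cos(x)**2), 0, math.pi)
print(val, math.pi**2 / 4)   # both ~ 2.4674
```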
|
The motivation comes from the following question on MathOverflow:
Question 1. Is there a natural number $n$ satisfying the equation $n(n+1)=2^{[\log_{2} n]+2}$, where $[\,\cdot\,]$ denotes the floor function? I am also interested in the minimum $n$ satisfying this equation, if there is any.
As the answer to the above question is negative, let's consider the following more general form:
Question 2. Are there a natural number $n$ and a prime number $p$ satisfying the equation $\frac{n(n+1)}{2}=\frac{p}{p-1}\times p^{[\log_{p} n]}+\frac{p-2}{p-1}\times p^n$, where $[\,\cdot\,]$ denotes the floor function? (Note that for $p=2$ we get the previous equation.)
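As a sanity check for Question 1 (a sketch, not part of the original question; exact integer arithmetic avoids floating-point log issues):

```python
def search(limit=10**6):
    """Look for n with n(n+1) == 2**(floor(log2(n)) + 2)."""
    hits = []
    for n in range(1, limit + 1):
        k = n.bit_length() - 1          # floor(log2(n)) for n >= 1
        if n * (n + 1) == 2 ** (k + 2):
            hits.append(n)
    return hits

print(search())   # prints [] -- no solutions up to the limit
```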
|
Division Theorem for Polynomial Forms over Field/Proof 1
Theorem
Let $X$ be transcendental over $F$.
Let $F \sqbrk X$ be the ring of polynomials in $X$ over $F$.
Let $d \in F \sqbrk X$ be a polynomial of degree $n \ge 1$.
Then $\forall f \in F \sqbrk X: \exists q, r \in F \sqbrk X: f = q \circ d + r$ such that either: $(1): \quad r = 0_F$
or:
$(2): \quad r \ne 0_F$ and $r$ has degree that is less than $n$. Proof
From the equation $0_F = 0_F \circ d + 0_F$, the theorem is true for the trivial case $f = 0_F$.
Aiming for a contradiction, suppose the theorem is false, and let $f$ be a counterexample whose degree $m$ is as small as possible.
If $m < n$, the equation $f = 0_F \circ d + f$ would show that $f$ was not a counterexample.
Therefore $m \ge n$.
Suppose $d \divides f$ in $F \sqbrk X$.
Then:
$\exists q \in F \sqbrk X: f = q \circ d + 0_F$
and $f$ would not be a counterexample.
So $d \nmid f$ in $F \sqbrk X$.
So, suppose that:
\(\displaystyle f = \sum_{k \mathop = 0}^m {a_k \circ X^k}, \qquad d = \sum_{k \mathop = 0}^n {b_k \circ X^k}, \qquad m \ge n\)
Let $f_1 = f - \paren {a_m \circ b_n^{-1} \circ X^{m - n} } \circ d$; the leading terms cancel, so $f_1$ has degree less than $m$.
Since $d \nmid f$, $f_1$ is a non-zero polynomial.
By the minimality of $m$, $f_1$ is not a counterexample. Therefore:
$f_1 = q_1 \circ d + r$
for some $q_1, r \in F \sqbrk X$, where either:
$r = 0_F$
or:
$r \ne 0_F$ and $r$ has degree less than $n$.
Hence:
\(\displaystyle f = f_1 + \paren {a_m \circ b_n^{-1} \circ X^{m - n} } \circ d = \paren {q_1 + a_m \circ b_n^{-1} \circ X^{m - n} } \circ d + r\)
Thus $f$ is not a counterexample.
From this contradiction follows the result.
$\blacksquare$
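As an illustration (not part of the ProofWiki page), the proof's leading-term cancellation is exactly the schoolbook long-division algorithm; a sketch over $\mathbb{Q}$ with coefficient lists indexed by degree:

```python
from fractions import Fraction

def poly_divmod(f, d):
    """Divide polynomial f by d (coefficient lists, lowest degree first).
    Returns (q, r) with f = q*d + r and deg(r) < deg(d), mirroring the
    step f1 = f - (a_m * b_n^{-1}) X^{m-n} * d in the proof."""
    f, d = [Fraction(c) for c in f], [Fraction(c) for c in d]
    q = [Fraction(0)] * max(len(f) - len(d) + 1, 1)
    r = f[:]
    while len(r) >= len(d) and any(r):
        shift = len(r) - len(d)            # m - n
        coeff = r[-1] / d[-1]              # a_m * b_n^{-1}
        q[shift] = coeff
        r = [c - coeff * (d[i - shift] if 0 <= i - shift < len(d) else 0)
             for i, c in enumerate(r)]
        while r and r[-1] == 0:            # drop the cancelled leading term
            r.pop()
    return q, r

# Example: X^2 + 1 divided by X + 1 gives q = X - 1, r = 2.
print(poly_divmod([1, 0, 1], [1, 1]))
```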
|
Let $\{X_1, X_2, \ldots, X_n\}$ be independent and identically distributed (i.i.d.) random variables sampled from a common distribution with density $f_{\theta}(x)$, where $\theta$ is an unknown parameter. We want to estimate $\theta$ given these $n$ samples. Suppose $\hat{\theta}$ is an estimator based on these samples. For simplicity, suppose this is unbiased, so that $E[\hat{\theta}] = \theta$.
Cramer-Rao bound theory implies that for any unbiased estimator: $$ E[(\hat{\theta} - \theta)^2] \geq \frac{1}{I(\theta)} = \Theta(1/n) $$where $I(\theta)$ is the Fisher information. However, I am interested not in the mean-square error, but the
mean absolute error: $$ E[|\hat{\theta} - \theta|] \geq ??? $$
This must be a well-studied problem. Any references or insights on this would be helpful.
Intuitively one expects $E[|\hat{\theta}-\theta|]\geq \Theta(1/\sqrt{n})$, and this is what I eventually want to show for my particular context (actually, eventually I am interested in possibly biased estimators). If one assumes the absolute error is at most $M$ then: $$ \Theta(1/n) \leq E[(\hat{\theta}-\theta)^2] \leq ME[|\hat{\theta}-\theta|] $$ but this inequality is weaker than I want since it means the absolute error also decays by at most $\Theta(1/n)$, whereas I want to increase the bound to $\Theta(1/\sqrt{n})$.
Actually, I can prove something of this form in a special case when $\theta$ represents the mean $E[X_1]$. I'm wondering if such a thing is known? Estimating the mean leads to the "obvious" estimator $\hat{\theta}=\frac{1}{n}\sum_{i=1}^nX_i$, but it is not obvious how to show this is "best" in some sense, particularly for the mean-absolute-error.
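For what it's worth, a Monte Carlo sketch (illustrative only, assuming Gaussian samples) shows the $\Theta(1/\sqrt n)$ scaling for the sample mean; for i.i.d. $N(\theta,1)$ samples one even has $E|\hat\theta-\theta| = \sqrt{2/(\pi n)}$ exactly, since $\hat\theta-\theta \sim N(0,1/n)$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, trials = 0.0, 20000

for n in (10, 100, 1000, 10000):
    x = rng.normal(theta, 1.0, size=(trials, n))
    mae = np.abs(x.mean(axis=1) - theta).mean()
    print(n, mae, np.sqrt(2 / (np.pi * n)))   # empirical vs exact sqrt(2/(pi n))
```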
|
The fraction \frac{num}{den} is usually rendered by centering the numerator and the denominator.
Unfortunately, I find the result in the following MWE quite ugly:
\documentclass{article}
\begin{document}
If $r\neq 1$, then
\[\sum_{k=0}^nr^k = \frac{1-r^{n+1}}{1-r}\]
\end{document}
Indeed, the $r^{n+1}$ part is too big. I would like the two minus signs of a fraction of the form \frac{a-b}{c-d} to be vertically aligned (whatever the size of a, b, c and d) at the center of the fraction, in order to show the symmetry of the formula.
How can I get such a result?
Note: if it matters for the answer, I would like the alignment to work also for inline maths mode.
|
The first thing to observe is that by rotating the triangle about the hypotenuse I obtain two cones having the same base and opposite vertices.
To find the volume of a cone you need its height and the radius of its base. So you need to calculate the radius and the height of each cone.
$$V = \frac{1}{3}\pi r^2 h$$
The radius of the base of the cones is the altitude of the right-angled triangle ($r$), and the corresponding heights are $AX$ and $CX$.
Working directly on the general case, as it seems simpler. Let the sides of the triangle be $a$, $c$ and $b$ (with $b$ the hypotenuse).
\begin{eqnarray}V &=& \frac{1}{3}\pi r^2 h_{1}+\frac{1}{3}\pi r^2 h_{2}\\ &=&\frac{1}{3}\pi r^2 (h_{1}+h_{2}) \\ &=& \frac{1}{3}\pi r^2 b \end{eqnarray}
Finding the area of a right-angled triangle in two ways:
$$\frac{ac}{2}=\frac{rb}{2}$$
Therefore $$r = \frac{ac}{b}$$
Substituting for $r$ in the equation for the volume you get:
\begin{eqnarray} V &=& \frac{1}{3}\pi r^2 b \\ &=& \frac{1}{3}\pi \left( {\frac{{ac}}{b}} \right)^2 b \\ &=& \frac{1}{3}\pi \frac{{(ac)}^2}{b}\end{eqnarray}
From here you can substitute for the particular case given at the start of the problem.
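A sketch of that substitution in code (my own helper; the 3-4-5 triangle is just an illustrative choice of legs):

```python
import math

def double_cone_volume(a, c):
    """Volume swept by rotating a right triangle with legs a, c about its
    hypotenuse b = sqrt(a^2 + c^2):  V = (1/3) * pi * (a*c/b)^2 * b."""
    b = math.hypot(a, c)
    r = a * c / b                    # altitude onto the hypotenuse
    return math.pi * r**2 * b / 3

print(double_cone_volume(3, 4))      # 9.6*pi ~ 30.159 for the 3-4-5 triangle
```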
|
\( \renewcommand{\vec}[1]{ \mathbf{#1} } \)
Introduction
When people hear "implicit surface", most think about metaballs, blobby objects and marching cubes. I want to define implicit surfaces here as a much more general concept, along with the basic jargon of this field. Let's be clear from the start: implicit surfaces are
not just metaballs! Nowadays you can model virtually anything with implicit surfaces, from simple shapes like cubes, cylinders etc. to very intricate shapes such as human characters or creatures. Implicit surfaces can represent sharp edges and are not limited to soft shapes. Some nice renderings of implicit surfaces, also known as (signed) distance fields:
Considering only metaballs and marching cubes for implicit surfaces would be the equivalent of thinking only about cloth and rasterization when talking about meshes. With both you can model a variety of 3D models, and both can be rendered in various ways (raytracing etc.). Another common mistake is to think of implicit surfaces, a.k.a. (signed) distance fields, only as mathematical equations. This is wrong; for instance, voxel grids can be used to store implicit surfaces.
Other popular games based on or using implicit surfaces:
Claybook (real time raytracing of implicit surfaces with ambient occlusion and all)
Dreams (real time raytracing)
Spore (in particular the character customization and the way you could freely add limbs)
What they are not
Before defining what an implicit surface is, I always like to define its counterpart first:
explicit surfaces. An explicit surface can be a triangular mesh, a parametric surface (e.g. splines, NURBS, ...) and so on; anything that explicitly represents a surface. For instance, take a look at the parametric function \( s : \mathbb R^2 \rightarrow \mathbb R^3\): this function takes as input two parameters \((u,v)\) and returns a point \((x,y,z)\) directly on the surface. We can explicitly and directly extract points on the surface.
On the other hand the volume is implicit: even if the parametric surface or mesh represents a closed object, determining whether a point in space \( \vec p \) is inside or outside the object is more tricky.
What they really are It's not a surface it's a volume!
Implicit surfaces are actually
explicit volumes: they explicitly define what is the inside and the outside of your object. Let's define it formally. Consider the function \( f:\mathbb R^3 \rightarrow \mathbb R \); this function takes a 3D point \( (x,y,z) \) as input and returns a single value. This value tells us explicitly whether we are outside or inside the volume:
$$
\begin{array}{ll} f(x,y,z) > 0 & \color{red} \text{ are points outside the object }\\ f(x,y,z) < 0 & \color{blue} \text{ are points inside the object }\\ \end{array} $$
On the other hand, the surface is defined implicitly: there is no way to compute points on the surface directly from \(f\) without some extra effort. An implicit surface is then defined as follows:
$$ f(x,y,z) = 0 \color{grey} \text{ are points on the surface} $$
Let's define an implicit surface with the equation of a sphere whose center is \(\vec c \in \mathbb R^3\) and radius \( r \):
$$
\begin{array}{llll} f( \vec p ) & = & \| \vec p - \vec c \| - r & \\ f(x,y,z) & = & \sqrt{(x-c_x)^2 + (y-c_y)^2 + (z-c_z)^2 } - r& \\ \end{array} $$
For points \( \vec p \) at a distance \( r \) from the center \(\vec c\) of the sphere, \(f(\vec p) = 0\).
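A minimal sketch of this sphere function in code (illustrative; the names are mine, not from the article):

```python
import math

def sphere_sdf(p, c, r):
    """Signed distance from point p to the sphere of center c, radius r:
    f(p) = ||p - c|| - r (negative inside, zero on the surface, positive outside)."""
    return math.dist(p, c) - r

print(sphere_sdf((2, 0, 0), (0, 0, 0), 1.0))   #  1.0 -> outside
print(sphere_sdf((1, 0, 0), (0, 0, 0), 1.0))   #  0.0 -> on the surface
print(sphere_sdf((0, 0, 0), (0, 0, 0), 1.0))   # -1.0 -> inside
```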
Iso-surface
What \(f(\vec p) = 0\) defines is the \( 0\text{-isosurface} \). We can actually define an infinity of implicit surfaces from the sphere's equation. Consider \( f(\vec p) = iso\): for \(iso = -2.3\) you have the \( -2.3\text{-isosurface} \), for \(iso = 0.1\) the \( 0.1\text{-isosurface} \), and so on.
Scalar-field
The function \( f \), which we now know defines a volume, is also sometimes called a
scalar field. It associates to every 3D point in space (the field) a single real value (a scalar). For instance, imagine \( f \) indicates the temperature of a room at any point in space:
You could then extract the iso-surface for points at a 20°C temperature and see which areas of the room are below 20°C. Sometimes, instead of the word 'scalar', the term 'potential' is used, and \( f \) then describes a 'potential field', but this is only a matter of taste.
Distance field
Yet another way to look at this function \(f : \mathbb R^3 \rightarrow \mathbb R\) is to interpret it as a distance field. For instance \( f( \vec p) = \| \vec p - \vec c \| - r \) actually returns the signed distance from the point \( \vec p \) to the surface of the sphere of radius \( r \).
Voxels
You can use a 3D grid (3D textures, voxels etc.) to store sampled values of \( f \). When sampling the function \( f \) on a regularly spaced grid: \( f(0,0,0) \), \( f(0.1,0,0) \), \( f(0.2,0,0) \) etc., you can then get the intermediate values (e.g. \(f(0.15,0,0)\)) with tri-linear interpolation.
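A sketch of that reconstruction step (an illustration under my own naming, assuming samples at integer coordinates with unit spacing):

```python
import numpy as np

def trilinear(grid, x, y, z):
    """Reconstruct f(x,y,z) from samples grid[i,j,k] taken at integer
    coordinates, by tri-linear interpolation of the 8 surrounding voxels."""
    i, j, k = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    u, v, w = x - i, y - j, z - k          # fractional position in the cell
    c = grid[i:i + 2, j:j + 2, k:k + 2]    # the 2x2x2 corner values
    # Interpolate along x, then y, then z.
    cx = c[0] * (1 - u) + c[1] * u
    cy = cx[0] * (1 - v) + cx[1] * v
    return cy[0] * (1 - w) + cy[1] * w

# Sample f(p) = ||p|| - 1 (a unit sphere) on a small grid, query between samples.
xs = np.arange(4.0)
g = np.sqrt(sum(a**2 for a in np.meshgrid(xs, xs, xs, indexing="ij"))) - 1.0
print(trilinear(g, 1.5, 0.0, 0.0))   # close to f(1.5,0,0) = 0.5
```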
A more concrete example is MRI scans, where you get a stack of textures describing the density of the body and from which you can extract the surfaces of the organs.
Conclusion
An implicit surface is defined by the equation \(f(\vec p) = iso\), with \(f\) actually defining a volume, and it is known under many names: scalar / potential / (signed) distance / density field. \(f\) can be defined with mathematical equations (sphere, cylinder, ellipse...), but just as we have triangle meshes to represent a surface, we can store the volume of \( f \) in a 3D grid.
In future posts we'll see that, contrary to explicit surface representations such as meshes, implicit surfaces are extremely easy and robust to combine and blend together. This is usually called Constructive Solid Geometry or boolean modeling.
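For a taste (a sketch of the standard min/max composition rules for distance fields, not code from this post): union, intersection and difference of two implicit volumes reduce to pointwise min and max of their field values.

```python
import math

def csg_union(f1, f2):         # inside A or B
    return lambda p: min(f1(p), f2(p))

def csg_intersection(f1, f2):  # inside both A and B
    return lambda p: max(f1(p), f2(p))

def csg_difference(f1, f2):    # inside A but not B
    return lambda p: max(f1(p), -f2(p))

# Two unit spheres, one at the origin and one at (1,0,0):
a = lambda p: math.dist(p, (0, 0, 0)) - 1.0
b = lambda p: math.dist(p, (1, 0, 0)) - 1.0
blob = csg_union(a, b)
print(blob((0.5, 0.0, 0.0)) < 0)   # True: the point is inside the combined shape
```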
Lastly, it is extremely useful for collision detection, since you only have to check the sign of \(f\) to determine whether you are inside or outside the considered object.
The main difficulty resides in rendering implicit surfaces, which can be computationally costly. Mainly two methods exist: conversion to a triangle mesh with the marching cubes algorithm, or ray marching.
|
Say for the following problem, suppose boundary of $\Omega$ is $C^{1,1}$: $$ \left\{ \begin{aligned} -\Delta \phi &= \mathrm{div} \,\vec{u}\quad \text{ in } \Omega \\ \phi&=0 \quad \text{ on }\partial \Omega \end{aligned} \right. $$ Could we deduce $\nabla \phi\cdot \vec{n}$ on $\partial \Omega$ by the following reasoning? Multiply a test function $v\in H^1(\Omega)$ and by doing integration by parts we have: $$ \int_{\Omega} \nabla \phi\cdot \nabla v -\int_{\partial \Omega} (\nabla \phi\cdot\vec{n})\,v = -\int_{\Omega} \vec{u}\cdot \nabla v +\int_{\partial \Omega} (\vec{u}\cdot\vec{n})\,v $$ Could we say that by the arbitrariness of $v$ that $\nabla \phi\cdot \vec{n} = -\vec{u}\cdot \vec{n}$?
I think you got the signs wrong. (It wouldn't have hurt to spell out the integration by parts).
No, you can't deduce this from the arbitrariness of $v$, because $\nabla v$ isn't independent of $v$. Also, you didn't, as the title suggests, deduce the Neumann boundary data from the Dirichlet boundary data; you tried to deduce them from the differential equation (at least I don't see where you used $\phi=0$). That can't work, since the boundary data are not determined by the differential equation.
Another way to see that this can't be right is that the solenoidal (divergence-free) part of $\vec u$ doesn't enter into the equation at all, whereas it does enter into the boundary condition you deduced.
In another sense, however, the Neumann boundary data are indeed determined by the Dirichlet boundary data, since both determine and are determined by the solution (up to an additive constant).
|
I am a beginner student of Algebraic Number Theory and I am starting to learn ramification theory (of global fields). My question asks for motivation for a definition I was given.
Let $K$ be an algebraic number field, $\mathcal{O}_{K}$ its ring of integers, $L/K$ a Galois extension and $\mathcal{O}_{L}$ the integral closure of $\mathcal{O}_{K}$ in $L$.
I know that the group $G=Gal(L/K)$ acts transitively on the set of prime ideals $\mathfrak{P}_{i}$ of $\mathcal{O}_{L}$ above a prime $\mathfrak{p}$ of $\mathcal{O}_{K}$ and it's just a natural thing to consider the decomposition group (of one of these ideals) $G^{Z}(\mathfrak{P})=\{\sigma\in G\:|\:\sigma(\mathfrak{P})=\mathfrak{P}\}$, which is the stabilizer of $\mathfrak{P}$ under this action.
Now, in the paper I am following, together with the decomposition group, the following group was defined: \begin{equation} G^{T}(\mathfrak{P})=\{\sigma\in G\:|\:\sigma(\alpha)\equiv\alpha\mod \mathfrak{P}\:\:\forall\alpha\in\mathcal{O}_{L}\}, \end{equation}
and this one I want to understand better.
I was told that each element of $G^{Z}(\mathfrak{P})$ induces an automorphism of the quotient $\mathcal{O}_{L}/\mathfrak{P}$, which is pretty reasonable. This $G^{T}(\mathfrak{P})$ looks like the subgroup of elements of $G^{Z}(\mathfrak{P})$ that induce the identity on the quotient $\mathcal{O}_{L}/\mathfrak{P}$. From my spying on other books and papers, I recognize this group as the so-called $\textbf{Inertia group}$.
My question is basically:
What does the Inertia group tell us? When we look at the index $(G:G^{Z}(\mathfrak{P}))$, it gives us a notion of "how many primes $\mathfrak{p}$ split into in $\mathcal{O}_{L}$". What about the inertia group? What is its meaning? And is it something as natural as considering the stabilizer of a group action?
|
This answer is a hard-science expansion of this answer. Please read that other answer to get a description of the system I am proposing, as well as justification of its technical feasibility. That post also has lots of reference links for various design decisions. I will summarize the system here and numerically address the questions posed.
System summary
The power source is a pebble bed fission reactor. The fuel source is uranium nitride pellets coated in a pyrolitic carbon moderator. These fuel pellets are held in molybdenum 'pins' in a geometry that will make them supercritical if a neutron reflector is placed outside the reactor. Heat exchange is done directly with the working fluid to save mass.
The working fluid is helium, which is passed through the reactor core. Electrical power is generated through a Brayton-cycle turbine similar to a marine gas turbine used on ships, except replacing the combustion chamber with the reactor core. The helium is compressed by a compressor coupled to the gas generating turbine into the core, and then allowed to expand over the gas generating and power turbines. Exhaust will still be at ~700 K, and will then be run over various auxiliary systems to utilize this extra energy. The exhausted gas will then have its remaining energy bled off into space through heat exchangers and then fed back into the compressor. The rotational power generated by the power turbine is then coupled to an electrical dynamo to generate power for the vessel.
The main propulsion system is a magnetoplasmadynamic Lorentz Force Accelerator (LFA) arcjet thruster. Lithium fuel is ionized and fed into an acceleration chamber, where a combination of magnetic and electric fields is applied. The induced current in the plasma, once the input power is in the MW range, will help maintain the magnetic field in the plasma, which will then induce an electric current in a tungsten-barium cathode.
System Specifications
The reactor must produce 300 MW of heat energy. This is possible from a pebble bed reactor; the Chinese are building a pair of production 250 MW pebble bed reactors at Shidao Bay. From this thermal energy, gas generating turbines produce an output of 100 MWe at 33% efficiency. This is equivalent to the power output of 4 GE LM2500 marine gas turbines, which is the same energy source as an Arleigh Burke-class destroyer. The LM2500 has an efficiency of about 40%, but we are losing efficiency due to the reactor core being cooler than a typical combustion chamber (our core is ~1750 K compared to ~2250 K in a marine gas turbine). The overall system mass estimate for the power generation portion is 0.4 kg/kWe (based on a NASA estimate), or 40,000 kg.
The size of the MPD thruster is much more conjectural, as no thruster of nearly the size required has been built. I have estimated the characteristics from the information available at the EPPD laboratory at Princeton. This design calls for a single 7.5 kN thruster at a fuel usage rate of 0.5 kg/s with an ISP of 15 km/s. There is an available high-ISP mode where thrust drops to 1 kN at 0.01 kg/s with an ISP of 100 km/s. The mass of the thruster unit is 10,000 kg. I honestly do not have a good basis for this estimate, but it is needed to proceed.
Reactor Safety
The pebble bed fission power system is inherently safe. There are several avenues for a nuclear accident, the two most significant being an overpower casualty (Chernobyl) and a loss of coolant casualty (Three Mile Island, Fukushima).
An overpower casualty is not physically possible for a pebble bed reactor. The fuel source will use low-enriched uranium, enough to achieve critical mass, but low enough that there are significant interactions between U-238 and neutrons in the core. As the temperature of the fuel pellets increases, U-238 is affected by doppler broadening, causing it to absorb more neutrons. This lowers the number of neutrons available to cause fissions in U-235, thereby lowering the reaction rate and reducing power output. Therefore, the core is naturally moderated at an upper temperature controlled by the U-235/U-238 ratio, which will be engineered at 1750 K. At temperatures below this, with the reflectors (to be discussed later) in place, the temperature will increase to 1750 K. As fluid flow over the core is increased and heat removal increases, the reaction rate will increase to keep temperature stable, and this power output is naturally controlled by demand. At temperatures above 1750 K, power output will decrease due to U-238 absorption until temperature settles back at 1750 K.
Therefore, there is no human or computer based control of the reactor. Once started it simply outputs energy at the rate heat is removed from the core, moderating itself at 1750 K. This effect is trustworthy; computer modeling in Strydom, 2004 indicates that the uncertainty band during a loss of forced cooling casualty will amount to less than 100 °C even for a reactor shutting down from full power.
As an aside, we should discuss the way that the reactor is started and stopped. In the core's state as built, it is sub-critical. The core will be undergoing fission at a very low rate, but too many neutrons will be lost passing out of the core for a chain reaction to occur. This is changed by surrounding the core with beryllium reflectors. Once these reflectors are positioned in place, they reflect neutrons back into the core, as well as helping to moderate the high energy neutrons produced by fission. As a result the core will be super-critical and increase temperature until the upper limit described in the last paragraph. By removing the beryllium reflectors, the core can be shut down.
A loss of coolant casualty is the most dangerous remaining one. However, the simplest strategy for this risk is to ignore it. On Earth, reactor casualties are costly because they leave radiation that no one wants to deal with. In space, probably no one cares. Sure, you lose the ship, but people shipped plenty of things in the Age of Sail while the risks of losing the ship were great. Transportation in space has more in common with the Age of Sail, what with month-long travel times and low cargo capacities, than it does with modern shipping.
System complexity
As described above, there is no requirement for control systems for the reactor itself, only the activation of one safety system in case of emergency (removing the reflector for shutdown). The emergency heat removal system will be self activating.
The Brayton cycle gas generators will be designed to operate continuously for the duration of a mission. Already, ships at sea using marine gas turbines operate for 1 year+ without the turbine enclosure or electrical generator enclosure being opened. The conditions at sea are far more challenging than space, what with salt and water both present. Long term maintenance can be performed at a (space)port between missions. Furthermore, the advantage of operating multiple turbine units in parallel is that the thruster will still be able to fire (if at a reduced power level) when some turbines are offline, even if only one turbine is operational.
The MPD thruster is, again, the least developed part of this plan and the most conjectural, so I cannot make any statements about its reliability. However, it does have the advantage of no moving parts; power is generated and transferred through the movement of gas, current, and electromagnetic fields.
Power and Fuel Efficiency
Given the above specifics, we can calculate some burn times and travel times. Here is a list of delta-v needed for various Hohmann transfers.
Tsiolkovsky's rocket equation is solved for fuel mass, $m_f$, by $$m_f = m_0\left(\exp{\left(\frac{\Delta v}{v_e}\right)}-1\right).$$
Our parameters: $m_0$ (mass without fuel) is 50,000 kg plus cargo mass; and $v_e$ is either 15,000 m/s or 100,000 m/s depending on the operating mode of the thruster.
The burn time can then be calculated by dividing fuel expended by mass flow rate. The mass flow rates are given as 0.5 kg/s or 0.01 kg/s, depending on the operating mode of the thruster.
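As a cross-check, here is a sketch of that arithmetic in code (helper names are my own; it reproduces the first rows of the table below):

```python
import math

def fuel_and_burn(cargo_t, dv_kms, ve_kms, dry_t=50.0, mdot_kgs=0.5):
    """Tsiolkovsky fuel mass m_f = m0*(exp(dv/ve) - 1) and burn time m_f/mdot.
    dry_t is the 50-ton vessel; masses in metric tons, dv and ve in km/s."""
    m0 = (dry_t + cargo_t) * 1000.0                   # kg, mass without fuel
    m_fuel = m0 * (math.exp(dv_kms / ve_kms) - 1.0)   # kg
    burn_days = m_fuel / mdot_kgs / 86400.0
    return m_fuel / 1000.0, burn_days

print(fuel_and_burn(1000, 3.0, 15))                   # ~ (232 t, 5.4 days)
print(fuel_and_burn(1000, 3.0, 100, mdot_kgs=0.01))   # ~ (32 t, 37 days)
```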
Below is a table for required fuel mass and burn times for various configurations. A 3.0 delta-V will get you to Mars or Venus, 8.8 delta-V to Jupiter, and 12.3 anywhere in the Kuiper belt:
Cargo (tons)  deltaV (km/s)  V_e (km/s)  Fuel (tons)  Burn (days)
1000 3.0 15 232 5
1000 3.0 100 32 37
1000 8.8 15 838 19
1000 8.8 100 97 112
1000 12.3 15 1334 31
1000 12.3 100 137 159
10000 3.0 15 2225 52
10000 3.0 100 306 354
10000 8.8 15 8020 186
10000 8.8 100 924 1070
10000 12.3 15 12769 296
10000 12.3 100 1315 1522
100000 3.0 100 3047 3527
100000 8.8 100 9203 10652
100000 12.3 100 13095 15156
A few things to note. The optimal burn profile (how long to burn thrusters in which mode) is still an open question. I posted a question about that using similar numbers to this answer, but didn't get a great answer. I might take a stab at that question again later. The reason you have to calculate the optimal burn profile is that fuel has a cost. If you are moving 100,000 tons of raw lithium from Mars orbit to Earth orbit, not only does your burn take 10 years, but you also burn 13,000 tons of refined lithium doing it! That makes it seriously questionable whether moving bulk cargoes is going to be profitable in your solar system. Also note that the above calculations use a 100% fuel burn; you ought to leave at least something in reserve, which cuts further into your fuel efficiency.
I didn't post the scores for using the 15 km/s mode with cargos of 100,000 tons, because the fuel usage is ridiculous. As it is, those numbers are in tons of lithium fuel. Keep in mind world lithium reserves are estimated at about 34 million tons, so you can see how you'd burn through that quickly.
A big open question with this process is the availability of lithium for fuel. If it can be mined in commercial quantities from space rocks, then that sort of operation would be the equivalent of petro-states here on Earth. It may be possible to use alternative propellants, though there would likely be a loss in efficiency. Neon, argon and xenon are not very common, either, but hydrazine is another possible propellant. It could be that hydrazine refining in the orbit of the gas giants is the oil refining of your near-future solar system.
Conclusion
Here is a system for space propulsion that provides a reasonable ability to traverse the solar system using technology mostly already demonstrated today. The big exception is scaling up the magnetoplasmadynamic propulsion system to kN thrust levels.
Most burns that you might imagine for a sublight space opera set in the solar system are feasible. Cargo capacity is relatively low, with the 100,000 tankers (roughly the size of large container ships today) being probably unfeasible for fuel cost reasons. Taking 1000 tons of cargo from Earth to the Kuiper Belt isn't that inefficient; you must burn 14% of your cargo mass in fuel, and the burn takes half a year, but what is half a year compared to the decade or more it will take to coast there?
Meanwhile, a quick hop to Mars could be done in a relatively short time. If you skip a Hohmann transfer orbit and try something else, you could burn more fuel to get somewhere faster. For example, a max burn from Earth orbit with 1000 tons of cargo and 1000 tons of fuel in the high thrust mode can get you to Mars orbit in a matter of days. Of course, the problem is you have to stop. The point I'm trying to make is that for the lower delta-V transfers at lower distances, this spaceship is powerful enough to ignore Hohmann transfers and attempt some other orbital transfer that requires more energy. Now what that transfer might be sounds like the subject of a future post :)
|
Research kicks up, writing kicks back. So in this brief note, we examine a pair of methods to examine an integral. They’re both very clever, I think. We seek to understand $$ I := \int_0^{\pi/2}\frac{\sin(x)}{\sin(x) + \cos(x)} dx $$
We base our first idea on an innocuous-seeming integral identity.
For ${f(x)}$ integrable on ${[0,a]}$, we have $$ \int_0^a f(x) dx = \int_0^a f(a-x)dx. \tag{1}$$
The proof is extremely straightforward. Perform the substitution ${x \mapsto a-x}$. The negative sign from the ${dx}$ cancels with the negative coming from flipping the bounds of integration. ${\diamondsuit}$
Any time we have some sort of relationship that reflects into itself, we have an opportunity to exploit symmetry. Our integral today is very symmetric. As ${\sin(\tfrac{\pi}{2} - x) = \cos x}$ and ${\cos(\tfrac{\pi}{2} - x) = \sin x}$, notice that $$ I = \int_0^{\pi/2}\frac{\sin x}{\sin x + \cos x}dx = \int_0^{\pi/2}\frac{\cos x}{\sin x + \cos x }dx. $$
Adding these two together, we see that $$ 2I = \int_0^{\pi/2}\frac{\sin x + \cos x}{\sin x + \cos x} dx = \frac{\pi}{2}, $$ and so we conclude that $$ I = \frac{\pi}{4}. $$ Wasn’t that nice? ${\spadesuit}$
Let’s show another clever argument. Now we rely on a classic across all mathematics: add and subtract the same thing. \begin{align} I = \int_0^{\pi/2}\frac{\sin x}{\sin x + \cos x}dx &= \frac{1}{2} \int_0^{\pi/2} \frac{2\sin x + \cos x - \cos x}{\sin x + \cos x}dx \\
&= \frac{1}{2} \int_0^{\pi/2} \frac{\sin x + \cos x}{\sin x + \cos x}dx + \frac{1}{2}\int_0^{\pi/2}\frac{\sin x - \cos x}{\sin x + \cos x}dx. \end{align} The first term is easy, and evaluates to ${\tfrac{\pi}{4}}$. How do we handle the second term? In fact, we can explicitly write down its antiderivative. Notice that ${\sin x - \cos x = -\frac{d}{dx} (\sin x + \cos x)}$, and so the last term is of the form $$ -\frac{1}{2}\int_0^{\pi/2} \frac{f'(x)}{f(x)}dx $$ where ${f(x) = \sin x + \cos x}$. You may or may not remember that ${\frac{f'(x)}{f(x)}}$ is the logarithmic derivative of ${f(x)}$, or rather what you get if you differentiate ${\log f(x)}$. As we are integrating the derivative of ${\log f(x)}$, we see that $$ -\frac{1}{2} \int_0^{\pi/2}\frac{f'(x)}{f(x)}dx = -\frac{1}{2} \ln f(x) \bigg\rvert_0^{\pi/2}, $$ which for us is $$ -\frac{1}{2} \ln(\sin x + \cos x) \bigg\rvert_0^{\pi/2} = -\frac{1}{2} \left( \ln(1) - \ln(1) \right) = 0. $$
Putting these two together, we see again that ${I = \frac{\pi}{4}}$. ${\spadesuit}$
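Both arguments can be sanity-checked numerically (a quick sketch, not part of either argument):

```python
import math
from scipy.integrate import quad

# Numerically integrate sin(x)/(sin(x)+cos(x)) over [0, pi/2].
I, _ = quad(lambda x: math.sin(x) / (math.sin(x) + math.cos(x)), 0, math.pi / 2)
print(I, math.pi / 4)   # both ~ 0.7853981633974483
```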
|
User:Nikita2
I am Nikita Evseev, Novosibirsk, Russia.
My research interests are in [[Mathematical_analysis | Analysis]] and [[Sobolev space|Sobolev spaces]].
Pages of which I am contributing and watching
Analytic function | Cauchy criterion | Cauchy integral | Condition number | Continuous function | D'Alembert criterion (convergence of series) | Dedekind criterion (convergence of series) | Derivative | Dini theorem | Dirichlet-function | Ermakov convergence criterion | Extension of an operator | Fourier transform | Friedrichs inequality | Fubini theorem | Function | Functional | Generalized derivative | Generalized function | Geometric progression | Hahn-Banach theorem | Harmonic series | Hilbert transform | Hölder inequality | Lebesgue integral | Lebesgue measure | Leibniz criterion | Leibniz series | Lipschitz Function | Lipschitz condition | Luzin-N-property | Newton-Leibniz formula | Newton potential | Operator | Poincaré inequality | Pseudo-metric | Raabe criterion | Riemann integral | Series | Sobolev space | Vitali theorem |
TeXing
I'm keen on turning articles of EoM into better appearance by rewriting formulas and math symbols in TeX.
Now there are 3040 (out of 15,890) articles with the [[:Category:TeX done]] tag.
(An Asymptote pie chart here showed the share of TeX-done articles against those that still need TeX.)
Just type $\sum_{n=1}^{\infty}n!z^n$ to get $\sum_{n=1}^{\infty}n!z^n$. You may look at [[:Category:TeX wanted]].
|
Preprints (rote Reihe) of the Department of Mathematics. Year of publication: 1996 (22 entries)
293
Tangent measure distributions were introduced by Bandt and Graf as a means to describe the local geometry of self-similar sets generated by iteration of contractive similitudes. In this paper we study the tangent measure distributions of hyperbolic Cantor sets generated by contractive mappings, which are not similitudes. We show that the tangent measure distributions of these sets equipped with either Hausdorff or Gibbs measure are unique almost everywhere and give an explicit formula describing them as probability distributions on the set of limit models of Bedford and Fisher.
274
This paper investigates the convergence of the Lanczos method for computing the smallest eigenpair of a selfadjoint elliptic differential operator via inverse iteration (without shifts). Superlinear convergence rates are established, and their sharpness is investigated for a simple model problem. These results are illustrated numerically for a more difficult problem.
280
This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration. These assumptions are fulfilled , e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data.
284
A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L}\) = \((L; \land, \lor)\) is generated by the identity function \(id(x)=x\) and the constant functions \(c_a (x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function in one or also in several variables is a monotone function of \(\mathcal{L}\). If every monotone function of \(\mathcal{L}\) is a polynomial function then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.
271
The paper deals with parallel-machine and open-shop scheduling problems with preemptions and arbitrary nondecreasing objective function. An approach to describe the solution region for these problems and to reduce them to minimization problems on polytopes is proposed. Properties of the solution regions for certain problems are investigated. It is proved that open-shop problems with unit processing times are equivalent to certain parallel-machine problems, where preemption is allowed at arbitrary time. A polynomial algorithm is presented transforming a schedule of one type into a schedule of the other type.
282
Let \(a_1,\dots,a_m\) be random points in \(\mathbb{R}^n\) that are independent and identically distributed with a spherically symmetric distribution in \(\mathbb{R}^n\). Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well the shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).
285
On derived varieties (1996)
Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. Especially the operators \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart to solid varieties. Finally we use a property of the commutator of derived algebras in order to show that solvability and nilpotency are preserved under derivation.
277
A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters. It is also shown that an order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number.
270
301
We extend the methods of geometric invariant theory to actions of non-reductive groups in the case of homomorphisms between decomposable sheaves whose automorphism groups are non-reductive. Given a linearization of the natural action of the group Aut(E)xAut(F) on Hom(E,F), a homomorphism is called stable if its orbit with respect to the unipotent radical is contained in the stable locus with respect to the natural reductive subgroup of the automorphism group. We encounter effective numerical conditions for a linearization such that the corresponding open set of semi-stable homomorphisms admits a good and projective quotient in the sense of geometric invariant theory, and that this quotient is in addition a geometric quotient on the set of stable homomorphisms.
279
It is shown that Tikhonov regularization for the ill-posed operator equation \(Kx = y\) using a possibly unbounded regularizing operator \(L\) yields an order-optimal algorithm with respect to a certain stability set when the regularization parameter is chosen according to Morozov's discrepancy principle. A more realistic error estimate is derived when the operators \(K\) and \(L\) are related to a Hilbert scale in a suitable manner. The result includes known error estimates for ordinary Tikhonov regularization and also the estimates available under the Hilbert scale approach.
275
|
Trying to understand how rearrangements work. A very common example of rearrangements seems to be the alternating harmonic series,
$$\sum _{n=1}^{\infty} \frac {(-1)^{n+1}}{n}$$
Plugging in values of $n$ gives,
$$1-\frac{1}{2}+\frac{1}{3}-\cdots$$ and so on. How can I rearrange this sum so that the first $10$ terms sum to $0$?
It seems that I must alter it in some way, grouping together positive and negative terms, so that the first 10 terms sum to 0.
|
Definition : Let $S$ be a subset of a metric space $(X,d)$. A point $x \in S$ is called an isolated point if $\exists \varepsilon >0 $ s.t $B(x,\varepsilon) \cap S \setminus \{x\}= \emptyset. $
Baire's Theorem: Let $(X,d)$ be a complete metric space. Then, if the interior of a countable union of closed subsets is nonempty, the interior of one of the closed subsets is nonempty as well.
With the theorem and the definiton in mind, I am supposed to prove the following; Propositon : If $X$ is a complete metric space which has $\textbf{no}$ isolated points, then $X$ is uncountable.
I wrote down as follows: by having no isolated points, we have $\forall x \in X, \quad \forall \varepsilon> 0 \quad B(x,\varepsilon) \cap X \setminus \{x\} \ne \emptyset.$ And I assume the assertion of the proposition is false. In other words, $X$ is countable and it can be rewritten as $X= S^c \cup S$ for some $S^c$ which is countable and $S = \bigcup_{n=1}^\infty S_n$ is uncountable, where each $S_n$ is closed. I somehow must try to get a contradiction with Baire's Theorem. But I could not. Help me if my starting point is wrong, and if it is, guide me in another direction.
|
Main question. Does there exist a smooth projective morphism $X\to$ Spec $\mathbf Z$ of relative dimension two such that the canonical sheaf $\omega_{X_{\mathbf Q}}$ of the generic fibre $X_{\mathbf Q}$ is ample?
Replacing "relative dimension two" by "relative dimension one", the answer is negative by a theorem of Abrashkin-Fontaine. I highly suspect the answer to be negative in this case too. Unfortunately, it is not known yet though as confirmed by Sándor.
Question 2. Does there exist a number field $K$ such that there are infinitely many $K$-isomorphism classes of smooth projective geometrically connected surfaces over $K$ with ample canonical sheaf and a smooth projective model over $O_K$?
The answer is positive if we replace "surfaces" by "curves". And as Will points out the answer is positive in the higher-dimensional case.
My main question is part of the arithmetic Shafarevich conjecture. As the terminology suggests, this conjecture is the
arithmetic analogue of a conjecture for geometric objects. The latter (geometric) conjecture has been resolved by Arakelov, Bedulev, Kovács, Lieblich, Möller, Parshin, Viehweg, Zuo, et al. (Edit: Please see the references in Sándor's answer.) Its arithmetic analogue remains widely open for relative dimension $\geq 2$ to my knowledge, and was resolved in 1983 by Faltings for relative dimension 1.
With my second question I would like to assure myself of the non-triviality of a higher-dimensional arithmetic Shafarevich conjecture. It turns out to be trivial.
Let me state the results (due to the before-mentioned) in algebraic geometry relevant to this question. The base field is an algebraically closed field $k$ of characteristic zero.
Theorem 1. (Higher-dimensional geometric analogue of main question) There are no smooth projective (strongly?) non-isotrivial morphisms $X\to \mathbf P^1_k$ such that the canonical sheaf of the generic fibre of $X\to \mathbf P^1_k$ is ample. Theorem 2. ("Folklore?" Higher-dimensional geometric analogue of second question) Fix $d\geq 0$. There exists a smooth projective connected curve $C$ such that there are infinitely many isomorphism classes of (strongly?) non-isotrivial smooth projective morphisms $X\to C$ of relative dimension $d$ whose generic fibre has ample canonical sheaf.
Now, Theorem 2 is one of the reasons that the following grand finiteness theorem is difficult.
Theorem 3. Let $C$ be a smooth projective connected curve and let $h$ be a polynomial. Then, there are only finitely many isomorphism classes of smooth projective (strongly?) non-isotrivial morphisms $X\to C$ whose generic fibre is canonically polarized with Hilbert polynomial $h$.
Let me note that I am considering function fields over a field of characteristic zero to be analogous to Spec $\mathbf O_K$. I know some of you prefer function fields over finite fields, but regarding these questions the analogy also "works" to a certain extent.
I might have stated Theorems 1-3 slightly incorrectly. In this case I apologize. (Also, I didn't state Theorem 3 in its full generality. The base curve doesn't need to be compact for instance.) Maybe, I should have only considered
deformation types of families over $C$ in the statements.
Finally, let me point out some related MO questions:
|
Implementation of CERN secondary beam lines T9 and T10 in BDSIM / D'Alessandro, Gian Luigi (CERN ; JAI, UK) ; Bernhard, Johannes (CERN) ; Boogert, Stewart (JAI, UK) ; Gerbershagen, Alexander (CERN) ; Gibson, Stephen (JAI, UK) ; Nevay, Laurence (JAI, UK) ; Rosenthal, Marcel (CERN) ; Shields, William (JAI, UK) CERN has a unique set of secondary beam lines, which deliver particle beams extracted from the PS and SPS accelerators after their interaction with a target, reaching energies up to 400 GeV. These beam lines provide a crucial contribution for test beam facilities and host several fixed target experiments. [...] 2019 - 3 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW069 In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW069
HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN) ; Bouvard, Aymeric (CERN) ; Charitonidis, Nikolaos (CERN) ; Kadi, Yacine (CERN)/HiRadMat experiments and facility support teams The ever-expanding requirements of high-power targets and accelerator equipment have highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPRB085 In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPRB085
Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN) ; Booth, Alexander (U. Sussex (main) ; Fermilab) ; Charitonidis, Nikolaos (CERN) ; Chatzidaki, Panagiota (Natl. Tech. U., Athens ; Kirchhoff Inst. Phys. ; CERN) ; Karyotakis, Yannis (Annecy, LAPP) ; Nowak, Elzbieta (CERN ; AGH-UST, Cracow) ; Ortega Ruiz, Inaki (CERN) ; Sala, Paola (INFN, Milan ; CERN) For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated “H2-VLE” and “H4-VLE”, have been constructed and successfully commissioned. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW064 In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW064
The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN ; Illinois U., Urbana (main)) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; Cholak, Serhii (Taras Shevchenko U.) ; D'Alessandro, Gian Luigi (Royal Holloway, U. of London) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) ; Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN’s accelerator complex up to 2040 and its scientific infrastructure through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW063 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW063 Detailed record - Similar records 2019-10-09 06:00
The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN) ; Banerjee, Dipanwita (CERN) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; D'Alessandro, Gian Luigi (CERN) ; Doble, Niels (CERN) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward. The goal of the experiment is to measure ${\rm{BR}}(K_L \rightarrow \pi^0\nu\bar{\nu})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm{BR}}(K^+ \rightarrow \pi^+\nu\bar{\nu})$ of NA62. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW061 In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW061
Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe ; CERN) ; Bastian, Yan (CERN) ; Bernhard, Axel (KIT, Karlsruhe) ; Bonura, Marco (U. Geneva (main)) ; Bordini, Bernardo (CERN) ; Bortot, Lorenzo (CERN) ; Favre, Mathieu (CERN) ; Lindstrom, Bjorn (CERN) ; Mentink, Matthijs (CERN) ; Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN’s LHC can be impacted by the circulating beam in case of specific failure cases. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPTS066 In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPTS066
Shashlik calorimeters with embedded SiPMs for longitudinal segmentation / Berra, A (INFN, Milan Bicocca ; Insubria U., Varese) ; Brizzolari, C (INFN, Milan Bicocca ; Insubria U., Varese) ; Cecchini, S (INFN, Bologna) ; Chignoli, F (INFN, Milan Bicocca ; Milan Bicocca U.) ; Cindolo, F (INFN, Bologna) ; Collazuol, G (INFN, Padua) ; Delogu, C (INFN, Milan Bicocca ; Milan Bicocca U.) ; Gola, A (Fond. Bruno Kessler, Trento ; TIFPA-INFN, Trento) ; Jollet, C (Strasbourg, IPHC) ; Longhin, A (INFN, Padua) et al. Effective longitudinal segmentation of shashlik calorimeters can be achieved taking advantage of the compactness and reliability of silicon photomultipliers. These photosensors can be embedded in the bulk of the calorimeter and are employed to design very compact shashlik modules that sample electromagnetic and hadronic showers every few radiation lengths. [...] 2017 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 64 (2017) 1056-1061
Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE) ; Tiberio, A (INFN, Florence ; U. Florence (main)) ; Adriani, O (INFN, Florence ; U. Florence (main)) ; Berti, E (INFN, Florence ; U. Florence (main)) ; Bonechi, L (INFN, Florence) ; Bongi, M (INFN, Florence ; U. Florence (main)) ; Caccia, Z (INFN, Catania) ; D'Alessandro, R (INFN, Florence ; U. Florence (main)) ; Del Prete, M (INFN, Florence ; U. Florence (main)) ; Detti, S (INFN, Florence) et al. The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017 - 22 p. - Published in : JINST 12 (2017) P03023
Baby MIND: A magnetised spectrometer for the WAGASCI experiment / Hallsjö, Sven-Patrik (Glasgow U.)/Baby MIND The WAGASCI experiment being built at the J-PARC neutrino beam line will measure the ratio of cross sections from neutrinos interacting with water and scintillator targets, in order to constrain neutrino cross sections, essential for the T2K neutrino oscillation measurements. A prototype Magnetised Iron Neutrino Detector (MIND), called Baby MIND, has been constructed at CERN and will act as a magnetic spectrometer behind the main WAGASCI target. SISSA, 2018 - 7 p. - Published in : PoS NuFact2017 (2018) 078 In : 19th International Workshop on Neutrinos from Accelerators, Uppsala, Sweden, 25 - 30 Sep 2017, pp.078
|
Although the study of gemology requires no formal prior training, a high school diploma would make it easier to understand basic math. Knowledge of trigonometry especially might serve you well.
Below are some basic calculations that you may want to understand.
Cross-multiplication
Some people have trouble with cross-multiplication, but it is fairly easy if you keep a simple equation in mind.
\[5 = \frac{10}{2}\]
which is the same as \[\frac{5}{1} = \frac{10}{2}\] because 5 divided by 1 = 5.
Let's say you want to bring the 10 to the left of the equation. Obviously 10 = 5 times 2, so you cross-multiply.
\[\frac{5}{1}\swarrow \frac{10}{2}\] we multiply 10 with 1 to get it to the left side and:
\[\frac{5}{1}\searrow \frac{10}{2}\] we multiply 5 with 2, so we get: \(10 * 1 = 5 * 2\) or \(10 = 5 * 2\)
This would probably make more sense in the following equation:
\[\frac{6}{3}=\frac{4}{2}\]
If you would cross-multiply, you would get \(4*3 = 12\) and \(6*2 = 12\), so \(12 = 12\).
Figure \(\PageIndex{1}\) We can make this easier with the aid of a simple diagram. In Figure \(\PageIndex{1}\), you see a triangle with the equation \(10 = 5 * 2\) (the "\(*\)" is left out). The double horizontal bars serve as the "\(=\)" sign OR as the "\(/\)" (division) sign. With this simple diagram in mind, you can solve most simple cross multiplications.
How to read the triangle:
You start at one number, then go to the next, and then to the 3rd. You work your way up first, then down.
Examples:
Say you start at 2. Then you go up and see the "\(=\)" sign. Now you have "\(2=\)". Then you move up further, you meet the 10, so you have "\(2=10\)". You can't go any further up, so you must go down. You meet the double lines again, but they can't be a second "\(=\)", so they serve as a division. Now you have "\(2=10/\)". You go down further and see the 5, and that makes "\(2=10/5\)".
It works the same when you start with 5.
Now let's start with 10:
- You start with 10, so "\(10\)".
- You can't go further up, so you must go down. You encounter the double lines. As this is the first time you see them, they are the "\(=\)"; now you have "\(10=\)".
- Then you meet the 5 (or the 2, depending on whether you go clockwise or anti-clockwise), making "\(10=5\)".
- You can't go further down, so you must go sideways. You see the 2, making the odd-looking "\(10=52\)". Writing two factors side by side is standard with symbols, but confusing with numerals, so we place an "\(*\)" between them. The result is "\(10=5*2\)", which any prep school kid would agree on.
This is of course not much fun because all the answers are given, but this simple knowledge is essential when you want to solve an equation such as:
\[2.417 = \frac{300}{x}\]
Simply replace "\(10=5*2\)" with the numbers and the unknown "\(x\)" of the new equation in the triangle. (Hint: the "\(x\)" takes the place of the "\(2\)")
Give it a try and see if you can calculate the speed of light inside a diamond with the above equation (the 300 is short for 300,000 km/s, which is the speed of light in a vacuum).
If all else fails, keep \(5 = \frac{10}{2}\) in mind and substitute the numbers for the unknowns in the equation you need to solve.
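For readers who like to check such calculations with a computer, here is a minimal Python sketch of the same idea (the function name solve_for_x is just an illustrative choice):

# Solve n = c / x for x by cross-multiplication: n * x = c, so x = c / n.
def solve_for_x(n, c):
    return c / n

# Speed of light in diamond: 2.417 = 300 / x (300 stands for 300,000 km/s)
print(solve_for_x(2.417, 300))  # about 124.1, i.e. roughly 124,000 km/s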
Sine, cosine, and tangent
Figure \(\PageIndex{2}\)
The sine, cosine, and tangent are used to calculate angles.
In Figure \(\PageIndex{2}\), the 3 sides of a right triangle (seen from corner A) are labeled Adjacent side, Opposite side and Hypotenuse. The hypotenuse is always the slanted (and longest) side in a right triangle.
The opposite and adjacent sides are relative to corner A. If A were at the other acute corner, they would be reversed.
Sine
Figure \(\PageIndex{3}\)
Sine is usually abbreviated as sin.
You can calculate the sine of a corner in a right triangle by dividing the opposite side by the hypotenuse. For this you need to know two values:
1. the value of the opposite side
2. the value of the hypotenuse
In Figure \(\PageIndex{3}\) those values are 3 and 5, so the sine of A, or better sin(A), is 3/5 = 0.6
\[\sin = \frac{opposite\ side}{hypotenuse} = \frac{3}{5} = 0.6\]
Now that you have the sine of corner A, you would like to know the angle of that corner.
The angle of corner A is the "inverse sine" (denoted as \(\sin^{-1}\) or arcsin) of that value, which is tedious to compute by hand. Luckily we have electronic calculators to do the dirty work for us:
- type in 0.6
- press the "INV" button
- press the "sin" button
This should give you approximately 36.87, so the angle of corner A is 36.87°
\[\arcsin \left(\sin A\right) = \arcsin \left(0.6\right) = 36.87\]
When you know the angle a corner makes, let's say 30°, you can calculate the sine as follows:
- type in 30
- press sin
That should give you 0.5
Practical use
If you know the angles of incidence and refraction in a gemstone, you can calculate the refraction index of that gemstone. Or do other fun things like:
\[index\ of\ refraction = \frac{\sin i}{\sin r}\]
Diamond has a refraction index of 2.417, so if the angle of incidence is 30°, the angle of refraction can be calculated as:
\[\sin r = \frac{\sin i}{n} = \frac{\sin 30}{2.417} = \frac{0.5}{2.417} = 0.207\]
so using the inverse sine:
\[\arcsin \left(\sin r \right) = \arcsin \left(0.207 \right) = 11.947 \Rightarrow angle\ of\ refraction = 11.947^\circ\]
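The same computation can be done in a few lines of Python (a minimal sketch using only the standard math module):

import math

n = 2.417                                      # refractive index of diamond
i = math.radians(30)                           # angle of incidence
r = math.degrees(math.asin(math.sin(i) / n))   # Snell's law, with n_air = 1
print(round(r, 3))                             # about 11.947 degrees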
It's not all rocket science. Read the page on refraction if you don't know what is meant by angle of incidence and angle of refraction.
Calculating the critical angle
Calculating the critical angle of a gemstone is pretty easy although the formula might scare you.
\[critical\ angle = \arcsin\left(\frac{1}{n}\right)\]
Where n is the refractive index of the gemstone.
The actual formula is \(\arcsin(n_2 / n_1)\), but as we gemologists are usually only concerned with the critical angle between air and the gem, \(n_2 = 1\).
The calculation of this formula is easy; we'll use quartz with n = 1.54 as an example.
When you use the Windows calculator, make sure you are in scientific mode. Then press the following buttons: 1 / 1.54 =
Then check the "inv" checkbox and press the "sin" button. That should give you an approximate value of 40.493, so the critical angle for quartz is 40.5° (rounded to one decimal).
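The same steps in Python, as a quick check (a minimal sketch):

import math

n = 1.54                                    # refractive index of quartz
critical = math.degrees(math.asin(1 / n))   # arcsin(1/n)
print(round(critical, 1))                   # about 40.5 degrees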
Cosine
The cosine of a corner in a right triangle is similar to the sine, but now the calculation divides the adjacent side by the hypotenuse. The cosine is abbreviated as "cos".
In Figure \(\PageIndex{3}\), that would be 4 divided by 5 = 0.8
\[\cos = \frac{adjacent\ side}{hypotenuse} = \frac{4}{5} = 0.8\]
Again as with the sine, the inverse of the cosine is the arccos or \(\cos^{-1}\):
- type in 0.8
- press INV
- press cos
This should give you 36.87 as well, so the angle remains 36.87° (as expected).
\[\arccos \left(\cos A\right) = \arccos \left(0.8\right) = 36.87\]
Tangent
The 3rd way to calculate an angle is through the tangent (or shortened to "tan"). The tangent of an angle is opposite side divided by adjacent side.
\[\tan = \frac{opposite\ side}{adjacent\ side}\]
For Figure \(\PageIndex{3}\), that will be 3/4 = 0.75
Calculation of the angle is as above, but using the arctan or \(\tan^{-1}\):
- type in 0.75
- press INV
- press tan
This should give you 36.87, so through this method of calculation the angle of corner A is again 36.87°.
\[\arctan \left(\tan A\right) = \arctan \left(0.75\right) = 36.87\]
A simple mnemonic to remember which sides you need in the calculations is SOH-CAH-TOA:
- SOH = Sine-Opposite-Hypotenuse
- CAH = Cosine-Adjacent-Hypotenuse
- TOA = Tangent-Opposite-Adjacent
Degrees, minutes and seconds
When we think of degrees we usually associate them with temperature, and we consider minutes and seconds attributes of time. However, in trigonometry they are also used to describe the angles of a circle, written in degree-minute-second (DMS) notation (not to be confused with radians, which are a different unit of angle altogether).
A full circle has 360 degrees, or 360°.
Every degree can be divided into 60 minutes (like on a clock) instead of 10 decimal subdivisions. Minutes are notated with a ', as in 26'. The individual minutes are further divided into 60 seconds, which are described with '', as in 23''.
This may look odd at first, but it's not very hard to understand.
If you have an angle of 24°26'23'' (24 degrees, 26 minutes and 23 seconds), this means that the decimal value is:
- the 24° stays 24°
- 26' is 26 divided by 60, or 26/60 = 0.433°
- 23'' is 23/(60 * 60), or 23/3600 = 0.0063°
This totals as 24 + 0.433 + 0.0063 = 24.439° in the decimal value (which is the decimal value of the critical angle of diamond).
When you want to convert the decimal value 24.439° back to degrees, minutes and seconds, you do the following:
- the 24 stays 24 (because that doesn't change)
- you find how many minutes fit in the remaining 0.439° by multiplying by 60: 60 * 0.439 = 26.34, so that is 26 full minutes (0.34 left over)
- you calculate the seconds as 60 * 0.34 = 20.4, or 20 full seconds (because we don't count lower than seconds).
This gives 24°26'20'' (24° + 26' + 20'') instead of the 24°26'23''. The 3-second difference is caused by the rounding down to 3 decimals in the prior calculation. In gemology, we usually don't even mention the seconds, so it will be rounded down to ≈ 24°26'.
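Both conversions are easy to script; here is a minimal Python sketch (the function names are illustrative choices):

def dms_to_decimal(d, m, s):
    # degrees + minutes/60 + seconds/3600
    return d + m / 60 + s / 3600

def decimal_to_dms(deg):
    d = int(deg)
    m = int((deg - d) * 60)
    s = round((deg - d - m / 60) * 3600)
    return d, m, s

print(dms_to_decimal(24, 26, 23))  # about 24.4397
print(decimal_to_dms(24.439))      # (24, 26, 20)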
Even though you may not need this knowledge often, it is important that you at least know of its existence, as you may get confused when reading articles. Sometimes values are given in decimal degrees, at other times in degrees, minutes and seconds.
|
Summary
29886 - Plenary Session: QM talk rehearsal, 10:00 - $\Lambda$($K_{S}^{0}$)-$h^{\pm}$ Azimuthal Correlations with Respect to Reaction Plane and Searches for CME and CVE - Presenter: Feng Zhao
30321 - Light Flavor Spectra Parallel Session - III, 09:30 - A Fixed-Target Program for STAR: Extending the Low Energy Reach of the RHIC Beam Energy Scan - Presenter: Brooke Haag
29981 - Plenary Session: Welcome and STAR status, 14:30 - Analysis and paper status - Presenter: Frank Geurts
Plenary Session: QM flash talk preparation, 16:15 - Presenter: John Campbell
Plenary Session: QM talk rehearsal, 15:00 - Presenter: Patrick Huck
Light Flavor Spectra Parallel Session - I, 09:00 - Presenter: Patrick Huck
http://downloads.the-huck.com/star/emails_notes/updates_2014-05-13.html
Plenary Session: QM talk rehearsal, 09:30 - Presenter: Qi-ye Shou
30315 - Light Flavor Spectra Parallel Session - II, 14:30 - Charged particle $p_{\mathrm{T}}$ spectra measured at mid-rapidity in the Beam Energy Scan from STAR and comparisons to models - Presenter: Stephen Horvat
Heavy Flavor II, 16:15 - Presenter: Mustafa Mustafa (EVO)
30363 - Jet-correlation PWG parallel session, 09:00 - Di-Jet Imbalance Measurements and Semi-Inclusive Recoil Jet Distributions in Central Au+Au Collisions in STAR - Presenter: Jörn Putschke
Light Flavor Spectra Parallel Session - I, 10:00 - Presenter: Yi Guo
Light Flavor Spectra Parallel Session - II, 16:00 - Presenter: Kefeng Xin
29875 - Plenary Session: QM talk rehearsal, 14:00 - Direct virtual photon and dielectron production in Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV at STAR - Presenter: Chi Yang
30307 - Light Flavor Spectra Parallel Session - I, 09:30 - Direct virtual photon and dielectron production in Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV at STAR - Presenter: Chi Yang
29883 - Plenary Session: QM talk rehearsal, 09:00 - Flow Measurements and selection of body-body and tip-tip enhanced samples in U+U collisions at STAR - Presenter: Hui Wang
29896 - Plenary Session: QM flash talk preparation, 16:30 - Heavy Quark Interactions with the Medium as Measured with Electron-Hadron Correlations in $Au+Au$ Collisions in STAR - Presenter: Jay Dunkelberger
30383 - Heavy Flavor III, 09:00 - Heavy Quark Interactions with the Medium as Measured with Electron-Hadron Correlations in $Au+Au$ Collisions in STAR - Presenter: Jay Dunkelberger
|
Let $X$ and $Y$ be two independent random variables, exponentially distributed with parameter $\lambda=1$ and let $U=X$ and $V=X+Y$.
Determine the joint CDF of the random pair $(U,V)$ and determine the CDF of the random variable V.
I know that $f_{(U,V)}(U=u,V=v)=f_{(X,Y)}(X=u,Y=v-u)*1$, $1$ being the Jacobian. So given that $f_{(X,Y)}(X=x,Y=y)=e^{-(x+y)}$,
$f_{(U,V)}(U=u,V=v)=e^{-v}$ for $u>0, v>u$.
Now to get the joint CDF of the random pair $(U,V)$ I was thinking: $F_{(U,V)}(U \le a, V \le b)= \int_0^a\int_u^b e^{-v}\,dv\,du$ (hopefully I am integrating over the correct region in the $UV$ plane)
My questions for this part is:
Is there another way to do it? We proved in class that $f_{(U,V)}(U=u,V=v)=f_{(X,Y)}(X=u,Y=v-u)*|J(u,v-u)|$ but don't have anything like that when it comes to the CDF of a random pair.
Now to determine the CDF of the random variable $V,$ $V$ follows a gamma distribution of parameters $\alpha=2$ and $\lambda=1$ since X and Y are independent. So from there we can just find the CDF. There is no need to go through the joint CDF of $U$ and $V$ to find the marginal density of $V$?
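A quick numerical sanity check of that last claim (a minimal Python sketch, assuming NumPy and SciPy, comparing the empirical CDF of $V = X+Y$ with the Gamma(2,1) CDF at one point):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
v = rng.exponential(1.0, 10**6) + rng.exponential(1.0, 10**6)

print((v <= 2).mean())          # empirical P(V <= 2), about 0.594
print(stats.gamma.cdf(2, a=2))  # 1 - 3*exp(-2) = 0.5940...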
|
I have the following nasty expression that I would like to expand in powers of $\frac{1}{N}$:
\begin{align} \frac{2^{\frac{3}{2}} 3^{\frac{1}{2}} \Biggl[ \sqrt{u} \cdot \Gamma\left(\frac{2+N}{4}\right) \cdot {}_1F_1 \left( \frac{2+N}{4},\frac{1}{2},\frac{3r^2}{2u} \right) -\sqrt{6} r \cdot \Gamma \left( \frac{4+N}{4} \right) \cdot {}_1F_1 \left( \frac{4+N}{4},\frac{3}{2},\frac{3r^2}{2u} \right) \Biggr] }{N \cdot u^{\frac{1}{2}} \Biggl[ \sqrt{u} \cdot \Gamma\left(\frac{N}{4}\right) \cdot {}_1F_1 \left( \frac{N}{4},\frac{1}{2},\frac{3r^2}{2u} \right) -\sqrt{6} r \cdot \Gamma \left( \frac{2+N}{4} \right) \cdot {}_1F_1 \left( \frac{2+N}{4},\frac{3}{2},\frac{3r^2}{2u} \right) \Biggr]} \end{align}
where you can assume that $N \in \mathbb{N}$ (but could be analytically continued to $\mathbb{R}^+$), $u \in \mathbb{R}^+$, and $r \in \mathbb{R}^+$. Furthermore, ${}_1F_1$ is the confluent hypergeometric function sometimes written as $M(a,b,z)$.
Using a different route I have obtained a value for the limit $N \to \infty$, but I'd like to a) reproduce this result using the above expression and b) find the $O\left(\frac{1}{N}\right)$ corrections. So far I have tried numerous identities from the NIST Handbook of Mathematical Functions, but I simply seem to lack the experience to make real progress. If anyone knows a solution or has an idea of how to proceed next, I'd greatly appreciate their help.
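One way to at least probe the limit numerically is to evaluate the ratio at increasing $N$; here is a minimal sketch with mpmath, assuming the expression is transcribed as above (the test values of $u$ and $r$ are arbitrary choices):

import mpmath as mp

def ratio(N, u, r):
    z = 3 * r**2 / (2 * u)
    su, s6r = mp.sqrt(u), mp.sqrt(6) * r
    num = su * mp.gamma((2 + N) / 4) * mp.hyp1f1((2 + N) / 4, mp.mpf(1) / 2, z) \
        - s6r * mp.gamma((4 + N) / 4) * mp.hyp1f1((4 + N) / 4, mp.mpf(3) / 2, z)
    den = N * su * (su * mp.gamma(N / 4) * mp.hyp1f1(N / 4, mp.mpf(1) / 2, z)
        - s6r * mp.gamma((2 + N) / 4) * mp.hyp1f1((2 + N) / 4, mp.mpf(3) / 2, z))
    return mp.sqrt(24) * num / den   # prefactor 2^(3/2) * 3^(1/2) = sqrt(24)

for N in (10, 100, 1000):
    print(N, ratio(mp.mpf(N), mp.mpf(1), mp.mpf('0.3')))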
With best regards,
Jan
|
You have several preconceived misconceptions about relativity in your question. I will try to address them all here.
1- Time Dilation Due To Relativity/Speed
It happens that time dilation due to relativity becomes really evident only at extremely high speeds. The time dilation experienced due to velocity is expressed as $$\Delta t' = \frac{\Delta t}{\sqrt{1-\frac{v^2}{c^2}}}$$
where $v$ is the velocity of the object in question and $c$ is the speed of light in vacuum. If you solve the equation for various values of $v$, you will observe that while there is definitely some time dilation for every moving object, the effect is negligible until we approach at least 10% of the speed of light.
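To see just how negligible, here is a minimal Python sketch tabulating the dilation factor for a few speeds (the speed values are the ones discussed below):

import math

c = 299_792_458.0  # speed of light in vacuum, m/s

def gamma(v):
    # time-dilation factor 1 / sqrt(1 - v^2/c^2)
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for label, v in [("30 km/s (Earth around the Sun)", 3.0e4),
                 ("230 km/s (Sun around the galaxy)", 2.3e5),
                 ("10% of c", 0.1 * c),
                 ("50% of c", 0.5 * c)]:
    print(label, gamma(v))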
Now let us examine the speeds at which we (humans on Earth) are travelling. Earth is moving around the Sun at a speed of nearly 30 km/s. While being much, much faster than a rifle bullet, it is nothing when compared to the speed of light, which is 300,000,000 m/s; we are moving at about 0.01% of the speed of light that way. Next is our speed in the solar system, moving around the galactic center. A quick Google search tells me this speed is 828,000 km/h, or 230 km/s (source). Once again, while being mind-bogglingly fast, it is only about 0.077% of the speed of light. Time dilation at these speeds is negligible for all practical purposes.
Our galaxy is also moving within its cluster of galaxies, and the cluster is adrift in the universe, but these speeds can hardly be calculated accurately because the expansion of space increases the distance between far-flung galaxies at rates faster than the speed of light, so the special-relativity equations don't really apply there.
2- If our star system happened to be close to the center of the galaxy, wouldn't it make sense to send a colony to slower parts of the galaxy and have them develop technology to be brought back to us?
While time runs slower for star systems near the galactic center (not due to higher speeds, but due to gravitation), it would be a completely impractical idea to try to settle a colony in regions of faster time so that they develop technology sooner, to be brought back to us. Some of the reasons I can think of are the following:
1- The galactic center hosts a really, really supermassive black hole with extremely high gravity. Sending a spaceship to large distances away from this monster would be very, very difficult, especially considering that you want to send the spaceship from near the galactic center to the outer reaches of the galaxy.
2- Before you send the huge spaceship containing thousands of people to start a colony on another planet in the outskirts of the galaxy, you first have to find a habitable planet in the outskirts of the galaxy. That is not an easy task, considering the startlingly long distances involved, the next-to-nothing technology we have for detailed mapping of all star systems and their planets at such vastness, and the fact that we cannot tell the habitability of planets at those distances. There is a 99.9999999999999999% chance we would send our pioneers to certain death.
3- Considering that the galaxy is really, really huge, and that interstellar space, while looking dark to the eye, is fraught with horrors like black holes and neutron stars with horrific magnetic and gravitational fields, the spaceship would be in for nearly certain doom when travelling from the galactic center to the galactic outskirts.
4- And when those pioneers finally reached the outskirts of the galaxy (after a million years or more, even if they travel at 5% of the speed of light, which is a very fast speed by human standards), the people landing on the exoplanet would probably be biologically different from us and would definitely have a completely different psychology. And you can be 100% certain they would not be interested at all in sending back the results of whatever technological advancement they achieve. They would no longer be emotionally, culturally or biologically linked to us.
5- Also, forget any meaningful communication between the planets. Our galaxy is 100,000 light years across (source), meaning that it would take at least 50,000 years just for one message to travel from the center to the outskirts. And then you would have to consider where to aim your message (considering that the positions of the star systems and planets would be very different after 50,000 years) and then process the signal to undo the red or blue shift. There is also the gravitational lensing effect, which might bend the communication waves away from their designated straight path. In short, forget any communication at all.
Edit to add: In response to Michael Kjörling
You have stated the correct statistics, but I'm afraid you have ended up with a limited view of the journey and the dangers it holds. While a straight trip from the galactic center to the outskirts of our galaxy would indeed take a few multiples of 50,000 years (when traveling at a significant fraction of the speed of light), the actual travel would be anything but straight (due to the gravitational and magnetic fields of stars).
For one, you have not considered the possibility of the source civilization living on the north side of the galactic center and the destination star system lying on the southern outstretch of the galaxy. You cannot travel through the galactic center. You would have to go around it, and considering its immense gravity and vast event horizon, travelling at 5% the speed of light you would have to make a very, very wide turn around it, taking tens of thousands of years.
Furthermore, stellar density is much greater near the galactic center, implying that the spaceship would have to endure gravitational tugs from multiple sources the moment it reaches interstellar space. Neglecting the impossible fuel requirements, travel would hardly (if ever) be in a straight line, making the route very complex and lengthy.
Thirdly, you have not accounted for the fact that in the (highly likely) case that the spaceship is a couple dozen thousand years late, it will find that its destination stellar system has moved millions of miles ahead, and it will have to actually chase it to reach it, further increasing the journey time.
|
The R package mlergm, appropriately named "Multilevel Exponential-Family Random Graph Models," aims to provide a convenient platform for estimating exponential-family random graph models (ERGMs) with multilevel structure. Presently, the beta release of the package supports estimation of ERGMs with non-overlapping block structure and local dependence (see the work of Schweinberger & Handcock (JRSS-B, 2015)), with plans to expand the coverage to overlapping block structure with local dependence and to structures with multiple levels and higher-order interactions in future updates.
The syntax of mlergm aims to mirror other network package interfaces, namely that of the ergm framework, to provide as small a learning curve as possible to users already acquainted with the ergm and network package framework. In the following sections, we aim to highlight how to use mlergm and some of its key features.
Before we proceed to demonstrating how to use mlergm and the functionality it has for multilevel network analysis, we first review background on multilevel networks, as well as the main statistical model the package contains at present.
Multilevel networks come in many different forms, and we point readers to the introductory chapter written by Snijders in the monograph “Multilevel network analysis for the social sciences: theory, methods and applications” (Lazega & Snijders, Eds., 2016).
In the simplest form, a multilevel network has a set of nodes \(\mathcal{N}\) (e.g., persons, brain regions, research articles) partitioned into \(K\) blocks \(\mathcal{A}_{1}, \ldots, \mathcal{A}_{K} \subseteq \mathcal{N}\) (e.g., departments within a university, individual patient brains, research journals), and a set of edges \(\mathcal{E}\) which represent interactions, relationships, or connections between nodes (e.g., advice seeking, functional connectivity, citation). Network data are typically represented by an adjacency matrix \(\boldsymbol{X}\), where in the case of a binary, undirected network, \(X_{i,j} = 1\) if \(\{i,j\} \in \mathcal{E}\), and \(X_{i,j} = 0\) otherwise. The within-block subgraphs are denoted by \(\boldsymbol{X}_{\mathcal{A}_k, \, \mathcal{A}_k}\) (\(k = 1 , \ldots, K\)) and the between-block subgraphs are denoted by \(\boldsymbol{X}_{\mathcal{A}_k, \, \mathcal{A}_l}\) (\(1 \leq k < l \leq K\)).
In practice, researchers are usually interested in super-population inference, where a network \(X\), defined on a finite population of nodes \(\mathcal{N}\), is assumed to have been generated from a distribution \(\mathbb{P}_{\mathcal{N}, \, \boldsymbol{\theta}}\), and the goal is to estimate \(\boldsymbol{\theta}\) in order to learn about mechanisms driving edge formation.
Past procedures have estimated models for such network data by either
- fitting a single ERGM to the whole network, ignoring the block structure, or
- fitting a separate ERGM to each within-block subgraph.
Both of these approaches fail to take into account the natural structure of networks with block structure. The first is unable to adaptively model differences in within- and between-block edge formation, while the second may overfit to the data by estimating separate parameters for each within-block subgraph, and does not use the additional structure to help estimate a general model.
In mlergm, we estimate statistical models for networks \(X\) of the form \[\begin{aligned}\mathbb{P}_{\mathcal{N}, \, \boldsymbol{\theta}, \, \boldsymbol{\beta}}\left(\boldsymbol{X} = \boldsymbol{x}\right) \;\;&= \;\; \prod_{k=1}^{K} \, \mathbb{P}_{\mathcal{A}_k, \, \boldsymbol{\theta}} \left( \boldsymbol{X}_{\mathcal{A}_k, \, \mathcal{A}_k} = \boldsymbol{x}_{\mathcal{A}_k, \, \mathcal{A}_k}\right) \; \prod_{l \neq k}^{K} \mathbb{P}_{\{\mathcal{A}_k, \, \mathcal{A}_l\}, \, \beta}\left(\boldsymbol{X}_{\mathcal{A}_k, \, \mathcal{A}_l} = \boldsymbol{x}_{\mathcal{A}_k, \, \mathcal{A}_l}\right),\end{aligned}\]where \[\begin{align}\mathbb{P}_{\{\mathcal{A}_k, \, \mathcal{A}_l\}, \, \boldsymbol{\beta}}\left(\boldsymbol{X}_{\mathcal{A}_k, \, \mathcal{A}_l} = \boldsymbol{x}_{\mathcal{A}_k, \, \mathcal{A}_l}\right)\;\;&= \;\; \prod_{i \in \mathcal{A}_k} \; \prod_{j \in \mathcal{A}_l} \; \mathbb{P}_{\{i,\,j\}, \, \boldsymbol{\beta}}\left( X_{i,\,j} = x_{i,\,j}\right).\end{align} \]
The key model assumption is that the within-block subgraphs \(\boldsymbol{X}_{\mathcal{A}_k, \, \mathcal{A}_k}\) (\(k = 1, \ldots, K\)) are mutually independent and that the between-block edges do not depend on the within-block edges. Dependence within the model is local, in that it is restricted to blocks. Note as well that the \(\boldsymbol{\theta}\) vector governs the within-block edges while the \(\boldsymbol{\beta}\) vector governs the between-block edges, and the two are assumed to be variation independent. For more details on the model assumptions, see Schweinberger & Handcock (JRSS-B, 2015).
The R package mlergm aims to provide an easy-to-use framework and interface for estimating models of the above form. In the coming sections, we will show how to get started with mlergm and will attempt to highlight key functionality that will help network scientists analyze such network data.
mlergm
In order to get acquainted with mlergm, let us consider a simple example: a network with \(K = 3\) blocks, each with \(30\) unique nodes.
# Load R package mlergm
library(mlergm)

# Networks can be created in the same way as in other packages
net <- network.initialize(90, directed = FALSE)

# The difference with mlergm is that we also have a block membership structure
node_memb <- c(rep(1, 30), rep(2, 30), rep(3, 30))
A network (net) and a vector of block memberships (node_memb) are all we need to start working with mlergm. We note that node_memb does not need to be strictly numeric, as will be the case in later examples. For obtaining network objects from adjacency matrices, edgelists, or other data structures, we refer readers to the R package network, which provides this functionality. We will assume that the network net is always a network object.
Currently, net is an empty graph, which is uninteresting. We will show how synthetic networks can be simulated from a specified model using the simulate_mlnet function.
# Simulate a network from the edges + gwesp model
net <- simulate_mlnet(form = net ~ edges + gwesp,
                      node_memb = node_memb,
                      seed = 123,
                      theta = c(-3, 1, .5))
plot(net)
The function simulate_mlnet returns a network which is both of class mlnet and network, and the mlergm package contains plotting methods for mlnet class networks, which allow for easy plotting of network data with block structure.
For real network data with block memberships, the network data can be converted to the mlnet class using the mlnet function.
# Let us use the sampson data set as an example
data(sampson)
sampson_net <- mlnet(network = samplike,
                     node_memb = get.vertex.attribute(samplike, "group"))
plot(sampson_net, arrow.size = 2.5, arrow.gap = 0.025)
The plotting functions of mlergm use the GGally package, which extends the plotting capabilities of ggplot2. Specifically, the mlnet plotting method is a wrapper for the ggnet2 function and can take most of the same plot parameters.
Estimation of specific models can be carried out via the mlergm function.
# Estimate the edges + gwesp model for the simulated network
model_est <- mlergm(net ~ edges + gwesp, verbose = 0, seed = 123)
We can then view the results by calling the summary function, which has a method for mlergm objects.
summary(model_est)
#> 
#> ============================ Summary of model fit ============================
#> 
#> Formula: net ~ edges + gwesp(fixed = FALSE)
#> 
#> Number of blocks: 3
#> 
#> Quantiles of block sizes:
#>   0%  25%  50%  75% 100% 
#>   30   30   30   30   30 
#> 
#> 
#> Monte Carlo MLE Results:
#> 
#>              Estimate Std. Error  p-value Sig.
#> edges         -3.7690     0.4499 <0.00001     
#> gwesp          1.4700     0.3797  0.00011  ***
#> gwesp.decay    0.4360     0.1014  0.00002  ***
#> -----------------------------------------------------------------
#> Between block MLE does not exist.
#> 
#> Sig. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> BIC: 1449.904
#> * Note: BIC is based on the within-block model, and ignores the between-block model.
The summary function has a method for estimated objects of class mlergm, which reports the following information: the model formula, the number of blocks and the quantiles of the block sizes, the Monte Carlo MLE estimates with their standard errors and p-values, the status of the between-block model, and the BIC of the within-block model.
Note that when we simulated this network, we did not specify an edge parameter for the between-block edges. As such, the output of the summary function is also telling us that the between-block MLE does not exist, because the number of between-block edges is precisely zero. Presently, mlergm attempts to estimate an edge coefficient when the number of between-block edges is not extreme.
We can evaluate the goodness-of-fit of a fitted model of class mlergm by calling the gof method:
# We can call the gof.mlergm method directly by calling 'gof' on an object of class 'mlergm'
gof_res <- gof(model_est)
plot(gof_res, cutoff = 15, pretty_x = TRUE)
The plot method argument cutoff specifies the maximum range to plot for the boxplots, and the argument pretty_x is a logical flag which indicates whether the pretty function should be used to decide the x-axis breaks for the boxplot, which can be helpful when the range is large.
mlergm
The function mlergm has a number of different options. Firstly, the function is capable of using two different parameterizations: the standard parameterization, and an offset parameterization that adjusts the within-block edge and mutual parameters by the log of the block size (as the summary output below shows).
The offset parameterization can be selected by setting parameterization = "offset" in the mlergm call:
offset_est <- mlergm(sampson_net ~ edges + mutual, seed = 123, parameterization = "offset")
We can inspect the results, again, using the summary function.
summary(offset_est)
#> 
#> ============================ Summary of model fit ============================
#> 
#> Formula: sampson_net ~ edges + mutual
#> Parameterization set to 'offset'
#> 
#> Number of blocks: 3
#> 
#> Quantiles of block sizes:
#>   0%  25%  50%  75% 100% 
#>  4.0  5.5  7.0  7.0  7.0 
#> 
#> 
#> Monte Carlo MLE Results:
#> Within-block edge parameter = edges - log(Block size)
#> Within-block mutual parameter = mutual + log(Block size)
#> 
#>         Estimate Std. Error  p-value Sig.
#> edges     1.9460     0.4213 <0.00001     
#> mutual   -0.9504     0.6252  0.12848     
#> -----------------------------------------------------------------
#> Between edges   -2.0010     0.2131 <0.00001 
#> 
#> 
#> Sig. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> BIC: 129.613
#> * Note: BIC is based on the within-block model, and ignores the between-block model.
The summary output includes additional information when parameterization = "offset", notably a reminder of how the within-block edge and mutual parameters are offset.
The mlergm function can also take an initial parameter value through the argument theta_init, which can be useful when starting points are challenging to find or a procedure did not run to convergence. Lastly, the argument verbose has three levels:
verbose = 0: (default) no output is printed to the console.
verbose = 1: minimal output is printed to the console informing which steps of the procedure the estimation method is in.
verbose = 2: maximal output is printed to the console.
The set_options function is used in all code which involves estimation or simulation, which includes mlergm, simulate_mlnet, and the method gof.mlergm. It is included as an argument options = set_options(). A description of all possible functionality can be seen by using help(set_options); however, here we highlight a few key parameters.
For simulation (both when simulating networks and during MCMC estimation), the burnin, interval, and sample_size arguments are relevant and can be specified through set_options:
mlergm(net ~ edges + gwesp, options = set_options(burnin = 5000, interval = 500, sample_size = 2500))
One of the primary benefits of using locally dependent block models is that simulation and computation for the blocks can be done independently and in parallel. The set_options function can control the number of cores used for computation and simulation. The default is number_cores = detectCores(all.tests = FALSE, logical = TRUE) - 1, which will typically be one less than the maximum number of available cores.
mlergm(net ~ edges + gwesp, options = set_options(number_cores = 3))
The number_cores argument is relevant for both simulation and estimation procedures.
For estimation procedures, the Fisher scoring method of Hunter & Handcock (JCGS, 2006) is used. Newton-based optimization methods can be sensitive to the step length used. These options can be changed by using the step_len argument. Additionally, there is an option to use a naive adaptive step length by setting adaptive_step_len = TRUE.
# Adjust the step length manually
mlergm(net ~ edges + gwesp, options = set_options(step_len = 0.25))

# Use the naive adaptive step length
mlergm(net ~ edges + gwesp, options = set_options(adaptive_step_len = TRUE))
The adaptive step length uses step lengths equal to the reciprocal of the \(L_2\) norm of the increment, i.e., \[ \begin{equation} \text{Step length} \;\;\;=\;\;\; \frac{1}{||\,\text{Increment}\,||_2} \end{equation} \] The outcome is a step length which automatically adjusts depending on the size of the increment. When the updates to the parameter vector \(\boldsymbol{\theta}\) are small, the step length will be greater, encouraging faster convergence when near the solution. When the changes are larger, then the step length will be smaller as a result, and will more conservatively iterate towards the solution, which can help improve convergence, especially for estimation of curved ERGMs.
The number of maximum iterations performed can be adjusted for both the MCMLE procedure and the Newton-based Fisher Scoring algorithm, as well as the tolerance for the Fisher scoring convergence.
mlergm(net ~ edges + gwesp, options = set_options(MCMLE_max_iter = 10, NR_max_iter = 100, NR_tol = 1e-4))
The other options can be viewed in help(set_options).
|
Journal of Differential Geometry, Volume 67, Number 2 (2004), 289-333.
Momentum Maps and Morita Equivalence
Abstract
We introduce quasi-symplectic groupoids and explain their relation with momentum map theories. This approach enables us to unify into a single framework various momentum map theories, including ordinary Hamiltonian $G$-spaces, Lu's momentum maps of Poisson group actions, and the group-valued momentum maps of Alekseev–Malkin–Meinrenken. More precisely, we carry out the following program:
(1) We define and study properties of quasi-symplectic groupoids.
(2) We study the momentum map theory defined by a quasi-symplectic groupoid $\Gamma \rightrightarrows P$. In particular, we study the reduction theory and prove that $J^{-1}(\mathcal{O})/\Gamma$ is a symplectic manifold for any Hamiltonian $\Gamma$-space $(X \stackrel{J}{\rightarrow} P, \omega_X)$ (even though $\omega_X \in \Omega^2(X)$ may be degenerate), where $\mathcal{O} \subset P$ is a groupoid orbit. More generally, we prove that the intertwiner space $(X_1 \times_{P} \overline{X_2})/\Gamma$ between two Hamiltonian $\Gamma$-spaces $X_1$ and $X_2$ is a symplectic manifold (whenever it is a smooth manifold).
(3) We study Morita equivalence of quasi-symplectic groupoids. In particular, we prove that Morita equivalent quasi-symplectic groupoids give rise to equivalent momentum map theories. Moreover, the intertwiner space $(X_1 \times_P \overline{X_2})/\Gamma$ depends only on the Morita equivalence class. As a result, we recover various well-known results concerning equivalence of momentum maps, including the Alekseev–Ginzburg–Weinstein linearization theorem and the Alekseev–Malkin–Meinrenken equivalence theorem between quasi-Hamiltonian spaces and Hamiltonian loop group spaces.
Article information
Source: J. Differential Geom., Volume 67, Number 2 (2004), 289-333.
Dates: First available in Project Euclid: 8 December 2004
Permanent link to this document: https://projecteuclid.org/euclid.jdg/1102536203
Digital Object Identifier: doi:10.4310/jdg/1102536203
Mathematical Reviews number (MathSciNet): MR2153080
Zentralblatt MATH identifier: 1106.53057
Citation
Xu, Ping. Momentum Maps and Morita Equivalence. J. Differential Geom. 67 (2004), no. 2, 289--333. doi:10.4310/jdg/1102536203. https://projecteuclid.org/euclid.jdg/1102536203
|
Lines and planes are perhaps the simplest of curves and surfaces in three dimensional space. They also will prove important as we seek to understand more complicated curves and surfaces.
The equation of a line in two dimensions is $ax+by=c$; it is reasonable to expect that a line in three dimensions is given by $ax + by +cz = d$; reasonable, but wrong—it turns out that this is the equation of a plane.
A plane does not have an obvious "direction'' as does a line. It is possible to associate a plane with a direction in a very useful way, however: there are exactly two directions perpendicular to a plane. Any vector with one of these two directions is called normal to the plane. So while there are many normal vectors to a given plane, they are all parallel or anti-parallel to each other.
Suppose two points $\ds (v_1,v_2,v_3)$ and $\ds (w_1,w_2,w_3)$ are in a plane; then the vector $\ds \langle w_1-v_1,w_2-v_2,w_3-v_3\rangle$ is parallel to the plane; in particular, if this vector is placed with its tail at $\ds (v_1,v_2,v_3)$ then its head is at $\ds (w_1,w_2,w_3)$ and it lies in the plane. As a result, any vector perpendicular to the plane is perpendicular to $\ds \langle w_1-v_1,w_2-v_2,w_3-v_3\rangle$. In fact, it is easy to see that the plane consists of precisely those points $\ds (w_1,w_2,w_3)$ for which $\ds \langle w_1-v_1,w_2-v_2,w_3-v_3\rangle$ is perpendicular to a normal to the plane, as indicated in figure 14.5.1. Turning this around, suppose we know that $\langle a,b,c\rangle$ is normal to a plane containing the point $\ds (v_1,v_2,v_3)$. Then $(x,y,z)$ is in the plane if and only if $\langle a,b,c\rangle$ is perpendicular to $\ds \langle x-v_1,y-v_2,z-v_3\rangle$. In turn, we know that this is true precisely when $\ds \langle a,b,c\rangle\cdot\langle x-v_1,y-v_2,z-v_3\rangle=0$. That is, $(x,y,z)$ is in the plane if and only if
$$\eqalign{ \langle a,b,c\rangle\cdot\langle x-v_1,y-v_2,z-v_3\rangle&=0\cr a(x-v_1)+b(y-v_2)+c(z-v_3)&=0\cr ax+by+cz-av_1-bv_2-cv_3&=0\cr ax+by+cz&=av_1+bv_2+cv_3.\cr}$$
Working backwards, note that if $(x,y,z)$ is a point satisfying $ax+by+cz=d$ then
$$\eqalign{ ax+by+cz&=d\cr ax+by+cz-d&=0\cr a(x-d/a)+b(y-0)+c(z-0)&=0\cr \langle a,b,c\rangle\cdot\langle x-d/a,y,z\rangle&=0.\cr}$$
Namely, $\langle a,b,c\rangle$ is perpendicular to the vector with tail at $(d/a,0,0)$ and head at $(x,y,z)$. This means that the points $(x,y,z)$ that satisfy the equation $ax+by+cz=d$ form a plane perpendicular to $\langle a,b,c\rangle$. (This doesn't work if $a=0$, but in that case we can use $b$ or $c$ in the role of $a$. That is, either $a(x-0)+b(y-d/b)+c(z-0)=0$ or $a(x-0)+b(y-0)+c(z-d/c)=0$.)
Thus, given a vector $\langle a,b,c\rangle$ we know that all planes perpendicular to this vector have the form $ax+by+cz=d$, and any surface of this form is a plane perpendicular to $\langle a,b,c\rangle$.
Example 14.5.1 Find an equation for the plane perpendicular to $\langle 1,2,3\rangle$ and containing the point $(5,0,7)$.
Using the derivation above, the plane is $1x+2y+3z=1\cdot5+2\cdot0+3\cdot7=26$. Alternately, we know that the plane is $x+2y+3z=d$, and to find $d$ we may substitute the known point on the plane to get $5+2\cdot0+3\cdot7=d$, so $d=26$.
Example 14.5.2 Find a vector normal to the plane $2x-3y+z=15$.
One example is $\langle 2, -3,1\rangle$. Any vector parallel or anti-parallel to this works as well, so for example $-2\langle 2, -3,1\rangle=\langle -4,6,-2\rangle$ is also normal to the plane.
We will frequently need to find an equation for a plane given certain information about the plane. While there may occasionally be slightly shorter ways to get to the desired result, it is always possible, and usually advisable, to use the given information to find a normal to the plane and a point on the plane, and then to find the equation as above.
Example 14.5.3 The planes $x-z=1$ and $y+2z=3$ intersect in a line. Find a third plane that contains this line and is perpendicular to the plane $x+y-2z=1$.
First, we note that two planes are perpendicular if and only if their normal vectors are perpendicular. Thus, we seek a vector $\langle a,b,c\rangle$ that is perpendicular to $\langle 1,1,-2\rangle$. In addition, since the desired plane is to contain a certain line, $\langle a,b,c\rangle$ must be perpendicular to any vector parallel to this line. Since $\langle a,b,c\rangle$ must be perpendicular to two vectors, we may find it by computing the cross product of the two. So we need a vector parallel to the line of intersection of the given planes. For this, it suffices to know two points on the line. To find two points on this line, we must find two points that are simultaneously on the two planes, $x-z=1$ and $y+2z=3$. Any point on both planes will satisfy $x-z=1$ and $y+2z=3$. It is easy to find values for $x$ and $z$ satisfying the first, such as $x=1, z=0$ and $x=2, z=1$. Then we can find corresponding values for $y$ using the second equation, namely $y=3$ and $y=1$, so $(1,3,0)$ and $(2,1,1)$ are both on the line of intersection because both are on both planes. Now $\langle 2-1,1-3,1-0\rangle=\langle 1,-2,1\rangle$ is parallel to the line. Finally, we may choose $\langle a,b,c\rangle=\langle 1,1,-2\rangle\times \langle 1,-2,1\rangle=\langle -3,-3,-3\rangle$. While this vector will do perfectly well, any vector parallel or anti-parallel to it will work as well, so for example we might choose $\langle 1,1,1\rangle$ which is anti-parallel to it.
Now we know that $\langle 1,1,1\rangle$ is normal to the desired plane and $(2,1,1)$ is a point on the plane. Therefore an equation of the plane is $x+y+z=4$. As a quick check, since $(1,3,0)$ is also on the line, it should be on the plane; since $1+3+0=4$, we see that this is indeed the case.
Note that had we used $\langle -3,-3,-3\rangle$ as the normal, we would have discovered the equation $-3x-3y-3z=-12$, then we might well have noticed that we could divide both sides by $-3$ to get the equivalent $x+y+z=4$.
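A quick numerical check of this example (a sketch using NumPy; the variable names are illustrative):

import numpy as np

p1 = np.array([1.0, 3.0, 0.0])   # points on the line of intersection
p2 = np.array([2.0, 1.0, 1.0])
n1 = np.array([1.0, 1.0, -2.0])  # normal of x + y - 2z = 1
n = np.cross(n1, p2 - p1)        # normal of the desired plane
print(n, n @ p2)                 # [-3. -3. -3.] -12.0, i.e. x + y + z = 4
print(n @ p1 == n @ p2)          # True: (1,3,0) lies on the plane too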
So we now understand equations of planes; let us turn to lines. Unfortunately, it turns out to be quite inconvenient to represent a typical line with a single equation; we need to approach lines in a different way.
Unlike a plane, a line in three dimensions does have an obvious direction, namely, the direction of any vector parallel to it. In fact a line can be defined and uniquely identified by providing one point on the line and a vector parallel to the line (in one of two possible directions). That is, the line consists of exactly those points we can reach by starting at the point and going for some distance in the direction of the vector. Let's see how we can translate this into more mathematical language.
Suppose a line contains the point $\ds (v_1,v_2,v_3)$ and is parallel to the vector $\langle a,b,c\rangle$. If we place the vector $\ds \langle v_1,v_2,v_3\rangle$ with its tail at the origin and its head at $\ds (v_1,v_2,v_3)$, and if we place the vector $\langle a,b,c\rangle$ with its tail at $\ds (v_1,v_2,v_3)$, then the head of $\langle a,b,c\rangle$ is at a point on the line. We can get to any point on the line by doing the same thing, except using $t\langle a,b,c\rangle$ in place of $\langle a,b,c\rangle$, where $t$ is some real number. Because of the way vector addition works, the point at the head of the vector $t\langle a,b,c\rangle$ is the point at the head of the vector $\ds \langle v_1,v_2,v_3\rangle+t\langle a,b,c\rangle$, namely $\ds (v_1+ta,v_2+tb,v_3+tc)$; see figure 14.5.2.
In other words, as $t$ runs through all possible real values, the vector $\ds \langle v_1,v_2,v_3\rangle+t\langle a,b,c\rangle$ points to every point on the line when its tail is placed at the origin. Another common way to write this is as a set of parametric equations:
$$ x= v_1+ta\qquad y=v_2+tb \qquad z=v_3+tc.$$
It is occasionally useful to use this form of a line even in two dimensions; a vector form for a line in the $x$-$y$ plane is $\ds \langle v_1,v_2\rangle+t\langle a,b\rangle$, which is the same as $\ds \langle v_1,v_2,0\rangle+t\langle a,b,0\rangle$.
Example 14.5.4 Find a vector expression for the line through $(6,1,-3)$ and $(2,4,5)$. To get a vector parallel to the line we subtract $\langle 6,1,-3\rangle-\langle2,4,5\rangle=\langle 4,-3,-8\rangle$. The line is then given by $\langle 2,4,5\rangle+t\langle 4,-3,-8\rangle$; there are of course many other possibilities, such as $\langle 6,1,-3\rangle+t\langle 4,-3,-8\rangle$.
Example 14.5.5 Determine whether the lines $\langle 1,1,1\rangle+t\langle 1,2,-1\rangle$ and $\langle 3,2,1\rangle+t\langle -1,-5,3\rangle$ are parallel, intersect, or neither. In two dimensions, two lines either intersect or are parallel; in three dimensions, lines that do not intersect might not be parallel. In this case, since the direction vectors for the lines are not parallel or anti-parallel, we know the lines are not parallel. If they intersect, there must be two values $a$ and $b$ so that $\langle 1,1,1\rangle+a\langle 1,2,-1\rangle= \langle 3,2,1\rangle+b\langle -1,-5,3\rangle$, that is, $$\eqalign{ 1+a&=3-b\cr 1+2a&=2-5b\cr 1-a&=1+3b\cr }$$ This gives three equations in two unknowns, so there may or may not be a solution in general. In this case, it is easy to discover that $a=3$ and $b=-1$ satisfies all three equations, so the lines do intersect at the point $(4,7,-2)$.
Example 14.5.6 Find the distance from the point $(1,2,3)$ to the plane $2x-y+3z=5$. The distance from a point $P$ to a plane is the shortest distance from $P$ to any point on the plane; this is the distance measured from $P$ perpendicular to the plane; see figure 14.5.3. This distance is the absolute value of the scalar projection of $\ds \overrightarrow{\strut QP}$ onto a normal vector $\bf n$, where $Q$ is any point on the plane. It is easy to find a point on the plane, say $(1,0,1)$. Thus the distance is $$ \left|{\overrightarrow{\strut QP}\cdot {\bf n}\over|{\bf n}|}\right|= {\langle 0,2,2\rangle\cdot\langle 2,-1,3\rangle\over|\langle 2,-1,3\rangle|}= {4\over\sqrt{14}}. $$
Example 14.5.7 Find the distance from the point $(-1,2,1)$ to the line $\langle 1,1,1\rangle + t\langle 2,3,-1\rangle$. Again we want the distance measured perpendicular to the line, as indicated in figure 14.5.4. The desired distance is $$ |\overrightarrow{\strut QP}|\sin\theta= {|\overrightarrow{\strut QP}\times{\bf A}|\over|{\bf A}|}, $$ where $\bf A$ is any vector parallel to the line. From the equation of the line, we can use $Q=(1,1,1)$ and ${\bf A}=\langle 2,3,-1\rangle$, so the distance is $$ {|\langle -2,1,0\rangle\times\langle2,3,-1\rangle|\over\sqrt{14}}= {|\langle-1,-2,-8\rangle|\over\sqrt{14}}={\sqrt{69}\over\sqrt{14}}. $$
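Both distance formulas are easy to verify numerically; a NumPy sketch:

import numpy as np

# Example 14.5.6: distance from P = (1,2,3) to the plane 2x - y + 3z = 5
P, Q = np.array([1.0, 2.0, 3.0]), np.array([1.0, 0.0, 1.0])  # Q lies on the plane
n = np.array([2.0, -1.0, 3.0])
print(abs((P - Q) @ n) / np.linalg.norm(n))   # 4/sqrt(14), about 1.069

# Example 14.5.7: distance from P = (-1,2,1) to the line <1,1,1> + t<2,3,-1>
P, Q = np.array([-1.0, 2.0, 1.0]), np.array([1.0, 1.0, 1.0])
A = np.array([2.0, 3.0, -1.0])
print(np.linalg.norm(np.cross(P - Q, A)) / np.linalg.norm(A))  # sqrt(69/14), about 2.220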
Exercises 14.5
Ex 14.5.1 Find an equation of the plane containing $(6,2,1)$ and perpendicular to $\langle 1,1,1\rangle$. (answer)
Ex 14.5.2 Find an equation of the plane containing $(-1,2,-3)$ and perpendicular to $\langle 4,5,-1\rangle$. (answer)
Ex 14.5.3 Find an equation of the plane containing $(1,2,-3)$, $(0,1,-2)$ and $(1,2,-2)$. (answer)
Ex 14.5.4 Find an equation of the plane containing $(1,0,0)$, $(4,2,0)$ and $(3,2,1)$. (answer)
Ex 14.5.5 Find an equation of the plane containing $(1,0,0)$ and the line $\langle 1,0,2\rangle + t\langle 3,2,1\rangle$. (answer)
Ex 14.5.6 Find an equation of the plane containing the line of intersection of $x+y+z=1$ and $x-y+2z=2$, and perpendicular to the $x$-$y$ plane. (answer)
Ex 14.5.7 Find an equation of the line through $(1,0,3)$ and $(1,2,4)$. (answer)
Ex 14.5.8 Find an equation of the line through $(1,0,3)$ and perpendicular to the plane $x+2y-z=1$. (answer)
Ex 14.5.9 Find an equation of the line through the origin and perpendicular to the plane $x+y-z=2$. (answer)
Ex 14.5.10 Find $a$ and $c$ so that $(a,1,c)$ is on the line through $(0,2,3)$ and $(2,7,5)$. (answer)
Ex 14.5.11 Explain how to discover the solution in example 14.5.5.
Ex 14.5.12 Determine whether the lines $\langle 1,3,-1\rangle+t\langle 1,1,0\rangle$ and $\langle 0,0,0\rangle+t\langle 1,4,5\rangle$ are parallel, intersect, or neither. (answer)
Ex 14.5.13 Determine whether the lines $\langle 1,0,2\rangle+t\langle -1,-1,2\rangle$ and $\langle 4,4,2\rangle+t\langle 2,2,-4\rangle$ are parallel, intersect, or neither. (answer)
Ex 14.5.14 Determine whether the lines $\langle 1,2,-1\rangle+t\langle 1,2,3\rangle$ and $\langle 1,0,1\rangle+t\langle 2/3,2,4/3\rangle$ are parallel, intersect, or neither. (answer)
Ex 14.5.15 Determine whether the lines $\langle 1,1,2\rangle+t\langle 1,2,-3\rangle$ and $\langle 2,3,-1\rangle+t\langle 2,4,-6\rangle$ are parallel, intersect, or neither. (answer)
Ex 14.5.16 Find a unit normal vector to each of the coordinate planes.
Ex 14.5.17 Show that $\langle 2,1,3 \rangle + t \langle 1,1,2 \rangle$ and $\langle 3, 2, 5 \rangle + s \langle 2, 2, 4 \rangle$ are the same line.
Ex 14.5.18 Give a prose description for each of the following processes:
a. Given two distinct points, find the line that goes through them.
b. Given three points (not all on the same line), find the plane that goes through them. Why do we need the caveat that not all points be on the same line?
c. Given a line and a point not on the line, find the plane that contains them both.
d. Given a plane and a point not on the plane, find the line that is perpendicular to the plane through the given point.
Ex 14.5.19 Find the distance from $(2,2,2)$ to $x+y+z=-1$. (answer)
Ex 14.5.20 Find the distance from $(2,-1,-1)$ to $2x-3y+z=2$. (answer)
Ex 14.5.21 Find the distance from $(2,-1,1)$ to $\langle 2,2,0\rangle+t\langle 1,2,3\rangle$. (answer)
Ex 14.5.22 Find the distance from $(1,0,1)$ to $\langle 3,2,1\rangle+t\langle 2,-1,-2\rangle$. (answer)
Ex 14.5.23 Find the cosine of the angle between the planes $x+y+z=2$ and $x+2y+3z=8$. (answer)
Ex 14.5.24 Find the cosine of the angle between the planes $x-y+2z=2$ and $3x-2y+z=5$. (answer)
|
The short answer is that the $2 \pi$ comes from the inversion formula.
Here is an informal perspective which gives a hint as to how this can be made more rigorous:
A distribution is defined by its action on a space of nicely behaved test functions.
For a function $f$ from a restricted class of ordinary functions, we can define a distribution $T_f$ by $T_f(\phi) = \int f \phi$. For the function $t \mapsto 1$, we get $T_1(\phi) = \int \phi$.
The '$\delta$ function' is defined by the distribution $T_\delta(\phi) = \phi(0)$. That is, it takes a test function $\phi$ and returns its value at $0$.
The Fourier transform of a distribution is defined by $\hat{T_f}(\phi) = T_f(\hat{\phi})$, where $\hat{\phi}$ is the ordinary Fourier transform of $\phi$.
We see that ${\hat{T}_1}(\phi) = T_1(\hat{\phi}) = \int \hat{\phi}$.
The standard inversion formula shows that $\int \hat{\phi} = 2 \pi \phi(0)$, which gives ${\hat{T}_1}(\phi) = 2 \pi T_\delta(\phi)$, or more succinctly, ${\hat{T}_1} = 2 \pi T_\delta$.
The same sort of analysis shows that ${\hat{T}_{t \mapsto e^{iat}}}(\phi) = 2 \pi \phi(a) = 2 \pi T_\delta(\omega \mapsto \phi(\omega+a))$. The last expression may be written informally as $T_{ \omega \mapsto 2 \pi \delta(\omega-a) } (\phi)$, which is the desired result.
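As a numerical sanity check of the key identity $\int \hat{\phi} = 2\pi\,\phi(0)$ (with the convention $\hat{\phi}(\omega) = \int \phi(t)\,e^{-i\omega t}\,dt$), here is a minimal Python sketch using a Gaussian test function, whose transform is known in closed form:

import numpy as np
from scipy.integrate import quad

phi = lambda t: np.exp(-t**2 / 2)                            # test function
phi_hat = lambda w: np.sqrt(2 * np.pi) * np.exp(-w**2 / 2)   # its transform

integral, _ = quad(phi_hat, -np.inf, np.inf)
print(integral, 2 * np.pi * phi(0))   # both are about 6.2832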
|
I'm trying to prove the following statement:
Let $F$ be a finite field of prime characteristic $p$ and let $E$ be the field generated by $F$ and the elements $\{t^{1/p^n} : n \geq 1\}$, where $t$ is an indeterminate. Then any algebraic extension of $E$ is separable.
I've already proved that the Frobenius Endomorphism on $E$ is surjective. I'm trying to use the following fact:
Given an element $\alpha$ of an algebraic extension of $E$, $\alpha$ is separable if, and only if, the derivative of $Irr(\alpha,E)$ is not zero.
Then, I know that if $\alpha$ is not separable, its irreducible polynomial can be expressed as a polynomial in $x^p$,
$$Irr(\alpha, E) = p(x^p)\in E[x].$$
However, I got stuck here and I don't know what else to do. Could someone give me some advice?
|
Beside the wonderful examples above, there should also be counterexamples, where visually intuitive demonstrations are actually wrong. (e.g. missing square puzzle)
Do you know the other examples?
The never ending chocolate bar!
If only I knew of this as a child..
The trick here is that the left piece that is three bars wide grows at the bottom when it slides up. In reality, what would happen is that there would be a gap at the right between the three-bar-wide piece and the cut. This gap is three bars wide and one-third of a bar tall, explaining how we ended up with an "extra" piece.
Side by side comparison:
Notice how the base of the three-wide bar grows. Here's what it would look like in reality$^1$:
1: Picture source https://www.youtube.com/watch?v=Zx7vUP6f3GM
A bit surprised this hasn't been posted yet. Taken from this page:
Visualization can be misleading when working with alternating series. A classical example is \begin{align*} \ln 2=&\frac11-\frac12+\frac13-\frac14+\;\frac15-\;\frac16\;+\ldots,\\ \frac{\ln 2}{2}=&\frac12-\frac14+\frac16-\frac18+\frac1{10}-\frac1{12}+\ldots \end{align*} Adding the two series, one finds \begin{align*}\frac32\ln 2=&\left(\frac11+\frac13+\frac15+\ldots\right)-2\left(\frac14+\frac18+\frac1{12}+\ldots\right)=\\ =&\frac11-\frac12+\frac13-\frac14+\;\frac15-\;\frac16\;+\ldots=\\ =&\ln2. \end{align*}
Here's how to trick students new to calculus (applicable only if they don't have graphing calculators, at that time):
$0$. Ask them to find the inverse of $x+\sin(x)$, which they will be unable to do. Then,
$1$. Ask them to draw graph of $x+\sin(x)$.
$2$. Ask them to draw graph of $x-\sin(x)$
$3$. Ask them to draw $y=x$ on both graphs.
Here's what they will do :
$4$. Ask them, "What do you conclude?". They will say that the functions are inverses of each other, and then get very confused.
Construct a rectangle $ABCD$. Now identify a point $E$ such that $CD = CE$ and the angle $\angle DCE$ is a non-zero angle. Take the perpendicular bisector of $AD$, crossing at $F$, and the perpendicular bisector of $AE$, crossing at $G$. Label where the two perpendicular bisectors intersect as $H$ and join this point to $A$, $B$, $C$, $D$, and $E$.
Now, $AH=DH$ because $FH$ is a perpendicular bisector; similarly $BH = CH$. $AH=EH$ because $GH$ is a perpendicular bisector, so $DH = EH$. And by construction $BA = CD = CE$. So the triangles $ABH$, $DCH$ and $ECH$ are congruent, and so the angles $\angle ABH$, $\angle DCH$ and $\angle ECH$ are equal.
But if the angles $\angle DCH$ and $\angle ECH$ are equal then the angle $\angle DCE$ must be zero, which is a contradiction.
Proof: Let $O$ be the intersection of the perpendicular bisector of $[BC]$ and the bisector of $\widehat{BAC}$. Then $OB=OC$ and $\widehat{BAO}=\widehat{CAO}$. So the triangles $BOA$ and $COA$ are congruent, and $BA=CA$.
Another example :
From "Pastiches, paradoxes, sophismes, etc." and solution page 23 : http://www.scribd.com/JJacquelin/documents
A copy of the solution is added below. The translation of the comment is :
Explanation: the points A, B and P are not on a straight line (the area of the triangle ABP is 0.5). The graphical highlight is magnified only on the left side of the figure.
I think this could be the goats puzzle (Monty Hall problem) which is nicely visually represented with simple doors.
Three doors, behind 2 are goats, behind 1 is a prize.
You choose a door to open to try and get the prize, but before you open it, one of the other doors is opened to reveal a goat. You then have the option of changing your mind. Should you change your decision?
From looking at the diagram above, you know for a fact that you have a 1/3rd chance of guessing correctly.
Next, a door with a goat in is opened:
A cursory glance suggests that your odds have improved from 1/3rd to a 50/50 chance of getting it right. But the truth is different...
By calculating all possibilities we see that if you change, you have a higher chance of winning.
The easiest way for me to think about it is: if you choose the car first, switching is guaranteed to get a goat; if you choose a goat first, switching is guaranteed to get the car. You're more likely to choose a goat first because there are more goats, so you should always switch.
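A short simulation backs this up (a minimal Python sketch):

import random

def play(switch):
    car, pick = random.randrange(3), random.randrange(3)
    # The host opens a goat door; switching wins exactly when
    # the original pick was a goat.
    return pick != car if switch else pick == car

trials = 100_000
print(sum(play(True) for _ in range(trials)) / trials)   # about 2/3
print(sum(play(False) for _ in range(trials)) / trials)  # about 1/3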
A favorite of mine was always the following:
\begin{align*} \require{cancel}\frac{64}{16} = \frac{\cancel{6}4}{1\cancel{6}} = 4 \end{align*}
I particularly like this one because of how simple it is and how it gets the right answer, though for the wrong reasons of course.
A recent example I found which is credited to Martin Gardner and is similar to some of the others posted here but perhaps with a slightly different reason for being wrong, as the diagonal cut really is straight.
I found the image at a blog belonging to Greg Ross.
Spoilers
The triangles being cut out are not isosceles as you might think but really have base $1$ and height $1.1$ (as they are clearly similar to the larger triangles). This means that the resulting rectangle is really $11\times 9.9$ and not the reported $11\times 10$.
Squaring the circle with Kochanski's Approximation
One of my favorites:
\begin{align} x&=y\\ x^2&=xy\\ x^2-y^2&=xy-y^2\\ \frac{(x^2-y^2)}{(x-y)}&=\frac{(xy-y^2)}{(x-y)}\\ x+y&=y\\ \end{align}
Therefore, $1+1=1$
The error here is in dividing by $x-y$, which is zero since $x=y$.
That $\sum_{n=1}^\infty n = -\frac{1}{12}$. http://www.numberphile.com/videos/analytical_continuation1.html
The way it is presented in the clip is completely incorrect, and could spark a great discussion as to why.
Some students may notice the hand-waving 'let's intuitively accept $1 -1 +1 -1 \ldots = 0.5$'.
If we accept this assumption (and the operations on divergent sums that are usually not allowed) we can get to the result.
A discussion that the seemingly nonsense result directly follows a nonsense assumption is useful. This can reinforce why it's important to distinguish between convergent and divergent series. This can be done within the framework of convergent series.
A deeper discussion can consider the implications of allowing such a definition for divergent sequences - i.e. Ramanujan summation - and can lead to a discussion on whether such a definition is useful given that it leads to seemingly nonsense results. I find this interesting for opening up the idea that mathematics is not set in stone, and it can link to the history of irrational and imaginary numbers (which historically have been considered less-than-rigorous or interesting-but-not-useful).
\begin{equation} \log6=\log(1+2+3)=\log 1+\log 2+\log 3 \end{equation}
Here is one I saw on a whiteboard as a kid... \begin{align*} 1=\sqrt{1}=\sqrt{-1\times-1}=\sqrt{-1}\times\sqrt{-1}=\sqrt{-1}^2=-1 \end{align*}
I might be a bit late to the party, but here is one which my maths teacher showed me, and which I find to be a very nice example of why one shouldn't solve an equation by looking at hand-drawn plots, or even computer-generated ones.
Consider the following equation: $$\left(\frac{1}{16}\right)^x=\log_{\frac{1}{16}}x$$
At least where I live, it is taught in school how the exponential and logarithmic plots look when the base is between $0$ and $1$, so a student should be able to draw a plot which would look like this:
Easy, right? Clearly there is just one solution, lying at the intersection of the graphs with the $x=y$ line (the dashed one; note the plots are each other's reflections in that line).
Well, this is clear at least until you try some simple values of $x$. Namely, plugging in $x=\frac{1}{2}$ or $\frac{1}{4}$ gives you two more solutions! So what's going on?
In fact, I have intentionally put in incorrect plots (you get the picture above if you replace $16$ by $3$). The real plot looks like this:
You might disagree, but to me it still seems like a plot with just one intersection point. But, in fact, the part where the two plots meet contains all three points of intersection. Zooming in on the interval with all the solutions lets one
barely see what's going on:
The oscillations are truly minuscule there. Here is the plot of the
difference of the two functions on this interval:
Note the scale of the $y$ axis: the differences are on the order of $10^{-3}$. Good luck drawing that by hand!
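The three intersections are easy to confirm numerically even though no plot at ordinary scale reveals them; a quick sketch (base $16$ as in the post, with a bisection for the middle root, which sits on the line $y=x$):

```python
import math

f = lambda x: (1 / 16) ** x - math.log(x, 1 / 16)

for x in (0.25, 0.5):
    print(f"f({x}) = {f(x):.2e}")    # both ~ 0: genuine solutions

# the third solution lies between the other two; bisect for it
lo, hi = 0.3, 0.45                    # f(lo) > 0 > f(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
print(f"third root ≈ {lo:.6f}")       # ≈ 0.3642
```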
To get a better idea of what's going on with the plots, here they are with $16$ replaced by $50$:
Here is a measure theoretic one. By 'Picture', if we take a cover of $A:=[0,1]∩\mathbb{Q}$ by open intervals, we have an interval around every rational and so we also cover $[0,1]$; the Lebesgue measure of [0,1] is 1, so the measure of $A$ is 1. As a sanity check, the complement of this cover in $[0,1]$ can't contain any intervals, so its measure is surely negligible.
This is of course wrong, as the set of all rationals has Lebesgue measure $0$, and sets with no intervals need not have measure 0: see the fat Cantor set. In addition, if you fix the 'diagonal enumeration' of the rationals and take $\varepsilon$ small enough, the complement of the cover in $[0,1]$ contains $2^{ℵ_0}$ irrationals. I recently learned this from this MSE post.
There are two examples on Wikipedia:Missing_square_puzzle Sam Loyd's paradoxical dissection, and Mitsunobu Matsuyama's "Paradox". But I cannot think of something that is not a dissection.
This is my favorite.
\begin{align}-20 &= -20\\ 16 - 16 - 20 &= 25 - 25 - 20\\ 16 - 36 &= 25 - 45\\ 16 - 36 + \frac{81}{4} &= 25 - 45 + \frac{81}{4}\\ \left(4 - \frac{9}{2}\right)^2 &= \left(5 - \frac{9}{2}\right)^2\\ 4 - \frac{9}{2} &= 5 - \frac{9}{2}\\ 4 &= 5 \end{align}
You can generalize it to get any $a=b$ that you'd like this way:
\begin{align}-ab&=-ab\\ a^2 - a^2 - ab &= b^2 - b^2 - ab\\ a^2 - a(a + b) &= b^2 -b(a+b)\\ a^2 - a(a + b) + \left(\frac{a + b}{2}\right)^2 &= b^2 -b(a+b) + \left(\frac{a + b}{2}\right)^2\\ \left(a - \frac{a+b}{2}\right)^2 &= \left(b - \frac{a+b}{2}\right)^2\\ a - \frac{a+b}{2} &= b - \frac{a+b}{2}\\ a &= b\\ \end{align}
It's beautiful because visually the "error" is obvious in the line $\left(4 - \frac{9}{2}\right)^2 = \left(5 - \frac{9}{2}\right)^2$, leading the observer to investigate the reverse FOIL process from the step before, even though this line is valid. I think part of the problem also stems from the fact that grade school / high school math education for the average person teaches there's only one "right" way to work problems and you always simplify, so most people are already confused by the un-simplifying process leading up to this point.
I've found that the number of people who can find the error unaided is something less than 1 in 4. Disappointingly, I've had several people tell me the problem stems from the fact that I started with negative numbers. :-(
Solution
When working with variables, people often remember that $c^2 = d^2 \implies c = \pm d$, but forget that when working with concrete values because the tendency to simplify everything leads them to turn squares of negatives into squares of positives before applying the square root. The number of people that I've shown this to who can find the error is a small sample size, but I've found some people can carefully evaluate each line and find the error, and then can't explain it even after they've correctly evaluated $\left(-\frac{1}{2}\right)^2=\left(\frac{1}{2}\right)^2$.
To give a contrarian interpretation of the question, I will chime in with Goldbach's comet, which counts the number of ways an integer can be expressed as the sum of two primes:
It is mathematically "wrong" because there is no proof that this function doesn't equal zero infinitely often, and it is visually deceptive because it appears to be unbounded, with its lower bound increasing at a linear rate.
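The function being plotted is straightforward to compute for small even numbers; a sketch (sympy's primality test keeps it short):

```python
from sympy import isprime

def goldbach_count(n):
    """Ways to write even n as an unordered sum of two primes."""
    return sum(1 for p in range(2, n // 2 + 1) if isprime(p) and isprime(n - p))

for n in range(4, 40, 2):
    print(n, goldbach_count(n))   # the conjecture: this never prints 0 for n >= 4
```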
This is essentially the same as the chocolate-puzzle. It's easier to see, however, that the total square shrinks.
This is a fake visual proof that a sphere has Euclidean geometry. Strangely enough, in 3-dimensional hyperbolic space the curvature of a sphere approaches a nonzero limit as its radius grows, and an infinitely large surface with exactly that limiting curvature (a horosphere) has Euclidean intrinsic geometry and appears somewhat the way that image appears.
I don't know about you, but to me it looks like the hexagons are stretched horizontally. If you also see it that way and you trust your eyes, then you could take that as a visual proof that $\arctan\frac{7}{4} < 60^\circ$. If that's how you saw it, then it's an optical illusion, because the hexagons are really stretched vertically. Unlike some optical illusions, which depict scenes that look different from how they are but are still mathematically possible, this is an optical illusion of a mathematically impossible image: the math shows that $\tan 60^\circ = \sqrt{3}$ and $\sqrt{3} < \frac{7}{4}$, because $7^2 = 49$ while $3 \times 4^2 = 48$. It's like motion: it is mathematically impossible for something to be both moving and not moving, yet it is theoretically possible for your eyes to stop sending movement signals to your brain, so that a moving object appears still, even while your brain can tell by other means that it really is moving.
To draw a hexagonal grid over a square grid accurately, only the math, and not your eye signals, can be trusted. The continued fraction of $\sqrt{3}$ is $[1; 1, 2, 1, 2, 1, 2, \ldots]$, which is less than $\frac{7}{4}$, not more.
I do not think this really qualifies as "visually intuitive", but it is definitely funny.
They do such a great job at dramatizing these kinds of situations. Who cannot remember an instance in which he has been either a "Billy" or a "Pa' and Ma'"? Maybe more "Pa' and Ma'" instances on my part... ;)
|
Authors: Bo Li, Meilin Lu ... Source: [J]. Nanoscale Research Letters (IF 2.524), 2017, Vol. 12 (1), Springer. Abstract: Semiconductor quantum dots (QDs) are widely used in light-emitting diodes and solar cells. Electrochemical modulation is a good way to understand the electrical and optical properties of QDs. In this work, the effects of electrochemical control on photoluminescence (PL) spectra i...
Authors: Bo Li, Minfeng Liao, Baode Li. Source: [J]. Journal of Inequalities and Applications (IF 0.822), 2017, Vol. 2017 (1), Springer. Abstract: Let \(\varphi:\mathbb{R}^{n}\times[0, \infty) \to[0, \infty)\) satisfy that \(\varphi(x, \cdot)\), for any given \(x\in\mathbb{R}^{n}\), is an Orlicz function and \(\varphi(\cdot, t)\) is a Muckenhoupt \(A_{\infty}\) weight uniformly in \(t\in(0, \infty)\). The Musielak-O...
Authors: Bo Li, Jun Li ... Source: [J]. BMC Anesthesiology (IF 1.188), 2017, Vol. 17 (1), Springer. Abstract: Mivacurium is the shortest-acting nondepolarizing muscle relaxant currently available; however, the effect of different dosages and injection times of intravenous mivacurium administration in children of different ages has rarely been reported. This study was aimed to evaluate th...
Authors: ... Qiyi He, Bo Li, Xiaodong Yu. Source: [J]. BMC Biochemistry (IF 1.776), 2017, Vol. 18 (1), Springer. Abstract: Mice were bitten by five-pace vipers (Deinagkistrodon acutus), and then envenomed. It was well known that the snake venom mainly disturbed the blood homeostasis of the envenomed victims. Occasionally, we found that the venom of D. acutus could inhibit the contraction tension of...
Authors: Bo Li, Jie Wei ... Source: [J]. BMC Pulmonary Medicine (IF 2.76), 2017, Vol. 17 (1), Springer. Abstract: Acute respiratory failure (ARF) is still one of the most severe complications in immunocompromised patients. Our previous systematic review showed noninvasive mechanical ventilation (NIV) reduced mortality, length of hospitalization and ICU stay in AIDS/hematological malignancy p...
Authors: Bo Li, Tian-yi Zhao ... Source: [J]. Trials (IF 2.206), 2017, Vol. 18 (1), Springer. Abstract: Previous studies have shown that acupuncture is beneficial for the alleviation of chemotherapy-induced nausea and vomiting. However, there is a lack of clinical evidence concerning the effects of acupoint-matching on chemotherapy-induced nausea and vomiting.
Authors: Bo Li, Yan-Fen Qi ... Source: [J]. Trials (IF 2.206), 2017, Vol. 18 (1), Springer. Abstract: Pancreatic extracorporeal shock wave lithotripsy (P-ESWL) is the first-line therapy for large pancreatic duct stones. Although it is a highly effective and safe procedure for the fragmentation of pancreatic stones, it is still not complication-free. Just like endoscopic retrograd...
Authors: Bo Li, Fei Wu ... Source: [J]. BMC Complementary and Alternative Medicine (IF 2.082), 2017, Vol. 17 (1), Springer. Abstract: Bitter herbs are important in Traditional Chinese Medicine and the Electronic Tongue (e-Tongue) is an instrument that can be trained to evaluate bitterness of bitter herbs and their constituents. The aim of this research was to evaluate bitterness of limonoids and alkaloids from ...
Authors: Bo Li, Yingfeng Zhang ... Source: [J]. Journal of Venomous Animals and Toxins including Tropical Diseases (IF 0.545), 2017, Vol. 23 (1), Springer. Abstract: The five-paced pit viper (Deinagkistrodon acutus), endemic to China and northern Vietnam, is responsible for most snakebites in the Chinese territory. Antivenom produced from horses is the main treatment for snakebites, but it may cause numerous clinical side effects and have o...
|
What are the solutions of the functional equation $f(x) + f(x^2) = 2$? Will they be one to one or many to one? Will they be periodic or not?
Select some positive $q\ne 1$ and define $g(t)=f(q^{2^t})-1$. Then $$ g(t+1)=f\big((q^{2^t})^2\big)-1=\big(2-f(q^{2^t})\big)-1=-g(t)=g(t-1). $$ Thus $g$ is $2$-periodic. But note that $q$ may be arbitrarily close to $1$.
In the end this means that you can divide the positive numbers into equivalence classes $\{q^{4^n}:n\in\Bbb Z\}$ on which $f$ is necessarily constant, $f(q^{4^n})=f(q)$. The classes of $q$ and $q^2$ are connected by the functional equation, and in general the function values on every equivalence class are determined by the values over $(1/4,1/2]$ and $[2,4)$ for the positive axis, and thus for the full real line, adding the trivial values $f(\pm1)=1=f(0)$.
Only by demanding some continuity condition do you get the unique solution which is the constant function $1$.
Let $x=2^{2^t}$ and $f(x)=g(t)$.
Then,
$$g(t)+g(t+1)=2,$$ which is solved by $$g(t)=1+C(-1)^t.$$
As only values one unit apart are related to each other, we can write
$$g(t)=1+c(t-\lfloor t\rfloor)(-1)^{\lfloor t\rfloor}$$ where $c$ is an arbitrary function on $[0,1)$.
Thus $$f(x)=1+h\left(x^{2^{-\lfloor \lg \lg x\rfloor}}\right)(-1)^{\lfloor \lg \lg x\rfloor},$$
where $h$ is an arbitrary function on $[2,4)$ (here $\lg$ denotes $\log_2$).
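One can sanity-check this family numerically for $x>1$; a sketch with an arbitrary (purely illustrative) choice of $h$ on $[2,4)$:

```python
import math

def h(u):                       # any function on [2, 4) works; sine is arbitrary
    return math.sin(u)

def f(x):                       # the solution family for x > 1
    k = math.floor(math.log2(math.log2(x)))
    rep = x ** (2.0 ** (-k))    # representative of x's class, lands in [2, 4)
    return 1 + h(rep) * (-1) ** k

for x in (1.3, 2.0, 5.7, 100.0):
    print(x, f(x) + f(x * x))   # prints 2.0 (up to rounding) every time
```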
|
Modelling and inverse Problems in Nanometrology
Nanometrology is the science of measurement of distances and displacements of objects at the nanoscale. Mathematical modelling and the numerical simulation in nanometrology supports evaluations of measurements and estimations of uncertainties at the nanoscale level.
Scatterometry Introduction
Scatterometry is the investigation of micro- or nanostructured surfaces regarding their geometry and dimension by measurement and analysis of light diffraction from these surfaces. An example of the experimental setup for scatterometry is shown in Fig. 1a. Scatterometry is an indirect optical measurement, i.e., the sought geometrical parameters of the investigated periodic surface structures have to be reconstructed from the measured light diffraction pattern (inverse problem). In a first step, the numerical simulation of the diffraction process for 2D periodic structures is realized by the finite element method (FEM) applied to Maxwell's equations. Then the inverse problem is expressed as a non-linear operator equation, where the operator maps the sought mask parameters to the efficiencies of the diffracted plane wave modes, and the deviation of the calculated from the measured efficiencies is minimized.
Mathematical model
Non-imaging metrology methods like scatterometry are, in contrast to imaging optical methods, not diffraction limited. They give access to the geometrical parameters of periodic structures like structure width (CD), pitch, side-wall angle or line height (cf. Fig. 2). However, scatterometry requires a priori information. Typically, the surface structure needs to be specified as a member of a certain class of gratings and is described by a finite number of parameters, which are confined to certain intervals. The inverse diffraction problem has to be solved to determine the structure parameters from a measured diffraction pattern.
The conversion of measurement data into desired geometrical parameters depends crucially on a high-precision rigorous solution of Maxwell's equations, which can be reduced to the two-dimensional Helmholtz equation if geometry and material properties are invariant in one direction. The typical transmission conditions of electro-magnetic fields yield continuity and jump conditions for the transverse field components; the radiation conditions at infinity are well established. For the numerical solution, many methods have been developed. We use the finite element method (FEM) and truncate the infinite domain of computation to a finite one by coupling it with boundary elements (cf. Fig. 2).
Inverse Problem: Least Squares
Apart from the forward computations of the Helmholtz equation, the solution of the inverse problem, i.e. the reconstruction of the grating profiles and interfaces from measured diffraction data, is the essential task in scatterometry. FEM-based optimization procedures, for example included in the DIPOG software of the Weierstrass-Institute for Applied Analysis and Stochastics in Berlin, can be used to reconstruct the profile parameters. The problem is equivalent to the minimization of an objective functional describing the difference between the calculated and the measured efficiency pattern in dependence of the assumed model parameters.
Fig. 3a shows the shape of the objective functional calculated by varying the heights of the Cr and CrO layers of a chrome-on-glass mask. For the selected admissible range of the two heights, the coordinates of the minimum of the objective functional are very near the expected values of 50 nm for hCr and 18 nm for hCrO. However, they were only found if the initial value for hCrO is set to a value smaller than 60 nm, where the functional has a ridge parallel to the hCr axis. Otherwise the (second) local minimum of the functional on the other side of this ridge is found. Fig. 3b shows a similar situation for a TaN EUV mask where line width ($p_2$) and line height ($p_6$) were varied. For gradient-based optimization methods, the admissible range of model parameters and their initial values can have a strong influence on the accuracy of the reconstruction results.
Maximum Likelihood Method
The classical method to determine the profile parameters from measured light diffraction patterns by optimization is the least squares (LSQ) method. The norm of the difference between the simulated and the measured data is minimized. The right choice of the weights accounting for the variances in the measured data plays a crucial role. Inappropriate weights of the components in the LSQ sum may result in incorrect reconstructed profile parameters and furthermore an overestimation of the associated uncertainties.
The maximum likelihood estimation (MLE) overcomes this pitfall by treating the variances of the measurement data, $\sigma_j^2=\left(a\cdot f_j\left(\mathbf{p}\right)\right)^2+b^2$, as sought parameters too.
From the assumption that the measurement errors $\sigma_j$ are normally distributed, the likelihood function $\mathcal{L}\left(a,b,\mathbf{p}\right)$ can be written as a function of the error parameters $a$ and $b$ and the geometry parameters $\mathbf{p}$:
\[
\mathcal{L}\left(a,b,\mathbf{p}\right)=\prod_{j=1}^{m}\left(2\pi\left(\left(a\cdot f_j\left(\mathbf{p}\right)\right)^2+b^2\right)\right)^{-1/2}\exp\left[-\frac{(f_j\left(\mathbf{p}\right)-y_j)^2}{2\left(\left(a\cdot f_j\left(\mathbf{p}\right)\right)^2+b^2\right)}\right] \]
The maximization of the likelihood function $\mathcal{L}\left(a,b,\mathbf{p}\right)$ yields the estimates of the sought parameters $(a,b,\mathbf{p})$.
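In practice the estimates come from minimizing the negative log-likelihood. A minimal sketch of that step, where `forward` is only a stand-in for the FEM-computed efficiencies $f_j(\mathbf{p})$ and the synthetic data and optimizer choice are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def forward(p, x):
    # stand-in for the rigorously computed efficiencies f_j(p)
    return p[0] * np.exp(-p[1] * x)

def neg_log_likelihood(theta, x, y):
    a, b, *p = theta
    fj = forward(np.asarray(p), x)
    var = (a * fj) ** 2 + b ** 2        # the noise model from the text
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (fj - y) ** 2 / var)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 40)
clean = forward(np.array([2.0, 0.7]), x)
y = clean + rng.normal(scale=0.03 * clean + 0.01)

res = minimize(neg_log_likelihood, x0=[0.1, 0.1, 1.0, 1.0],
               args=(x, y), method="Nelder-Mead")
print(res.x)    # estimates of (a, b, p1, p2)
```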
Line roughness
To get reliable simulations and reconstructions, line edge roughness (LER) of the line-space structures of the lithographic masks has to be taken into account. The periodicity of the line-space structures is disturbed by LER, and significant impacts on the measured light diffraction pattern are expected. Investigations with stochastically disturbed edge positions of the absorber lines reveal that the mean efficiencies of the scattered light are damped exponentially in dependence on the standard deviation of the roughness amplitude $\sigma_r$ and the diffraction order $n_j$:
\[ \widetilde{f}_j\left(\sigma_\mathrm{r},\mathbf{p}\right)=\exp(-\sigma_\mathrm{r}^2 k_j^2)\cdot f_j\left(\mathbf{p}\right) \]
where $k_j = 2 \pi n_j/d$, $n_j$ is the order of the diffractive mode and $d$ the period of the line-space structure. $\sigma_\mathrm{r}^2$ denotes the variance of the edge position. In rigorous finite element simulations this expression for the analytical damping factor of the mean scattered efficiencies was confirmed by using computational FEM domains of large periods for the cross section of the lithographic mask, containing many line-space pairs with stochastically chosen widths, but still with straight edges. Fig. 4 depicts the scheme of the applied 1D LER model.
To verify the proposed systematic impact of LER on the measured efficiencies in terms of the order of the diffractive mode and the standard deviation of the roughness amplitudes, we move on to investigations of randomly perturbed 2D binary gratings. Their edge positions are controlled by an exponentially decaying autocorrelation function, allowing a significantly more realistic modelling of line edge roughness. We simulate the diffraction of line-space gratings by that of arrays of strip-shaped slits (apertures). In order to create apertures with rough boundary lines, we apply an autocorrelation function to describe the variations along the line edges. Considering lines $\{(x(y),y):\;y\in\mathbb{R}\}$ with random variables $x(y)$, we assume a constant mean value $\langle x(y) \rangle=x_0$ and that the correlation \begin{equation*} x(y_1,y_2):=\frac{\big\langle [x(y_1)-x_0] [x(y_2)-x_0] \big\rangle }{x_0^2} \end{equation*} depends on the distance $r=|y_1-y_2|$ only, i.e. $x(y_1,y_2)=x(r)$. Furthermore, we assume the exponentially decaying autocorrelation function \begin{equation*} x(r)=\sigma^2 e^{-(r/\xi)^{2\alpha}},\end{equation*} where $\sigma$ is the standard deviation of the edge positions, $\xi$ is the linear correlation length along the line, and $\alpha$ is called the roughness exponent. Randomized line edge profiles $x$ are generated by calculating or approximating the power spectrum density function $\mathrm{P\!S\!D}(r^{-1})$ associated with the autocorrelation function $x(r)$, then applying to the calculated $\rm P\!S\!D$ a random complex exponential phase term, uniformly distributed in the range $[0,2\pi]$. Subsequently, the inverse Fourier transform of that randomized $\rm P\!S\!D$ provides a rough line edge profile $x$. By this means the rough line edge shown in Fig. 5 was generated for a standard deviation $\sigma$ = 3 nm, a roughness exponent $\alpha$ = 0.5 and a correlation length $\xi$ = 10 nm.
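A sketch of that construction; instead of randomizing the phases of the PSD directly, this equivalent-in-distribution variant colors white noise with the square root of the PSD obtained from the autocorrelation (grid size and units are illustrative):

```python
import numpy as np

def rough_edge(n, dy, sigma, xi, alpha, seed=0):
    """Random edge profile x(y) with autocorrelation sigma^2 * exp(-(r/xi)^(2*alpha))."""
    rng = np.random.default_rng(seed)
    k = np.arange(n)
    r = np.minimum(k, n - k) * dy              # distances on a periodic grid
    acf = sigma ** 2 * np.exp(-(r / xi) ** (2 * alpha))
    psd = np.abs(np.fft.fft(acf))              # abs() clips tiny negative values
    white = rng.standard_normal(n)
    # coloring white noise by sqrt(PSD) imposes the prescribed autocorrelation
    return np.fft.ifft(np.fft.fft(white) * np.sqrt(psd)).real

edge = rough_edge(n=1024, dy=1.0, sigma=3.0, xi=10.0, alpha=0.5)
print(edge.std())   # ≈ sigma, up to sampling fluctuations
```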
Applying the Fraunhofer approximation, the irradiance pattern of illuminated rough apertures far away from the source plane is numerically calculated very efficiently as the 2D Fourier transform of the light distribution in the aperture plane and then compared to that of the unperturbed, 'non-rough' aperture. Many different ensembles of rough apertures representing various roughness patterns, characterized by different values of $\sigma$, $\xi$, and $\alpha$, were examined. Only a slight increase within a range of at most 5% was found for the determined $\sigma_r$ compared to the imposed standard deviation $\sigma$ of the associated rough ensemble. Fig. 6 summarizes these findings for many different ensembles up to a correlation length $\xi$ of 150 nm and for two different values of $\alpha$ = 1.0 and $\alpha$ = 0.5.
In summary it can be said, therefore, that the proposed exponential damping factor is quite robust with respect to different values of the correlation length $\xi$ and the roughness exponent $\alpha$.
|
Hydrogen peroxide is said to be unstable, for it undergoes auto-oxidation on standing/heating:
$$\ce{2H_2O_2 -> 2H_2O + O_2}$$
where $\Delta S=\pu{70.5 J {mol}^{-1}K^{-1}}$ and $\Delta H^{\Theta} = \pu{-98.2 kJ {mol}^{-1}}$.
I wonder whether the decomposition can be reversed in some way under suitable conditions, particularly at infinite pressure and near absolute zero temperature. At first glance this seems to be a problem of entropy reversal, but I seek a more rigorous treatment of the problem with the help of quantum theory. Consider the molecular wavefunctions $\Psi_{1A}$, $\Psi_{1B}$ of the two water molecules and $\Psi_2$ of dioxygen, respectively. By studying the interaction potentials of these three wavefunctions as functions of different thermodynamic coordinates, it may be possible to find the desired suitable conditions.
However, the problem begins here: this is not a case of just two molecules, for which we have a number of methods of analysing the system; I am talking about three molecules, so the calculation is not simple. Moreover, I suspect I may need to study the wavefunction of hydrogen peroxide as well. Overall, I am lacking some theoretical and computational knowledge. Any help is greatly appreciated.
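For what it's worth, a classical back-of-envelope check with the quoted values (taken as molar quantities for the reaction as written) already settles the temperature part:

$$\Delta G = \Delta H - T\Delta S = \pu{-98.2 kJ mol^{-1}} - T\cdot\pu{70.5 J mol^{-1} K^{-1}} < 0 \quad \text{for all } T>0,$$

so with $\Delta H<0$ and $\Delta S>0$ the decomposition stays spontaneous at every temperature, and only the pressure dependence (compressing the released $\ce{O2}$ raises its chemical potential) remains as a lever.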
|
Suppose that $X$ is a compact metric space, $\mathcal{H}$ is a Hilbert space, and that $A: \mathcal{B}(X) \rightarrow \mathcal{B}(\mathcal{H})$ is a Positive Operator Valued Measure (POVM), where $\mathcal{B}(X)$ denotes the Borel $\sigma$-algebra of subsets of $X$. That is,
$A(\Delta)$ is a positive operator in $\mathcal{B}(\mathcal{H})$ for all $\Delta \in \mathcal{B}(X)$;
$A(\emptyset) = 0$ and $A(X) = \text{id}_{\mathcal{H}}$ (the identity operator on $\mathcal{H}$);
If $\{ \Delta_n \}_{n=1}^{\infty}$ is a sequence of pairwise disjoint sets in $\mathcal{B}(X)$, and if $g,h \in \mathcal{H}$, then
$$ \left \langle A\left( \bigcup_{n=1}^{\infty} \Delta_n \right)g , h \right \rangle = \sum_{n=1}^{\infty} \langle A(\Delta_n)g, h \rangle.$$
If $g,h \in \mathcal{H}$, consider the complex measure $A_{g,h}(\cdot) := \langle A(\cdot)g, h \rangle$. I am trying to show that the total variation norm of this measure is less than or equal to $||g||\cdot||h||$. This is true if $A$ is a projection valued measure, and I am pretty sure it is also true for POVM's, but I am not sure how to prove it for the POVM case. Any help or references would be appreciated! Thanks!
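A standard route, sketched: positivity of each $A(\Delta)$ makes $(g,h)\mapsto\langle A(\Delta)g,h\rangle$ a positive sesquilinear form, so it satisfies its own Cauchy-Schwarz inequality; combining that with the Cauchy-Schwarz inequality for sums, for any measurable partition $\{\Delta_n\}$ of $X$,

$$\sum_{n} |\langle A(\Delta_n)g,h\rangle| \le \sum_{n} \langle A(\Delta_n)g,g\rangle^{1/2}\langle A(\Delta_n)h,h\rangle^{1/2} \le \left(\sum_{n} \langle A(\Delta_n)g,g\rangle\right)^{1/2}\left(\sum_{n} \langle A(\Delta_n)h,h\rangle\right)^{1/2} = \|g\|\,\|h\|,$$

where the last equality uses $A(X)=\text{id}_{\mathcal{H}}$; taking the supremum over partitions then bounds the total variation by $\|g\|\cdot\|h\|$.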
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
10:00 Session: Meson Distribution Amplitudes
10:00 On exclusive hard processes with light-mesons - Dr Kornelija Passek-Kumericki (IRB Zagreb)
10:30 Twist-3 effects in electroproduction of pions - Peter Kroll (University of Wuppertal)
11:00 Coffee Break
11:30 Session: Meson DA's and GPD's
11:30 Phenomenological access to GPDs today - Kresimir Kumericki (University of Zagreb)
09:30 Session: Collider QCD
09:30 Meson Distribution Amplitudes from Lattice QCD - Fabian Hutzler (University Regensburg)
10:00 Recent QCD results for Higgs and Drell-Yan production at the LHC - Emanuele Re (CERN)
10:30 A brief look inside jet substructure - Gregory Soyez (IPhT, CEA Saclay)
11:00 Coffee Break
11:30 High-energy scattering with high parton densities: Gluon saturation and the Color Glass Condensate - Edmond Iancu (Université Paris-Saclay (FR))
11:30 Session: TMD and...
09:30 Session: QCD in muon's (g-2)
09:30 Theoretical status of hadronic contributions to $(g-2)_\mu$ - Prof. Gilberto Colangelo (Universität Bern)
10:00 Hadronic corrections to the muon anomalous magnetic moment from lattice QCD - Dr Antoine Gérardin (Johannes Gutenberg University - Mainz)
10:30 An updated determination of HVP contributions to muon $g-2$ and $\alpha(M^2_Z)$ - Zhiqing Philippe Zhang (LAL, Orsay (FR))
11:00 Coffee Break
11:30 Heavy-quarkonium production: NLO vs. experiment - Bernd Kniehl (University of Hamburg)
11:30 Session: Quarkonia as fine probes of QCD
12:00 Accessing Generalized Parton Distributions through the photoproduction of a photon-meson pair - Dr Samuel Wallon
12:30 Light-cone distribution amplitudes of the B-meson - Thorsten Feldmann (Unknown)
13:00 Lunch Break
14:15 Session: Recent progress...
14:15 Recent progress on infrared singularities in QCD scattering amplitudes - Prof. Einan GARDI (University of Edinburgh)
14:45 The distribution of angular momentum in QCD - Dr Cédric Lorcé (Ecole Polytechnique)
15:15 Nucleon structure with the open source PARTONS framework - Hervé MOUTARDE
15:45 Coffee Break
16:15 Session: Status...
16:15 The quark and gluon structure of the proton in the high-precision LHC era - Dr Juan Rojo (VU Amsterdam and Nikhef)
16:45 Overview of TMDs - Daniel Boer
12:00 Extracting gluon Transverse Momentum Dependent (TMD) distributions with quarkonia - Florent Scarpa (IPN Orsay - Paris-Sud U. - CNRS/IN2P3)
12:30 Cutting-edge NLO description of processes with saturation effects - Renaud Boussarie (IFJ Krakow)
13:00 Lunch Break
14:15 Session: QCD Uncertainties in BSM Searches - lattices
14:15 Lattice QCD for precision flavour physics - Carlos Roberto Pena Ruano (Universidad Autonoma de Madrid (ES))
14:45 Lattice QCD form factors of the $D \to \pi(K)$ semileptonic decays and determination of $|V_{cd}|$ and $|V_{cs}|$ - Dr Giorgio Salerno (Università degli Studi di Roma Tre)
15:15 Interplay between hadronic uncertainties and NP effects in the context of $b \to s$ anomalies - Marco Fedele (INFN RM1)
15:45 Coffee Break
16:15 Session: QCD Uncertainties in BSM Searches - sum rules
16:15 QCD Sum Rules: (short) Status Report - Pietro Colangelo (Unknown)
16:45 Uncertainties in $b \to s (d) \ell\ell$ decays due to the form factors - Aoife Bharucha (CNRS)
17:15 $Bc\to J/\psi$ form factor and lepton flavor universality violation in $R(J/\psi)$ - Domagoj Leljak (IRB Zagreb)
20:00 Social Dinner at Restaurant MARTY
12:00 Double quarkonium hadroproduction as a probe of double-parton hard scatterings - Huasheng Shao (Centre National de la Recherche Scientifique (FR))
12:30 Charmonium resonances and bound states from Lattice QCD - Daniel Mohler (Fermilab)
13:00 Lunch Break
14:15 Session: Dealing with soft photons -- QCD perspective
14:15 Soft Photon Contributions to Hadronic Processes from Lattice QCD - Dr Francesco Sanfilippo (INFN Roma Tre)
14:45 Electromagnetic corrections and strong interactions - Marc KNECHT (CNRS)
15:15 Closing drinks
|
Practice Paper 4 Question 12
Consider the square \(ABCD\) of side \(x,\) and the equilateral triangle \(BCE\) as in the figure shown. The square rotates clockwise around \(B\) until \(A\) overlaps \(E,\) then rotates around \(E\) until \(D\) overlaps \(C,\) and so on, until \(A\) retakes its initial position. Sketch the path traced by A and find its length. Give the length of the longest horizontal segment with end points on this path.
Related topics
Warm-up Questions:
Find the perimeter of a circle sector with central angle \(\frac{\pi}{4}\) and a radius of \(2.\)
Calculate the length of the chord between the two ends of the arc of that sector.
Compute the sides ratio of a triangle with angles of \(30^\circ, 60^\circ\) and \(90^\circ.\)
Hints:
Hint 1: Have you tried sketching the movement?
Hint 2: Do you notice any repeating patterns?
Hint 3: How far has point \(A\) travelled, in terms of its distance to get back to its initial position, after 4 rotations?
Hint 4: Why not compute the length of each of the 4 arcs traced by \(A\)?
Hint 5: Have you identified the longest horizontal segment?
Hint 6: To find its length, could you find a triangle with the segment as one of its sides?
Hint 7: Why not calculate one angle in the triangle to solve for its side length?
Solution
We can construct a sketch of the path by first drawing the triangle and drawing 3 squares that overlap each side of the triangle as shown in the diagram. All the vertices of the squares and triangle in this diagram are visited in a clockwise fashion.
Notice that after rotating four times, we arrive at a situation in which point \(A\) has completed one-third of its journey around the triangle, and is taking the position of point \(F.\) In the first rotation, the length of its path is the length of arc \(AE,\) which is \(\frac{\pi x}{6}.\) In the second rotation, point \(A\) does not move. In the third rotation, its path length is \(\frac{\pi x}{6}.\) In the fourth rotation, its path length is \(\frac{\sqrt{2} \pi x}{6}.\) Adding these up and multiplying by three gives us the total path length of \(\frac{\pi x(2+\sqrt{2})}{2}.\)
From the diagram it is apparent that the longest horizontal chord is \(\overline{IF}.\) We calculate this by first noting that \(\angle{BEC}\) is \(\frac{\pi}{3},\) as it is an internal angle of an equilateral triangle. \(\angle{IEB}\) and \(\angle{CEF}\) are both \(\frac{\pi}{6},\) as each of the rotations that created those arcs turns through an angle of \(\frac{\pi}{6}.\) Therefore, \(\angle{IEF}\) is \(\frac{2\pi}{3}.\) Using the fact that \(\overline{IE}\) and \(\overline{EF}\) both have length \(x,\) the law of cosines applied to triangle \(\Delta IEF\) gives \(IF^2 = x^2 + x^2 - 2x^2\cos\frac{2\pi}{3} = 3x^2,\) so the length of the longest horizontal segment is \(\sqrt{3}x.\)
|
Answer
When we sketch the graphs of these two functions, we can see that the graphs are the same. This is an identity: $\tan(\frac{\pi}{2}-\theta) = \cot\theta$
Work Step by Step
$y = \tan(\frac{\pi}{2}-\theta)$ and $y = \cot\theta$
When we sketch the graphs of these two functions, we can see that the graphs are the same. We can use the following identities: $\sin(-a) = -\sin a$ and $\cos(-b) = \cos b$. We can demonstrate the identity:
$\tan(\frac{\pi}{2}-\theta) = \frac{\sin(\frac{\pi}{2}-\theta)}{\cos(\frac{\pi}{2}-\theta)} = \frac{-\sin(\theta-\frac{\pi}{2})}{\cos(\theta-\frac{\pi}{2})} = \frac{\cos\theta}{\sin\theta} = \cot\theta$
This is an identity: $\tan(\frac{\pi}{2}-\theta) = \cot\theta$
|
Practice Paper 4 Question 13
Sketch \((1+x)^y=e\) for all real values. Take care to point out all key points and key behaviour.
Related topics
Warm-up Questions:
Sketch the graph of \(y=\frac{x-1}{x+1}\).
Find the second derivative of \(f(x)=2^{x}.\)
Evaluate \(\lim_{x\to\frac{\pi}{2}} \frac{1}{\tan x}.\)
Hints:
Hint 1: Have you tried rearranging the equation into a form that is easier to work with?
Hint 2: Try exploring what happens to \(y\) near some interesting values of \(x.\)
Hint 3: Are there any other key points on this graph?
Hint 4: ... in particular, any stationary or inflection points?
Hint 5: Remember the original equation is given in a different form, and you may have made an assumption about \(x\) when simplifying it.
Hint 6: Does this permit any more values of \(x\) and \(y?\)
Hint 7: ... more specifically, when \(x<-1.\)
Solution
We first rearrange the equation into something we can work with more easily. Assuming \(x+1>0\) and taking log on both sides gives us \(\ln{(x+1)^y}=1.\) We can write \(\ln{(x+1)^y}\) as \(y\ln{(x+1)}\) so the original equation can be expressed as \(f(x)=\frac{1}{\ln{(x+1)}}.\)
Now let's see what happens around interesting values of \(x,\) namely as it tends to \(\infty,\) \(-1\) and \(0.\) We can find all these limits by seeing what happens to \(\ln(x+1)\) at these values (a sketch may help). The results are: \[ \lim_{x\to\infty}f(x)=0 \\ \lim_{x\to-1}f(x)=0 \\ \lim_{x\to0^-}f(x)=-\infty \\ \lim_{x\to0^+}f(x)=\infty \] Next we take derivatives to identify any stationary and inflection points. The first derivative is \(f'(x)=\frac{1}{(x+1)\ln^2(x+1)},\) which has no zeroes.
The second derivative is \(f''(x)= \frac{2\ln(x+1)+\ln^2(x+1)}{(x+1)^2\ln^4(x+1)} = \frac{\ln(x+1)+2}{(x+1)^2\ln^3(x+1)}.\) This has a zero at \(x=e^{-2}-1.\) Substituting this back into the original equation, we get \(y=-1/2,\) so we know the graph has an inflection point at \(\left(e^{-2}-1,-1/2\right).\)
For \(x+1 \lt 0,\) \((x+1)^y > 0\) if and only if \(y\) is an even integer. Let \(y = 2k.\) We take the \((2k)\)-th root of \(e\) in the initial equation to get \(1+x=\pm\sqrt[2k]{e}.\) We choose the negative solution, as that is what's still unaccounted for by the original method, so we get the points \(\left(-1-\sqrt[2k]{e}, 2k\right).\)
We can put all this information together into a sketch resembling the graph:
|
In order to apply mathematical methods to a physical or “real life” problem, we must formulate the problem in mathematical terms; that is, we must construct a mathematical model for the problem. Many physical problems concern relationships between changing quantities. Since rates of change are represented mathematically by derivatives, mathematical models often involve equations relating an unknown function and one or more of its derivatives. Such equations are differential equations. They are the subject of this book.
Much of calculus is devoted to learning mathematical techniques that are applied in later courses in mathematics and the sciences; you wouldn’t have time to learn much calculus if you insisted on seeing a specific application of every topic covered in the course. Similarly, much of this book is devoted to methods that can be applied in later courses. Only a relatively small part of the book is devoted to the derivation of specific differential equations from mathematical models, or relating the differential equations that we study to specific applications. In this section we mention a few such applications. The mathematical model for an applied problem is almost always simpler than the actual situation being studied, since simplifying assumptions are usually required to obtain a mathematical problem that can be solved. For example, in modeling the motion of a falling object, we might neglect air resistance and the gravitational pull of celestial bodies other than Earth, or in modeling population growth we might assume that the population grows continuously rather than in discrete steps.
A good mathematical model has two important properties:
It’s sufficiently simple so that the mathematical problem can be solved.
It represents the actual situation sufficiently well so that the solution to the mathematical problem predicts the outcome of the real problem to within a useful degree of accuracy.
If results predicted by the model don’t agree with physical observations, the underlying assumptions of the model must be revised until satisfactory agreement is obtained.
We will now give examples of mathematical models involving differential equations. We will return to these problems at the appropriate times, as we learn how to solve the various types of differential equations that occur in the models. All the examples in this section deal with functions of time, which we denote by \(t\). If \(y\) is a function of \(t\), \(y'\) denotes the derivative of \(y\) with respect to \(t\); thus,
\[y' = \dfrac{dy}{dt}.\]
Population Growth and Decay
Although the number of members of a population (people in a given country, bacteria in a laboratory culture, wildflowers in a forest, etc.) at any given time t is necessarily an integer, models that use differential equations to describe the growth and decay of populations usually rest on the simplifying assumption that the number of members of the population can be regarded as a differentiable function \(P = P(t)\). In most models it is assumed that the differential equation takes the form
\[P' = a(P)P \label{1.1.1}\]
where \(a\) is a continuous function of \(P\) that represents the rate of change of population per unit time per individual. In the Malthusian model, it is assumed that \(a(P)\) is a constant, so Equation \ref{1.1.1} becomes
\[P' = aP. \label{1.1.2}\]
This model assumes that the numbers of births and deaths per unit time are both proportional to the population. The constants of proportionality are the birth rate (births per unit time per individual) and the death rate (deaths per unit time per individual); \(a\) is the birth rate minus the death rate. You learned in calculus that if \(c\) is any constant then
\[P = ce^{at} \label{1.1.3}\]
satisfies Equation \ref{1.1.2}, so Equation \ref{1.1.2} has infinitely many solutions. To select the solution of the specific problem that we are considering, we must know the population \(P_0\) at an initial time, say \(t = 0\). Setting \(t = 0\) in Equation \ref{1.1.3} yields \(c = P(0) = P_0\), so the applicable solution is
\[P(t) = P_0e^{at}.\]
This implies that
\[\lim_{t\to\infty}P(t)=\left\{\begin{array}{cl}\infty&\mbox{ if }a>0,\\ 0&\mbox{ if }a<0; \end{array}\right.\]
that is, the population approaches infinity if the birth rate exceeds the death rate, or zero if the death rate exceeds the birth rate.
To see the limitations of the Malthusian model, suppose we are modeling the population of a country, starting from a time \(t = 0\) when the birth rate exceeds the death rate (so \(a > 0\)), and the country’s resources in terms of space, food supply, and other necessities of life can support the existing population. Then the prediction \(P = P_0e^{at}\) may be reasonably accurate as long as it remains within limits that the country’s resources can support. However, the model must inevitably lose validity when the prediction exceeds these limits. (If nothing else, eventually there will not be enough space for the predicted population!) This flaw in the Malthusian model suggests the need for a model that accounts for limitations of space and resources that tend to oppose the rate of population growth as the population increases.
Perhaps the most famous model of this kind is the Verhulst model, where Equation \ref{1.1.2} is replaced by
\[\label{eq:1.1.4} P'=aP(1-\alpha P),\]
where \(\alpha\) is a positive constant. As long as \(P\) is small compared to \(1/\alpha\), the ratio \(P'/P\) is approximately equal to \(a\). Therefore the growth is approximately exponential; however, as \(P\) increases, the ratio \(P'/P\) decreases as opposing factors become significant.
Equation \ref{eq:1.1.4} is the logistic equation. You will learn how to solve it in Section 1.2 (see Exercise 2.2.28). The solution is
\[P={P_0\over\alpha P_0+(1-\alpha P_0)e^{-at}},\]
where \(P_0=P(0)>0\). Therefore \(\lim_{t\to\infty}P(t)=1/\alpha\), independent of \(P_0\).
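A minimal sketch reproducing curves of this kind (the parameter values are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

a, alpha = 1.0, 0.5                  # illustrative growth and crowding constants
t = np.linspace(0, 10, 400)
for P0 in (0.2, 1.0, 2.0, 5.0):
    P = P0 / (alpha * P0 + (1 - alpha * P0) * np.exp(-a * t))
    plt.plot(t, P, label=f"$P_0 = {P0}$")
plt.axhline(1 / alpha, ls="--", color="gray")   # limiting population 1/alpha
plt.xlabel("t"); plt.ylabel("P"); plt.legend(); plt.show()
```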
Figure \(\PageIndex{1}\) shows typical graphs of \(P\) versus \(t\) for various values of \(P_0\).
Figure \(\PageIndex{1}\)
Newton’s Law of Cooling
According to Newton’s law of cooling, the temperature of a body changes at a rate proportional to the difference between the temperature of the body and the temperature of the surrounding medium. Thus, if \(T_m\) is the temperature of the medium and \(T = T(t)\) is the temperature of the body at time \(t\), then
\[T' = -k(T - T_m) \label{1.1.5}\]
where \(k\) is a positive constant and the minus sign indicates that the temperature of the body increases with time if it is less than the temperature of the medium, or decreases if it is greater. We will see in Section 4.2 that if \(T_m\) is constant then the solution of Equation \ref{1.1.5} is
\[T = T_m + (T_0 −T_m)e^{−kt} \label{1.1.6}\]
where \(T_0\) is the temperature of the body when \(t = 0\). Therefore
\[\lim_{t→∞} T(t) = T_m \nonumber\]
independent of \(T_0\). (Common sense suggests this. Why?)
Figure \(\PageIndex{2}\) shows typical graphs of \(T\) versus \(t\) for various values of \(T_0\).
Figure \(\PageIndex{2}\): Temperature according to Newton’s Law of Cooling
Assuming that the medium remains at constant temperature seems reasonable if we are considering a cup of coffee cooling in a room, but not if we are cooling a huge cauldron of molten metal in the same room. The difference between the two situations is that the heat lost by the coffee isn’t likely to raise the temperature of the room appreciably, but the heat lost by the cooling metal is. In this second situation we must use a model that accounts for the heat exchanged between the object and the medium. Let \(T = T(t)\) and \(T_m = T_m(t)\) be the temperatures of the object and the medium respectively, and let \(T_0\) and \(T_{m0}\) be their initial values. Again, we assume that \(T\) and \(T_m\) are related by Equation \ref{1.1.5}. We also assume that the change in heat of the object as its temperature changes from \(T_0\) to \(T\) is \(a(T - T_0)\) and the change in heat of the medium as its temperature changes from \(T_{m0}\) to \(T_m\) is \(a_m(T_m - T_{m0})\), where \(a\) and \(a_m\) are positive constants depending upon the masses and thermal properties of the object and medium respectively. If we assume that the total heat of the object and the medium remains constant (that is, energy is conserved), then
\[a(T −T_0) + a_m(T_m −T_{m0}) = 0. \nonumber\]
Solving this for \(T_m\) and substituting the result into Equation \ref{1.1.5} yields the differential equation
\[T ^ { \prime } = - k \left( 1 + \frac { a } { a _ { m } } \right) T + k \left( T _ { m 0 } + \frac { a } { a _ { m } } T _ { 0 } \right) \nonumber\]
for the temperature of the object. After learning to solve linear first order equations, you’ll be able to show (Exercise 4.2.17) that
\[T = \frac { a T _ { 0 } + a _ { m } T _ { m 0 } } { a + a _ { m } } + \frac { a _ { m } \left( T _ { 0 } - T _ { m 0 } \right) } { a + a _ { m } } e ^ { - k \left( 1 + a / a _ { m } \right) t }\]
Glucose Absorption by the Body
Glucose is absorbed by the body at a rate proportional to the amount of glucose present in the blood stream. Let \(λ\) denote the (positive) constant of proportionality. Suppose there are \(G_0\) units of glucose in the bloodstream when \(t = 0\), and let \(G = G(t)\) be the number of units in the bloodstream at time \(t > 0\). Then, since the glucose being absorbed by the body is leaving the bloodstream, \(G\) satisfies the equation
\[G' = -\lambda G. \label{1.1.7}\]
From calculus you know that if \(c\) is any constant then
\[G = ce^{−λt} \label{1.1.8}\]
satisfies Equation \ref{1.1.7}, so Equation \ref{1.1.7} has infinitely many solutions. Setting \(t = 0\) in Equation \ref{1.1.8} and requiring that \(G(0) = G_0\) yields \(c = G_0\), so
\[G(t) = G_0e^{−λt}.\]
Now let’s complicate matters by injecting glucose intravenously at a constant rate of \(r\) units of glucose per unit of time. Then the rate of change of the amount of glucose in the bloodstream per unit time is
\[G' = −λG + r \label{1.1.9}\]
where the first term on the right is due to the absorption of the glucose by the body and the second term is due to the injection. After you’ve studied Section 2.1, you’ll be able to show that the solution of Equation \ref{1.1.9} that satisfies \(G(0) = G_0\) is
\[G = \frac { r } { \lambda } + \left( G _ { 0 } - \frac { r } { \lambda } \right) e ^ { - \lambda t }\]
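A quick symbolic check of this formula (sympy verifies both the differential equation and the initial condition):

```python
import sympy as sp

t, lam, r, G0 = sp.symbols("t lambda r G_0", positive=True)
G = r / lam + (G0 - r / lam) * sp.exp(-lam * t)

print(sp.simplify(sp.diff(G, t) - (-lam * G + r)))   # 0: satisfies G' = -lam*G + r
print(sp.simplify(G.subs(t, 0)))                     # G_0: satisfies G(0) = G_0
```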
Graphs of this function are similar to those in Figure \(\PageIndex{2}\). (Why?)
Spread of Epidemics
One model for the spread of epidemics assumes that the number of people infected changes at a rate proportional to the product of the number of people already infected and the number of people who are susceptible, but not yet infected. Therefore, if \(S\) denotes the total population of susceptible people and \(I = I(t)\) denotes the number of infected people at time \(t\), then \(S - I\) is the number of people who are susceptible, but not yet infected. Thus, \(I' = rI(S - I)\), where \(r\) is a positive constant. Assuming that \(I(0) = I_0\), the solution of this equation is
\[I =\dfrac{SI_0}{I_0 + (S −I_0)e^{−rSt}}\]
(Exercise 2.2.29). Graphs of this function are similar to those in Figure \(\PageIndex{1}\). (Why?) Since \(\lim_{t\to\infty} I(t) = S\), this model predicts that all the susceptible people eventually become infected.
Newton’s Second Law of Motion
According to Newton’s second law of motion, the instantaneous acceleration \(a\) of an object with constant mass \(m\) is related to the force \(F\) acting on the object by the equation \(F = ma\). For simplicity, let’s assume that \(m = 1\) and the motion of the object is along a vertical line. Let \(y\) be the displacement of the object from some reference point on Earth’s surface, measured positive upward. In many applications, there are three kinds of forces that may act on the object:
A force such as gravity that depends only on the position \(y\), which we write as \(-p(y)\), where \(p(y) > 0\) if \(y \ge 0\).
A force such as atmospheric resistance that depends on the position and velocity of the object, which we write as \(-q(y,y')y'\), where \(q\) is a nonnegative function and we’ve put \(y'\) “outside” to indicate that the resistive force is always in the direction opposite to the velocity.
A force \(f = f(t)\), exerted from an external source (such as a towline from a helicopter) that depends only on \(t\).
In this case, Newton’s second law implies that
\[y'' = −q(y,y')y' −p(y) + f(t), \nonumber\]
which is usually rewritten as
\[y'' + q(y,y')y' + p(y) = f(t). \nonumber\]
Since the second (and no higher) order derivative of \(y\) occurs in this equation, we say that it is a second order differential equation.
Interacting Species: Competition
Let \(P=P(t)\) and \(Q=Q(t)\) be the populations of two species at time \(t\), and assume that each population would grow exponentially if the other did not exist; that is, in the absence of competition we would have
\[\label{eq:1.1.10} P'=aP \quad \text{and} \quad Q'=bQ,\]
where \(a\) and \(b\) are positive constants. One way to model the effect of competition is to assume that the growth rate per individual of each population is reduced by an amount proportional to the other population, so Equation \ref{eq:1.1.10} is replaced by
\[\begin{align*} P' &= aP-\alpha Q\\[4pt] Q' &= -\beta P+bQ,\end{align*}\]
where \(\alpha\) and \(\beta\) are positive constants. (Since negative population doesn’t make sense, this system works only while \(P\) and \(Q\) are both positive.) Now suppose \(P(0)=P_0>0\) and \(Q(0)=Q_0>0\). It can be shown (Exercise 10.4.42) that there’s a positive constant \(\rho\) such that if \((P_0,Q_0)\) is above the line \(L\) through the origin with slope \(\rho\), then the species with population \(P\) becomes extinct in finite time, but if \((P_0,Q_0)\) is below \(L\), the species with population \(Q\) becomes extinct in finite time. Figure \(\PageIndex{3}\) illustrates this. The curves shown there are given parametrically by \(P=P(t), Q=Q(t),\ t>0\). The arrows indicate direction along the curves with increasing \(t\).
Figure \(\PageIndex{3}\): Populations of competing species
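A numerical illustration of the dichotomy (constants chosen for convenience; with \(a=b\) and \(\alpha=\beta\) the critical line \(L\) is simply \(Q=P\)):

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, alpha, beta = 1.0, 1.0, 0.5, 0.5

def rhs(t, z):
    P, Q = z
    return [a * P - alpha * Q, -beta * P + b * Q]

def extinct(t, z):                    # stop when either population hits zero
    return min(z)
extinct.terminal, extinct.direction = True, -1

for P0, Q0 in [(3.0, 1.0), (1.0, 3.0)]:   # one start on each side of L
    sol = solve_ivp(rhs, (0, 20), [P0, Q0], events=extinct, max_step=0.01)
    P, Q = sol.y[:, -1]
    loser = "P" if P < Q else "Q"
    print(f"start ({P0}, {Q0}): {loser} dies out at t ≈ {sol.t[-1]:.2f}")
```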
|
This is the problem :
Given two arrays of $n$ elements A and B, let's define their sum as $$ A + B = \{ a + b \mid a \in A \text{ and } b \in B \}. $$ Calculate $A + A$ in optimal time given that $A[i] \in [1,10n^{1.5}]$ for all $i$.
In order to solve this problem I used FFT by representing a polynomial in the following way:
$$ p(x) = \sum_{i=1}^{n} x^{A[i]}. $$
After multiplying this polynomial by itself using the FFT, I get the desired result in $\Theta(n^{1.5} \log n)$ time.
However, basic combinatorics gives us that there are $O(n^2)$ pairs contributing to $A + A$. How can I possibly produce them and copy them to a result array, all in less than $O(n^2)$ time, using FFT?
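The resolution is that $A+A$ is a set: every element is at most $2\cdot 10n^{1.5}$, so only $O(n^{1.5})$ distinct values exist, even though $O(n^2)$ pairs produce them. A sketch of the whole computation (numpy's FFT stands in for a hand-rolled one):

```python
import numpy as np

def sumset(A):
    m = max(A)
    p = np.zeros(m + 1)
    for a in A:                       # build p(x) = sum_i x^(A[i])
        p[a] += 1
    size = 2 * m + 1                  # degree bound of p(x)^2
    coeffs = np.fft.irfft(np.fft.rfft(p, size) ** 2, size)
    # positive coefficients mark achievable sums; 0.5 threshold kills FFT noise
    return [s for s in range(size) if coeffs[s] > 0.5]

print(sumset([1, 3, 4]))   # [2, 4, 5, 6, 7, 8]
```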
|
"A blast of such despair that it can send destruction even into other dimensions." (Dissidia 012 Final Fantasy)
Supernova (スーパーノヴァ, Sūpānova?), also known as Super Nova and Sunburst, is a recurring enemy ability. It generally deals a large amount of non-elemental magic damage to all opponents. The same spell is also used by several enemies as a powerful move intended to finish off the player party.
Appearances
Super Nova is a powerful attack used by Safer∙Sephiroth. Its animation in the original Japanese version was short and it didn't deal fractional damage, instead dealing damage in the ~2000 range. The animation was greatly extended in the worldwide release, and the attack's properties were changed. The original Super Nova animation showed three bodies slowly encompassed in bubbles of white light, which completely disappear after the light fades. The extended Super Nova animation is the longest in the game, showing the destruction of most of the planets, including the dwarf planet Pluto in the solar system (Saturn, despite losing its rings, is spared from destruction, as well as Uranus, Neptune and Mars) by use of a comet that originated from outside the Milky Way.
The extended version inflicts massive damage and status effects, but cannot actually kill a target as it does Gravity-based damage reducing the player's HP by 15/16th of their current HP, and may cause Confuse, Silence, and Slow. The attack can be used more than once and accompanies every use of the move Pale Horse.
Jason Greenberg, the only artist working on the original PC port, recalled a crash bug that happened during Sephiroth's Super Nova summon. Near the end of the development cycle, many team members were done with their work and simply helped test the game as much as possible. Greenberg has mused he spent at least a full 24 hours playing that one battle.
Sunburst is a Mix ability that deals fixed 19,998 damage to all opponents.
The following items when mixed result in Sunburst:
Blessed Gem + Mana Tablet, Mana Tonic.
Dark Matter + Accuracy Sphere, Amulet, Attribute Sphere, Blessed Gem, Blk Magic Sphere, Clear Sphere, Dark Matter, Defense Sphere, Designer Wallet, Door to Tomorrow, Evasion Sphere, Fire Gem, Fortune Sphere, Friend Sphere, Gambler's Spirit, HP Sphere, Ice Gem, Lightning Gem, Luck Sphere, Lv. 1 Key Sphere, Lv. 2 Key Sphere, Lv. 3 Key Sphere, Lv. 4 Key Sphere, Magic Def Sphere, Magic Sphere, Master Sphere, MP Sphere, Return Sphere, Shining Gem, Skill Sphere, Special Sphere, Agility Sphere, Strength Sphere, Supreme Gem, Teleport Sphere, Underdog's Secret, Warp Sphere, Water Gem, Wht Magic Sphere, Wings to Discovery, Winning Formula.
Gambler's Spirit + Underdog's Secret, Winning Formula.
Shining Gem + Supreme Gem, Three Stars.
Supreme Gem + Amulet, Blessed Gem, Door to Tomorrow, Gambler's Spirit, Mana Tablet, Mana Tonic, Supreme Gem, Three Stars, Underdog's Secret, Warp Sphere, Wings to Discovery, Winning Formula.
Three Stars + Blessed Gem, Three Stars.
Underdog's Secret + Underdog's Secret, Winning Formula.
Wings to Discovery + Gambler's Spirit, Underdog's Secret, Winning Formula.
Winning Formula + Winning Formula.
Sunburst is a Mix ability, which deals roughly 6,000 non-elemental damage to all enemies.
The following combinations result in Sunburst:
Supreme Gem + Potion, Hi-Potion, X-Potion, Mega-Potion, Ether, Turbo Ether, Phoenix Down, Mega Phoenix, Elixir, Megalixir, Antidote, Soft, Eye Drops, Echo Screen, Holy Water, Remedy, Budget Grenade, Grenade, S-Bomb, M-Bomb, L-Bomb, Sleep Grenade, Silence Grenade, Dark Grenade, Petrify Grenade, Bomb Fragment, Bomb Core, Fire Gem, Antarctic Wind, Arctic Wind, Ice Gem, Electro Marble, Lightning Marble, Lightning Gem, Fish Scale, Dragon Scale, Water Gem, Shadow Gem, Shining Gem, Blessed Gem, Supreme Gem, Poison Fang, Silver Hourglass, Gold Hourglass, Candle of Life, Farplane Shadow, Dark Matter, Chocobo Feather, Chocobo Wing, Lunar Curtain, Light Curtain, Star Curtain, Healing Spring, Mana Spring, Stamina Spring, Soul Spring, Dispel Tonic, Stamina Tablet, Mana Tablet, Stamina Tonic, Mana Tonic, Twin Stars, Three Stars, Hero Drink, Gysahl Greens, Sylkis Greens, Mimett Greens, Pahsana Greens
Ereshkigal can use Super Nova in battle. In the final battle against Bhunivelze's third form, he uses an ability called Hypernova, which is based on Supernova. Hypernova deals heavy damage and dispels all status effects and status ailments from himself and Lightning.
Nael deus Darnus uses Super Nova during the final phase of the fight against her inside the Holocharts of the Second Coil of Bahamut. The ability targets a single player to create an area-of-effect field that slows the player's movement speed and inflicts heavy damage to the target and other players near them.
Supernova is the level 4 offensive-type Special move for G.Swords. It allows the user to attack all foes with a powerful light-based attack. For the duration of the music, all allies' P.Atk is raised by 50%. Music bonuses will stack with other attribute bonuses.
Ruination to all.
Super Nova is Sephiroth's EX Burst. Its animation is similar to its Final Fantasy VII incarnation, although the animation is much shorter. It consists of Sephiroth posing while the mathematical formula used in the original attack is shown.
If the player completes the EX Burst correctly the enemy is thrown into the expanding sun. Scenes like the comet appearing, the solar system being destroyed, and the sun exploding are not present. If the player fails to fill the bar entirely, Sephiroth simply performs an exploding slash with his katana. Super Nova is labeled as a Limit Break.
Super Nova is Sephiroth's EX Burst. On perfect execution it has slightly less power than in Dissidia, but is otherwise identical.
Supernova is an enemy ability used by Safer∙Sephiroth during battle.
Super Nova is Sephiroth's Champion summon ability. It inflicts neutral physical damage to one enemy, ignoring defense. It also grants Strength+ to all allies. When used, Sephiroth appears surrounded by flames, reminiscent of the Nibelheim Incident, then rises up into the air, covered in formulas ranging in complexity from the area of a circle to equations that appear to be from celestial mechanics and relativity. Sephiroth then unleashes Meteor, referring to the summoning of Meteor in Final Fantasy VII. It is unlocked as downloadable content (a pre-order bonus also included with the Day One Edition) and costs 1★ gauge to use. When used, "One-Winged Angel" from Final Fantasy VII plays.
Non-Final Fantasy appearances
Supernova is an ability used by Sephiroth in Kingdom Hearts. He drops meteorites around himself, spins them with himself at the center, then fires the energy forward and detonates it.
Etymology
A supernova is an astronomical event that occurs during the last stellar evolutionary stages of a massive star's life, whose dramatic and catastrophic destruction is marked by one final titanic explosion. For a short time, this causes the sudden appearance of a 'new' bright star, before slowly fading from sight over several weeks or months.
A sunburst is a design or figure commonly used in architectural ornaments and design patterns. It consists of rays or "beams" radiating out from a central disk in the manner of sunbeams.
Trivia
The equations used at the start of Sephiroth's Super Nova are no coincidence; they are genuine equations used in astrophysics. They are as follows:
$ \Phi = WU\gamma + RU\rho + SU\gamma U\rho $
$ W = -SU\gamma\Phi $
$ AU = (GMek^{-2})^{1/3} $
$ n = \pi r^2 $
The first two equations calculate the sun's and planet's potential attractive forces. The third equation calculates the Earth's potential attractive forces, and the fourth misstates that the asteroid is working on a two-dimensional level, as it is the equation for the area of a circle, all to direct the energy used to cause the supernova. However, it should be noted that the sun cannot actually go supernova, being too small to explode in such a fashion.
Ptolemy's diagram of the celestial spheres, also known as the classical planets, or the "Seven Heavens", is used in the Super Nova animation. Since there are seven, this can be seen as another allusion the game makes to the number seven. Pluto is among the planets destroyed by Super Nova, as it was still considered a planet until eight years after the game's release. However, neither Pluto, Neptune, nor Uranus had been discovered when Ptolemy conceived of the celestial spheres. The Ptolemy celestial diagram also appears during Eden's summon animation in Final Fantasy VIII.
In the video game Disgaea 4, the animation for the spell "Tera Star" parodies the Final Fantasy VII animation for Supernova. When the spell is cast, the scene pans into space and names the planets Mars and Jupiter as they pass by, but the third object is a gigantic robot construct. The text humorously names it "Saturn?!" as the camera snaps back to center on the robot, and the robot fires a gigantic laser at the Earth, the planets swirling around the beam as it strikes its target.
|
While I was working on my stuff, another question suddenly came to mind, the one you see below
$$\int_0^\infty \frac{ \left(\sum_{n=1}^\infty\sin\left(\frac{x}{2^n}\right)\right)-\sin(x)}{x^2} \ dx$$
Which way should I look at this integral?
You can write your integral as $$\int_0^\infty t^{-2} \sum\limits_{\nu \geqslant 1} g_\nu (t)\,dt=\int_0^\infty t^{-2}g(t)\,dt$$ where $g_\nu(t)=\sin(t/2^\nu)-\sin(t)/2^{\nu}$ and $g(t)=\sum\limits_{\nu\geqslant 1}g_\nu(t)$. Using the equation $g(2t)=g(t)+2\sin t-\sin 2t$ relating $g(2t)$ and $g(t)$, and the change of variables $t=2u$ in the integral, I get that $$\int_0^\infty \frac{g(t)}{t^2}dt=\int_0^{\infty}\frac{2\sin t-\sin 2t}{t^2}dt$$
This is a Frullani type integral which you can evaluate to $2\log 2$.
Note that $$ \begin{align} \int_0^\infty\frac{\lambda\sin(x)-\sin(\lambda x)}{x^2}\mathrm{d}x &=\lim_{a\to0}\left(\int_a^\infty\frac{\lambda\sin(x)}{x^2}\mathrm{d}x-\int_{a\lambda}^\infty\frac{\lambda\sin(x)}{x^2}\mathrm{d}x\right)\\ &=\lambda\lim_{a\to0}\int_a^{a\lambda}\frac{\sin(x)}x\frac{\mathrm{d}x}x\\[6pt] &=\lambda\log(\lambda)\tag{1} \end{align} $$ Applying $(1)$ to the question gives $$ \begin{align} \int_0^\infty\frac1{x^2}\left(\left(\sum_{n=1}^\infty\sin\left(\frac{x}{2^n}\right)\right)-\sin(x)\right)\mathrm{d}x &=\sum_{n=1}^\infty\int_0^\infty\frac{\sin(2^{-n}x)-2^{-n}\sin(x)}{x^2}\mathrm{d}x\\ &=-\sum_{n=1}^\infty2^{-n}\log\left(2^{-n}\right)\\ &=\log(2)\sum_{n=1}^\infty n2^{-n}\\[6pt] &=2\log(2)\tag{2} \end{align} $$
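As a quick numerical sanity check of that value (an illustrative addition using SciPy; the oscillatory tail converges slowly, so treat the output as approximate):

import numpy as np
from scipy.integrate import quad

# int_0^inf (2 sin t - sin 2t)/t^2 dt should match 2 log 2 ~ 1.3863
val, err = quad(lambda t: (2*np.sin(t) - np.sin(2*t)) / t**2, 0, np.inf, limit=1000)
print(val, 2*np.log(2))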
|
I have found answers on how to calculate the self-inductance of toroid of rectangular cross section, however my question says that "The winding are seen as a thin homogeneous currentlayer around the core" (excuse the translation). What does that mean for N? Does it mean N=1?
I am an undergrad physics major in my final semester currently taking Intro to Thermodynamics. As a final project, each student must choose a topic related to thermodynamics that is more advanced than what is covered in the curriculum and write a paper and present our findings to the class on...
What happens when you apply power to a toroidal solenoid with an iron ring inside? Does the ring move? Does the speed of movement depend on the amount of power? Sorry if this is too easy, I have no education in physics.
1. Homework Statement
2. Homework Equations
##\oint_{C} B\,d\ell = \mu I_{enc}##, ##B_{normal}## continuous across boundary, ##H_{parallel}## continuous across boundary
3. The Attempt at a Solution
$$\oint_{C} B\,d\ell = \mu I_{enc} \rightarrow B = \frac{\mu NI}{2\pi r}$$
Any help much...
Hi, I heard it was possible, if you symmetrically wind a toroid, that you can get near total internal confinement of the magnetic field in the axial plane inside the toroid. How is this possible? I imagine a section of a closed loop of wire on the face of the toroid core, yet I still imagine those...
1. Homework Statement
Take a steel core (K_m = 2500) electromagnet, bend it into a loop with a small air gap, and determine the B field in the gap. The cross-sectional area of the toroid is 4cm^2, and the air gap is 2.5mm. The current through the coil's 120 turns is 15 amps. The radius of...
|
Practice Paper 4 Question 17
The figure shows a non-overlapping trace on a \(4\times4\) grid which visits all points exactly once. Imagine the same type of trace on an \(n\times n\) grid, where \(n\) can be arbitrarily large. Using the fact that \(\sum_{k=1}^\infty {1\over k}\) diverges to infinity, show that the sum of all acute angles of the trace also diverges to infinity when \(n\) tends to infinity, despite the angles tending to 0 the closer the path gets to the grid's diagonal.
Related topics
Warm-up Questions
Let's prove that \(\sum_{k=1}^{\infty}{\frac{1}{k}}\) diverges. For each \(n,\) how many terms in the series are between \(\frac{1}{2^n}\) (inclusive) and \(\frac{1}{2^{n+1}}\) (exclusive)? By bounding each term \(\frac{1}{k}\) in the original series with the largest \(\frac{1}{2^n}\) such that \(\frac{1}{2^n} \leq \frac{1}{k},\) compute this new sum. Hence argue why \(\sum_{k=1}^{\infty}{\frac{1}{k}}\) diverges.
(Source: Wikipedia) Using this diagram, express \(\sin \theta, \cos \theta\) and \(\tan \theta\) in terms of \(x.\) Hence show that \(\cos(\arcsin x) = \sqrt{1-x^2}.\) Find a similar expression for \(\cos(\arctan x).\)
Hints
Hint 1: To show the sum of all the angles diverges, it suffices to show the sum of a subset of the angles diverges.
Hint 2: Express the \(i^{th}\) largest angle \(\theta_i\) that touches the top of the square in terms of \(i\).
Hint 3: The angle \(\theta_i\) can be expressed as the difference between two angles.
Hint 4: The larger angle is \(\frac{\pi}{4}\), and the tangent of the smaller angle can be expressed as the ratio between the sides of the triangle along the grid; the tangent subtraction formula then gives \(\tan\theta_i\).
Hint 5: Remember that the function \(\tan x\) is convex in the region \((0,\frac{\pi}{4})\).
Hint 6: For positive integer \(i,\) \(0 < \frac{i}{i+1} < 1,\) so \(\arctan \frac{i}{i+1}\) will lie in the region \((0,\frac{\pi}{4}),\) and hence so does \(\theta_i\).
Hint 7: By drawing the graphs \(y = ({\frac{\pi}{4}})^{-1}x\) and \(y = \tan x,\) you can see that \(\tan x\) lies below the line in the region \((0,\frac{\pi}{4})\).
Hint 8: This means that \(x \geq \frac{\pi}{4}\tan x\) on \((0,\frac{\pi}{4})\); apply this to \(x = \theta_i\).
Solution
The full sum \(S\) is greater than the sum of only the angles that touch the top of the square. Call this subsequence of angles \(\theta_i\). By considering the right-angled triangles with sides \(i\) and \(i+1\), \(\theta_i\) can be written as \(\theta_i = \frac{\pi}{4} - \arctan{\frac{i}{i+1}},\) so by the tangent subtraction formula \[ \tan\theta_i = \frac{1-\frac{i}{i+1}}{1+\frac{i}{i+1}} = \frac{1}{2i+1}. \] Since \(\tan x\) is convex on \((0, \frac{\pi}{4})\) with \(\tan 0 = 0\) and \(\tan \frac{\pi}{4}=1,\) it lies below the chord \(y = (\frac{\pi}{4})^{-1}x,\) so \(x \geq \frac{\pi}{4}\tan x\) there. Applying this to \(\theta_i\) gives \[ \theta_i \geq \frac{\pi}{4}\tan\theta_i = \frac{\pi}{4}\left(\frac{1}{2i+1}\right) \geq \frac{\pi}{8}\left(\frac{1}{i+1}\right). \] We then have that: \(S \geq \sum_{i=0}^\infty \theta_i \geq \frac{\pi}{8}\sum_{i=0}^{\infty}\frac{1}{i+1},\) and the latter diverges. Hence \(S\) diverges.
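As an aside, a quick numerical experiment (an illustrative addition, not part of the original solution) shows the partial sums growing like \(\tfrac{1}{2}\ln n,\) consistent with divergence, since \(\tan\theta_i = \frac{1}{2i+1}\):

import math

for n in (10**2, 10**4, 10**6):
    s = sum(math.pi/4 - math.atan(i/(i+1)) for i in range(n))
    print(n, round(s, 3))   # roughly 0.5*ln(n) plus a constant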
If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
|
The terms can mean almost anything, but I will try to present here
one way in which the terms "parallel algorithms" and "distributed algorithms" are understood. Here we interpret "distributed algorithms" from the perspective of "network computing" (think: algorithms that keep the Internet running).
I will use as a running example the problem of finding a
proper 3-colouring of a directed path (linked list). I will first describe the problem from the perspective of "traditional" algorithms — those are also known as centralised algorithms, to emphasise that they are not distributed, or sequential algorithms, to emphasise that they are not parallelised. Centralised sequential algorithms
The model of computing is e.g. the familiar
RAM model.
The input is a linked list that is stored in the main memory of the computer. There is a read-only array $x$ with $n$ elements; node number $x[i]$ is the successor of node number $i$.
The output will be also stored in the main memory of the computer. There is a write-only array $y$ with $n$ elements.
We need to find a proper colouring of the list with $3$ colours. That is, for each index $i$ we must choose a colour $y[i] \in \{1,2,3\}$ such that $y[i] \ne y[j]$ whenever node $j$ is the successor of node $i$.
There is a single processor that can directly access any part of the main memory. In one time unit, the processor can read from main memory, write to main memory, or perform elementary operations such as arithmetic or comparisons. The
running time of the algorithm is defined to be the number of time units until the algorithms stops.
Clearly, the problem can be solved in time $O(n)$, and this is optimal. For the upper bound, follow the linked list and colour the nodes with e.g. colours $1,2,1,2,\dotsc$. For the lower bound, observe that we need to write $\Omega(n)$ elements of output.
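For concreteness, here is a sketch of this upper bound in Python (an illustration; I assume, as one possible convention, that the last node stores -1 as its successor):

def colour_path(x):
    # Follow the list from its head and alternate colours 1, 2 in O(n) time.
    n = len(x)
    has_pred = [False] * n
    for i in range(n):
        if x[i] != -1:
            has_pred[x[i]] = True
    head = next(i for i in range(n) if not has_pred[i])   # node with no predecessor
    y, i, c = [0] * n, head, 1
    while i != -1:
        y[i] = c
        c = 3 - c
        i = x[i]
    return y

print(colour_path([1, 2, 3, -1]))   # [1, 2, 1, 2]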
Parallel algorithms
The only difference between parallel and sequential algorithms is that we will use the
PRAM model instead of the RAM model. In the PRAM model we can consider any number of processors, but here a particularly interesting case is what happens if there are precisely $n$ processors.
While we will have
multiple processors, there is still just one main memory. As before, the input is stored as a single array in the main memory, and the output will be written in a single array in the main memory.
Now in one time unit, each processor in parallel can read from main memory, write to main memory, or perform elementary operations such as arithmetic or comparisons. Some care is needed with memory accesses that may conflict. For the sake of concreteness, let us focus on the CREW PRAM model: the processors may freely read any part of the memory, but concurrent writes are forbidden.
Now in this setting it is not at all obvious what is the time complexity of $3$-colouring linked lists. Perhaps we could solve the problem in $O(1)$ time, as we have $n$ processors, and only $n$ units of input to read and $n$ units of output to write?
However, it turns out that the time complexity of this problem is precisely $\Theta(\log \log^* n)$. So it can be solved in
almost constant time, but not quite. Distributed algorithms
Now things change radically. The model of computing is e.g. the
LOCAL model, which has very little resemblance to RAM or PRAM.
There is no "main memory". There are no "arrays".
We are only given a
computer network that consists of $n$ nodes. Each node is labelled with a unique identifier (say, a number from $\{1,2,\dotsc,n\}$). Each node has two communication ports: one port that connects the node with its successor, and one port that connects it with its predecessor.
The same (unknown) computer network is both our input and the tool that we are supposed to use to solve the problem. Each node is a computational entity that has to output its own colour, and the colours have to form a proper colouring of the network (i.e., my colour has to be different from the colours of my neighbours).
Note that everything is distributed: no single entity holds the entire input, and no single entity needs to know the entire output.
All nodes run the same algorithm. In one time unit, all nodes in parallel can send messages to their neighbours, receive messages from their neighbours, or perform elementary operations. The
running time of the algorithm is defined to be the number of time units until all nodes have stopped and produced their local outputs.
Again, it is not at all obvious what is the time complexity of $3$-colouring. It turns out that it is precisely $\Theta(\log^* n)$.
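For intuition about where the $\Theta(\log^* n)$ bound comes from, here is a toy sequential simulation of one round of Cole–Vishkin colour reduction, the technique behind the upper bound (my sketch, shown on a directed cycle to avoid boundary cases; it is an illustration, not a message-passing implementation):

def cv_round(colours, succ):
    # Each node recolours itself using only its own colour and its successor's
    # colour; neighbouring nodes are guaranteed to stay distinct.
    new = []
    for v, c in enumerate(colours):
        d = c ^ colours[succ[v]]
        i = (d & -d).bit_length() - 1      # lowest bit where the two colours differ
        new.append(2 * i + ((c >> i) & 1))
    return new

n = 16
succ = [(v + 1) % n for v in range(n)]     # a directed cycle on n nodes
colours = list(range(n))                   # unique identifiers as initial colours
for _ in range(3):                         # O(log* n) rounds suffice for O(1) colours
    colours = cv_round(colours, succ)
print(colours)                             # a proper colouring with a small palette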
From this perspective:
Research on parallel algorithms is primarily about
understanding how to harness the computational power of a massively parallel computer. For practical applications, consider high-performance computing, number-crunching, multicore, GPU computing, OpenMP, MPI, grids, clouds, clusters, etc.
Research on distributed algorithms is primarily about
understanding which tasks can be solved efficiently in a distributed system. For practical applications, consider computer networks, communication networks, social networks, markets, biological systems, chemical systems, physical systems, etc.
For example:
If you want to know how to multiply two huge matrices efficiently with modern computer hardware, it may be a good idea to first have a look at research related to "parallel algorithms".
If you want to know if there is any hope that people could form stable marriages in their real-world social network, by just exchanging information with those whom they know, it may be a good idea to first have a look at research related to "distributed algorithms".
Once again, I emphasise that this is just one way in which the terms are used. There are many other interpretations. However, this is perhaps the most interesting interpretation in the sense that e.g. PRAM and LOCAL are radically different models.
As many other answers show, another possible interpretation is to understand "distributed algorithms" from the perspective of e.g. distributed high-performance computing (computer clusters, cloud computing, MPI, etc.). Then you could indeed say that distributed algorithms are not necessarily that different from e.g. I/O efficient parallel algorithms. At least if we put aside e.g. issues related to fault tolerance.
Incidentally, there is apparently some interest in the community to make the terminology slightly less confusing. People occasionally use the term
distributed graph algorithms (cf. http://adga.hiit.fi/) or the term network computing to emphasise the perspective that I described here. However, there is not that much pressure to do that, as we can use formally precise terms such as "LOCAL" and "CONGEST" for distributed graph algorithms, "PRAM" for parallel algorithms, and e.g. "congested clique" and "BSP" (bulk synchronous parallel) for various in-between cases.
|
I've been working on a theory, though my math is weak. Let's say I've managed to determine that I can arrive at an answer A by always using the formula
BCD / D. Of course this evaluates to BC after canceling out D. However, sometimes D can be zero, which results in an undefined answer. My question is theoretical in nature: are there any mathematical theories that allow the D's to cancel out even if D is zero?
Yes, if one knows that the answer has
polynomial form then one can perform such cancellations. As a simple example, if we wish to solve $\, x f(x) = x^2\,$ and we know the solution $\,f\,$ is a polynomial in $\,x,\,$ then the solution is $\,f(x) = x.\,$ This can lead to very efficient solutions in less trivial contexts. For example, see this slick proof of Sylvester's determinant identity $\rm\, det (I+AB)=det(I+BA)\, $ that proceeds by universally cancelling $\rm\ det\, A\ $ from the $\rm\, det\, $ of $\rm \ (1+A\ B)\, A\, =\, A\, (1+B\ A),\,$ thus trivially eliminating the "apparent singularity" at $\rm\ det\, A\, =\, 0.\,$ Further discussion is here.
As another example, one can
algebraically define derivatives of polynomials by a formula involving universal cancellation. By the Factor Theorem we know that $\,x-y\mid f(x)-f(y)\,$ in $\,R[x,y]\,$ for any ring $\,R.\,$ Let the quotient be the polynomial $\,g(x,y)\in R[x,y].\,$ Then one easily shows using linearity that the derivative of $\,f(x)\,$ w.r.t. $\,x\,$ is $\,f'(x) = g(x,x),\,$ i.e.
$$\begin{eqnarray}{}& g(x,y)\ &=&\ \frac{f(x)-f(y)}{x-y}\ \in\ R[x,y]\\ \Rightarrow\ & g(x,x)\ &=&\ f'(x)\ \in\ R[x] \end{eqnarray}$$
For example, $\,f(x) = x^n$ $\,\Rightarrow$ $\,g(x,y) = \dfrac{(x^n\!-y^n)}{(x\!-\!y)} = x^{n-1}\! + x^{n-2}y+\cdots+xy^{n-2}\!+y^{n-1}$
therefore $\,\ g(x,x) = x^{n-1} + \cdots + x^{n-1} = n x^{n-1} = f'(x).$
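For illustration, this universal cancellation is easy to reproduce in a computer algebra system (a small SymPy check, my own addition):

import sympy as sp

x, y = sp.symbols('x y')
f = x**5 - 3*x**2 + 7
g = sp.cancel((f - f.subs(x, y)) / (x - y))   # exact division: g lies in R[x, y]
print(sp.expand(g.subs(y, x)))                # 5*x**4 - 6*x
print(sp.diff(f, x))                          # matches f'(x)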
Simple answer is that $\frac{ab}{b} = a$ whenever $b \not=0$. If $b=0$, the expression simply has no defined value.
A longer answer is that the limit of $\frac{f(x)}{g(x)}$ may exist even when $f(x) \to 0$ and $g(x) \to 0$ as $x \to a$. The classic example is
$$\lim_{x \to 0} \frac{\sin{x}}{x} = 1$$
even though both the numerator and the denominator tend to 0. Of course, inserting $x=0$ to get $\frac{\sin{0}}{0}$ is nonsense.
|
Practice Paper 4 Question 18
A poll with 2 choices is run among \(n>2\) participants. Each participant chooses at random. The poll shows the results for the 2 options as percentages rounded to the nearest integer, i.e. \(x\) rounded is \(\lfloor x+0.5\rfloor\), where \(\lfloor x\rfloor\) is the greatest integer less than or equal to \(x\). For example, 1/3 would be shown as 33%. What is the probability that the sum of the 2 shown percentages does not add to 100?
Related topics
Warm-up Questions
Why does \(\lfloor x+0.5\rfloor\) round \(x\) to the nearest integer? Use the ceiling function \(\lceil \cdot \rceil\) to create a rounding function. Under what circumstances is \(\lfloor a+1 \rfloor = \lceil a \rceil\) true?
Boris flips \(8\) fair coins. What is the probability that at least \(6\) of them are tails?
Hints
Hint 1: How are the two percentages related?
Hint 2: Under what circumstances do the two rounded percentages not add up to 100? Try expressing it in an equation with integer solutions.
Hint 3: The only scenarios in which the rounded percentages do not add up to 100 are when one of the percentages ends in \(.5,\) causing both percentages to round up and their sum to be 101.
Hint 4: Try phrasing the above condition in terms of \(n, k\) and \(j,\) where \(k\) is the number of votes for the first option and \(j\) is the integer part of that percentage.
Hint 5: Why not rearrange to get rid of all fractions and decimal points?
Hint 6: Consider the prime factors of the terms in your equation. What can you deduce about the divisibility of \(n?\)
Hint 7: \(n\) must be divisible by \(8.\) By substituting \(n = 8m,\) could you derive a new equation in terms of \(m, k\) and \(j?\)
Hint 8: What are the possible factors of \(m?\) How about doing a case split?
Hint 9: By considering divisibility by \(5,\) determine which values of \(j\) are possible.
Hint 10: Given the possible values for \(j,\) what are the possible values for \(k?\)
Hint 11: What is the probability that the \(n\) voters will choose one of the suitable \(k?\)
Solution
It suffices to consider the answers picking one of the options; suppose there are \(k\) of these. Then the rounded percentages of the two answers are \(\lfloor \frac{100k}{n} + 0.5\rfloor\) and \(\lfloor \frac{100(n-k)}{n} + 0.5\rfloor\) respectively. Adding these together we get \(100 + \lfloor \frac{100k}{n} + 0.5 \rfloor\) \(+ \lfloor 0.5 - \frac{100k}{n}\rfloor,\) which evaluates to 101 when the fractional part of \(\frac{100k}{n}\) is exactly \(0.5.\)
Thus, we need to solve \(\frac{100k}{n} = j + 0.5\) for \(j \in \{0, \dots, 99\}.\) Rearrange to obtain \(200k = n(2j+1).\) Since \(200 = 2^3 \cdot 5^2\) and \(2j+1\) must be odd, it must be the case that \(n\) is a multiple of \(2^3 = 8.\) Writing \(n=8m,\) our expression becomes \(5^2k=m(2j+1)\) for \(j\in \{0, \ldots, 99\}.\) There are three cases:
\(m\) is not a multiple of \(5.\) This means that \(2j+1\) must be a multiple of \(25,\) hence the only possible values for \(2j+1\) are \(S_1 = \{25, 75, 125, 175\}.\) \(m\) is a multiple of \(5\) but not a multiple of \(25.\) This means that \(2j+1\) must be a multiple of \(5,\) hence the only possible values for \(2j+1\) are \(S_2 = \{5,15,25,35\ldots,195\}.\) \(m\) is a multiple of \(25.\) There is no restriction on \(j\) except \(j<100,\) hence the only possible values for \(2j+1\) are \(S_3 = \{1,3,5,7\ldots,199\}.\)
For each \(x \in S_i,\) choosing \(k = K(x) = \frac{nx}{200}\) votes will result in the rounded percentages not adding up to \(100.\) Thus, the probability is:
\[ \begin{equation*} p(n)= \begin{cases} 0 & \text{if } 8\nmid n \\ 2^{-n}\sum_{x \in S_1}\binom{n}{K(x)} & \text{if } 8\mid n, 5\nmid n \\ 2^{-n}\sum_{x \in S_2}\binom{n}{K(x)} & \text{if } 40\mid n, 25\nmid n \\ 2^{-n}\sum_{x \in S_3}\binom{n}{K(x)} & \text{if } 200 \mid n \\ \end{cases} \end{equation*} \]
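A brute-force check of this piecewise formula (an illustrative addition, not part of the original solution; `p_clash` is a made-up name) needs only the defining condition that the fractional part of \(\frac{100k}{n}\) equals \(0.5\):

from math import comb

def p_clash(n):
    # Probability that the two rounded percentages sum to 101 rather than 100.
    # frac(100k/n) == 0.5  <=>  200k = n(2j+1)  <=>  200k mod 2n == n
    bad = [k for k in range(n + 1) if (200 * k) % (2 * n) == n]
    return sum(comb(n, k) for k in bad) / 2**n

for n in (7, 8, 40, 200):
    print(n, p_clash(n))   # 0 whenever 8 does not divide n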
If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
|
Contributors: Ganian, Robert, Kalany, Martin, Szeider, Stefan, Träff, Jesper Larsson
Date: 2015-06-30
... We show that the problem of constructing tree-structured descriptions of data layouts that are optimal with respect to space or other criteria from given sequences of displacements, can be solved in polynomial time. The problem is relevant for efficient compiler and library support for communication of noncontiguous data, where tree-structured descriptions with low-degree nodes and small index arrays are beneficial for the communication soft- and hardware. An important example is the Message-Passing Interface (MPI) which has a mechanism for describing arbitrary data layouts as trees using a set of increasingly general constructors. Our algorithm shows that the so-called MPI datatype reconstruction problem by trees with the full set of MPI constructors can be solved optimally in polynomial time, refuting previous conjectures that the problem is NP-hard. Our algorithm can handle further, natural constructors, currently not found in MPI. Our algorithm is based on dynamic programming, and requires the solution of a series of shortest path problems on an incrementally built, directed, acyclic graph. The algorithm runs in $O(n^4)$ time steps and requires $O(n^2)$ space for input displacement sequences of length $n$.
Contributors: Shafiee, Mohammad Javad, Wong, Alexander, Fieguth, Paul
Date: 2015-06-30
... Random fields have remained a topic of great interest over past decades for the purpose of structured inference, especially for problems such as image segmentation. The local nodal interactions commonly used in such models often suffer the short-boundary bias problem, which are tackled primarily through the incorporation of long-range nodal interactions. However, the issue of computational tractability becomes a significant issue when incorporating such long-range nodal interactions, particularly when a large number of long-range nodal interactions (e.g., fully-connected random fields) are modeled. In this work, we introduce a generalized random field framework based around the concept of stochastic cliques, which addresses the issue of computational tractability when using fully-connected random fields by stochastically forming a sparse representation of the random field. The proposed framework allows for efficient structured inference using fully-connected random fields without any restrictions on the potential functions that can be utilized. Several realizations of the proposed framework using graph cuts are presented and evaluated, and experimental results demonstrate that the proposed framework can provide competitive performance for the purpose of image segmentation when compared to existing fully-connected and principled deep random field frameworks.
Contributors: Refsgaard, J., Kirsebom, O. S., Dijck, E. A., Fynbo, H. O. U., Lund, M. V., Portela, M. N., Raabe, R., Randisi, G., Renzi, F., Sambi, S.
Date: 2015-06-30
... While the 12C(a,g)16O reaction plays a central role in nuclear astrophysics, the cross section at energies relevant to hydrostatic helium burning is too small to be directly measured in the laboratory. The beta-delayed alpha spectrum of 16N can be used to constrain the extrapolation of the E1 component of the S-factor; however, with this approach the resulting S-factor becomes strongly correlated with the assumed beta-alpha branching ratio. We have remeasured the beta-alpha branching ratio by implanting 16N ions in a segmented Si detector and counting the number of beta-alpha decays relative to the number of implantations. Our result, 1.49(5)e-5, represents a 24% increase compared to the accepted value and implies an increase of 14% in the extrapolated S-factor.
Contributors: Schrade, Constantin, Zyuzin, A. A., Klinovaja, Jelena, Loss, Daniel
Date: 2015-06-30
... We study two microscopic models of topological insulators in contact with an $s$-wave superconductor. In the first model the superconductor and the topological insulator are tunnel coupled via a layer of scalar and of randomly oriented spin impurities. Here, we require that spin-flip tunneling dominates over spin-conserving one. In the second model the tunnel coupling is realized by an array of single-level quantum dots with randomly oriented spins. It is shown that the tunnel region forms a $\pi$-junction where the effective order parameter changes sign. Interestingly, due to the random spin orientation the effective descriptions of both models exhibit time-reversal symmetry. We then discuss how the proposed $\pi$-junctions support topological superconductivity without magnetic fields and can be used to generate and manipulate Kramers pairs of Majorana fermions by gates.
Contributors: Tsui, K. H.
Date: 2015-06-30
... Following the basic principles of a charge separated pulsar magnetosphere \citep{goldreich1969}, we consider the magnetosphere be stationary in space, instead of corotating, and the electric field be uploaded from the potential distribution on the pulsar surface, set up by the unipolar induction. Consequently, the plasma of the magnetosphere undergoes guiding center drifts of the gyro motion due to the transverse forces to the magnetic field. These forces are the electric force, magnetic gradient force, and field line curvature force. Since these plasma velocities are of drift nature, there is no need to introduce an emf along the field lines, which would contradict the $E_{\parallel}=\vec E\cdot\vec B=0$ plasma condition. Furthermore, there is also no need to introduce the critical field line separating the electron and ion open field lines. We present a self-consistent description where the magnetosphere is described in terms of electric and magnetic fields and also in terms of plasma velocities. The fields and velocities are then connected through the space charge densities self-consistently. We solve the pulsar equation analytically for the fields and construct the standard steady state pulsar magnetosphere. By considering the unipolar induction inside the pulsar and the magnetosphere outside the pulsar as one coupled system, and under the condition that the unipolar pumping rate exceeds the Poynting flux in the open field lines, plasma pressure can build up in the magnetosphere, in particular in the closed region. This could cause a periodic openning up of the closed region, leading to a pulsating magnetosphere, which could be an alternative for pulsar beacons. The closed region can also be openned periodically by the build-up of toroidal magnetic field through a positive feedback cycle.
Contributors: Moolekamp, Fred, Mamajek, Eric
Date: 2015-06-30
... As the size of images and data products derived from astronomical data continues to increase, new tools are needed to visualize and interact with that data in a meaningful way. Motivated by our own astronomical images taken with the Dark Energy Camera (DECam) we present Toyz, an open source Python package for viewing and analyzing images and data stored on a remote server or cluster. Users connect to the Toyz web application via a web browser, making it an convenient tool for students to visualize and interact with astronomical data without having to install any software on their local machines. In addition it provides researchers with an easy-to-use tool that allows them to browse the files on a server and quickly view very large images ($>$ 2 Gb) taken with DECam and other cameras with a large FOV and create their own visualization tools that can be added on as extensions to the default Toyz framework.
Contributors: Berezhiani, Zurab
Date: 2015-06-30
... The excess of high energy neutrinos observed by the IceCube collaboration might originate from baryon number violating decays of heavy shadow baryons from dark mirror sector which produce shadow neutrinos. These sterile neutrino species then oscillate into ordinary neutrinos transferring to them specific features of their spectrum. In particular, this scenario can explain the end of the spectrum above 2 PeV and the presence of the energy gap between 400 TeV and 1 PeV.
Contributors: Carayol, Arnaud, Löding, Christof, Serre, Olivier
Date: 2015-06-30
... We consider imperfect information stochastic games where we require the players to use pure (i.e. non randomised) strategies. We consider reachability, safety, B\"uchi and co-B\"uchi objectives, and investigate the existence of almost-sure/positively winning strategies for the first player when the second player is perfectly informed or more informed than the first player. We obtain decidability results for positive reachability and almost-sure B\"uchi with optimal algorithms to decide existence of a pure winning strategy and to compute one if exists. We complete the picture by showing that positive safety is undecidable when restricting to pure strategies even if the second player is perfectly informed.
Contributors: Simpson, Gideon, Watkins, Daniel
Date: 2015-06-30
... One way of getting insight into non-Gaussian measures, posed on infinite dimensional Hilbert spaces, is to first obtain good approximations in terms of Gaussians. These best fit Gaussians then provide notions of mean and variance, and they can be used to accelerate sampling algorithms. This begs the question of how one should measure optimality. Here, we consider the problem of minimizing the distance between a family of Gaussians and the target measure, with respect to relative entropy, or Kullback-Leibler divergence, as has been done previously in the literature. Thus, it is desirable to have algorithms, well posed in the abstract Hilbert space setting, which converge to these minimizers. We examine this minimization problem by seeking roots of the first variation of relative entropy, taken with respect to the mean of the Gaussian, leaving the covariance fixed. We prove the convergence of Robbins-Monro type root finding algorithms, highlighting the assumptions necessary for them to converge to relative entropy minimizers.
Contributors: Al-Safadi, Ebrahim B., Al-Naffouri, Tareq Y., Masood, Mudassir, Ali, Anum
Date: 2015-06-30
... A novel method for correcting the effect of nonlinear distortion in orthogonal frequency division multiplexing signals is proposed. The method depends on adaptively selecting the distortion over a subset of the data carriers, and then using tools from compressed sensing and sparse Bayesian recovery to estimate the distortion over the other carriers. Central to this method is the fact that carriers (or tones) are decoded with different levels of confidence, depending on a coupled function of the magnitude and phase of the distortion over each carrier, in addition to the respective channel strength. Moreover, as no pilots are required by this method, a significant improvement in terms of achievable rate can be achieved relative to previous work.
|
The Newtonian gravity of a point source can be described by a potential $\Phi = -\mu/r$. If we suppress one spatial dimension and use it to graph the value of this potential instead, we get something that looks very close to this illustration, and is indeed infinitely deep at the center--at least, in the idealization of a point-mass. And farther away from the center, it goes flat, just as many illustrations like this have it.
Illustrations like this are fairly common, and I'm guessing that they're ultimately inspired by the Newtonian potential, because they have almost nothing to do with the spacetime curvature.
Here's an isometric embedding of the Schwarzschild geometry at an instant of Schwarzschild time, again with one dimension supressed:
Above the horizon (red circle), the surface is a piece of a paraboloid (the
Flamm paraboloid). Unlike the potential, it does not go flat at large distances.
Being isometric means that it correctly represents the spatial distances above the horizon at an instant of Schwarzschild time. Below the horizon, the embedding isn't technically accurate because the Schwarzschild radial coordinate does not represent space there, but time. Although if we pretend it's spacelike below the horizon, that would be the correct embedding. Picture the below-horizon part as having one-directional flow into the singularity.
Since we've only represented space and not time, the embedding is not enough to reconstruct the trajectories of particles in this spacetime. Still, it is a more accurate representation of a
part of the spacetime curvature of a point source--specifically the spatial part.
The velocity of the object from this perspective, would seem to increase, until a point - where the velocity in x,y coordinates starts to decrease due to most of the motion happening "down" the time dimension. Is this also correct? Would a photon seem to slow down when moving down the well, if seen from above?
The above is an embedding of a slice of spatial geometry, and is not a gravity well. The mathematical form of the paraboloid above the horizon is best described in cylindrical coordinates as$$r = 2M + \frac{z^2}{8M}\text{.}$$Here the vertical $z$ coordinate doesn't mean anything physically. It's purely an artifact of creating a surface of the same intrinsic curvature in Euclidean $3$-space as the $2$-dimensional spatial slice of Schwarzschild geometry.
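For completeness, the paraboloid can be derived by matching the embedded surface to the Schwarzschild spatial metric (a standard computation, in units with $G = c = 1$): a surface $z(r)$ in Euclidean space reproduces the radial metric component when$$1 + \left(\frac{dz}{dr}\right)^2 = \frac{1}{1-2M/r}\text{,}\quad\text{i.e.}\quad \frac{dz}{dr} = \sqrt{\frac{2M}{r-2M}}\text{,}$$which integrates to $z = \sqrt{8M(r-2M)}$, equivalent to the cylindrical form quoted above.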
For the Schwarzschild spacetime, radial freefall is actually exactly Newtonian in the Schwarzschild radial coordinate and proper time, i.e. time experienced by the freefalling object, rather than Schwarzschild time. So the Newtonian gravity well isn't actually a bad picture for the physics--it's just not the geometry and so is not a good representation of how any part of spacetime is curved. For non-radial orbits, the effective potential is somewhat different that than the Newtonian one, but ignoring the effects of angular momentum gets us the Newtonian form.
In Schwarzschild time, yes, a photon (or anything else) does slow down as gets near the horizon. In fact, in Schwarzschild time it never reaches the horizon, which is one indication that the Schwarzschild coordinates are badly behaved at the horizon. The coordinate acceleration actually becomes repulsive close to the horizon--and for a fast enough infalling object, is
always repulsive. This can be understood as the particle moving to places with more and more gravitational time dilation. In proper time of any infalling observer, however, close to the horizon the acceleration is always attractive.
|
If you need to include simple diagrams or figures in your document, the
picture environment may be helpful. This article describes circles, lines, and other graphic elements created with LaTeX.
Images can be "programmed" directly in your LaTeX file
\setlength{\unitlength}{1cm}
\thicklines
\begin{picture}(10,6)
\put(2,2.2){\line(1,0){6}}
\put(2,2.2){\circle{2}}
\put(6,2.2){\oval(4,2)[r]}
\end{picture}
The syntax of the picture environment is
\begin{picture}(width,height)(x-offset,y-offset)
The parameters are passed inside parentheses: width and height, as you may expect, determine the width and the height of the picture; the units for these parameters are set by \setlength{\unitlength}{1cm}. The second parameter is optional and establishes the coordinates for the lower-left corner. Below is a description of the other commands:
\put(6,2.2){\oval(4,2)[r]}
This draws an oval centred at the point (6,2.2) whose width is 4 and whose height is 2. The parameter [r] is optional and draws only the right half of the oval; you can use [l], [t] or [b] for the left, top or bottom half instead.
\put(2,2.2){\circle{2}}
This draws a circle centred at the point (2,2.2) whose diameter is 2.
In the next section the rest of the commands are described.
Different basic elements can be combined for more complex pictures
\setlength{\unitlength}{0.8cm}
\begin{picture}(12,4)
\thicklines
\put(8,3.3){{\footnotesize $3$-simplex}}
\put(9,3){\circle*{0.1}}
\put(8.3,2.9){$a_2$}
\put(8,1){\circle*{0.1}}
\put(7.7,0.5){$a_0$}
\put(10,1){\circle*{0.1}}
\put(9.7,0.5){$a_1$}
\put(11,1.66){\circle*{0.1}}
\put(11.1,1.5){$a_3$}
\put(9,3){\line(3,-2){2}}
\put(10,1){\line(3,2){1}}
\put(8,1){\line(1,0){2}}
\put(8,1){\line(1,2){1}}
\put(10,1){\line(-1,2){1}}
\end{picture}
In this example several lines and circles are combined to create a picture, then some text is added to label the points. Below each command is explained:
\thicklines
This makes the line strokes thicker; the counterpart is \thinlines, which has the opposite effect.
\put(8,3.3){{\footnotesize $3$-simplex}}
This inserts the text "3-simplex" at the point (8,3.3), typeset in footnote size.
\put(9,3){\circle*{0.1}}
This draws a filled circle (a dot) of diameter 0.1 centred at the point (9,3); compare \circle, which draws an unfilled circle.
\put(10,1){\line(3,2){1}}
This draws a line starting at (10,1) in the direction of the vector (3,2), with a horizontal extent of 1.
Arrows can also be used inside a picture environment, let's see a second example
\setlength{\unitlength}{0.20mm}
\begin{picture}(400,250)
\put(75,10){\line(1,0){130}}
\put(75,50){\line(1,0){130}}
\put(75,200){\line(1,0){130}}
\put(120,200){\vector(0,-1){150}}
\put(190,200){\vector(0,-1){190}}
\put(97,120){$\alpha$}
\put(170,120){$\beta$}
\put(220,195){upper state}
\put(220,45){lower state 1}
\put(220,5){lower state 2}
\end{picture}
The syntax for vectors is the same as that used for line:
\put(120,200){\vector(0,-1){150}}
This draws an arrow starting at (120,200), pointing straight down (direction vector (0,-1)), with a vertical extent of 150.
Bézier curves are special curves that are drawn using three parameters, one start point, one end point and a control point that determines "how curved" it is.
\setlength{\unitlength}{0.8cm} \begin{picture}(10,5) \thicklines \qbezier(1,1)(5,5)(9,0.5) \put(2,1){{Bézier curve}} \end{picture}
Notice that the command \qbezier (quadratic Bézier curve) is not inside a \put command. The parameters that must be passed are three points: the start point (1,1), the control point (5,5) and the end point (9,0.5).
Picture is the standard tool to create figures in LaTeX, as you see this is tool is sometimes too restrictive and cumbersome to work with, but it's supported by most of the compilers and no extra packages are needed. If you need to create complex figures, for more suitable and powerful tools see the TikZ package and Pgfplots package articles.
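If you want a self-contained file to test the examples above, the following minimal document (just a wrapper around the first example; any standard class works) compiles as-is:

\documentclass{article}
\begin{document}
\setlength{\unitlength}{1cm}
\thicklines
\begin{picture}(10,6)
\put(2,2.2){\line(1,0){6}}
\put(2,2.2){\circle{2}}
\put(6,2.2){\oval(4,2)[r]}
\end{picture}
\end{document}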
For more information see
|
I'm really stumped on a homework problem asking me to evaluate $\int \frac{\ln(6x)\,\sin^{-1}(\ln 6x)}{x}\,dx$, and after a few hours of trying different approaches I'd definitely be appreciative of a bump in the right direction. As a caveat, I should add that I've already received credit for the assignment; I'm simply looking to fully understand how to complete the integral.
Here's what I've done so far:
I noticed a good u-substitution, so I let $u = \ln 6x$ and $du = \frac{1}{x}\,dx$
So I rewrote my integral as $\int u\,\sin^{-1}(u)\,du$
This particular section allows me to use formulas for integration, so I've chosen:
$$\int x^n \sin^{-1}x\ dx = \frac{1}{n+1} \left(x^{n+1}\sin^{-1}x-\int\frac{x^{n+1}\,dx}{\sqrt{1-x^2}}\right)$$
Which gets me:
$$\frac{1}{2} \left(u^2\sin^{-1}u-\int\frac{u^2\,du}{\sqrt{1-u^2}} \right)$$
Now I use a second formula for integration which states:
$$\int \frac{x^2}{\sqrt{a^2-x^2}}\,dx = -\frac{x}{2}\sqrt{a^2-x^2} + \frac{a^2}{2}\sin^{-1}\frac{x}{a}+C$$
So this brings me to: \begin{align} &\frac{1}{2} \left(u^2\sin^{-1}u-\left(-\frac{u}{2}\sqrt{1-u^2}+\frac{1}{2}\sin^{-1}u\right)\right)+C \\ & = \frac{1}{4} \left( u \, \sqrt{1- u^{2}} + (2 u^{2} -1) \, \sin^{-1}(u) \right) + C \end{align} However, even when I replace $u$ with $\ln 6x$ I can't seem to find a way to get to the answer, which is:
$$\frac{1}{4}\left((2 \, \ln^2(6x)-1) \, \sin^{-1}(\ln(6x)) + \ln(6x) \, \sqrt{1-\ln^2(6x)}\right)+C$$
Is my fundamental approach flawed or am I simply missing something in the latter stages of simplification?
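As an illustrative check (an addition, not part of my original attempt): the quoted answer appears to be exactly my result with $u$ replaced by $\ln 6x$, and SymPy can confirm the antiderivative by differentiating it:

import sympy as sp

x = sp.symbols('x', positive=True)
u = sp.log(6*x)
F = ((2*u**2 - 1)*sp.asin(u) + u*sp.sqrt(1 - u**2)) / 4
integrand = sp.log(6*x)*sp.asin(sp.log(6*x)) / x
print(sp.simplify(sp.diff(F, x) - integrand))   # should print 0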
|
1) According to the Royal Mint, a copper coin weighs 3.56g. The relative atomic mass (RAM) is 63.5. The coin therefore contains approximately $\frac{3.56}{63.5}\times6.02\times10^{23} = 3.37\times10^{22}$ copper atoms. Assuming each atom is neutral, each atom contains 29 electrons, so there are $9.79\times10^{23}$ electrons in one coin.
2) I estimated that a lightbulb contains 100ml of air. The mole fraction of argon in dry air is 0.00934. We therefore need $\frac{100}{0.00934}ml = 1.07\times10^4ml = 10.7l$ of air.
3) Assuming the density of air is around 1.2kg/m$^3$, the mass of the air from q2 is $1.2\times10.7\times10^{-3}$kg$ = 1.3\times10^{-2}$kg. If this is cooled to a temperature where it's all liquid and it's kept at standard pressure, we can assume it has the standard density of liquid air: 870 kg/m$^3$. It would approximately occupy $\frac{1.3\times10^{-2}}{870}m^3 = 1.5\times10^{-5}m^3 = 0.015l$.
4) The percentage mass of carbon in teflon is $100\times\frac{2\times12}{2\times12+4\times19} = 24\%$. There's an image of the polymer here.
5) The number of molecules of the monomer in a chain of length 1cm is $\frac{0.01m}{140pm\times2} = 3.5\times10^7.$ There are therefore $1.4\times10^8$ atoms of fluorine in such a chain.
One mole of $F_2$ gas contains 2 moles of F atoms. The number of moles of fluorine gas is therefore $\frac{1.4\times10^8}{2\times(6\times10^{23})} = 1.2\times10^{-16}$ moles.
6) The reaction is 2M (s) + 2H$_2$O (l) $\rightarrow$ 2MOH (aq) + H$_2$ (g), producing hydrogen gas. To produce 1.5 moles of gas, we'd need 3 moles of water, which weighs 54g and hence has volume 54ml.
7) The nucleus takes up a percentage volume $100\times\frac{\pi\times(1.75fm)^3}{\pi\times(120pm)^3} = 3\times10^{-13}\%$. The rest of the atom is a cloud of negatively charged electrons, which have much, much smaller mass than the nucleus.
8) Given the percentage mass of oxygen of your body is 65%, the percentage mass of carbon of your body is $65\times\frac{9.5}{25.5}\times\frac{12}{16} = 18$%. (This accounts for carbon being less abundant, and also a carbon atom is lighter than an oxygen atom.)
Suppose the average atomic mass is 10gmol$^{-1}$. Then the number of atoms in a human weighing 80kg is $\frac{8\times10^4}{10}\times6\times10^{23} \approx 5\times10^{27}$.
9) A lower bound for this distance is $5\times10^{27}\times120pm = 6\times10^{17}m$ (as hydrogen is the smallest atom). This is about 63 lightyears (the distance light travels in 63 years). There is apparently a massive planet this distance away.
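For checking, a few of the numbers above recomputed in Python (an illustrative addition, using the same constants as above):

NA = 6.02e23

atoms_cu = 3.56 / 63.5 * NA            # q1: ~3.4e22 copper atoms
print(atoms_cu, 29 * atoms_cu)         # ~9.8e23 electrons

monomers = 0.01 / (2 * 140e-12)        # q5: ~3.5e7 monomers in a 1 cm chain
print(4 * monomers / (2 * NA))         # ~1.2e-16 mol of F2

print(100 * (1.75e-15 / 120e-12)**3)   # q7: ~3e-13 % of the atomic volume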
|
Dielectrophoretic Separation
How can you use an electric field to control the movement of electrically neutral particles? This may sound impossible, but in this blog entry, we will see that the phenomenon of dielectrophoresis (DEP) can do the trick. We will learn how DEP can be applied to particle separation and demonstrate a very easy-to-use biomedical simulation app that is created with the Application Builder and run with COMSOL Server™.
Forces on a Particle in an Inhomogeneous Static Electric Field
The dielectrophoretic effect will show up in both DC and AC fields. Let’s first look at the DC case.
Consider a dielectric particle immersed in a fluid. Furthermore, assume that there is an external static (DC) electric field applied to the fluid-particle system. The particle will in this case always be pulled from a region of weak electric field to a region of strong electric field, provided the permittivity of the particle is higher than that of the surrounding fluid. If the permittivity of the particle is lower than the surrounding fluid, then the opposite is true; the particle is drawn to a region of weak electric field. These effects are known as
positive dielectrophoresis (pDEP) and negative dielectrophoresis (nDEP), respectively.
The pictures below illustrate these two cases with a couple important quantities visualized:
Both figures visualize the electric field and the Maxwell stress tensor (surface force density).
An illustration of positive dielectrophoresis (pDEP), where the particle permittivity is higher than that of the surrounding fluid, \epsilon_p > \epsilon_f.
An illustration of negative dielectrophoresis (nDEP), where the particle permittivity is lower than that of the surrounding fluid, \epsilon_p < \epsilon_f.
The Maxwell stress tensor represents the local force field on the surface of the particle. For this stress tensor to be representative of what forces are acting on the particle, the fluid needs to be “simple” in that it shouldn’t behave too weirdly either mechanically or electrically. Assuming the fluid is simple, we can see from the above illustrations that the net force on the particle appears to be in opposite directions between the two cases of pDEP and nDEP. Integrating the surface forces will indeed show that this is the case.
It turns out that if we shrink the particle and look at the infinitesimal case of a very small particle acting like a dipole in a fluid, then the net force is a function of the gradient of the square of the electric field.
Why is the net force behaving like this? To understand this, let’s look at what happens at a point on the surface of the particle. At such a point, the magnitude of the electric surface force density, f, is a function of charge times electric field:
(1) \quad f \propto \rho E
where \rho is the induced polarization charge density. (Let’s ignore for the moment that some quantities are vectors and make a purely phenomenological argument by just looking at magnitudes and proportionality.)
The induced polarization charges are proportional to the electric field:
(2) \quad \rho \propto E
Combining these two, we get:
(3) \quad f \propto E^2
But this is just the local surface force density at one point at the surface. In order to get a net force from all these surface force contributions at the various points on the surface, there needs to be a difference in force magnitude between one side of the particle and the other. This is why the net force, \bf{F}, is proportional to the gradient of the square of the electric field norm:
(4) \quad \mathbf{F} \propto \nabla |\mathbf{E}|^2
In the above derivation, we have taken some shortcuts. For example, what is the permittivity in this relationship? Is it that of the particle or that of the fluid or maybe the difference of the two? What about the shape of the particle? Is there a shape factor?
Let’s now address some of these questions.
Force on a Spherical Particle
In a more stringent derivation, we instead use the vector-valued relationship for the force on an electric dipole:
(5) \quad \mathbf{F} = (\mathbf{P} \cdot \nabla)\mathbf{E}
where \bf{P} is the electric dipole moment of the particle.
To get the force for different particles, we simply insert various expressions for the electric dipole moment. In this expression, we can also see that if the electric field is uniform, we get no force (since the particle is small, its dipole moment is considered a constant). For a spherical dielectric particle with a (small) radius r_p in an electric field, the dipole moment is:
(6) \quad \mathbf{P} = 4 \pi r_p^3 \varepsilon_f k \mathbf{E}
where k is a parameter that depends on the permittivity of the particle and the surrounding fluid. The factor 4 \pi r_p^3 can be seen as a shape factor.
Combining these, we get:
(7) \quad \mathbf{F} = 2 \pi r_p^3 \varepsilon_f k \nabla |\mathbf{E}|^2
This again shows the dependency on the gradient of the square of the magnitude of the electric field.
Forces on a Particle in a Time-Varying Electric Field
If the electric field is time-varying (AC), the situation is a bit more complicated. Let’s also assume that there are losses that are represented by an electric conductivity, \sigma. The dielectrophoretic net force, \bf{F}, on a spherical particle turns out to be:
(8) \quad \mathbf{F} = 2 \pi r_p^3 \varepsilon_f \operatorname{Re}(k) \nabla |\mathbf{E}_{\textrm{rms}}|^2
where
(9) \quad k = \frac{\bar{\varepsilon}_p - \bar{\varepsilon}_f}{\bar{\varepsilon}_p + 2\bar{\varepsilon}_f}
and
(10) \quad \bar{\varepsilon} = \varepsilon - j\frac{\sigma}{2\pi\nu}
is the complex-valued permittivity. The subscripts p and f represent the particle and the fluid, respectively. The radius of the particle is r_p and \bf{E}_{\textrm{rms}} is the root-mean-square of the electric field. The frequency of the AC field is \nu.
From this expression, we can get the force for the electrostatic case by setting \sigma = 0. (We cannot take the limit when the frequency goes to zero, since the conductivity has no meaning in electrostatics.)
In the expression for the DEP force, we can see that indeed the difference in permittivity between the fluid and the particle plays an important role. If the sign of this difference switches, then the force direction is flipped. The factor k involving the difference and sum of permittivity values is known as the
complex Clausius-Mossotti function and you can read more about it here. This function encodes the frequency dependency of the DEP force.
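As a rough illustration of that frequency dependency (a sketch with made-up material parameters, not values from the app; `re_cm` is my own helper name):

import numpy as np

def re_cm(nu, eps_p, sig_p, eps_f, sig_f):
    # Real part of the complex Clausius-Mossotti factor at frequency nu (Hz).
    w = 2 * np.pi * nu
    ep = eps_p - 1j * sig_p / w        # complex permittivity of the particle
    ef = eps_f - 1j * sig_f / w        # complex permittivity of the fluid
    return ((ep - ef) / (ep + 2 * ef)).real

eps0 = 8.854e-12
for nu in (1e3, 1e5, 1e7, 1e9):        # a sign flip marks the pDEP/nDEP crossover
    print(nu, re_cm(nu, 2.5 * eps0, 1e-2, 78 * eps0, 1e-4))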
If the particles are not spherical but, say, ellipsoidal, then you use another proportionality factor. There are also well-known DEP force expressions for the case where the particle has one or more thin outer shells with different permittivity values, such as in the case of biological cells. The simulation app presented below includes the permittivity of the cell membrane, which is represented as a shell.
The settings window for the effective DEP permittivity of a dielectric shell.
There may be other forces acting on the particles, such as fluid drag force, gravitation, Brownian motion force, and electrostatic force. The simulation app shown below includes force contributions from drag, Brownian motion, and DEP. In the Particle Tracing Module, a range of possible particle forces are available as built-in options and we don’t need to be bothered with typing in lengthy force expressions. The figure below shows the available forces in the
Particle Tracing for Fluid Flow interface.
The different particle force options in the Particle Tracing for Fluid Flow interface.
Dielectrophoretic Separation of Particles
Medical analysis and diagnostics on smartphones is about to undergo rapid growth. We can imagine that, in the future, a smartphone can work in conjunction with a piece of hardware that can sample and analyze blood.
Let’s envision a case where this type of analysis can be divided into three steps:
Step 1: Extract blood using the hardware, which attaches directly to your smartphone, and compute mean platelet and red blood cell diameter.
Step 2: Compute the efficiency of separation of the red blood cells and platelets. This efficiency needs to be high in order to perform further diagnostics on the isolated red blood cells.
Step 3: Use the computed optimum separation conditions to isolate the red blood cells using the hardware attached to your smartphone.
The COMSOL Multiphysics simulation app focuses on Step 2 of the overall analysis process above. By exploiting the fact that blood platelets are the smallest cells in blood and have different permittivity and conductivity than red blood cells, it is possible to use DEP for size-based fractionation of blood; in other words, to separate red blood cells from platelets.
Red blood cells are the most common type of blood cell and the vertebrate organism’s principal means of delivering oxygen (O2) to the body tissues via the blood flow through the circulatory system. Platelets, also called thrombocytes, are blood cells whose function is to stop bleeding.
Using the Application Builder, we created an app that demonstrates the continuous separation of platelets from red blood cells (RBCs) using the Dielectrophoretic Force feature available in the Particle Tracing for Fluid Flow interface. (The app also requires one of the following: the CFD Module, Microfluidics Module, or Subsurface Flow Module and either the MEMS Module or AC/DC Module.)
The app is based on a lab-on-a-chip (LOC) device described in detail in the paper by N. Piacentini et al., “Separation of platelets from other blood cells in continuous-flow by dielectrophoresis field-flow-fractionation”, Biomicrofluidics, vol. 5, 034122, 2011.
The device consists of two inlets, two outlets, and a separation region. In the separation region, there is an arrangement of electrodes of alternating polarity that controls the particle trajectories. The electrodes create the nonuniform electric field needed for utilizing the dielectrophoretic effect. The figure below shows the geometry of the model.
The geometry used in the particle separation simulation app.
The inlet velocity for the lower inlet is significantly higher (853 μm/s) than the upper inlet (154 μm/s) in order to focus all the injected particles toward the upper outlet.
The app is built on a model that uses the following physics interfaces:
Creeping Flow (Microfluidics Module) to model the fluid flow.
Electric Currents (AC/DC or MEMS Module) to model the electric field in the microchannel.
Particle Tracing for Fluid Flow (Particle Tracing Module) to compute the trajectories of RBCs and platelets under the influence of drag and dielectrophoretic forces and subjected to Brownian motion.
Three studies are used in the underlying model:
Study 1 solves for the steady-state fluid dynamics and the frequency-domain (AC) electric potential at a frequency of 100 kHz.
Study 2 uses a Time Dependent study step, which utilizes the solution from Study 1 and estimates the particle trajectories without the dielectrophoretic force. In this study, all particles (platelets and RBCs) are focused to the same outlet.
Study 3 is a second Time Dependent study that includes the effect of the dielectrophoretic force.
You can download the model that the app was based on here.
A Biomedical Simulation App
To create the simulation app, we used the Application Builder, which is included in COMSOL Multiphysics® version 5.0 for the Windows® operating system.
The figure below shows the app as it looks when first started. In this case, we have connected to a COMSOL Server™ installation in order to run the COMSOL Multiphysics app in a standard web browser.
The app lets the user enter quantities, such as the frequency of the electric field and the applied voltage. The results include a scalar value for the fraction of red blood cells separated. In addition, three different visualizations are available in a tabbed window: the blood cell and platelet distribution, the electric potential, and the velocity field for the fluid flow.
The figures below show visualizations of the electric potential and the flow field.
The app has three different solving options for computing just the flow field, computing just the separation using the existing flow field, or combining the two. A warning message is shown if there is not a clean separation.
Increasing the applied voltage will increase the magnitude of the DEP force. If the separation efficiency isn’t high enough, we can increase the voltage and click on the Compute All button, since in this case both the fields and the particle trajectories need to be recomputed. We can control the value of the Clausius-Mossotti function of the DEP force expression by changing the frequency. It turns out that at the specified frequency of 100 kHz, only red blood cells will exit the lower outlet.
In this case, the fluid permittivity is higher than that of the particles, so both the platelets and the red blood cells experience a negative DEP force, but with different magnitudes. To get a successful overall design, we need to balance the DEP forces relative to the forces from fluid drag and Brownian motion. The figure below shows a simulation with input parameters that result in a 100% success in separating out the red blood cells through the lower outlet.
Further Reading
To learn more about dielectrophoresis and its applications, click on one of the links listed below. Included in the list is a link to a video on the Application Builder, which also shows you how to deploy applications with COMSOL Server™.
Model Gallery: Dielectrophoretic Particle Separation
Video Gallery: How to Build and Run Simulation Apps with COMSOL Server™ (archived webinar)
Wikipedia: Dielectrophoresis
Wikipedia: Maxwell-Wagner-Sillars polarization
Wikipedia: Clausius-Mossotti relation
|
this means that 9-a+b-c+2 is divisible by 11, so b-(a+c) is divisible by 11.
so this value is either -11 or 0.
if it is -11, we can use casework.
first take the case where b is 0.
then a can be 2 and c can be 9, or a can be 3 and c can be 8, all the way up to the case where a is 9 and c is 2.
this has 8 possibilities.
now you take the case where b is 1. the sum a+c is 12, and there are 7 possibilities using the same logic.
you keep doing this until you get 1 possibility, so there are 8+7+6+5+...+1 possibilities, or 28.
now if b-(a+c)=0, then it is the same.
if b is 0, there is one possibility.
if b is 1 there are 2 possibilities.
and you keep doing this until b is 9 and there are 10 possibilities.
this makes 1+2+3+...+10=55 possibilities for that case.
now you just add them up and get 55+28=83 possibilities.
HOPE THIS HELPED!!
\(\text{You start your answer with an assertion that needs proving}\\ 9abc2 \equiv 0 \pmod{11} \Rightarrow (9-a+b-c+2) \equiv 0 \pmod{11}\)
\(90000 + 1000a + 100b + 10c + 2 \equiv 0 \pmod{11}\\ 9 + 10a + b + 10c + 2 \equiv 11 - a + b - c \equiv b - (a+c) \pmod{11}\)
Neat little trick.
asdf335: I think you have made several mistakes in your reasoning.
1 - 8 + 7 + 6 + 5 + ... + 1 = 36, not 28.
2 - You should remove all numbers with a zero right after the 9, because their digits repeat: e.g. 9 011 2, 9 022 2, 9 033 2, etc. There are 9 in total.
3 - For each subsequent batch beginning with 1, there are about 2 numbers repeated, which should also be removed: e.g. 9 112 2, 9 122 2. There are 2 such numbers in each category beginning with 1 and ending in 9. The only exceptions are the numbers beginning with 5: 95062, 95172, 95282, 95392, 95502, 95612, 95722, 95832, 95942. Out of these 9 numbers, there is one repeat, i.e. 9 550 2, which should be removed.
4 - In total: 9 x 10 = 90, minus the 9 numbers with zeros, minus 2 x 9 = 18 repeats, plus 1 back for the batch beginning with 5 (which has only one repeat) = 64 combinations of the 3 numbers a, b, c.
5 - Here are all the numbers that meet the condition that they be "distinct", or different from each other: 91322, 91432, 91542, 91652, 91762, 91872, 91982, 92092, 92312, 92532, 92642, 92752, 92862, 92972, 93082, 93192, 93412, 93522, 93742, 93852, 93962, 94072, 94182, 94292, 94512, 94622, 94732, 94952, 95062, 95172, 95282, 95392, 95612, 95722, 95832, 95942, 96052, 96272, 96382, 96492, 96712, 96822, 96932, 97042, 97152, 97262, 97482, 97592, 97812, 97922, 98032, 98142, 98252, 98362, 98472, 98692, 98912, 99022, 99132, 99242, 99352, 99462, 99572, 99682 = 64 in total.
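A quick brute-force check in Python (a minimal sketch; it assumes the intended reading that the three middle digits a, b, c must be pairwise distinct) reproduces both counts:

# Five-digit numbers of the form 9abc2 that are divisible by 11.
all_hits = [n for n in range(90002, 100000, 10) if n % 11 == 0]
# Keep those whose middle digits a, b, c are pairwise distinct.
distinct = [n for n in all_hits if len(set(str(n)[1:4])) == 3]
print(len(all_hits), len(distinct))  # prints: 91 64

The 91 = 55 + 36 candidates match the corrected casework, and the 64 distinct ones match the list above.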
|
Warning: Heavy use of LaTeX follows. Make sure you enabled Javascript to read those equations properly. It is highly recommended to enlarge the page so that the equations are not in congested form.
First of all, season's greetings everyone. Let's see if I can get the customization article done before the end of 2016...
*
Readers probably know that I use linear algebra extremely heavily -- for casual blog posts, game studies, hardcore research and so on. It is really useful, and it simplifies quite a lot of problems that would be hard to prove the traditional way. You can find it in computational maths, topology, combinatorics, and now analysis.
Definition 1. Let $r\in \mathbb{N}_0$. Define $C^r$ to be the space of $r$-times differentiable functions, with the norm
$||f||_{C^r} = \sum _{|s|\leq r} ||D^sf||_{\infty}$
Definition 2. Let $r \in \mathbb{N}_0$ and $\alpha \in (0,1)$. Define $C^{r+\alpha}$ to be the space of functions for which the following norm is finite:
$||f||_{C^{r+\alpha}} = \sum _{|s|\leq r} ||D^sf||_{\infty} + \sum _{|s|=r} H\ddot{o}l_{\alpha}(D^sf) < \infty $
Theorem 3. Some primitive results. There exist constants $C>0$ such that:
1) If $f\in C^1$ then $||f||_{C^{\alpha}} \leq C ||f||_{\infty}^{1-\alpha} ||f||_{C^1}^{\alpha}$.
2) If $f\in C^{2+\alpha}$ then $||f||_{C^1} \leq C||f||_{C^{\alpha}}^{(\alpha +1)/2}||f||_{C^{2+\alpha}}^{(1-\alpha)/2}$.
In the rest of this article, $C$ represents a positive constant, possibly varying from line to line.
Proof. (1) is relatively easy to do using MVT:
$|f(x)-f(y)| = |f(x)-f(y)|^{1-\alpha} |f(x)-f(y)|^{\alpha}$
$\leq (2||f||_{\infty})^{1-\alpha}(||f||_{C^1}|x-y|)^{\alpha}$
$\leq C ||f||_{\infty}^{1-\alpha} ||f||_{C^1}^{\alpha}|x-y|^{\alpha},$
which bounds the Hölder seminorm; the sup-norm part satisfies $||f||_{\infty} \leq ||f||_{\infty}^{1-\alpha}||f||_{C^1}^{\alpha}$ because $||f||_{\infty}\leq ||f||_{C^1}$, so (1) follows.
(2) involves some nasty analytical approximation so it will be skipped here.
We split the exponent $1$ into $(1-\alpha) + \alpha$, but what about harder interpolation estimates?
Theorem 4. If $f\in C^{2+\alpha}$ then $||f||_{C^1} \leq C ||f||_{\infty}^{\frac{1+\alpha}{2+\alpha}} ~ ||f||_{C^{2+\alpha}}^{\frac{1}{2+\alpha}}$ for some constant $C>0$.
Well, notice that these kinds of interpolation results share two common features: if $||f||_{C^p} \leq C\prod_n ||f||_{C^{q_n}}^{r_n}$ then $\sum_n r_n =1 $ and $p = \sum_n q_nr_n$. These properties are preserved by the usual row operations!
Let $a,b,c,d$ be the exponents for the norms of $C^0, C^{\alpha}, C^1, C^{2+\alpha}$ respectively. Putting those 3 relations together we have the following matrix
$A = \begin{bmatrix} \alpha -1& 1 & -\alpha & 0 \\ 0 & -\frac{\alpha+1}{2} & 1 & \frac{\alpha-1}{2} \\ -\frac{1+\alpha}{2+\alpha} & 0 & 1 & -\frac{1}{2+\alpha} \end{bmatrix}$
which, upon row reduction, has a zero row: the three rows are linearly dependent, so Theorem 4 follows from Theorem 3.
QED.
If you want the actual exponents to work with, use the linear dependency algorithm:
$A^t \sim \begin{bmatrix} 1& 0 & \frac{-\alpha -1}{\alpha ^2 + \alpha -2} \\ 0 & 1 & \frac{-2}{\alpha ^2 + \alpha -2} \\ 0 & 0 & 0\\0& 0& 0\end{bmatrix}$
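If you don't trust the by-hand row reduction, a computer algebra system can verify the dependency. Here is a sketch in Python/sympy (the variable names are mine):

import sympy as sp

al = sp.symbols('alpha', positive=True)
x, y = sp.symbols('x y')

# Exponent vectors over (C^0, C^alpha, C^1, C^{2+alpha}).
r1 = sp.Matrix([al - 1, 1, -al, 0])                      # Theorem 3(1)
r2 = sp.Matrix([0, -(al + 1)/2, 1, (al - 1)/2])          # Theorem 3(2)
r3 = sp.Matrix([-(1 + al)/(2 + al), 0, 1, -1/(2 + al)])  # Theorem 4

# Solve x*r1 + y*r2 = r3; a consistent solution exhibits the zero row.
sol = sp.solve(list(x*r1 + y*r2 - r3), [x, y], dict=True)[0]
print(sp.simplify(sol[x]))  # -(alpha + 1)/(alpha**2 + alpha - 2)
print(sp.simplify(sol[y]))  # -2/(alpha**2 + alpha - 2)

Both coefficients are positive for $\alpha\in(0,1)$ (the denominator $\alpha^2+\alpha-2$ is negative there), which is exactly the sign condition needed below.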
The signs do matter: raising an inequality to a negative power flips its direction, so we have to track the orientation. Upon such suitable exponents, we have the following from Theorem 3:
$||f||_{C^{\alpha}}^{\frac{\alpha +1}{\alpha ^2+\alpha -2}} \geq C ||f||_{\infty}^{-\frac{1+\alpha}{2+\alpha}}~ ||f||_{C^1}^{\frac{\alpha (\alpha +1)}{\alpha ^2 + \alpha -2}}$
$||f||_{C^1}^{-\frac{2}{\alpha ^2+\alpha -2}} \leq C||f||_{C^{\alpha}}^{-\frac{\alpha +1}{\alpha ^2 + \alpha -2}}~||f||_{C^{2+\alpha}}^{\frac{1}{2+\alpha}}$
And, therefore, replicating the proof:
$||f||_{C^1} = (||f||_{C^1}^{-\frac{2}{\alpha ^2+\alpha -2}}~ )(||f||_{C^1}^{\frac{\alpha (\alpha +1)}{\alpha ^2 + \alpha -2}}~)$
$\leq C (||f||_{C^{\alpha}}^{-\frac{\alpha +1}{\alpha ^2 + \alpha -2}}~ ||f||_{C^{2+\alpha}}^{\frac{1}{2+\alpha}})(||f||_{C^{\alpha}}^{\frac{\alpha +1}{\alpha ^2+\alpha -2}}~ ||f||_{\infty}^{\frac{1+\alpha}{2+\alpha}})$
$ = C ||f||_{\infty}^{\frac{1+\alpha}{2+\alpha}} ~ ||f||_{C^{2+\alpha}}^{\frac{1}{2+\alpha}}$
You may actually find an easier solution to the above: just substitute one estimate into the other:
$||f||_{C^1} \leq C||f||_{C^{\alpha}}^{\frac{\alpha +1}{2}}~ ||f||_{C^{2+\alpha}}^{\frac{1-\alpha}{2}} \leq (||f||_{\infty}^{1-\alpha}~ ||f||_{C^1}^{\alpha})^{\frac{\alpha +1}{2}}~ ||f||_{C^{2+\alpha}}^{\frac{1-\alpha}{2}}$
which gives you the right answer after some rearrangement. But what about more complicated interpolation results? I simply leave two simple results here -- be warned that they are linearly independent of the above: if they added only rank 1 to the system, you would have to prove just one of them using analytical results; the bad news is that they increase the rank of the system by 2. Can you still try to make use of linear algebra...?
Theorem 5. Let $f\in C^{2+\alpha}$, then the following holds for some $C>0$:
1): $||f||_{C^1} \leq C||f||_{\infty}^{\frac{\alpha}{\alpha +1}}~ ||f||_{C^{1+\alpha}}^{\frac{1}{\alpha +1}}$
2): $||f||_{C^2} \leq C||f||_{\infty}^{\frac{\alpha}{2+\alpha}}~ ||f||_{C^{2+\alpha}}^{\frac{2}{2+\alpha}}$
|
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(Elsevier, 2016-02)
The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ...
Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2013-11)
We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
|
By Dr Adam Falkowski (Résonaances; Orsay, France)
The title of this post is purposely over-optimistic in order to increase the traffic. A more accurate statement is that a recent analysis
of the X-ray spectrum of galactic clusters claims the presence of a monochromatic \(3.5\keV\) photon line which can be interpreted as a signal of a sterile neutrino dark matter candidate, with mass \[ \large{m_{\nu({\rm ster})} = 7\keV }, \] decaying into a photon and an ordinary neutrino. The analysis in question is Detection of An Unidentified Emission Line in the Stacked X-ray Spectrum of Galaxy Clusters by Esra Bulbul and 5 co-authors (NASA/Harvard-Smithsonian). It's a long way before this claim may become a well-established signal. Nevertheless, in my opinion, it's not the least believable hint of dark matter coming from astrophysics in recent years.
First, let me explain why anyone would dirty their hands to study X-ray spectra. In the most popular scenario the dark matter particle is a WIMP — a particle in the \(\GeV\)-\(\TeV\) mass ballpark that has weak-strength interactions with the ordinary matter. This scenario may predict signals in gamma rays, high-energy anti-protons, electrons etc, and these are being searched for high and low by several Earth-based and satellite experiments.
But in principle the mass of the dark matter particle could be anywhere between \(10^{-30}\) and \(10^{50}\GeV\), and there are many other models of dark matter on the market. One serious alternative to WIMPs is a \(\keV\)-mass sterile neutrino. In general, neutrinos are dark matter: they are stable, electrically neutral, and are produced in the early universe. However we know that the 3 neutrinos from the Standard Model constitute only a small fraction of dark matter, as otherwise they would affect the large-scale structure of the universe in a way that is inconsistent with observations. The story is different if the 3 "active" neutrinos have partners from beyond the Standard Model that do not interact with W- and Z-bosons — the so-called "sterile" neutrinos. In fact, the simplest UV-complete models that generate masses for the active neutrinos require introducing at least 2 sterile neutrinos, so there are good reasons to believe that these guys exist. A sterile neutrino is a good dark matter candidate if its mass is larger than \(1\keV\) (because of the constraints from the large-scale structure) and if its lifetime is longer than the age of the universe.
How can we see if this is the right model? Dark matter that has no interactions with the visible matter seems hopeless. Fortunately, sterile neutrino dark matter is expected to decay and produce a smoking-gun signal in the form of a monochromatic photon line. This is because, in order to be produced in the early universe, the sterile neutrino should mix slightly with the active ones. In that case, oscillations of the active neutrinos into sterile ones in the primordial plasma can populate the number density of sterile neutrinos, and by this mechanism it is possible to explain the observed relic density of dark matter. But the same mixing will make the sterile neutrino decay, as shown in the diagrams here. If the sterile neutrino is light enough and/or the mixing is small enough then its lifetime can be much longer than the age of the universe, and then it remains a viable dark matter candidate.
The tree-level decay into 3 ordinary neutrinos is undetectable, but the 2-body loop decay into a photon and a neutrino results in production of photons with the energy\[
\large{E=\frac{m_{\rm DM}}{2}.}
\] Such a monochromatic photon line can potentially be observed. In fact, in the simplest models sterile neutrino dark matter heavier than \(\approx 50\keV\) would produce a too large photon flux and is excluded. Thus the favored mass range for dark matter is between \(1\) and \(50\keV\). Then the photon line is predicted to fall into the X-ray domain that can be studied using X-ray satellites like XMM-Newton, Chandra, or Suzaku.
Until last week these searches were only providing lower limits on the lifetime of sterile neutrino dark matter. This paper claims they may have hit the jackpot. The paper uses the XMM-Newton data to analyze the stacked X-ray spectra of many galaxy clusters where dark matter is lurking. After subtracting the background, what they see is this:
Although the natural reaction here is a loud "are you kidding me", the claim is that the excess near \(3.56\keV\) (red data points) over the background model is very significant, at 4-5 astrophysical sigma. It is difficult to assign this excess to any known emission lines from usual atomic transitions. If interpreted as the signal of sterile neutrino dark matter, the measured energy and the flux correspond to the red star in the plot, with the mass \(7.1\keV\) and the mixing angle of order \(5\times 10^{-5}\). This is allowed by other constraints and, by twiddling with the lepton asymmetry in the neutrino sector, consistent with the observed dark matter relic density.
Clearly, a lot could go wrong with this analysis. For one thing, the suspected dark matter line doesn't stand alone in the spectrum. The background mentioned above consists not only of continuous X-ray emission but also of monochromatic lines from known atomic transitions. Indeed, the \(2\)-\(10\keV\) range where the search was performed is packed with emission lines: the authors fit 28 separate lines to the observed spectrum before finding the unexpected residue at \(3.56\keV\). The results depend on whether these other emission lines are modeled properly. Moreover, the known argon XVII dielectronic recombination line happens to be nearby at \(3.62\keV\). The significance of the signal decreases when the flux from that line is allowed to be larger than predicted by models. So this analysis needs to be confirmed by other groups and by more data before we really get excited.
Decay diagrams borrowed from this review. For more up-to-date limits on sterile neutrino DM see this paper, or this plot. Update: another independent analysis of XMM-Newton data observes the anomalous 3.5 keV line in the Andromeda and the Perseus cluster. The text was reposted from Adam's blog with his permission...
|
Find the least positive four-digit solution to the following system of congruences.
\(7x \equiv 21 \pmod{14} \)
\(2x+13 \equiv 16 \pmod{9} \)
\(-2x+1 \equiv x \pmod{25} \)
from the first one, you can see that x is odd.
now you do the second one.
this means 2x leaves a remainder of 3 when divided by 9.
this means x leaves a remainder of 6 when divided by 9, so x is a multiple of 3.
now, 3x divided by 25 leaves remainder of 1 from the third one.
now you do guess and check and see that the first x is 17, the next is 42, the next is 67, and on and on
this means that the number when divided by 25 leaves remainder 17, and that it is odd.
2x leaves a remainder of 3 when divided by 9, so combining everything, the first number that works is 42, and then you keep adding 225 because of the 25 and 9: 42, 267, 492, 717, 942, 1167. but the number also has to be odd, so the first four-digit one is 1167. you can double-check and it all works.
HOPE THIS HELPED!
There is a more formal method to solving this problem.
First let's reduce these congruences.
\(7x \equiv 21 \pmod{14}\\ x \equiv 3 \pmod{2}\\ x \equiv 1 \pmod{2}\)
--------------------------------
\(2x+13 \equiv 16 \pmod{9}\\ 2x \equiv 3 \pmod{9}\\ \text{now we multiply both sides by the multiplicative inverse of }2 \pmod{9}\\ 5\cdot 2x \equiv 5\cdot 3 \pmod{9}\\ x \equiv 6 \pmod{9}\)
---------------------------------
\(-2x + 1 \equiv x \pmod{25}\\ 3x \equiv 1 \pmod{25}\\ 17\cdot 3x \equiv 17 \pmod{25}\\ x \equiv 17 \pmod{25}\)
\(\text{Now we have the following system of linear congruences}\\ x \equiv 1 \pmod{2}\\ x \equiv 6 \pmod{9}\\ x \equiv 17 \pmod{25}\)
\(\text{From the first congruence we have}\\ x = 2t+1,~t \in \mathbb{Z}\\ \text{substitute this into the second congruence}\\ 2t+1 \equiv 6 \pmod{9}\\ 2t\equiv 5 \pmod{9}\\ 5\cdot 2t \equiv 5\cdot 5 \pmod{9}\\ t \equiv 7 \pmod{9}\\ t = 9s+7,~s \in \mathbb{Z}\)
\(\text{substitute this back into }x \text{ and simplify}\\ x = 2(9s+7)+1\\ x = 18s + 14+1 = 18s+15\)
\(\text{Now we substitute this into the third congruence}\\ 18s+15 \equiv 17 \pmod{25}\\ 18s \equiv 2 \pmod{25}\\ \text{Now we have to find }18^{-1} \pmod{25},\text{ with these small numbers trial and error works}\\ 18^{-1}\pmod{25} = 7\\ 7\cdot 18 s \equiv 14 \pmod{25}\\ s \equiv 14 \pmod{25}\\ s = 25u+14,~u\in \mathbb{Z}\)
\(\text{and we substitute this back into }x\\ x = 18(25u+14)+15 = 267+450u\\ x \equiv 267 \pmod{450}\\ \text{The smallest 4 digit solution will be}\\ x = 2\cdot 450 + 267 = 1167\).
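For anyone who wants to double-check, a brute-force scan in Python (a minimal sketch) agrees:

# Check the original three congruences directly.
def satisfies(x):
    return ((7*x - 21) % 14 == 0
            and (2*x + 13 - 16) % 9 == 0
            and (-2*x + 1 - x) % 25 == 0)

print([x for x in range(1, 1000) if satisfies(x)])          # [267, 717]
print(next(x for x in range(1000, 10000) if satisfies(x)))  # 1167

The solutions march in steps of 450 = 2 * 9 * 25, matching x ≡ 267 (mod 450).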
|
Bernoulli, Volume 25, Number 1 (2019), 375-394.
On the longest gap between power-rate arrivals
Abstract
Let $L_{t}$ be the longest gap before time $t$ in an inhomogeneous Poisson process with rate function $\lambda_{t}$ proportional to $t^{\alpha-1}$ for some $\alpha\in(0,1)$. It is shown that $\lambda_{t}L_{t}-b_{t}$ has a limiting Gumbel distribution for suitable constants $b_{t}$ and that the distance of this longest gap from $t$ is asymptotically of the form $(t/\log t)E$ for an exponential random variable $E$. The analysis is performed via weak convergence of related point processes. Subject to a weak technical condition, the results are extended to include a slowly varying term in $\lambda_{t}$.
Article information
Source: Bernoulli, Volume 25, Number 1 (2019), 375-394.
Dates: Received: March 2017. Revised: August 2017. First available in Project Euclid: 12 December 2018.
Permanent link to this document: https://projecteuclid.org/euclid.bj/1544605250
Digital Object Identifier: doi:10.3150/17-BEJ990
Mathematical Reviews number (MathSciNet): MR3892323
Zentralblatt MATH identifier: 07007211
Citation
Asmussen, Søren; Ivanovs, Jevgenijs; Segers, Johan. On the longest gap between power-rate arrivals. Bernoulli 25 (2019), no. 1, 375--394. doi:10.3150/17-BEJ990. https://projecteuclid.org/euclid.bj/1544605250
|
As before, let \(V\) be a complex vector space.
Let \(T\in\mathcal{L}(V,V)\) and \((v_1,\ldots,v_n)\) be a basis for \(V\). Recall that we can associate a matrix \(M(T)\in \mathbb{C}^{n\times n}\) to the operator \(T\). By Theorem 7.4.1, we know that \(T\) has at least one eigenvalue, say \(\lambda\in \mathbb{C}\). Let \(v_1 \neq 0\) be an eigenvector corresponding to \(\lambda\). By the Basis Extension Theorem, we can extend the list \((v_1)\) to a basis of \(V\). Since \(Tv_1 = \lambda v_1\), the first column of \(M(T)\) with respect to this basis is
\[ \begin{bmatrix} \lambda \\ 0\\ \vdots\\ 0 \end{bmatrix}. \]
What we will show next is that we can find a basis of \(V\) such that the matrix \(M(T)\) is upper triangular.
Definition 7.5.1: Upper Triangular Matrix
A matrix \(A=(a_{ij})\in \mathbb{F}^{n\times n}\) is called upper triangular if \(a_{ij}=0\) for \(i>j\).
Schematically, an upper triangular matrix has the form
\[ \begin{bmatrix} * && * \\ &\ddots& \\ 0 &&* \end{bmatrix}, \]
where the entries \(*\) can be anything and every entry below the main diagonal is zero.
Here are two reasons why having an operator \(T\) represented by an upper triangular matrix can be quite convenient:
1. the eigenvalues are on the diagonal (as we will see later);
2. it is easy to solve the corresponding system of linear equations by back substitution (as discussed in Section A.3 and sketched in code below).
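Here is a minimal back-substitution routine illustrating the second point (a Python/NumPy sketch, not part of the text's formal development):

import numpy as np

def back_substitute(U, b):
    # Solve U x = b for upper triangular U with nonzero diagonal entries.
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x[i+1:] is already known, so row i determines x[i] directly.
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 3.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 5.0]])
b = np.array([1.0, 2.0, 10.0])
print(back_substitute(U, b))  # agrees with np.linalg.solve(U, b)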
The next proposition tells us what upper triangularity means in terms of linear operators and invariant subspaces.
Proposition 7.5.2
Suppose \(T\in \mathcal{L}(V,V)\) and that \((v_1,\ldots,v_n)\) is a basis of \(V\). Then the following statements are equivalent:
1. the matrix \(M(T)\) with respect to the basis \((v_1,\ldots,v_n)\) is upper triangular;
2. \(Tv_k \in \Span(v_1,\ldots,v_k)\) for each \(k=1,2,\ldots,n\);
3. \(\Span(v_1,\ldots,v_k)\) is invariant under \(T\) for each \(k=1,2,\ldots,n\).
Proof
The equivalence of Condition 1 and Condition 2 follows easily from the definition since Condition 2 implies that the matrix elements below the diagonal are zero.
Obviously, Condition 3 implies Condition 2. To show that Condition 2 implies Condition 3, note that any vector \(v \in\Span(v_1,\ldots,v_k)\) can be written as \(v=a_1v_1+\cdots+a_kv_k\). Applying \(T\), we obtain
\[ Tv = a_1 Tv_1 + \cdots + a_k Tv_k \in \Span(v_1,\ldots,v_k) \]
since, by Condition 2, each \(Tv_j \in \Span(v_1,\ldots,v_j)\subset \Span(v_1,\ldots,v_k)\) for \(j=1,2,\ldots,k\) and since the span is a subspace of \(V\).
\(\square\)
The next theorem shows that complex vector spaces indeed have some basis for which the matrix of a given operator is upper triangular.
Theorem 7.5.3
Let \(V\) be a finite-dimensional vector space over \(\mathbb{C}\) and \(T\in\mathcal{L}(V,V)\). Then there exists a basis \(B\) for \(V\) such that \(M(T)\) is upper triangular with respect to \(B\).
Proof
We proceed by induction on \(\dim(V)\). If \(\dim(V)=1\), then there is nothing to prove.
Hence, assume that \(\dim(V)=n>1\) and that we have proven the result of the theorem for all \(T\in \mathcal{L}(W,W)\), where \(W\) is a complex vector space with \(\dim(W)\le n-1\). By Theorem 7.4.1, \(T\) has at least one eigenvalue \(\lambda\).
Define
\[ U = \range(T-\lambda I), \]
and note that
1. \(\dim(U)<\dim(V)=n\) since \(\lambda\) is an eigenvalue of \(T\) and hence \(T-\lambda I\) is not surjective;
2. \(U\) is an invariant subspace of \(T\) since, for all \(u\in U\), we have
\[ Tu = (T-\lambda I) u + \lambda u, \]
which implies that \(Tu\in U\) since \((T-\lambda I) u \in \range(T-\lambda I)=U\) and \(\lambda u\in U\).
Therefore, we may consider the operator \(S=T|_U\), which is the operator obtained by restricting \(T\) to the subspace \(U\). By the induction hypothesis, there exists a basis \((u_1,\ldots,u_m)\) of \(U\) with \(m\le n-1\) such that \(M(S)\) is upper triangular with respect to \((u_1,\ldots,u_m)\). This means that
\[ Tu_j = Su_j\in \Span(u_1,\ldots,u_j), \quad \text{for all \(j=1,2,\ldots,m\).} \]
Extend this to a basis \((u_1,\ldots,u_m,v_1,\ldots,v_k)\) of \(V\). Then
\[ Tv_j=(T-\lambda I)v_j + \lambda v_j, \quad \text{for all \(j=1,2,\ldots,k\).} \]
Since \((T-\lambda I) v_j\in \range(T-\lambda I)=U=\Span(u_1,\ldots,u_m)\), we have that
\[ Tv_j \in \Span(u_1,\ldots,u_m,v_1,\ldots,v_j), \quad \text{for all \(j=1,2,\ldots,k\).} \]
Hence, \(T\) is upper triangular with respect to the basis \((u_1,\ldots,u_m,v_1,\ldots,v_k)\).
\(\square\)
The following are two very important facts about upper triangular matrices and their associated operators.
Proposition 7.5.4
Suppose \(T\in\mathcal{L}(V,V)\) is a linear operator and that \(M(T)\) is upper triangular with respect to some basis of \(V\).
1. \(T\) is invertible if and only if all entries on the diagonal of \(M(T)\) are nonzero.
2. The eigenvalues of \(T\) are precisely the diagonal elements of \(M(T)\).
Proof of Proposition 7.5.4, Part 1
Let \((v_1,\ldots,v_n)\) be a basis of \(V\) such that
\[ M(T) = \begin{bmatrix} \lambda_1 &&*\\ &\ddots&\\ 0&&\lambda_n \end{bmatrix} \]
is upper triangular. The claim is that \(T\) is invertible if and only if \(\lambda_k\neq 0\) for all \(k=1,2,\ldots,n\). Equivalently, this can be reformulated as follows: \(T\) is not invertible if and only if \(\lambda_k=0\) for at least one \(k\in\{1,2,\ldots,n\}\).
Suppose \(\lambda_k=0\). We will show that this implies the non-invertibility of \(T\). If \(k=1\), this is obvious since then \(Tv_1=0\), which implies that \(v_1\in\kernel(T)\) so that \(T\) is not injective and hence not invertible. So assume that \(k>1\). Then
\[ Tv_j \in \Span(v_1,\ldots,v_{k-1}), \quad \text{for all } j \le k, \]
since \(T\) is upper triangular and \(\lambda_k=0\). Hence, we may define \(S=T|_{\Span(v_1,\ldots,v_k)}\) to be the restriction of \(T\) to the subspace \(\Span(v_1,\ldots,v_k)\) so that
\[ S: \Span(v_1,\ldots,v_k) \to \Span(v_1,\ldots,v_{k-1}). \]
The linear map \(S\) is not injective since the dimension of the domain is larger than the dimension of its codomain, i.e.,
\[ \dim(\Span(v_1,\ldots,v_k)) = k > k-1 = \dim(\Span(v_1,\ldots,v_{k-1})). \]
Hence, there exists a vector \(0\neq v\in \Span(v_1,\ldots,v_k)\) such that \(Sv=Tv=0\). This implies that \(T\) is also not injective and therefore also not invertible.
Now suppose that \(T\) is not invertible. We need to show that at least one \(\lambda_k=0\). The linear map \(T\) not being invertible implies that \(T\) is not injective. Hence, there exists a vector \(0\neq v\in V\) such that \(Tv=0\), and we can write
\[ v = a_1 v_1 + \cdots + a_k v_k \]
for some \(k\), where \(a_k\neq 0\). Then
\begin{equation} 0 = Tv = (a_1 Tv_1 + \cdots + a_{k-1} Tv_{k-1}) + a_k Tv_k. \label{7.5.1} \end{equation}
Since \(T\) is upper triangular with respect to the basis \((v_1,\ldots,v_n)\), we know that \(a_1 Tv_1 + \cdots + a_{k-1} Tv_{k-1}\in \Span(v_1,\ldots,v_{k-1})\). Hence, Equation \ref{7.5.1} shows that \(Tv_k \in \Span(v_1,\ldots,v_{k-1})\), which implies that \(\lambda_k=0\).
\(\square\)
Proof of Proposition 7.5.4, Part 2.
Recall that \(\lambda\in\mathbb{F}\) is an eigenvalue of \(T\) if and only if the operator \(T-\lambda I\) is not invertible. Let \((v_1,\ldots,v_n)\) be a basis such that \(M(T)\) is upper triangular. Then
\[ M(T-\lambda I) = \begin{bmatrix} \lambda_1-\lambda &&*\\ &\ddots&\\ 0&&\lambda_n-\lambda \end{bmatrix}. \]
Hence, by Proposition 7.5.4(1), \(T-\lambda I\) is not invertible if and only if \(\lambda=\lambda_k\) for some \(k\).
\(\square\)
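As a quick numerical illustration of Part 2 (a NumPy sketch, outside the formal development), the eigenvalues of an upper triangular matrix can be read off its diagonal:

import numpy as np

# Random upper triangular matrix; its eigenvalues should be its diagonal.
A = np.triu(np.random.default_rng(0).normal(size=(4, 4)))
eigvals = np.linalg.eigvals(A)
print(np.allclose(np.sort(eigvals.real), np.sort(np.diag(A))))  # True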