Volume 9, Number 5, 2019, Pages 1872-1883
DOI:10.11948/20180335
An integral boundary value problem of conformable integro-differential equations with a parameter
Chengbo Zhai, Yuqing Liu
Keywords: Positive solution, conformable derivative, integro-differential equations, fixed point theorem of generalized concave operators.
Abstract:
In this article, we consider some properties of positive solutions for a new conformable integro-differential equation with integral boundary conditions and a parameter
$$\left\{ \begin{array}{l} T_{\alpha}u(t)+\lambda f(t,u(t),I_{\alpha}u(t))=0,\quad t\in[0,1],\\ u(0)=0,\quad u(1)=\beta\int_{0}^{1}u(t)dt,\quad \beta\in[\frac{3}{2},2), \end{array}\right.$$
where $\alpha\in(1,2]$, $\lambda$ is a positive parameter, $T_{\alpha}$ is the usual conformable derivative, $I_{\alpha}$ is the conformable integral, and $f:[0,1]\times\mathbf{R^{+}}\times\mathbf{R^{+}}\rightarrow \mathbf{R^{+}}$ is a continuous function, where $\mathbf{R^{+}}=[0,+\infty)$. We use a recent fixed point theorem for monotone operators in ordered Banach spaces to establish the existence and uniqueness of positive solutions for the boundary value problem. Further, we give an iterative sequence to approximate the unique positive solution and some good properties of the positive solution with respect to the parameter $\lambda$. A concrete example is given to better demonstrate our main result.
|
Combining All Pairs Shortest Paths and All Pairs Bottleneck Paths Problems
Abstract
We introduce a new problem that combines the well-known All Pairs Shortest Paths (APSP) problem and the All Pairs Bottleneck Paths (APBP) problem to compute the shortest paths for all pairs of vertices for all possible flow amounts. We call this new problem the All Pairs Shortest Paths for All Flows (APSP-AF) problem. We first solve the APSP-AF problem on directed graphs with unit edge costs and real edge capacities in \(\tilde{O}(\sqrt{t}n^{(\omega+9)/4}) = \tilde{O}(\sqrt{t}n^{2.843})\) time, where n is the number of vertices, t is the number of distinct edge capacities (flow amounts) and \(O(n^{\omega}) = O(n^{2.373})\) is the time taken to multiply two n-by-n matrices over a ring. Secondly we extend the problem to graphs with positive integer edge costs and present an algorithm with \(\tilde{O}(\sqrt{t}c^{(\omega+5)/4}n^{(\omega+9)/4}) = \tilde{O}(\sqrt{t}c^{1.843}n^{2.843})\) worst-case time complexity, where c is the upper bound on edge costs.
Keywords: Shortest Path, Acceleration Phase, Path Cost, Successor Node, Edge Cost
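To make the problem statement concrete, here is a brute-force APSP-AF baseline (our own illustrative sketch, not the paper's algorithm, which relies on fast matrix multiplication): for each distinct capacity threshold, run BFS on the subgraph of edges whose capacity meets the threshold.

```python
from collections import deque

def apsp_all_flows(n, edges):
    """Naive APSP-AF baseline for a directed graph with unit edge costs.

    edges: list of (u, v, capacity). For every distinct capacity value t,
    BFS on the subgraph keeping only edges with capacity >= t gives
    dist[t][u][v] = length of the shortest u->v path that can carry flow t.
    This runs in O(t * n * (n + m)) time, far slower than the paper's bound.
    """
    thresholds = sorted({c for _, _, c in edges})
    result = {}
    for t in thresholds:
        adj = [[] for _ in range(n)]
        for u, v, c in edges:
            if c >= t:
                adj[u].append(v)
        dist = []
        for s in range(n):
            d = [None] * n  # None = unreachable at this flow amount
            d[s] = 0
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if d[v] is None:
                        d[v] = d[u] + 1
                        q.append(v)
            dist.append(d)
        result[t] = dist
    return result

# Tiny example: a direct edge with small capacity vs a longer, fatter path.
g = apsp_all_flows(3, [(0, 2, 1), (0, 1, 5), (1, 2, 5)])
print(g[1][0][2], g[5][0][2])  # shortest 0->2 distance for flow 1 vs flow 5
```

For flow 1 the direct edge suffices, while for flow 5 the two-edge path is required, which is exactly the trade-off the APSP-AF problem captures.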
|
I always get the following warning:
LaTeX Font Warning: Font shape `OMS/cmsy/m/n' in size <11.5> not available
First of all, cmsy doesn't look like Times New Roman. To get this exact font, use the mathptmx package (Times font for both text and math formulae). To use an arbitrary font size, you need KOMA-Script's scrartcl document class.
MWE
Input:
\documentclass[fontsize=11.5pt]{scrartcl}
% if you want both maths and text in Times New Roman:
\usepackage{mathptmx}
% if you only want text in Times New Roman:
%\usepackage{times}
\usepackage{lipsum} % Blind text
\begin{document}
\lipsum[1-3]
$$SE=\sqrt{\frac{1}{n-2}\left[\sum^{n}_{i=1}(y_i-\overline{y})^2-\frac{\Big[\sum^{n}_{i=1}(x_i-\overline{x})(y_i-\overline{y})\Big]^2}{\sum^{n}_{i=1}(x_i-\overline{x})^2}\right]}$$
\lipsum[1-2]
\end{document}
Output with mathptmx: (image)
Output with times: (image)
|
In order to enable an iCal export link, your account needs to have an API key created. This key enables other applications to access data from within Indico even when you are neither using nor logged into the Indico system yourself with the link provided. Once created, you can manage your key at any time by going to 'My Profile' and looking under the tab entitled 'HTTP API'. Further information about HTTP API keys can be found in the Indico documentation.
In addition to having an API key associated with your account, exporting private event information requires the use of a persistent signature. This enables API URLs which do not expire after a few minutes, so while the setting is active, anyone in possession of the link provided can access the information. Because of this, it is extremely important that you keep these links private and for your use only. If you think someone else may have acquired access to a link using this key, you must immediately create a new key pair on the 'My Profile' page under the 'HTTP API' tab and update the iCalendar links afterwards.
Permanent link for public information only:
Permanent link for all public and protected information:
Topological susceptibility and chiral condensate with maximally twisted mass fermions
by Dr Elena García Ramos (DESY)
Europe/Rome
Aula Conversi (LNF)
Via Enrico Fermi 40, 00044 Frascati (Roma)
Description
We study the `spectral projector' method for the computation of the chiral condensate and the topological susceptibility, using maximally twisted mass Wilson fermions. In particular, we perform a study of the quark mass dependence of the chiral condensate $\Sigma$ and topological susceptibility $\chi_{top}$ in the range $270\;\mathrm{MeV} < m_{\pi} < 500\;\mathrm{MeV}$ and compare our data with the analytical predictions. In addition, we compute $\chi_{top}$ in the quenched approximation where we matched the lattice spacing to the dynamical simulations. Using the Kaon, $\eta$ and $\eta'$ meson masses computed on the dynamical ensembles, we then perform a preliminary test of the Witten-Veneziano relation.
|
Answer
$(5, 315^\circ)$ = $3.536 - 3.536i$
Work Step by Step
$(5, 315^\circ)$ is equivalent to $5\,\mathrm{cis}\,315^\circ$. Then $5\,\mathrm{cis}\,315^\circ = 5(\cos 315^\circ + i\sin 315^\circ) = 5(\frac{\sqrt{2}}{2} - \frac{\sqrt{2}}{2}i) = 3.536 - 3.536i$.
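For anyone who wants to verify this numerically, Python's cmath.rect performs exactly this polar-to-rectangular conversion (a quick check, not part of the textbook solution):

```python
import cmath
import math

def polar_to_rect(r, theta_deg):
    """Convert polar coordinates (r, angle in degrees) to a complex number."""
    return cmath.rect(r, math.radians(theta_deg))

z = polar_to_rect(5, 315)
print(round(z.real, 3), round(z.imag, 3))  # 3.536 -3.536
```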
|
I decided to read Max Born's 1954 Physics Nobel Prize lecture.
We like to think about the Copenhagen school. Born makes it clear that Niels Bohr was a guru in his eyes. But he also prefers to talk about three schools, the Copenhagen school; the Göttingen school; and the Cambridge school. He was the boss of the Göttingen school in 1926-27; the Cambridge school had Dirac as its epicenter.
On the first page, he tells us he wants to describe how physics solved the "crisis". He also enumerates some important people who have always remained uncomfortable with the standard interpretation of quantum mechanics – including Planck, Einstein, de Broglie, and Schrödinger.
On the second page, we're reminded about Planck's 1900 calculation of the black body spectrum. In the early 1920s, it was already generally accepted that the electromagnetic energy at frequency \(f\) simply has to come in packets \(E=hf\): this assumption was able to explain so many things. The corpuscular theory of light was partially revived and people got used to some form of wave-particle duality although they didn't quite understand why these two aspects of light didn't contradict one another.
Also, in 1913, Niels Bohr presented his model of the atom. The hydrogen atom had discrete states and light is emitted when a transition occurs. Born uses a rather modern quantum mechanical language to discuss Bohr's idea – but it's plausible that Bohr thought along similar lines. Born already sketches matrices relative to the energy eigenstate basis and suggests that Bohr was already thinking in similar ways in 1913. Also, Bohr was able to see that the principle of correspondence had to hold. In modern words, when the quantum numbers (integers) are much larger than one, physics must reduce to its classical limit.
On the third page, he sketches how people managed to reconcile the discreteness of the electromagnetic energy with the principle of correspondence. The number of photons at a given frequency had to be integer; however, when this integer was high, one should have gotten classical physics where light may be emitted at any intensity or frequency.
How can one describe emission – the process that becomes continuous and allows an arbitrary intensity in the classical limit – if the number of photons is discrete, \(n\in\mathbb{Z}\)? Born credits Bohr, Kramers, Sommerfeld, Epstein, and especially Einstein with the idea that the intensity has to be replaced by
transition probabilities. The number \(n\in\mathbb{Z}\) cannot change continuously. But what can be continuous is the probability that it changes to \(n\pm 1\), among other things.
When non-orthodox "interpreters" of quantum mechanics (Bohmian mechanics, MWI, objective collapses etc.) try to claim that the proper formulation of quantum mechanics is unnecessary, they love to talk about one electron, a fermion that obeys the Pauli exclusion principle. And they design various tricks and extra stories that are meant to replace quantum mechanics.
It never works properly. But if they considered emission or absorption of radiation, it could have been much clearer to them that the intrinsically probabilistic description is absolutely unavoidable. It's unavoidable for a simple reason. The energy – of electromagnetic radiation or the atom – often has the discrete (or mixed) spectrum. So the change of the energy (like the emission or absorption of a photon by an atom) has to be a discrete operation, too. It either happens (one) or doesn't happen (zero) and because the underlying equations of physics are continuous, the only continuous relevant quantity that physics may calculate for this (or any) discrete process is the probability \(p\) that the process will take place!
In Göttingen, Born, Ladenburg, Kramers, Heisenberg, and Jordan did the hard work for the case of atoms. For the first time, it became clear that they needed to talk about the
probability amplitude (which is a square root of the probability), and not the probability itself. There was lots of confusion about the right formulae for quite some time. They pinpointed the correct one mainly because they were demanding the principle of correspondence – the correct classical limit – at every point.
Here I am praising lots of people, and not just Born, for the "probability amplitude". So why do we only celebrate Born for \(\rho(x)=|\psi|^2\)? Well, the amplitudes in the previous paragraph were only appreciated for photons' being there or not (intensity), and they were not functions of space. Even though the probabilistic description of "something" was already being adopted in physics, it wasn't universal yet.
A place where nobody dared to go, except for Heisenberg, as Max Born's granddaughter put it. I had to embed it here because they're just playing Xanadu on the radio now.
As discussed on the fourth page, in 1925, Heisenberg – despite his hay fever – did the most important step when he described observables such as \(x\) by "arrays" and replaced the multiplication of those observables by the matrix multiplication. Heisenberg had never heard of matrices before (although they had obviously existed in mathematics at that time) and it seems that no one else among these founders of quantum mechanics did, either. Well, Born was able to remember that he had heard about matrices somewhere at school. He was insanely excited about Heisenberg's ideas and sure that they were on the right track. And suddenly, Born could even see\[
pq-qp = \frac{h}{2\pi i},
\] if I use the original notation. Observables were non-commuting, which is cool and deep. Heisenberg's first paper didn't have the complete quantum mechanics and lots of work remained to be done – to prove that the off-diagonal entries drop from certain expressions but not others, and so on. Together with his "pupil" Pascual Jordan, Born invited himself to become a collaborator of Heisenberg. ;-) Well, it was a remote collaboration because Heisenberg was out of town.
As the fifth page reminds us, they soon published the paper with three authors which had the full "matrix mechanics" in it. They already recognized that Heisenberg's "arrays" were mathematicians' "matrices". Before it got published, a paper by Dirac appeared and it got almost equivalent results, I think already in the bra-ket notation. It wasn't a complete coincidence that the papers appeared simultaneously: Dirac's work was inspired by a Cambridge talk by Heisenberg.
The initial paper of Heisenberg didn't have the final theory with all the right interpretations but it had the right "advice" what kind of mathematics should be used to describe observables. And I think – the history seems to confirm this opinion – that because of the existence of smart people with a good intuition, Heisenberg's initial paper already made the imminent discovery of the full quantum mechanics inevitable.
Shortly after the three-men paper, Wolfgang Pauli used quantum mechanics to calculate the spectrum of the hydrogen atom. From that moment, Born says, there could have been no doubts about the validity of matrix mechanics. I agree with that. It was a completely new formalism that was "deduced" by looking at totally different phenomena than the hydrogen atom. Nevertheless, it was able to calculate all the energy levels of the hydrogen atoms accurately – while the character of the energy eigenstates was highly inequivalent to Bohr's orbits. This agreement with the experiments really couldn't have been a coincidence.
Interpretational confusions remained after the three-men paper. To complicate things further, within a year, Schrödinger published his wave mechanics – a more sophisticated variation (allowing for a potential energy, etc.) of the ideas of de Broglie. Schrödinger himself proved the mathematical equivalence of wave mechanics to the old good matrix mechanics, which quickly ended the strange "schism" in early 1926.
Wave mechanics quickly became more popular than the Göttingen and Cambridge versions of quantum mechanics. However, to make things dirty, Schrödinger's insights came with lots of physical gibberish. He was convinced that one could keep determinism and that \(e|\psi|^2\) was the electric charge density \(\rho\) that one should simply insert into the classical Maxwell's equations.
Born informs us on the sixth page that his Göttingen school already found those remarks "unacceptable". More precisely, they clearly contradicted some (already) well-established empirical knowledge. Particles could have already been counted and precisely located – which contradicted the continuous character of the charge that would follow from Schrödinger's philosophy.
On the seventh page, Born says that Schrödinger's picture seemed to lead to a misleading interpretation for bound electrons, so Born himself tried to get the right interpretation by working on matrix mechanics. During a neat collaboration with Norbert Wiener at MIT, he was finally able to return to wave mechanics and combine some ideas about the wave function with Einstein's picture of transition probabilities for the photons – and to make the simple and bold claim that \(|\psi|^2\) is the probability density for an electron.
Science doesn't end with such statements. It's an intriguing hypothesis and most of the work is about proving it. One great argument, completed by Wentzel, reproduced Rutherford's scattering formula from Born's interpretation of the wave function. Our modern scattering theory and S-matrices came from those advances. However, Heisenberg's paper coining the uncertainty principle contributed more to this right interpretation of the wave function. On the eighth page, Born thanks a few early authors such as Faxén, Holtsmark, Bethe, and my granduncle Nevill Motl in England. ;-)
Together with Fock of Russia, Born also formulated the probabilistic interpretation of \(|c_n|^2\) in the case of the discrete spectrum, especially that of a harmonic oscillator. Dirac formulated the transition probabilities within the final quantum framework more elegantly.
At the end of the eighth page, Born asks once again why Einstein et al. couldn't embrace those advances. On the ninth page, he divides these psychological obstacles into the attachment to determinism and the attachment to realism. The determinism is less serious, the ninth page explains, because a small inaccuracy has led to indeterminism in practice even in classical physics. It was always operationally nonsensical to ask whether the location \(x\) is equal to a transcendental number such as \(\pi\), for example. One can't recognize those. Even in classical mechanics, one needs to formulate things probabilistically if we use any real-world measurements etc. Positions are uncertain and have distributions etc.
So the determinism isn't terribly deep and "hard to abandon". It's the realism that is the source of more serious objections. What prevents us from measuring both position and momentum if we can measure them separately? Niels Bohr developed the theory of measurement especially as a sequence of replies to Einstein. The principle of complementarity became a part of it. Objects can be studied by many experiments, but two experiments often mutually exclude each other.
The last, eleventh page (if I don't count the references on the twelfth page) becomes rather philosophical – and interesting. It tells us that Bohr tried to extend his complementarity to very distant disciplines such as the "brain" and the "free will". Born also says things about physics itself. He wants to preserve the concept of a particle. Why do particles exhibit wave properties etc.? Born's favorite way of talking about these matters was the following one:
Every object that we perceive appears in innumerable aspects. The concept of the object is the invariant of all these aspects. From this point of view, the present universally used system of concepts in which particles and waves appear simultaneously can be completely justified.
It's pretty wise. When we talk about particles, we are discussing some "aspect" of the objects. For those aspects, it's totally sensible to talk about "particles". But the objects also have other aspects that allow us to talk differently and there is no contradiction. To switch from particles to waves is a form of a transformation and the essence of the object is unchanged by this transformation.
In the last paragraph, he speculates that for further progress, we will have to eliminate some "concept of our doctrine" that is still being used although it's unjustified by experience. A modification of the mathematics – the Hilbert space and the Hamiltonian – won't be enough, Born says. He justified those speculations by particle physics and especially hadrons. In the mid-1950s, the zoo of strongly interacting particles was already beginning to emerge, and Born's last paragraph is an example of the desire of people at that time to make a "new revolution" similar to the quantum one in order to understand the hadrons.
As we know today, no conceptual revolution was needed to describe the interior of the protons etc. QCD – the first correct description of hadrons we had – is just a slightly more technical sibling of QED. In recent years, we also found the dual stringy/holographic description of these things which may perhaps be classified as more than an evolutionary advance. But I guess that he meant that some "complete new quantum-like revolution" would be absolutely necessary to describe the hadrons, and this wasn't the case.
The last paragraph seems to be the only one in which this 60-year-old lecture may be considered outdated, I think.
|
Let $f$ be a real function so that for all $x,y\in \mathbb{R}$, $f(x+y) = f(x)+f(y)+xy(x+y)$ and $\lim _{x \to 0} f(x)/x = 1$ hold. Find $f$.
It does not look like a casual question for ordinary university students, not even for maths students... but anyway, one may notice that $3xy(x+y) = (x+y)^3 - x^3 - y^3$ if you know symmetric polynomials well, so $x^3/3$ accounts for the extra term, and the limit condition supplies the free linear term: we guess $f(x) = x^3/3 + x$.
What about uniqueness?
Well, it is easy to show that the function is continuous by exploiting the equality $f(2x) = 2f(x) + 2x^3$, but we can prove something even stronger: differentiability. Rearranging gives
$\frac{f(x+y)-f(y)}{x} = \frac{f(x) + xy(x+y)}{x}$
Taking the limit $x\to 0$ yields $f'(y) = 1 + y^2$ – not only does the derivative exist, we also get a complete DE with the initial value $f(0) = 0$. That easily solves to $f(x) = x^3/3 + x$.
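A quick numerical sanity check of this solution (just a verification script, not part of the argument):

```python
import random

def f(x):
    # Candidate solution f(x) = x^3/3 + x
    return x**3 / 3 + x

# Check the functional equation f(x+y) = f(x) + f(y) + x*y*(x+y)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(f(x + y) - (f(x) + f(y) + x * y * (x + y))) < 1e-8

# Check the limit condition f(x)/x -> 1 as x -> 0 (error is x^2/3)
for x in (1e-3, -1e-4, 1e-6):
    assert abs(f(x) / x - 1) < 1e-5

print("f(x) = x^3/3 + x passes both conditions")
```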
What if the limit condition is changed? Say, $\lim _{x\to 1} f(x)/(x-1) = 1$? We can rewrite the expression as the following:
$f(x+y-1) = f(x-1)+f(y)+(x-1)y(x+y-1)$
Dividing both sides by $(x-1)$ reduces the question to the original case which gives the same solution.
Let us consider the functional equation in a much more general form: $f(x+y) = f(x)+f(y)+g(x,y)$. According to the above argument, if we managed to show that
$\lim _{x\to 0} (f(x)+g(x,y))/x$ exists, then we can easily reduce it back to a DE where existence and uniqueness are clear. However, this is hard to work around if we do not assume the limit condition, because we know pretty much nothing about $f$. It does not work by assuming continuity of $f$ or $g$, since we may come across some very nasty functions like the Weierstrass function which make no sense in these questions. We leave a few observations here without solving it (or even getting close):
1. $g$ must be symmetric. This is obvious by observing the rest of the terms. In particular, if $g$ is a polynomial then it is in the ring generated by $\sigma _1 = x+y$ and $\sigma _2 = xy$.
2. If $g(x,y) = O(xy)$ for small $x,y$ then it is possible to recover $\lim _{x\to 0}f(x)/x = 1$ using estimates like $f(x) = 2^n f(2^{-n}x) + O(x^2)$ or $f(x) = nf(x/n) + \log n O(x^2)$.
3. If $g$ is Lipschitz we know immediately that it is differentiable a.e., but that still allows uncountably many non-differentiable points that we do not want to deal with...
But that's it. I do not want to spend more than 60 minutes on this useless (for me) problem :d
------------
The 1st Simon Marais (aka the Pacific Putnam) was held in October 2017 and the statistics are finally out (compare the efficiency against the IMO marking team...). It's very surprising that only 1~2% of the students got problem A4 (and A3). I expected the top rankers to be close to 42 (aka 6 correct answers), but it turned out that not many olympiad players participated in the event, as can be judged from the award list. I expect the event to be much harder next year.
|
I know that the $x$-section $A_x$ of a $\mu_x\otimes \mu_y$-measurable set $A$, where $\mu_x\otimes \mu_y$ is the Lebesgue extension of the product measure $\mu_x\times \mu_y$ (both measures being $\sigma$-additive complete measures defined on $\sigma$-algebras of subsets of $X$ and $Y$, respectively)$^1$, defined by $$A_x=\{y\in Y:(x,y)\in A\},$$ is $\mu_y$-measurable for almost all $x$. My textbook says that the integral $\int_X \mu_y(A_x)d\mu_x$ is the same as $\int_{\bigcup_{y\in Y} A_y} \mu_y(A_x)d\mu_x$, where $A_y=\{x\in X:(x,y)\in A\}$ (I think the reason is that $A_x=\emptyset$ for all $x\notin\bigcup_{y\in Y} A_y$).
Therefore I think that it is implicit that $\bigcup_{y\in Y} A_y$ is measurable, but I cannot see why it is. Could anybody be so kind to explain why it is measurable, provided that it is?
$^1$I want to apologise in advance if my wording is not rigorous or formal enough; I fear I am inheriting such a style from my book, which I often find hard to understand precisely because of this lack of formality and rigour.
|
I've wondered this for a while but not known how to ask the question: If light is a transverse wave, then what is it transverse to? To elaborate, light travels in three dimensions, radially. To me, this seems analogous to the sound wave, with pulses of pressure moving longitudinally to the...
Hi, it's been a long time since I've been on these forums, but here is a new 3D Blender model that I spent four weeks of daily work to finish. I'm glad it's done, it wasn't easy. This is Metal Gear RAY, a 70-foot tall robot from the popular video game franchise Metal Gear Solid.
Hi! I'm trying to calculate the allowed energies of each state for the 3D harmonic oscillator. En = (Nx+1/2)ħωx + (Ny+1/2)ħωy + (Nz+1/2)ħωz, with Nx, Ny, Nz = 0, 1, 2, ... Unfortunately I didn't find this topic in my textbook. Can somebody help me?
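In the isotropic special case ωx = ωy = ωz = ω (assumed here for illustration), the energies collapse to E_N = (N + 3/2)ħω with N = Nx + Ny + Nz, and the degeneracy of level N can be counted directly:

```python
from itertools import product

hbar_w = 1.0  # work in units where hbar*omega = 1 (isotropic case assumed)

def energy(nx, ny, nz, hw=hbar_w):
    # E = (nx + 1/2)hw + (ny + 1/2)hw + (nz + 1/2)hw = (N + 3/2)hw
    return (nx + ny + nz + 1.5) * hw

# Degeneracy of level N: number of (nx, ny, nz) with nx + ny + nz = N,
# which equals (N+1)(N+2)/2
for N in range(5):
    count = sum(1 for n in product(range(N + 1), repeat=3) if sum(n) == N)
    assert count == (N + 1) * (N + 2) // 2
print("ok")
```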
I am taking a high school multivariable calculus class and we have an end-of-semester project where we trace out some letters etc., except that they all have to be connected, continuous and differentiable everywhere. My group's chosen to do Euler's formula, but so far we are having problems...
Hello, I'm having a visualisation problem. I have a point in R3 that I want to rotate about the ##y##-axis anticlockwise (assuming a right-handed cartesian coordinate system). I know that the change to the point's ##x## and ##z## coordinates can be described as follows: $$z =...
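For reference, a minimal sketch of the standard right-handed rotation about the y-axis (sign conventions vary between sources, so treat this as one common choice rather than the poster's formula):

```python
import math

def rotate_y(point, alpha):
    """Rotate a point about the y-axis by angle alpha (radians), using the
    standard matrix R_y = [[cos,0,sin],[0,1,0],[-sin,0,cos]]."""
    x, y, z = point
    return (x * math.cos(alpha) + z * math.sin(alpha),
            y,
            -x * math.sin(alpha) + z * math.cos(alpha))

# With this convention, a quarter turn maps the +z axis onto the +x axis.
p = rotate_y((0.0, 0.0, 1.0), math.pi / 2)
print(round(p[0], 6), round(p[1], 6), round(p[2], 6))
```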
Hello, I am having trouble comprehending how grids are made and defined in computers. What is the unit that they use and how is it defined? I know that software uses standardized units of measurement such as the centimetre. Basically, how is a 3-dimensional space created in computers...
1. Homework Statement: Plot the electric potential ##V(r)## due to a positively charged particle located at the origin of an XY plane. 2. Homework Equations: ##V=\frac 1 {4πε_0} \frac q r## 3. The Attempt at a Solution: I'm unfamiliar with 3D coordinates at this time, but I'd like to know how...
1. Homework Statement: Find an equation for the plane that contains the point (1,-1,2) and the line x=t, y=t+1, z=-3+2t. 2. Homework Equations: 3. The Attempt at a Solution: I don't know what I'm doing wrong, but every time I try to do these plane questions my answer is always the opposite sign...
1. Homework Statement: Let γ : I → R3 be an arclength-parametrized curve whose image lies in the 2-sphere S2, i.e. ||γ(t)||2 = 1 for all t ∈ I. Consider the "moving basis" {T, γ × T, γ} where T = γ'. (i) Writing the moving basis as a 3 × 3 matrix F := (T, γ × T, γ) (where we think of T and...
1. Homework Statement: Find the curvature of the car's path, K(t). Car's path: r(t) = \Big< 40\cos( \frac {2 \pi}{16}t ), 40\sin( \frac {2 \pi}{16}t ), \frac{20}{16}t \Big> 2. Homework Equations: K(t) = \frac { |r'(t)\times r''(t)|}{|r'(t)|^3 } 3. The Attempt at a Solution: This is part of a...
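The path above is a circular helix, so its curvature is constant. A numerical cross-check of the formula K(t) = |r' x r''| / |r'|^3 via finite differences (our own sketch, not a worked homework solution):

```python
import math

a, w, b = 40.0, 2 * math.pi / 16, 20.0 / 16  # helix parameters read off r(t)

def r(t):
    return (a * math.cos(w * t), a * math.sin(w * t), b * t)

def curvature(t, h=1e-4):
    # kappa = |r' x r''| / |r'|^3, derivatives by central differences
    rp = [(r(t + h)[i] - r(t - h)[i]) / (2 * h) for i in range(3)]
    rpp = [(r(t + h)[i] - 2 * r(t)[i] + r(t - h)[i]) / h**2 for i in range(3)]
    cx = (rp[1] * rpp[2] - rp[2] * rpp[1],
          rp[2] * rpp[0] - rp[0] * rpp[2],
          rp[0] * rpp[1] - rp[1] * rpp[0])
    num = math.sqrt(sum(c * c for c in cx))
    den = math.sqrt(sum(c * c for c in rp)) ** 3
    return num / den

# Closed form for a circular helix: kappa = a*w^2 / (a^2*w^2 + b^2)
exact = a * w**2 / (a**2 * w**2 + b**2)
print(abs(curvature(3.0) - exact) < 1e-6)
```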
1. Homework Statement: You wish to find the moment at the origin produced by a force, F = {F i}, that is applied at r = {a i + b j + c k}. Which variables (F, a, b, c) do we need to know to find a numerical answer? Explain using math and figures. 2. Homework Equations: Mo = r x F 3. The Attempt at...
First, I'd like to say I apologize if my formatting is off! I am trying to figure out how to do all of this on here, so please bear with me! So I was watching this video on spherical coordinates via a rotation matrix, and in the end, he gets: x = \rho \sin(\theta) \sin(\phi), y = \rho...
Hi, I have a FORTRAN code with an array called Chi that I want to run an inverse FT on. I have defined two spaces X and K, each consisting of 3 vectors running across my physical space and inverse space. My code (if it works??) is extremely slow and inefficient (see below). What is the best...
I am looking for 3D (or 4D, etc.) images of waves (such as light or sound), but seem to be having difficulty locating such models. Can someone please direct me to this kind of display, or is it not something being currently done?
I had a function of 3 variables, f'(kx,ky,kz), and did an ifft: f = ifft(f'). Now I have a function where inside a small polygon f = 1, and outside f = 0, within a cube of side length L. How do I plot a scatter graph for this function where there is a point when f = 1, and no...
I have, say, an ellipse in the x-y plane: (x^2/a^2) + (y^2/b^2) = 1. I want a 3D (e.g. z) function where inside the ellipse z=+1 and outside z=0; the function is not continuous. So in effect what I'm left with is a large plane where z=0, and a small ellipse cut out raised to z=1. How do I write...
For the time-independent Schrödinger equation in 3D, where $E_{n_x,n_y,n_z}=(n_x^2/L_x^2+n_y^2/L_y^2+n_z^2/L_z^2)(\pi^2\hbar^2/2m)$ and $\Psi_{n_x,n_y,n_z}=A\sin(n_x\pi x/L_x)\sin(n_y\pi y/L_y)\sin(n_z\pi z/L_z)$, how do I normalize A to get $(2/L)^{3/2}$? I don't think I understand how to normalize constants.
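The normalization can be checked numerically: the 3D integral of |Ψ|² factorizes into three identical 1D integrals, each equal to L/2, which forces A = (2/L)^{3/2} in the cubic case Lx = Ly = Lz = L (assumed in this sketch):

```python
import math

def simpson(f, a, b, n=200):
    # Composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

L = 2.0
nx = 1
# One-dimensional normalization integral of sin^2(n*pi*x/L) over [0, L]
I = simpson(lambda x: math.sin(nx * math.pi * x / L) ** 2, 0, L)
# The 3D integral factorizes, so |A|^2 * I^3 = 1 gives A = I^(-3/2)
A = I ** -1.5
print(abs(I - L / 2) < 1e-6, abs(A - (2 / L) ** 1.5) < 1e-6)
```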
1. Homework Statement: LOOKING AT PART (c). The ball is traveling at velocity c/√2. The car is traveling at velocity c/√2. The ball is thrown up through the sun roof or something, I don't know. 2. Homework Equations: 3. The Attempt at a Solution: I don't know how to think about this in 3D. I've...
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
|
Up until this point we have been integrating over one dimensional lines, two dimensional domains, and finding the volume of three dimensional objects. In this section we will be integrating over surfaces, or two dimensional shapes sitting in a three dimensional world. These integrals can be applied to real world situations such as finding the upward force on an open parachute.
Introduction
In the last section, we learned how to find the surface area for parametric surfaces. We cut the region in the uv-plane into tiny rectangles and added up the areas of the corresponding tiny parallelograms sitting on the surface in xyz-space. The area of these parallelograms was
\[ \Delta A = \left|r_u \times r_v \right| \Delta u \Delta v\]
If we think of the surface as having varying density \(f(x,y,z)\), then the mass of this parallelogram will be
\[\Delta M = f(x(u,v),y(u,v),z(u,v)) ||r_u \times r_v || \Delta u \Delta v \]and adding up all these masses and taking the limit as the rectangle sizes approach zero, gives the definition of the surface integral.
To compute the integral of a surface, we extend the idea of a line integral for integrating over a curve. Although a surface can fluctuate up and down over a plane, by taking small enough sections we can essentially ignore the fluctuations and treat each section as a flat rectangle. In the limit, the area of the surface can be calculated by taking small enough sections, much like what you learned with Riemann sums previously. The surface integral can be calculated in one of three ways depending on how the surface is defined. All three are valid and can be used interchangeably, but depending on how the surface is described, one integral will be easier to solve than the others. The integrals arising from these methods are often impossible or very difficult to solve analytically, but can easily be solved numerically.
Surface Integral: Parametric Definition
For a smooth surface \(S\) defined
parametrically as \(r(u,v) = f(u,v)\hat{\textbf{i}} + g(u,v) \hat{\textbf{j}} + h(u,v) \hat{\textbf{k}} , (u,v) \in R \), and a continuous function \(G(x,y,z)\) defined on \(S\), the surface integral of \(G\) over \(S\) is given by the double integral over \(R\):
\[ \iint_{S} G(x,y,z)\, d\sigma = \iint_{R} G(f(u,v), g(u,v), h(u,v)) |r_{u} \times r_{v}| \, du \,dv .\]
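The parametric formula translates directly into a numerical recipe: sample midpoints \((u,v)\), estimate \(r_u\) and \(r_v\) by central differences, and accumulate \(G\,|r_u\times r_v|\,\Delta u\,\Delta v\). The helper below is an illustrative sketch of mine (the function name and the sphere sanity check are not part of the original text):

```python
import numpy as np

def surface_integral(G, r, u0, u1, v0, v1, nu=100, nv=200, h=1e-6):
    """Midpoint-rule estimate of the surface integral of G over the
    parametric surface r(u, v) -> (x, y, z), (u, v) in [u0,u1] x [v0,v1]."""
    du, dv = (u1 - u0) / nu, (v1 - v0) / nv
    total = 0.0
    for i in range(nu):
        u = u0 + (i + 0.5) * du
        for j in range(nv):
            v = v0 + (j + 0.5) * dv
            # central-difference approximations of r_u and r_v
            ru = (np.array(r(u + h, v)) - np.array(r(u - h, v))) / (2 * h)
            rv = (np.array(r(u, v + h)) - np.array(r(u, v - h))) / (2 * h)
            total += G(*r(u, v)) * np.linalg.norm(np.cross(ru, rv)) * du * dv
    return total

# Sanity check: integrating G = 1 over the unit sphere gives its area, 4*pi.
sphere = lambda u, v: (np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), np.cos(u))
area = surface_integral(lambda x, y, z: 1.0, sphere, 0.0, np.pi, 0.0, 2 * np.pi)
print(area)   # ≈ 12.566 = 4*pi
```

The same helper can be reused for the worked examples below whenever a closed form is out of reach.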
Surface Integral: Implicit Definition
For a surface \(S\) given
implicitly by \( F(x,y,z) = c \), where \(F\) is a continuously differentiable function, with \(S\) lying above its closed and bounded shadow region \(R\) in the coordinate plane beneath it, the surface integral of the continuous function \(G\) over \(S\) is given by the double integral over \(R\),
\[ \iint_{S} G(x,y,z)d\sigma = \iint_{R} G(x,y,z) \frac{|\nabla F|}{|\nabla F \cdot p|} dA ,\]
where \(p\) is a unit vector normal to \(R\) and \( \nabla F \cdot p \neq 0\).
Surface Integral: Explicit Definition
For a surface \(S\) given
explicitly as the graph of \(z = f(x,y)\), where \(f\) is a continuously differentiable function over a region \(R\) in the \(xy\)-plane, then the parameterization
\[ {\textbf{r}}(u,v) = u \hat{\textbf{i}} + v\hat{\textbf{j}} + f(u,v)\hat{\textbf{k}}\]
has the property that
\[ \left| \textbf{r}_u \times \textbf{r}_v \right| = \sqrt{f_{x}^{2} + f_{y}^{2} + 1}\]
So the surface integral of the continuous function \(G\) over \(S\) is given by the double integral over \(R\),
\[ \iint_{S} G(x,y,z)\,d\sigma = \iint_{R} G(x,y, f(x,y)) \sqrt{f_{x}^{2} + f_{y}^{2} + 1} \,dx\, dy. \]
We call a smooth surface \(S\) orientable or two-sided if it is possible to define a field \(\textbf{n}\) of unit normal vectors on \(S\) that varies continuously with position. All parts of an orientable surface are orientable. Spheres and other smooth closed surfaces in space are orientable. In general, we choose \(\textbf{n}\) on a closed surface to point outward.
Example \(\PageIndex{1}\)
Integrate the function \( H(x,y,z) = 2xy + z \) over the plane \( x + y + z = 2 \).
Solution
First, let's draw out the plane.
Next, notice the equation of the plane can be manipulated. Thus, we can put it in the explicit form \( z = 2 - x - y \). This gives us the integral
\[ \iint_{S} H(x,y,z)\,d \sigma = \iint_{R} H(x,y,z) \sqrt{f_{x}^{2} + f_{y}^{2} + 1} \,dA. \nonumber\]
Take the partial derivatives of \(x\) and \(y\) of the surface. In this case, \(f_x = -1\) and \(f_y = -1\). Substitute these values into the integral along with \(H(x,y,z)\) with \(z = 2 - x - y \) to get the integral
\[ \iint_{R} (2xy + 2 - x - y) \sqrt{ (-1)^{2} + (-1)^{2} + 1 }\, dA. \nonumber\]
In order to determine the bounds for the integral, we need to compress the surface to 2-dimensions, or look at its "shadow region". The idea is to imagine looking at the above graph from above, down the \(z\)-axis. Thus, you will be looking at the \(xy\) plane and the surface will look like the triangle bordered by the \(x\)-axis, \(y\)-axis, and the equation \(y = 2 - x \). This yields the integral
\[ \int_{0}^{2} \int_{0}^{2-x} (2xy + 2 - x - y) \sqrt{ (-1)^{2} + (-1)^{2} + 1 }\,dy\,dx .\nonumber\]
Now we can solve this integral just like any other double integral
\[\begin{align*} & \sqrt{3} \int_{0}^{2} \int_{0}^{2-x} (2xy + 2 - x - y) \, dy\, dx \\ &= \sqrt{3} \int_{0}^{2} \left[ xy^2 + 2y - xy - \frac{y^{2}}{2} \right]_{0} ^{2 - x} dx \\ &= \sqrt{3} \int_{0}^{2} \left( x(2-x)^2 + 2(2-x) - x(2-x) - \frac{(2-x)^{2}}{2}\right) dx \\ &= \sqrt{3} \int_{0}^{2} \left( 4x - 4x^{2} + x^{3} + 4 - 2x - 2x + x^{2} - 2 + 2x - \frac{x^{2}}{2}\right) dx \\ &= \sqrt{3} \int_{0}^{2} \left( x^{3} - \frac{7x^{2}}{2} + 2x + 2 \right) dx \\ &= \left. \sqrt{3} \left( \frac{x^4}{4} - \frac{7x^3}{6} + x^2 + 2x \right) \right|_0^2 \\ &= \sqrt{3} \left(4 - \frac{28}{3} + 4 + 4 \right) = \frac{8\sqrt{3}}{3}. \end{align*}\]
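As a sanity check (my own sketch, not part of the original text), a midpoint rule over the triangular region evaluates \(\sqrt{3}\iint_R (2xy+2-x-y)\,dA\) directly; the exact value is \(8\sqrt{3}/3 \approx 4.6188\):

```python
import math

# Midpoint-rule check of Example 1: sqrt(3) * double integral of (2xy + 2 - x - y)
# over the triangle 0 <= x <= 2, 0 <= y <= 2 - x.  Exact: 8*sqrt(3)/3 ≈ 4.6188.
n = 2000
h = 2.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    cells = int((2.0 - x) / h)      # full midpoint cells in the y-direction
    for j in range(cells):
        y = (j + 0.5) * h
        total += (2 * x * y + 2 - x - y) * h * h
total *= math.sqrt(3)
print(total)   # close to 8*sqrt(3)/3 ≈ 4.6188
```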
Example \(\PageIndex{2}\)
Find
\[ \iint_S f(x,y,z)\,dS \nonumber\]
where \(S\) is the surface
\[ r(u,v) = u\hat{\textbf{i}} + u^2\,\hat{\textbf{j}}+ (u+ v) \hat{\textbf{k}} \nonumber\]
with \( 0 \le u \le 2 \) and \(1 \le v \le 4\) and \( f(x,y,z) = x + 2z\).
Solution
We find the partial derivatives
\[ \textbf{r}_u = \hat{\textbf{i}}+ (2u)\hat{\textbf{j}}+ \hat{\textbf{k}} \nonumber\]
\[ \textbf{r}_v = \hat{\textbf{k}} \nonumber\]
and take the cross product
\[\begin{align*} r_u \times r_v &= \begin{vmatrix} \hat{\textbf{i}} &\hat{\textbf{j}} &\hat{\textbf{k}} \\[4pt] 1 &2u &1 \\[4pt] 0 &0 &1 \end{vmatrix} = 2u \hat{\textbf{i}} - \hat{\textbf{j}}, \\[4pt] ||r_u \times r_v || &= \sqrt{1+4u^2}. \end{align*} \]
We have
\[\begin{align*} f(x(u,v),y(u,v),z(u,v)) &= x(u,v) +2z(u,v) \\[4pt] &= u +2(u + v) \\[4pt] &= 3u + 2v. \end{align*} \]
We find
\[\int_0^2\int_1^4 (3u +2v)\sqrt{1+4u^2}\, dv\,du.\nonumber \]
Although this integral has a closed form, obtaining it is quite involved. You can verify that the surface integral evaluates to \( \approx 121.5\).
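A quick numerical cross-check (my own sketch, not part of the original text) evaluates the double integral \(\int_0^2\int_1^4 (3u+2v)\sqrt{1+4u^2}\,dv\,du\), using the \(u\) and \(v\) bounds from the problem statement:

```python
import math

# Midpoint-rule evaluation of the Example 2 integral with the bounds from the
# problem statement, 0 <= u <= 2 and 1 <= v <= 4:
#   integral over u of integral over v of (3u + 2v) * sqrt(1 + 4u^2)
nu, nv = 1000, 1000
hu, hv = 2.0 / nu, 3.0 / nv
total = 0.0
for i in range(nu):
    u = (i + 0.5) * hu
    root = math.sqrt(1.0 + 4.0 * u * u)
    for j in range(nv):
        v = 1.0 + (j + 0.5) * hv
        total += (3.0 * u + 2.0 * v) * root * hu * hv
print(total)   # ≈ 121.52
```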
Example \(\PageIndex{3}\)
Find
\[ \iint_S f(x,y,z)\,dS\nonumber\]
where \(S\) is the part of the paraboloid
\[ z = x^2 + y^2 \nonumber\]
that lies inside the cylinder
\[ x^2 + y^2 = 1 \nonumber\]
and
\[ f(x,y,z) = z.\nonumber\]
Solution
We have
\[ \sqrt{1+g_x^2 +g_y^2} = \sqrt{1+4x^2 + 4y^2}\nonumber\]
and
\[ f(x,y,z) = z = x^2 + y^2 . \nonumber\]
At this point, you should be thinking, "This looks like a job for polar coordinates." And we get
\[ \int_0^{2\pi}\int_0^1 r^2\sqrt{1+4r^2} \, r\,dr\,d\theta.\nonumber\]
Let
\( u = 1 + 4r^2\) and \(du = 8r\, dr\) with \(r^2 = \dfrac{1}{4} u - \dfrac{1}{4}\)
and the substitution gives us
\[ \dfrac{1}{32} \int_0^{2\pi}\int_1^5 (u-1)u^{1/2} \, du\,d\theta = \dfrac{1}{32} \int_0^{2\pi} \left[ \dfrac{2}{5} u^{5/2} - \dfrac{2}{3} u^{3/2} \right]_1^5 \, d\theta \approx 2.98\nonumber\]
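The value \(\approx 2.98\) is easy to confirm numerically (my own sketch, not part of the original text), working directly with the polar-form integral before the substitution:

```python
import math

# Midpoint-rule check of Example 3 in polar form:
#   integral over theta in [0, 2*pi] of integral over r in [0, 1] of
#   r^2 * sqrt(1 + 4r^2) * r dr ; the theta-integral just contributes 2*pi.
n = 200_000
h = 1.0 / n
radial = 0.0
for i in range(n):
    r = (i + 0.5) * h
    radial += r**3 * math.sqrt(1.0 + 4.0 * r * r) * h
val = 2.0 * math.pi * radial
print(val)   # ≈ 2.979
```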
Oriented Surfaces
We have seen how a region \(R\) with boundary curve \(C\) can be oriented. Traveling along \(C\), we look to see if the region is on the right or left. Unfortunately, this definition does
not work well for surfaces in three dimensions. The idea of right and left is not well defined. In fact, not all surfaces can be oriented. We say that a surface is orientable if a unit normal vector can be defined on the surface such that it varies continuously over the surface. Below is an example of a non-orientable surface (called the Möbius strip). You can see that there is no front or back of this surface.
A Möbius strip made with a piece of paper and tape. If an ant were to crawl along the length of this strip, it would return to its starting point having traversed the entire length of the strip (on both sides of the original paper) without ever crossing an edge. Image used with permission (CC-SA-BY; David Benbennick)

Contributors: Michael Rea (UCD), Larry Green (Lake Tahoe Community College)
Integrated by Justin Marshall.
|
I am trying to understand how the short-time behaviour of a function $f(t)$ influences the large-frequency asymptotics of its Fourier transform $g(\omega)=\mathcal{F}[f(t)](\omega)\equiv \tilde{f}(\omega)$, in case it exists. I have sometimes read, and it's apparently very intuitive (not to me), that those limits are directly related, but I don't know how to put this on formal grounds or whether it is just loosely stated. As far as I understand, one can define an uncertainty principle relating the characteristic "width" of each function, but this does not tell us anything about the behaviour at large frequencies themselves (in an "absolute" sense). Maybe I am misunderstanding something very basic here.
I came across this doubt while reading this paper, where asymptotic expressions for the short-time limit of the autocorrelation function and the large-frequency behaviour of the power spectrum are derived. At the end of page 13 of the linked pdf file, it is stated:
A small-$\tau$ expansion of $R(\tau)$ from (43) yields $R(\tau)\sim a - 2(1-i\delta)|\tau|+\mathcal{O}(\tau^2,a^{-1})$ as $\tau\rightarrow 0, a\rightarrow \infty$ and the
corresponding large-frequency asymptotic form of the spectrum $F(\omega)$ from (20) is $F(\omega)\sim\frac{4}{\omega^2}+\mathcal{O}(\omega^{-3})$ as $\omega\rightarrow \infty$,
where $R(\tau)$ is the autocorrelation function, $a$ and $\delta$ are constants and $F(\omega)=2\Re{\int_0^{\infty}d\tau e^{i\omega\tau}R(\tau)}$ is the power spectrum. The original $R(\tau)$ from which the approximation is derived is
\begin{equation} R(\tau)=a(a_0+a_1 e^{-2a\tau}+a_2 e^{-4a\tau})e^{-D(\tau)}, \end{equation} where $D(\tau)=\frac{1}{a}\tau+\frac{\delta^2}{2a^2}(2a\tau+e^{-2a\tau}-1)$, $a_0=\left( 1+\frac{\delta}{2a^2}i\right)^2$, $a_1=\frac{1}{2a^2}\left( 1+\frac{\delta^2}{a^2}-2\delta i \right)$ and $a_2=-\frac{\delta^2}{2a^4}$.
I understand how to get the approximation for $R(\tau)$, but I don't see how to obtain from there the asymptotics for $F(\omega\rightarrow\infty)$ and how they are related, even though it seems to be trivial for the authors.
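One way I can make the connection plausible to myself: integrate $F(\omega)=2\Re\int_0^\infty e^{i\omega\tau}R(\tau)\,d\tau$ by parts twice. The boundary term $-R(0)/(i\omega)$ is purely imaginary for real $R(0)=a$ and drops out of the real part, so the kink of $R$ at $\tau=0$ dominates: $F(\omega)\approx -2\,\Re[R'(0^+)]/\omega^2$, and with $R'(0^+)=-2(1-i\delta)$ this gives exactly $4/\omega^2$. A numerical sanity check with the toy pair $R(\tau)=e^{-|\tau|}$ (kink coefficient $c=1$, exact one-sided spectrum $2/(1+\omega^2)$) — my own illustration, not from the paper:

```python
import numpy as np

# Toy check: R(tau) = exp(-|tau|) has short-time expansion 1 - |tau| + ...,
# so the kink coefficient is c = 1 and F(w) ~ -2*Re[R'(0+)]/w^2 = 2/w^2
# should match the exact one-sided spectrum F(w) = 2/(1 + w^2) at large w.
w = 20.0
h = 2e-4
tau = np.arange(0.0, 40.0, h)           # exp(-40) ~ 4e-18: the tail is negligible
f = np.cos(w * tau) * np.exp(-tau)      # Re part of exp(i*w*tau) * R(tau)
F = 2.0 * h * (f.sum() - 0.5 * (f[0] + f[-1]))   # composite trapezoid rule
print(w * w * F)   # ≈ 2, the predicted 2c with c = 1
```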
To sum up: I'd be interested to know how short-time and large-frequency limit are related (or whether this is only valid e.g. when $f(t)$ is peaked around $t=0$ and there is a characteristic width $\delta t$) and in particular how to obtain the large-frequency asymptotics for the case outlined above.
Thank you very much for your help!
|
Moments and lower bounds in the far-field of solutions to quasi-geostrophic flows
1. Department of Mathematics, University of Santa Cruz, Santa Cruz, CA 95064, United States
2. Department of Mathematical Sciences, Florida Atlantic University, Boca Raton, FL 33431, United States
$\frac{\partial \theta}{\partial t} + u\cdot \nabla \theta + \kappa (-\Delta)^{\alpha}\theta =f$, $\theta|_{t=0}=\theta_0 $
Rates of decay are obtained for moments of the solutions, and lower bounds of decay rates of the solutions are established.
Mathematics Subject Classification:35Q35, 76B0. Citation:Maria Schonbek, Tomas Schonbek. Moments and lower bounds in the far-field of solutions to quasi-geostrophic flows. Discrete & Continuous Dynamical Systems - A, 2005, 13 (5) : 1277-1304. doi: 10.3934/dcds.2005.13.1277
|
Which of the following has higher boiling points? Alkanes, alkenes, or alkynes? And why?
Disclaimer: All of this "jazz" will be about reaching a mere rule of thumb. You can't just compare whole families of organic compounds with each other; there are more factors to consider than those below, mostly based on isomerism. However, since most A-level exams emphasize the lighter aliphatic compounds, we can understand each other here. :) It's all about polarizability.
Polarizability is the ability for a molecule to be polarized.
When determining (i.e., comparing) the boiling points of different molecular substances, the intermolecular forces known as London dispersion forces are at work. These are the forces that must be overcome for boiling to occur. (See here for example)
London forces get stronger with an increase in volume, and that's because the polarizability of the molecule increases. (See the answer to this recent question)
Alkanes vs. Alkenes
In their simplest form (where no substitution etc. has occurred) alkanes tend to have very close boiling points to alkenes.
The boiling point of each alkene is very similar to that of the alkane with the same number of carbon atoms. Ethene, propene and the various butenes are gases at room temperature. All the rest that you are likely to come across are liquids.
The boiling points of alkenes depend on molecular mass (chain length): the greater the molecular mass, the higher the boiling point, since the intermolecular forces of alkenes get stronger with increasing molecular size.
\begin{array}{|c|c|}\hline \text{Compound} & \text{Boiling point / }^\circ\mathrm{C} \\ \hline \text{Ethene} & -104 \\ \hline \text{Propene} & -47 \\ \hline \textit{trans}\text{-2-Butene} & 0.9 \\ \hline \textit{cis}\text{-2-Butene} & 3.7 \\ \hline \textit{trans}\text{-1,2-dichlorobutene} & 155 \\ \hline \textit{cis}\text{-1,2-dichlorobutene} & 152 \\ \hline \text{1-Pentene} & 30 \\ \hline \textit{trans}\text{-2-Pentene} & 36 \\ \hline \textit{cis}\text{-2-Pentene} & 37 \\ \hline \text{1-Heptene} & 115 \\ \hline \text{3-Octene} & 122 \\ \hline \text{3-Nonene} & 147 \\ \hline \text{5-Decene} & 170 \\ \hline \end{array} In each case, the alkene has a boiling point which is a small number of degrees lower than the corresponding alkane. The only attractions involved are Van der Waals dispersion forces, and these depend on the shape of the molecule and the number of electrons it contains. Each alkene has 2 fewer electrons than the alkane with the same number of carbons.
Alkanes vs. Alkynes
As explained, since there is a bigger volume to an alkane than its corresponding alkyne (i.e. with the same number of carbons) the alkane should have a higher boiling point. However, there's something else in play here:
Alkynes, have a TRIPLE bond!
I currently can think of two things that happen as a result of this:
London Dispersion Forces are in relation with distance. Usually, this relation is $r^{-6}$. (See here) The triple bond allows two alkynes to get closer. The closer they are, the more the electron densities are polarised, and thus the stronger the forces are.
Electrons in $\pi$ bonds are more polarizable$^{10}$.
These two factors overcome the slight difference of volume here. As a result, you have higher boiling points for alkynes than alkanes, generally.
\begin{array}{|c|c|}\hline\text{Compound} & \text{Boiling point / }^\circ\mathrm{C} \\ \hline\text{Ethyne} & -84^{[1]} \\ \hline\text{Propyne} & -23.2^{[2]} \\ \hline\text{2-Butyne} & 27^{[3]} \\ \hline\text{1,4-Dichloro-2-butyne} & 165.5^{[4]} \\ \hline\text{1-Pentyne} & 40.2^{[5]} \\ \hline\text{2-Heptyne} & 112\text{–}113^{[6]} \\ \hline\text{3-Octyne} & 133^{[7]} \\ \hline\text{3-Nonyne} & 157.1^{[8]} \\ \hline\text{5-Decyne} & 177\text{–}178^{[9]} \\ \hline\end{array}
1: http://en.wikipedia.org/wiki/Acetylene
2: http://en.wikipedia.org/wiki/Propyne
3: http://en.wikipedia.org/wiki/2-Butyne
4: http://www.lookchem.com/1-4-Dichloro-2-butyne/
5: http://en.wikipedia.org/wiki/1-Pentyne
6: http://www.chemsynthesis.com/base/chemical-structure-17405.html
7: http://www.chemspider.com/Chemical-Structure.76541.html
8: http://www.thegoodscentscompany.com/data/rw1120961.html
9: http://www.chemsynthesis.com/base/chemical-structure-3310.html
10: https://chemistry.stackexchange.com/a/27531/5026

Conclusion: We can't fully determine the boiling points of the whole classes of alkanes, alkenes and alkynes. However, for the lighter hydrocarbons, comparing the boiling points, you get: $$\text{Alkynes > Alkanes > Alkenes}$$
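As a quick sanity check on the tables above, we can pair alkenes and alkynes with the same carbon skeleton and confirm the alkyne boils higher in every pair. The snippet is my own addition and just re-uses the numbers quoted in the two tables (5-decyne's 177–178 range is taken as 177.5):

```python
# Boiling points in degrees C, copied from the two tables above and paired
# by carbon skeleton (5-decyne's 177-178 range taken as 177.5).
pairs = [
    ("ethene/ethyne",          -104.0,  -84.0),
    ("propene/propyne",         -47.0,  -23.2),
    ("cis-2-butene/2-butyne",     3.7,   27.0),
    ("1-pentene/1-pentyne",      30.0,   40.2),
    ("3-octene/3-octyne",       122.0,  133.0),
    ("3-nonene/3-nonyne",       147.0,  157.1),
    ("5-decene/5-decyne",       170.0,  177.5),
]
for name, alkene_bp, alkyne_bp in pairs:
    assert alkyne_bp > alkene_bp, name
    print(f"{name}: alkyne boils {alkyne_bp - alkene_bp:.1f} C higher")
```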
|
Choice of $e$
It is mathematically OK to choose a huge $e$ (or even a negative one), as long as the only link between $e$ and the factors $p$, $q$ of $N$ are the mandatory $\gcd(e,p-1)=1$ and $\gcd(e,q-1)=1$, and nothing is made to chose $e$ so that $d$ has special properties. For example $e$ could be:
- A random odd integer (including negative) with $e≠±1$, $\gcd(e,p-1)=1$ and $\gcd(e,q-1)=1$.
- A huge prime, random or independent of either $p$ or $q$; e.g. $e=2^{1257787}-1$.
- A public function of $n$ with the necessary properties, such as $e=n^k$ for some $k≥1$. In fact, $e=n$ was suggested by Clifford C. Cocks as early as 1973 (see this), before RSA even got its name.
However standards-conformance, regulatory, interoperability and performance concerns dictate otherwise:
- The PKCS#1 v2.2 standard requires $2<e<n$ (making $e=n-2$ the highest suitable $e$).
- NIST's FIPS 186-4 and other regulatory bodies require $2^{16}<e<2^{256}$.
- Many implementations have a limit of $e<2^{32}$.
- Modular exponentiation to the (positive) power $e$ has cost $O(\log(e))$: for $e=2^k+1$, it requires computing $k$ modular squares and one modular multiplication. That is a reason to choose $e$ small.
- Negative $e<0$ would introduce extra complexity (a modular inversion at each use of $e$) and slowness. That's not used.
So at the end of the day, $e=2^{16}+1=65537$ is the choice one is the least likely to regret (except performance-wise, and even then not by a huge factor: at most a factor of about 8.5 compared to $e=3$).
Choice of $d$
Mathematically, any choice of $d$ with $e\,d\bmod\lambda(N)=1$ will do (no matter how large or negative), where $\lambda(N)=\operatorname{lcm}(p-1,q-1)$ when $N=p\,q$ with $p$ and $q$ distinct odd primes. This is precisely the condition necessary and sufficient for $x↦x^e\bmod N$ and $x↦x^d\bmod N$ to be reciprocals mappings of $[0,N-1]$. However
- PKCS#1 v2.2 (the industry standard) additionally wants $0<d<n$.
- FIPS 186-4 is even more restrictive and requires $2^{\lceil\log_2 N\rceil/2}<d<\lambda(N)$.
- Some texts take $d=e^{-1}\bmod\varphi(N)$, where $\varphi(N)=\phi(N)=(p-1)(q-1)$ when $N=p\,q$ with $p$ and $q$ distinct odd primes. That implies $1\le d<(p-1)(q-1)$. That choice of $d$ is allowed by PKCS#1 v2.2, but often leads to $d$ too large for FIPS 186-4.
- Use of (positive) $d$ also has cost $O(\log(d))$. That makes $d=e^{-1}\bmod\lambda(N)$ attractive, as that's the smallest working positive $d$ for a given $(N,e)$.
- Negative $d<0$ would introduce extra complexity (a modular inversion at each use of $d$) and slowness. That's not used.

Using a large $e$
There's at least one reason to use a large $e$: it makes use of the public key more costly for anyone not knowing the private key, and it has been suggested as a proof-of-work.
If one chooses a huge $e$, one should not choose it as $e=e_0+k⋅\varphi(N)$ or $e+k⋅\lambda(N)$ with $e_0$ guessable (like $e_0$ small, or $e_0$ linked to $n$ or some public data in some public way) and $k≠0$. Such $e$ will work just as well as $e_0$, but knowledge of $e$ and guessing $e_0$ will leak $e-e_0=k⋅\lambda(N)$, which allows efficient factorization of $N$ (at least for moderate $k$ or $k$ public; I do not know exactly what happens for huge random secret $k$).
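The leak described above is easy to demonstrate: given any multiple $M=k\cdot\lambda(N)$, the standard square-root-of-one hunt (the same engine as the Miller-Rabin test) recovers a factor of $N$. A toy-sized sketch of mine, not a production routine:

```python
from math import gcd

def factor_given_lambda_multiple(N, M):
    """Factor N = p*q given any positive multiple M of lambda(N): strip the
    powers of two from M, then hunt for a nontrivial square root of 1 mod N
    (the same engine that powers the Miller-Rabin primality test)."""
    s, t = 0, M
    while t % 2 == 0:
        s, t = s + 1, t // 2
    for a in range(2, N):
        g = gcd(a, N)
        if 1 < g < N:
            return g                   # lucky: a already shares a factor with N
        x = pow(a, t, N)
        for _ in range(s):
            y = pow(x, 2, N)
            if y == 1 and x not in (1, N - 1):
                return gcd(x - 1, N)   # x^2 = 1 with x != +-1 splits N
            x = y
    raise ValueError("M does not seem to be a multiple of lambda(N)")

# Demo with the toy modulus N = 61*53 = 3233, lambda(N) = 780: suppose a
# guessable e0 leaked e - e0 = 5 * 780.
f = factor_given_lambda_multiple(3233, 5 * 780)
print(f)   # 53 or 61
```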
Also, choosing $e$ as a function of a small (or sparse) $d$ may allow factorization attacks.
Rather, $e$ could be chosen before $p$ and $q$, perhaps as a large prime ($e$ prime is not required, but slightly simplifies the choice of $p$ and $q$ with $\gcd(e,p-1)=1$ and $\gcd(e,q-1)=1$ ). Alternatively, $e=N^k$ for some moderate $k$ should be fine.
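Tying the two sections together, here is a toy-sized sketch (my own illustration, with textbook numbers far too small for real use; `pow(x, -1, m)` needs Python ≥ 3.8) computing both the $\varphi$-based and $\lambda$-based private exponents and checking that each inverts encryption:

```python
from math import gcd

# Toy-sized illustration only: these numbers are far too small for real RSA.
p, q, e = 61, 53, 17
N = p * q                                       # 3233
phi = (p - 1) * (q - 1)                         # 3120
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1) = 780
d_phi = pow(e, -1, phi)                         # 2753, the "textbook" choice
d_lam = pow(e, -1, lam)                         # 413, smallest working positive d
assert (d_phi - d_lam) % lam == 0               # they differ by a multiple of lambda(N)
for m in (2, 42, 65, 1234):                     # both exponents invert x -> x^e mod N
    c = pow(m, e, N)
    assert pow(c, d_phi, N) == m and pow(c, d_lam, N) == m
print(d_phi, d_lam)                             # 2753 413
```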
|
Not an answer and really just for fun. I understand that you did not ask for the codes. The main purpose of this is to substantiate the claim that it is not too difficult to draw these irregular shapes with elementary TikZ syntax. Here are some reproductions of two of the figures of your list.
\documentclass[tikz,border=3.14mm]{standalone}
\usetikzlibrary{calc,intersections,arrows.meta}
\usepackage{pgfplots}
\usepgfplotslibrary{fillbetween}
\begin{document}
\begin{tikzpicture}[long dash/.style={dash pattern=on 10pt off 2pt}]
\draw[ultra thick,long dash,name path=left,fill=orange!30] plot[smooth cycle] coordinates
{(0.3,-2) (-1,-3) (-8,-1.2) (-8.8,-0.2) (-7,0.6) (-1,-0.6)};
\draw[ultra thick,long dash,name path=left bottom] plot[smooth cycle] coordinates
{(-8,-2.8) (-9,-2.5) (-8.5,-1) (-7,0) (-6,1.7) (-5,1.7) (-4,-0) (-5.5,-2)};
\draw[ultra thick,long dash,name path=left top] plot[smooth cycle] coordinates
{(-7.2,-1) (-7.8,1) (-6.7,2) (-5.5,1) (-5,0) (-5.4,-1) (-6,-1.2)};
\path [%draw,blue,ultra thick,
name path=left arc,
intersection segments={
of=left top and left,
sequence={A1--B1}
}];
\path [%draw,red,ultra thick,
fill=red!30,
name path=left blob,
intersection segments={
of=left bottom and left arc,
sequence={A1--B0}
}];
\node[fill,circle,label=above right:$F$] at (-6.1,-0.3){};
\node at (-2.5,-1.8){$V_y$};
% right part
\path[fill=orange!30] plot[smooth cycle] coordinates
{(-1.3,2) (-0.7,3) (1,3.7) (5.2,3) (8,1.6) (8.4,1) (8,0.3) (6,0) (4,0) (2,0.3) (0,1)};
\path[fill=blue!30] plot[smooth cycle] coordinates
{(0,-2) (-0.3,-1.5) (-0.2,0) (-0.3,1) (-1,2) (0,2.8) (3,2) (7,1) (7.3,-1)
(6,-2.3) (4,-2.3) (2,-2)};
\draw[ultra thick,long dash,name path=right top] plot[smooth cycle] coordinates
{(-1.3,2) (-0.7,3) (1,3.7) (5.2,3) (8,1.6) (8.4,1) (8,0.3) (6,0) (4,0) (2,0.3) (0,1)};
\draw[ultra thick,name path=right] plot[smooth cycle] coordinates
{(0,-2) (-0.3,-1.5) (-0.2,0) (-0.3,1) (-1,2) (0,2.8) (3,2) (7,1) (7.3,-1)
(6,-2.3) (4,-2.3) (2,-2)};
\draw[ultra thick,long dash,name path=middle] plot[smooth cycle] coordinates
{(0,-3.4) (-1,-2) (-1,-0.5) (-1.5,0.4) (-1,1.6) (0,1.9) (2.1,1) (3,-1) (2.5,-3) (1,-3.7)};
\draw[ultra thick,long dash,name path=right bottom] plot[smooth cycle] coordinates
{(1,-3) (0.6,-2) (1.2,0) (3,0.8) (6,0.8) (8.5,1) (10,0) (9,-3) (7,-3.7) (5,-3.6) (2,-3.6)};
\path[name path=circle] (5.2,1.5) arc(-30:190:4mm);
\path [%draw,red,ultra thick,
name path=aux1,
intersection segments={
of=circle and right,
sequence={B1}
}];
\path [draw,blue!30,ultra thick,
name path=aux2,
intersection segments={
of=circle and aux1,
sequence={B0}
}];
\node at (4.8,1.6){$U_y$};
\node[fill,circle,label=below right:$y$] at (3.3,1.5){};
\node[fill=blue!30] at (3.7,0){$K$};
\end{tikzpicture}
\begin{tikzpicture}[thick]
\draw[-latex] (0,0) -- (5,0) node[right]{$s$};
\draw[-latex] (0,0) -- (0,7) node[left]{$y$};
\draw (4,0) -- (4,5.5);
\foreach \X in {1,2,4.5}
{\draw (0,\X) -- (4,\X);}
\foreach \X/\Y [count=\Z]in {0/0,3.5/t,5.5/1}
{
\ifnum\Z=1
\draw[very thick,fill] (0,\X) circle(1pt) node[left]{$(0,\Y)$} -- (4,\X)
coordinate[midway,below] (l1) circle(1pt)
node[below right]{$(1,\Y)$};
\else
\draw[very thick,fill] (0,\X) circle(1pt) node[left]{$(0,\Y)$} -- (4,\X)
\ifnum\Z=2
coordinate[midway,below] (l3)
\fi
\ifnum\Z=3
coordinate[midway,above] (l2)
\fi
circle(1pt)
node[right]{$(1,\Y)$};
\fi
}
\draw[fill,very thick] (1.5,3.5) circle (1pt) node[below] {$(s,t)$};
\begin{scope}[xshift=6.5cm]
\draw[very thick] plot[smooth cycle] coordinates
{(0,2) (0,5) (1.3,7) (5,7.9) (8.2,6) (8.3,3) (6,1.4) (2,1.2)};
\node[circle,fill,scale=0.6] (L) at (0.5,4){};
\node[circle,fill,scale=0.6] (R) at (7.5,4.2){};
\foreach \X in {-45,-20,-5,45,60}
{\pgfmathsetmacro{\Y}{180-\X+4*rnd}
\draw (L) to[out=\X,in=\Y,looseness=1.2] (R);
\ifnum\X=-45
\path (L) to[out=\X,in=\Y,looseness=1.2] coordinate[pos=0.5,below] (r1)
node[pos=0.6,below]{$\sigma$} (R);
\fi
\ifnum\X=60
\path (L) to[out=\X,in=\Y,looseness=1.2] coordinate[pos=0.4,above] (r2)
node[pos=0.6,above]{$\tau$} (R);
\fi
}
\draw[very thick] (L) to[out=20,in=163,looseness=1.2]
node[pos=0.2,circle,fill,scale=0.6,label=above right:$h_t(s)$]{}
coordinate[pos=0.35] (r3) (R);
\end{scope}
\draw[-latex,shorten >=2pt] (l1) to[out=14,in=220] (r1);
\draw[-latex,shorten >=2pt] (l2) to[out=24,in=140] (r2);
\draw[-latex,shorten >=2pt] (l3) to[out=-12,in=210] (r3);
\end{tikzpicture}
\end{document}
They should illustrate that with TikZ you can do all sorts of cool things like computing intersections of lines and/or surfaces. Of course, you may do similar things with some graphics software, but IMHO the advantage of this approach is that you only need to change one thing to have global changes. For instance, if you do not like the lengths of the dashes all you need to do is to redefine the style. And last but not least I think it is more fun to do it that way. I understand of course that others may not share that opinion.
ADDENDUM: A proposal to convert freehand graphics to "nice(r)" TikZ code. First save your freehand graphics in some format, here I will call the file tmp.png (for which I just took the first of your pictures). Then include it into a TikZ picture and read off some coordinates for smooth cycles etc.
\documentclass[tikz,border=3.14mm]{standalone}
\usetikzlibrary{calc}
\begin{document}
\begin{tikzpicture}
\node (tmp) {\includegraphics{tmp.png}};
\draw (tmp.south west) grid (tmp.north east);
\draw let \p1=(tmp.south west), \p2=(tmp.north east) in
\pgfextra{\pgfmathsetmacro{\xmin}{int(\x1/1cm)}
\pgfmathsetmacro{\xmax}{int(\x2/1cm)}
\pgfmathsetmacro{\ymin}{int(\y1/1cm)}
\pgfmathsetmacro{\ymax}{int(\y2/1cm)}
\typeout{\xmin,\xmax}}
\foreach \X in {\xmin,...,\xmax} {(\X,\y1) node[anchor=north] {\X}}
\foreach \Y in {\ymin,...,\ymax} {(\x1,\Y) node[anchor=east] {\Y}};
\end{tikzpicture}
\end{document}
Most likely one could write a code that extracts some coordinates along the contours. I guess that with machine learning it will be possible very soon to let the computer do that. It is painful, yes, at present to do that by hand but not super painful. (And I apologize in advance if someone already has done the thing with the automatic grid with labels, I could not find it so I wrote it quickly myself. In the examples I found the range of the tick labels was hard coded.)
|
Answer
Period$=\frac{1}{2}$ seconds. Frequency=2 cycles per second.
Work Step by Step
The period can be found by first comparing the equation $s(t)=-5\cos 4\pi t$ to $s(t)=a\cos w t$ to find $w$ which is the angular velocity. By comparing the two equations, we find that $w=4\pi$. Next, we find the period using the formula: Period$=\frac{2\pi}{w}$ Period$=\frac{2\pi}{4\pi}$ Period$=\frac{1}{2}$ seconds. Since frequency is the reciprocal of the period, the frequency is 2 cycles per second.
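As an aside, the arithmetic can be verified mechanically; the snippet below is my own addition, not part of the textbook solution:

```python
import math

def s(t):
    # The motion from the exercise: s(t) = -5 cos(4*pi*t).
    return -5 * math.cos(4 * math.pi * t)

period = 2 * math.pi / (4 * math.pi)   # 2*pi / omega = 1/2 second
frequency = 1 / period                 # reciprocal of the period: 2 cycles/second
assert period == 0.5 and frequency == 2.0
# The motion really does repeat every half second:
for t in (0.0, 0.13, 0.4, 1.7):
    assert abs(s(t + period) - s(t)) < 1e-12
print(period, frequency)               # 0.5 2.0
```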
|
1. True or False? Line integral \(\displaystyle\int _C f(x,y)\,ds\) is equal to a definite integral if \(C\) is a smooth curve defined on \([a,b]\) and if function \(f\) is continuous on some region that contains curve \(C\).
Solution: True
2. True or False? Vector functions \(\vecs r_1=t\,\hat{\mathbf i}+t^2\,\hat{\mathbf j}, \quad 0≤t≤1,\) and \(\vecs r_2=(1−t)\,\hat{\mathbf i}+(1−t)^2\,\hat{\mathbf j}, \quad 0≤t≤1\), define the same oriented curve.
3. True or False? \(\displaystyle\int _{−C}(P\,dx+Q\,dy)=\int _C(P\,dx−Q\,dy)\)
Solution: False
4. True or False? A piecewise smooth curve \(C\) consists of a finite number of smooth curves that are joined together end to end.
5. True or False? If \(C\) is given by \(x(t)=t,\quad y(t)=t, \quad 0≤t≤1\), then \(\displaystyle\int _Cxy\,ds=\int ^1_0t^2\,dt.\)
Solution: False
For the following exercises, use a computer algebra system (CAS) to evaluate the line integrals over the indicated path.
6. [T] \(\displaystyle\int _C(x+y)\,ds\), where \(C:x=t,\;y=(1−t),\;z=0\) from \((0, 1, 0)\) to \((1, 0, 0)\)
7. [T] \(\displaystyle \int _C(x−y)\,ds\), where \(C:\vecs r(t)=4t\,\hat{\mathbf i}+3t\,\hat{\mathbf j}\) for \(0≤t≤2\)
Solution: \(\displaystyle\int _C(x−y)\,ds=10\)
8. [T] \(\displaystyle\int _C(x^2+y^2+z^2)\,ds\), where \(C:\vecs r(t)=\sin t\,\hat{\mathbf i}+\cos t\,\hat{\mathbf j}+8t\,\hat{\mathbf k}\) for \(0≤t≤\dfrac{π}{2}\)
9. [T] Evaluate \(\displaystyle\int _Cxy^4\,ds\), where \(C\) is the right half of circle \(x^2+y^2=16\), traversed in the clockwise direction.
Solution: \(\displaystyle\int _Cxy^4\,ds=\frac{8192}{5}\)
10. [T] Evaluate \(\displaystyle\int _C4x^3\,ds\), where \(C\) is the line segment from \((−2,−1)\) to \((1, 2)\).
For the following exercises, find the work done.
11. Find the work done by vector field \(\vecs F(x,y,z)=x\,\hat{\mathbf i}+3xy\,\hat{\mathbf j}−(x+z)\,\hat{\mathbf k}\) on a particle moving along a line segment that goes from \((1,4,2)\) to \((0,5,1)\).
Solution: \(W=8\)
12. Find the work done by a person weighing 150 lb walking exactly one revolution up a circular, spiral staircase of radius 3 ft if the person rises 10 ft.
13. Find the work done by force field \(\vecs F(x,y,z)=−\dfrac{1}{2}x\,\hat{\mathbf i}−\dfrac{1}{2}y\,\hat{\mathbf j}+\dfrac{1}{4}\,\hat{\mathbf k}\) on a particle as it moves along the helix \(\vecs r(t)=\cos t\,\hat{\mathbf i}+\sin t\,\hat{\mathbf j}+t\,\hat{\mathbf k}\) from point \((1,0,0)\) to point \((−1,0,3π)\).
Solution: \(W=\dfrac{3π}{4}\)
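The answer to problem 13 can be verified by integrating \(\vecs F·\vecs r\,'\) numerically along the helix; in fact \(\vecs F·\vecs r\,'\) reduces to the constant \(1/4\) on this curve, so \(W=\tfrac14·3π=\tfrac{3π}{4}\). A sketch (helper name and step count are my own choices):

```python
import math

def work(F, r, rp, a, b, n=100000):
    """Approximate W = integral of F(r(t)) . r'(t) dt over [a, b], midpoint rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        Fx, Fy, Fz = F(*r(t))
        dx, dy, dz = rp(t)
        total += (Fx * dx + Fy * dy + Fz * dz) * h
    return total

F  = lambda x, y, z: (-x / 2, -y / 2, 0.25)          # force field from problem 13
r  = lambda t: (math.cos(t), math.sin(t), t)         # helix
rp = lambda t: (-math.sin(t), math.cos(t), 1.0)      # its derivative

W = work(F, r, rp, 0, 3 * math.pi)
print(W)  # ~ 3*pi/4 = 2.356...
```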
14. Find the work done by vector field \(\vecs{F}(x,y)=y\,\hat{\mathbf i}+2x\,\hat{\mathbf j}\) in moving an object along path \(C\), which joins points \((1, 0)\) and \((0, 1)\).
15. Find the work done by force \(\vecs{F}(x,y)=2y\,\hat{\mathbf i}+3x\,\hat{\mathbf j}+(x+y)\,\hat{\mathbf k}\) in moving an object along curve \(\vecs r(t)=\cos(t)\,\hat{\mathbf i}+\sin(t)\,\hat{\mathbf j}+16\,\hat{\mathbf k}\), where \(0≤t≤2π\).
Solution: \(W=π\)
16. Find the mass of a wire in the shape of a circle of radius 2 centered at (3, 4) with linear mass density \(ρ(x,y)=y^2\).
For the following exercises, evaluate the line integrals.
17. Evaluate \(\displaystyle\int _C\vecs F·d\vecs{r}\), where \(\vecs{F}(x,y)=−1\,\hat{\mathbf j}\), and \(C\) is the part of the graph of \(y=\frac{1}{2}x^3−x\) from \((2,2)\) to \((−2,−2)\).
Solution: \(\displaystyle\int _C\vecs F·d\vecs{r}=4\)
18. Evaluate \(\displaystyle\int _γ(x^2+y^2+z^2)^{−1}\,ds\), where \(γ\) is the helix \(x=\cos t, y=\sin t, z=t\ (0≤t≤T).\)
19. Evaluate \(\displaystyle\int _Cyz\,dx+xz\,dy+xy\,dz\) over the line segment from \((1,1,1)\) to \((3,2,0).\)
Solution: \(\displaystyle\int _Cyz\,dx+xz\,dy+xy\,dz=−1\)
20. Let \(C\) be the line segment from point \((0, 1, 1)\) to point \((2, 2, 3)\). Evaluate line integral \(\displaystyle\int _Cy\,ds.\)
21. [T] Use a computer algebra system to evaluate the line integral \(\displaystyle\int _Cy^2\,dx+x\,dy\), where \(C\) is the arc of the parabola \(x=4−y^2\) from \((−5, −3)\) to \((0, 2)\).
Solution: \(\displaystyle\int _C(y^2)\,dx+(x)\,dy=\dfrac{245}{6}\)
22. [T] Use a computer algebra system to evaluate the line integral \(\displaystyle\int _C(x+3y^2)\,dy\) over the path \(C\) given by \(x=2t, y=10t,\) where \(0≤t≤1.\)
23. [T] Use a CAS to evaluate line integral \(\displaystyle\int _Cxy\,dx+y\,dy\) over path \(C\) given by \(x=2t, y=10t\), where \(0≤t≤1\).
Solution: \(\displaystyle\int _Cxy\,dx+y\,dy=\dfrac{190}{3}\)
24. Evaluate line integral \(\displaystyle\int _C(2x−y)\,dx+(x+3y)\,dy\), where \(C\) lies along the \(x\)-axis from \(x=0\) to \(x=5\).
26. [T] Use a CAS to evaluate \(\displaystyle\int _C\dfrac{y}{2x^2−y^2}\,ds\), where \(C\) is \(x=t, y=t,\ 1≤t≤5.\)
Solution: \(\displaystyle\int _C\dfrac{y}{2x^2−y^2}\,ds=\sqrt{2}\ln 5\)
27. [T] Use a CAS to evaluate \(\displaystyle\int _Cxy\,ds\), where \(C\) is \(x=t^2, y=4t,\ 0≤t≤1.\)
In the following exercises, find the work done by force field \(\vecs F\) on an object moving along the indicated path.
28. \(\vecs{F}(x,y)=−x \,\hat{\mathbf i}−2y\,\hat{\mathbf j}\)
\(C:y=x^3\) from \((0, 0)\) to \((2, 8)\)
Solution: \(W=−66\)
29. \(\vecs{F}(x,y)=2x\,\hat{\mathbf i}+y\,\hat{\mathbf j}\)
\(C\): counterclockwise around the triangle with vertices \((0, 0), (1, 0),\) and \((1, 1)\)
30. \(\vecs F(x,y,z)=x\,\hat{\mathbf i}+y\,\hat{\mathbf j}−5z\,\hat{\mathbf k}\)
\(C:\vecs r(t)=2\cos t\,\hat{\mathbf i}+2\sin t\,\hat{\mathbf j}+t\,\hat{\mathbf k},0≤t≤2π\)
Solution: \(W=−10π^2\)
31. Let \(\vecs F\) be vector field \(\vecs{F}(x,y)=(y^2+2xe^y+1)\,\hat{\mathbf i}+(2xy+x^2e^y+2y)\,\hat{\mathbf j}\). Compute the work integral \(\displaystyle\int _C\vecs F·d\vecs{r}\), where \(C\) is the path \(\vecs r(t)=\sin t\,\hat{\mathbf i}+\cos t\,\hat{\mathbf j},\quad 0≤t≤\dfrac{π}{2}\).
32. Compute the work done by force \(\vecs F(x,y,z)=2x\,\hat{\mathbf i}+3y\,\hat{\mathbf j}−z\,\hat{\mathbf k}\) along path \(\vecs r(t)=t\,\hat{\mathbf i}+t^2\,\hat{\mathbf j}+t^3\,\hat{\mathbf k}\), where \(0≤t≤1\).
Solution: \(W=2\)
33. Evaluate \(\displaystyle\int _C\vecs F·d\vecs{r}\), where \(\vecs{F}(x,y)=\dfrac{1}{x+y}\,\hat{\mathbf i}+\dfrac{1}{x+y}\,\hat{\mathbf j}\) and \(C\) is the segment of the unit circle going counterclockwise from \((1,0)\) to \((0, 1)\).
34. Force \(\vecs F(x,y,z)=zy\,\hat{\mathbf i}+x\,\hat{\mathbf j}+z^2x\,\hat{\mathbf k}\) acts on a particle that travels from the origin to point \((1, 2, 3)\). Calculate the work done if the particle travels: a. along the path \((0,0,0)→(1,0,0)→(1,2,0)→(1,2,3)\) along straight-line segments joining each pair of endpoints; b. along the straight line joining the initial and final points. c. Is the work the same along the two paths?
Solution: a. \(W=11\); b. \(W=11\); c. Yes
35. Find the work done by vector field \(\vecs F(x,y,z)=x\,\hat{\mathbf i}+3xy\,\hat{\mathbf j}−(x+z)\,\hat{\mathbf k}\) on a particle moving along a line segment that goes from \((1, 4, 2)\) to \((0, 5, 1).\)
36. How much work is required to move an object in vector field \(\vecs{F}(x,y)=y\,\hat{\mathbf i}+3x\,\hat{\mathbf j}\) along the upper part of ellipse \(\dfrac{x^2}{4}+y^2=1\) from \((2, 0)\) to \((−2,0)\)?
Solution: \(W=2π\)
37. A vector field is given by \(\vecs{F}(x,y)=(2x+3y)\,\hat{\mathbf i}+(3x+2y)\,\hat{\mathbf j}\). Evaluate the line integral of the field around a circle of unit radius traversed in a clockwise fashion.
38. Evaluate the line integral of scalar function \(xy\) along parabolic path \(y=x^2\) connecting the origin to point \((1, 1)\).
Solution: \(\displaystyle\int _C\vecs F·d\vecs{r}=\dfrac{25\sqrt{5}+1}{120}\)
39. Find \(\displaystyle\int _Cy^2\,dx+(xy−x^2)\,dy\) along \(C: y=3x\) from \((0, 0)\) to \((1, 3)\).
40. Find \(\displaystyle\int _Cy^2\,dx+(xy−x^2)\,dy\) along \(C: y^2=9x\) from \((0, 0)\) to \((1, 3)\).
Solution: \(\displaystyle\int _Cy^2\,dx+(xy−x^2)\,dy=6.15\)
For the following exercises, use a CAS to evaluate the given line integrals.
41. [T] Evaluate \(\displaystyle\int _C\vecs F·d\vecs{r}\), where \(\vecs F(x,y,z)=x^2z\,\hat{\mathbf i}+6y\,\hat{\mathbf j}+yz^2\,\hat{\mathbf k}\) and \(C\) is represented by \(\vecs r(t)=t\,\hat{\mathbf i}+t^2\,\hat{\mathbf j}+\ln t \,\hat{\mathbf k},\quad 1≤t≤3\).
42. [T] Evaluate line integral \(\displaystyle\int _γxe^y\,ds\), where \(γ\) is the arc of curve \(x=e^y\) from \((1,0)\) to \((e,1)\).
Solution: \(\displaystyle\int _γxe^y\,ds≈7.157\)
43. [T] Evaluate the integral \(\displaystyle\int _γxy^2\,ds\), where \(γ\) is a triangle with vertices \((0, 1, 2), (1, 0, 3)\), and \((0,−1,0)\).
44. [T] Evaluate line integral \(\displaystyle\int _γ(y^2−xy)\,dx\), where \(γ\) is curve \(y=\ln x\) from \((1, 0)\) toward \((e,1)\).
Solution: \(\displaystyle\int _γ(y^2−xy)\,dx≈−1.379\)
45. [T] Evaluate line integral \(\displaystyle\int _γxy^4\,ds\), where \(γ\) is the right half of circle \(x^2+y^2=16\).
46. [T] Evaluate \(\displaystyle\int _C\vecs F·d\vecs{r}\), where \(\vecs F(x,y,z)=x^2y\,\hat{\mathbf i}+(x−z)\,\hat{\mathbf j}+xyz\,\hat{\mathbf k}\) and \(C:\vecs r(t)=t\,\hat{\mathbf i}+t^2\,\hat{\mathbf j}+2\,\hat{\mathbf k},\quad 0≤t≤1\).
Solution: \(\displaystyle\int _C\vecs F·d\vecs{r}≈−1.133\)
47. Evaluate \(\displaystyle\int _C\vecs F·d\vecs{r}\), where \(\vecs{F}(x,y)=2x\sin y\,\hat{\mathbf i}+(x^2\cos y−3y^2)\,\hat{\mathbf j}\) and \(C\) is any path from \((−1,0)\) to \((5, 1)\).
48. Find the line integral of \(\vecs F(x,y,z)=12x^2\,\hat{\mathbf i}−5xy\,\hat{\mathbf j}+xz\,\hat{\mathbf k}\) over path \(C\) defined by \(y=x^2, z=x^3\) from point \((0, 0, 0)\) to point \((2, 4, 8)\).
Solution: \(\displaystyle\int _C\vecs F·d\vecs{r}≈22.857\)
49. Find the line integral \(\displaystyle\int _C(1+x^2y)\,ds\), where \(C\) is the ellipse \(\vecs r(t)=2\cos t\,\hat{\mathbf i}+3\sin t\,\hat{\mathbf j}\) for \(0≤t≤π.\)
For the following exercises, find the flux.
50. Compute the flux of \(\vecs{F}=x^2 i+y j\) across a line segment from \((0, 0)\) to \((1, 2).\)
Solution: \(flux=−\dfrac{1}{3}\)
51. Let \(\vecs{F}=5\,\hat{\mathbf i}\) and let \(C\) be curve \(y=0,\ 0≤x≤4\). Find the flux across \(C\).
52. Let \(\vecs{F}=5\,\hat{\mathbf j}\) and let \(C\) be curve \(y=0,\ 0≤x≤4\). Find the flux across \(C\).
Solution: \(flux=−20\)
53. Let \(\vecs{F}=−y\,\hat{\mathbf i}+x\,\hat{\mathbf j}\) and let \(C:\vecs r(t)=\cos t\,\hat{\mathbf i}+\sin t\,\hat{\mathbf j}\ (0≤t≤2π)\). Calculate the flux across \(C\).
54. Let \(\vecs{F}=(x^2+y^3)\,\hat{\mathbf i}+(2xy)\,\hat{\mathbf j}\). Calculate the flux, oriented counterclockwise, across curve \(C: x^2+y^2=9.\)
Solution: \(flux=0\)
55. Find the line integral \(\displaystyle\int _Cz^2\,dx+y\,dy+2y\,dz,\) where \(C\) consists of two parts \(C_1\) and \(C_2\): \(C_1\) is the intersection of cylinder \(x^2+y^2=16\) and plane \(z=3\) from \((0, 4, 3)\) to \((−4,0,3)\); \(C_2\) is a line segment from \((−4,0,3)\) to \((0, 1, 5)\).
56. A spring is made of a thin wire twisted into the shape of a circular helix \(x=2\cos t, y=2\sin t, z=t.\) Find the mass of two turns of the spring if the wire has constant mass density.
Solution: \(m=4πρ\sqrt{5}\)
57. A thin wire is bent into the shape of a semicircle of radius \(a\). If the linear mass density at point \(P\) is directly proportional to its distance from the line through the endpoints, find the mass of the wire.
58. An object moves in force field \(\vecs F(x,y)=y^2\,\hat{\mathbf i}+2(x+1)y\,\hat{\mathbf j}\) counterclockwise from point \((2, 0)\) along elliptical path \(x^2+4y^2=4\) to \((−2,0)\), and back to point \((2, 0)\) along the \(x\)-axis. How much work is done by the force field on the object?
Solution: \(W=0\)
59. Find the work done when an object moves in force field \(\vecs F(x,y,z)=2x\,\hat{\mathbf i}−(x+z)\,\hat{\mathbf j}+(y−x)\,\hat{\mathbf k}\) along the path given by \(\vecs r(t)=t^2\,\hat{\mathbf i}+(t^2−t)\,\hat{\mathbf j}+3\,\hat{\mathbf k},\quad 0≤t≤1.\)
60. If an inverse force field \(\vecs F\) is given by \(\vecs F(x,y,z)=\dfrac{k}{‖\vecs r‖^3}\vecs r\), where \(k\) is a constant, find the work done by \(\vecs F\) as its point of application moves along the \(x\)-axis from \(A(1,0,0)\) to \(B(2,0,0)\).
Solution: \(W=\dfrac{k}{2}\)
61. David and Sandra plan to evaluate line integral \(\displaystyle\int _C\vecs F·d\vecs{r}\) along a path in the \(xy\)-plane from \((0, 0)\) to \((1, 1)\). The force field is \(\vecs{F}(x,y)=(x+2y)\,\hat{\mathbf i}+(−x+y^2)\,\hat{\mathbf j}\). David chooses the path that runs along the \(x\)-axis from \((0, 0)\) to \((1, 0)\) and then runs along the vertical line \(x=1\) from \((1, 0)\) to the final point \((1, 1)\). Sandra chooses the direct path along the diagonal line \(y=x\) from \((0, 0)\) to \((1, 1)\). Whose line integral is larger and by how much?
|
Is it possible to have a countable infinite number of countable infinite sets such that no two sets share an element and their union is the positive integers?
Sure. For example, let $A_n$ be the natural numbers with exactly $n$ ones in their binary expansion.
Alternately, pick your favorite way of decomposing $\mathbb{N}$ into two disjoint infinite subsets $A, B$, and pick a bijection $f : B \to \mathbb{N}$. Then $f(B)$ can be decomposed into two disjoint subsets $A, B$, hence $B$ can be decomposed into two disjoint subsets $f^{-1}(A), f^{-1}(B)$. Rinse and repeat. This argument is fairly general and works for any infinite set which admits a decomposition into two disjoint subsets of the same cardinality as it (which under the Axiom of Choice is all of them).
Maybe we can start slowly, by doing a decomposition of $\mathbb{N}$ into $2$ disjoint sets, say the
odds and the evens.
Let's now go for a decomposition into $3$ disjoint sets. Leave the odds alone, and decompose the evens into those divisible by $2$ but no higher power of $2$, and those divisible by $4$. To put it another way, we are using the odds, twice the odds, and the rest.
Continue, and let's introduce some notation. Let $W_0$ be the set of odd positive integers. Let $W_1$ be the set of positive integers which are $2^1$ times an odd number. Let $W_2$ be the set of positive integers which are $2^2$ times an odd number. In general let $W_n$ be the set of integers which are $2^n$ times an odd number.
It is clear that the $W_k$ are all infinite, pairwise disjoint, and that their union is all of $\mathbb{N}$.
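Membership in the sets $W_n$ is computable: the index $n$ is just the exponent of $2$ in the factorization of the number. A small Python sketch (the function name is my own):

```python
def w_index(m):
    """Return n such that m lies in W_n, i.e. m = 2**n * (odd), for m >= 1."""
    n = 0
    while m % 2 == 0:
        m //= 2
        n += 1
    return n

# 12 = 2^2 * 3, so 12 is in W_2; odd numbers land in W_0
print([w_index(m) for m in [1, 2, 3, 4, 6, 12, 40]])  # [0, 1, 0, 2, 1, 2, 3]
```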
Let $p_n$ be the $n$th prime and let $A_n$ be the positive integers whose smallest prime divisor is $p_n$ (throw $1$ in with $A_1$). This is basically applying the sieve of Eratosthenes to the entire set of positive integers.
Let $$ A_0 = \lbrace 1,3,5,7,9,\ldots \rbrace $$ and $$ A_1 = \lbrace 2^n 1 : n \in \mathbb{N} \rbrace, $$ $$ A_2 = \lbrace 2^n 3 : n \in \mathbb{N} \rbrace, $$ $$ A_3 = \lbrace 2^n 5 : n \in \mathbb{N} \rbrace, $$ $$ A_4 = \lbrace 2^n 7 : n \in \mathbb{N} \rbrace, $$ $$ A_5 = \lbrace 2^n 9 : n \in \mathbb{N} \rbrace, $$ $$ \cdots. $$ Noting that for any two distinct elements $r_1$ and $r_2$ of $A_0$ it holds $2^{n_1}r_1 \neq 2^{n_2}r_2$ $\forall n_1,n_2 \in \mathbb{N}$, we have that the $A_i$ are disjoint. On the other hand, let $2k$, with $k \in \mathbb{N}$, be an arbitrary even natural number. Considering its prime factorization, it is necessarily of the form $2k = 2^n r$, where $n \in \mathbb{N}$ and $r \in A_0$. Hence $2k \in \cup _{i = 1}^\infty A_i$, from which it follows that $\cup _{i = 1}^\infty A_i = \lbrace 2,4,6,8,10,\ldots \rbrace$, and so $\mathbb{N} = \cup _{i = 0}^\infty A_i$, with all the $A_i$ disjoint and countably infinite.
EDIT: Relation to user6312's answer.
The sets $A_i$, $i = 1,2,3,\ldots$, correspond to the rows $$ 2,4,8,16,32, \ldots, $$ $$ 6,12,24,48,96, \ldots, $$ $$ 10,20,40,80,160, \ldots, $$ $$ 14,28,56,112,224,\ldots, $$ $$ 18,36,72,144,288,\ldots, $$ $$ \cdots, $$ while the sets $W_i$, $i=1,2,3,\ldots$, in user6312's answer correspond to the corresponding columns, that is to $$ 2,6,10,14,18,\ldots, $$ $$ 4,12,20,28,36,\ldots, $$ $$ 8,24,40,56,72,\ldots, $$ $$ 16,48,80,112,144,\ldots, $$ $$ 32,96,160,224,288,\ldots, $$ $$ \cdots. $$
Easily.
Let $P=\{2,3,5,7,11,\ldots\}$ be the set of all prime numbers. For each prime $p\in P$ let $A_p = \{p^n\mid n\in\mathbb N, n\neq 0\}$, and take $A_1 = \{n\in\mathbb N\mid n \text{ has at least two different prime divisors}\}\cup\{0,1\}$.
Every $A_i$ is disjoint from the rest, and every natural number appears in at least one.
However, if one requires the sets to be disjoint, then it is possible to show that you cannot split $\mathbb N$ into more than a countable number of disjoint sets.
Suppose $A_i$ for $i\in I$, for some index set $I$, is a partition of $\mathbb N$ into disjoint sets. Map each $i\in I$ to the minimal $n\in A_i$.
Since $A_i\cap A_j=\varnothing$, the minimal element of $A_i$ cannot be in $A_j$, let alone be its minimal element. Since every $A_i$ is non-empty, it has a minimal element. Therefore we have an injective mapping from $I$ into $\mathbb N$, and thus $I$ is countable (countably infinite, or finite in this case).
Another fine example is to take your favourite bijection between $\mathbb N$ and $\mathbb N\times\mathbb N$, call it $f$. Now take $A_n=f^{-1}[\{n\}\times\mathbb N]$, then $A_n\cap A_m=\varnothing$ for $n\neq m$ and easily enough this is a partition as you like. This can be generalized to any cardinal number such that $|A|=|A\times A|$.
While all of the answers here are obviously excellent, I'll throw in one more: imagine the points of $\mathbb{N}^2$ drawn out as an infinite grid of cells (the positive quadrant of the plane, essentially); then fill the cell $(1,1)$ with $1$, the cells $(2,1)$ and $(1,2)$ with $2$ and $3$, and in general all of the cells $(n,1)$ through $(1,n)$ that sum to $n+1$ with the numbers from $\left({1\over 2}n(n-1)\right)+1$ to ${1\over 2}n(n+1)$. This provides an easy arithmetic pairing function $\langle\cdot,\cdot\rangle$ from $\mathbb{N}^2$ to $\mathbb{N}$, mapping $(i,j)$ to $\langle i,j\rangle = {1\over2}(i+j-2)(i+j-1)+i$ (and with explicitly definable inverse functions $j_0(n), j_1(n)$ such that $\langle j_0(n),j_1(n)\rangle = n$ for all $n$, though I won't write those out); $\mathbb{N}$ can then be partitioned into the sets $\mathbb{N}_k = \left\{\langle k,i\rangle: i\in\mathbb{N}\right\}$. This is the approach usually taken in recursion theory, in particular, where the explicit $\Delta_0$ definability of the pairing function and its inverses is important.
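The pairing function above is straightforward to implement; here is a sketch (with $i,j$ starting at $1$, and the inverse recovered from the index of the diagonal $i+j-1$ via triangular numbers — the function names are my own, not standard notation):

```python
from math import isqrt

def pair(i, j):
    """Cantor-style pairing <i, j> = (i+j-2)(i+j-1)/2 + i, for i, j >= 1."""
    return (i + j - 2) * (i + j - 1) // 2 + i

def unpair(n):
    """Inverse: return (i, j) with pair(i, j) == n, for n >= 1."""
    d = (1 + isqrt(8 * n - 7)) // 2   # index of the diagonal i + j - 1 = d
    i = n - d * (d - 1) // 2
    return i, d - i + 1

# The sets N_k = { pair(k, i) : i >= 1 } then partition the positive integers.
print([pair(*unpair(n)) == n for n in range(1, 8)])  # all True
```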
For each integer $n$, put it in $A_i$ if $i$ is the smallest integer such that $n-i$ is a square. Or you can replace "square" with any even-degree polynomial with integer coefficients for a whole family of examples. Or you can require that $n-i$ be prime.
Also: $A_i$ = all integers with exactly $i$ prime-factors, counting multiplicity.
|
In the following exercises assume that there’s no damping.
[exer:6.1.1] An object stretches a spring 4 inches in equilibrium. Find and graph its displacement for \(t>0\) if it is initially displaced 36 inches above equilibrium and given a downward velocity of 2 ft/s.
[exer:6.1.2] An object stretches a spring 1.2 inches in equilibrium. Find its displacement for \(t>0\) if it is initially displaced 3 inches below equilibrium and given a downward velocity of 2 ft/s.
[exer:6.1.3] A spring with natural length .5 m has length 50.5 cm with a mass of 2 gm suspended from it. The mass is initially displaced 1.5 cm below equilibrium and released with zero velocity. Find its displacement for \(t>0\).
[exer:6.1.4] An object stretches a spring 6 inches in equilibrium. Find its displacement for \(t>0\) if it is initially displaced 3 inches above equilibrium and given a downward velocity of 6 inches/s. Find the frequency, period, amplitude and phase angle of the motion.
[exer:6.1.5] An object stretches a spring 5 cm in equilibrium. It is initially displaced 10 cm above equilibrium and given an upward velocity of .25 m/s. Find and graph its displacement for \(t>0\). Find the frequency, period, amplitude, and phase angle of the motion.
[exer:6.1.6] A 10 kg mass stretches a spring 70 cm in equilibrium. Suppose a 2 kg mass is attached to the spring, initially displaced 25 cm below equilibrium, and given an upward velocity of 2 m/s. Find its displacement for \(t>0\). Find the frequency, period, amplitude, and phase angle of the motion.
[exer:6.1.7] A weight stretches a spring 1.5 inches in equilibrium. The weight is initially displaced 8 inches above equilibrium and given a downward velocity of 4 ft/s. Find its displacement for \(t > 0\).
[exer:6.1.8] A weight stretches a spring 6 inches in equilibrium. The weight is initially displaced 6 inches above equilibrium and given a downward velocity of 3 ft/s. Find its displacement for \(t>0\).
[exer:6.1.9] A spring–mass system has natural frequency \(7\sqrt{10}\) rad/s. The natural length of the spring is .7 m. What is the length of the spring when the mass is in equilibrium?
[exer:6.1.10] A 64 lb weight is attached to a spring with constant \(k=8\) lb/ft and subjected to an external force \(F(t)=2\sin t\). The weight is initially displaced 6 inches above equilibrium and given an upward velocity of 2 ft/s. Find its displacement for \(t>0\).
[exer:6.1.11] A unit mass hangs in equilibrium from a spring with constant \(k=1/16\). Starting at \(t=0\), a force \(F(t)=3\sin t\) is applied to the mass. Find its displacement for \(t>0\).
[exer:6.1.12] A 4 lb weight stretches a spring 1 ft in equilibrium. An external force \(F(t)=.25\sin8 t\) lb is applied to the weight, which is initially displaced 4 inches above equilibrium and given a downward velocity of 1 ft/s. Find and graph its displacement for \(t>0\).
[exer:6.1.13] A 2 lb weight stretches a spring 6 inches in equilibrium. An external force \(F(t)=\sin8t\) lb is applied to the weight, which is released from rest 2 inches below equilibrium. Find its displacement for \(t>0\).
[exer:6.1.14] A 10 gm mass suspended on a spring moves in simple harmonic motion with period 4 s. Find the period of the simple harmonic motion of a 20 gm mass suspended from the same spring.
[exer:6.1.15] A 6 lb weight stretches a spring 6 inches in equilibrium. Suppose an external force \(F(t)={3\over16}\sin\omega t+{3\over8}\cos\omega t\) lb is applied to the weight. For what value of \(\omega\) will the displacement be unbounded? Find the displacement if \(\omega\) has this value. Assume that the motion starts from equilibrium with zero initial velocity.
[exer:6.1.16] A 6 lb weight stretches a spring 4 inches in equilibrium. Suppose an external force \(F(t)=4\sin\omega t-6\cos\omega t\) lb is applied to the weight. For what value of \(\omega\) will the displacement be unbounded? Find and graph the displacement if \(\omega\) has this value. Assume that the motion starts from equilibrium with zero initial velocity.
[exer:6.1.17] A mass of one kg is attached to a spring with constant \(k=4\) N/m. An external force \(F(t)=-\cos\omega t-2\sin\omega t\) N is applied to the mass. Find the displacement \(y\) for \(t>0\) if \(\omega\) equals the natural frequency of the spring–mass system. Assume that the mass is initially displaced 3 m above equilibrium and given an upward velocity of 450 cm/s.
[exer:6.1.18] An object is in simple harmonic motion with frequency \(\omega_0\), with \(y(0)=y_0\) and \(y'(0)=v_0\). Find its displacement for \(t>0\). Also, find the amplitude of the oscillation and give formulas for the sine and cosine of the initial phase angle.
[exer:6.1.19] Two objects suspended from identical springs are set into motion. The period of one object is twice the period of the other. How are the weights of the two objects related?
[exer:6.1.20] Two objects suspended from identical springs are set into motion. The weight of one object is twice the weight of the other. How are the periods of the resulting motions related?
[exer:6.1.21] Two identical objects suspended from different springs are set into motion. The period of one motion is 3 times the period of the other. How are the two spring constants related?
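Many of the free-vibration exercises above reduce to the IVP \(y''+\omega_0^2y=0\), \(\omega_0^2=g/\Delta l\), with solution \(y=y_0\cos\omega_0 t+(v_0/\omega_0)\sin\omega_0 t = R\cos(\omega_0 t-\phi)\). A Python sketch of that reduction (assuming \(g=32\) ft/s², upward displacement taken as positive, and consistent units; the function name and sample data are illustrative, not a worked solution from the text):

```python
import math

def free_vibration(delta_l, y0, v0, g=32.0):
    """Undamped spring-mass motion y'' + omega0^2 * y = 0, omega0^2 = g / delta_l.

    y0, v0 are initial displacement and velocity (upward positive, an assumed
    convention). Returns (omega0, period, amplitude, phase) for the solution
    written as y(t) = R * cos(omega0 * t - phi).
    """
    omega0 = math.sqrt(g / delta_l)
    R = math.sqrt(y0 ** 2 + (v0 / omega0) ** 2)
    phi = math.atan2(v0 / omega0, y0)
    return omega0, 2 * math.pi / omega0, R, phi

# Data in the style of [exer:6.1.4]: spring stretched 6 in = 1/2 ft in equilibrium,
# mass started 3 in = 1/4 ft above equilibrium with a 1/2 ft/s downward velocity.
omega0, period, R, phi = free_vibration(0.5, 0.25, -0.5)
print(omega0, period, R)  # 8.0 rad/s, pi/4 s, sqrt(17)/16 ft
```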
|
This question came up in the process of finding solution to another problem. Eventually, the problem was solved avoiding calculation of this sum, but it looks quite interesting on its own. Is there a closed form for $$\sum_{n=1}^\infty\frac{\psi(n+\frac{5}{4})}{(1+2n)(1+4n)^2},$$ where $\psi(z)=\frac{\Gamma'(z)}{\Gamma(z)}$ is the digamma function?
Yes, there is!
The derivation, which involves a lot of computer-assisted manipulations, is a little bit long, so I won't put it here just now, but the answer is$$ -2 C-C \gamma-\frac{1}{2}C \pi+\frac{1}{4}\gamma \pi-\frac{\pi ^2}{6}-\frac{\gamma \pi ^2}{8}+\frac{11}{64} \pi ^3-6 \Im\text{Li}_3\left(\frac{1+i}{2}\right)-3 C \log 2+\frac{1}{2} \gamma \log 2+\pi\log 2+\frac{3}{2} (\log 2)^2+\frac{3}{16} \pi (\log 2)^2+\frac{7}{4} \zeta(3) - \psi\left(\frac54\right),$$where $C$ is the Catalan constant, $\gamma$ is Euler's gamma, and $\text{Li}_3$ is a polylogarithm, and (
edited) the last term $\psi(5/4)=4-\gamma-\frac\pi2-\log 8$ is there because everything below computes the sum over $n\geq0$, so I need to subtract $n=0$ from the answer below to get the sum over $n\geq1$. The way to derive this is to write the sum as$$ \frac18 S\left(\frac54\middle| \frac14,\frac14\right) - \frac18 S\left(\frac54\middle|\frac14,\frac12\right),$$where I've introduced the notation$$ S(\beta|\alpha_1,\alpha_2,\ldots) = \sum_{n\geq0} \frac{\psi(n+\beta)}{(n+\alpha_1)(n+\alpha_2)\cdots}. $$ Edit to explain the calculation.
The intermediate sums can be found in closed form mainly using the integral representation of the digamma function, in the form $$ \psi(z_1)-\psi(z_2) = \int_0^1 \frac{dt}{1-t}(t^{z_2-1}-t^{z_1-1}), $$ a sum representation in the form $$ \psi(z+1)+\gamma = \sum_{k\geq1} \frac{1}{k} - \frac{1}{k+z},$$ and using also the definition of the Lerch transcendent and its relation to the hypergeometric function and the incomplete beta function: $$ \Phi(z,s,a) = \sum_{n\geq0}\frac{z^n}{(n+a)^s}, $$ $$ \Phi(z,1,a) = z^{-a}B(z,a,0) = \frac{1}{a}F(1,a;1+a;z). $$
The intermediate sum $S(\frac54;\frac14,\frac14)$ can be found using $$ \sum_{n\geq0}\frac{\psi(n+\frac54)+\gamma}{(n+\frac14)^2} = \sum_{n\geq0}\sum_{k\geq1} \frac{1}{(n+\frac14)^2}\left(\frac1k - \frac1{k+n+\frac14}\right) = \sum_{k\geq1} \frac{\psi(k+\frac14)-\psi(\frac14)}{k^2} = 4F\left(\begin{array}{c}1,1,1,1\\2,2,\frac54\end{array}\right) + \int_0^1 \frac{\log t\log(1-t)}{t(1-t)^{3/4}}\,dt, $$ where $$ \text{Li}_2(1) - \text{Li}_2(t) = \text{Li}_2(1-t) + \log t\log(1-t), $$ so $$ S\left(\frac54\middle| \frac14,\frac14\right) = -\gamma\psi_1(1/4) + \sum_{n\geq0}\frac{\psi(n+\frac54)+\gamma}{(n+\frac14)^2}. $$
Using the above relationship between incomplete beta function, the fact that this hypergeometric is a repeated integral of the incomplete beta function (which you know because of the ones on top and matching twos on the bottom) and the fact that there is a nice closed form for the incomplete beta function with a rational first parameter and zero second parameter, like so: $$ B(z,p/q,0) = -\sum_{0\leq l < q} e^{-2\pi i l p/q}\log(1-z^{1/q}e^{2\pi i l/q}), \qquad p,q\in\mathbb{Z}, $$ it can be simplified to $$ F\left(\begin{array}{c}1,1,1,1\\2,2,\frac54\end{array}\right) = \frac{7}{32}\pi^3 - 12\Im\text{Li}_3\left(\frac{1+i}{2}\right) + \frac32\pi(\log 2)^2 + \pi^2\log 8 - 14\zeta(3). $$
The second intermediate sum can be simplified using partial fractions on $\frac1{(n+\frac14)(n+\frac12)}$, and the fact that $$ \sum_{n\geq0}\frac{1}{(n+\alpha_1)(n+\alpha_2)} = \frac{\psi(\alpha_1)-\psi(\alpha_2)}{\alpha_1-\alpha_2}. $$ Then $$ S\left(\frac54\middle|\frac14,\frac12\right) + \gamma\frac{\psi(\frac12)-\psi(\frac14)}{\frac12-\frac14} = \sum_{n\geq 0}\frac{\psi(n+\frac54)+\gamma}{(n+\frac14)(n+\frac12)} = \sum_{k\geq1}\frac{\psi(k+\frac14)-\psi(\frac12)}{k(k-\frac14)}, $$ where I've expanded difference of digamma functions as an infinite sum over $k$ and performed the sum over $n$. This expression can be handled using the integral representation for $\psi$ mentioned above, giving $$ \int_0^1\frac{dt}{1-t}\left( (12\log2-2\pi)t^{-\frac12} - t^{-\frac34}\left( 4\log(1-t)+\frac{16}{3}tF\left(1,\frac34;\frac74;t\right)\right)\right) \\= 16C+\frac43\pi^2-8\pi\log2-12(\log2)^3. $$
Putting everything together, and using Mathematica to evaluate the easier sums, gives the expression I gave above.
Basically, the most important steps in the derivation are the integral representation for $\psi(x)-\psi(y)$ and the explicit formula for $B(z,\beta,0)$ when $\beta$ is a rational number. By luck all the integrals then simplify to something Mathematica can do in closed form.
I don't know any standard reference for this; the whole question is basically just computing $$ \frac{d}{d\epsilon}\big|_{\epsilon=0}F\left(\begin{array}{c}1,\frac12,\frac14,\frac14,\frac54+\epsilon\\ \frac32,\frac54,\frac54,\frac54 \end{array}\right), $$ and in general derivatives of hypergeometric functions are given by Kampe de Feriet functions, which don't have a closed form in terms of hypergeometric functions. So in general there should be no closed form, but in this case the parameters are just right that all integrals can be done.
When $Q(n)$ is some rational function having a partial fraction expansion $\sum_k q_k (n+\alpha_k)^{-s}$, the sum $$ \sum_{n\geq0}Q(n)(\psi(n+\beta)-\psi(\beta)) $$ can be expressed in terms of sums $$ \sum_{n\geq0} \frac{\psi(n+\beta)-\psi(\beta)}{(n+\alpha)^s} = \int_0^1 B(v,\beta,0)\Phi'(v,s,\alpha)\,dv, $$ where $\Phi'$ is the Lerch transcendent's derivative. If the sum on the left diverges, it is necessary to consider the asymptotic expansion $$ \sum_{n\geq0} \frac{\psi(n+\beta)-\psi(\beta)}{n+\alpha}z^n \\= \Theta(\log(1-z))^2 + \Theta(\log(1-z)) + S(\beta,\alpha) + o(1), \qquad z\to1- $$ and only take the non-divergent term $S(\beta,\alpha)$ as the value of the sum at $z\to1$ (the divergent terms should be independent of $\alpha$ and cancel out when the partial fractions of $Q(n)$ are added together).
So the problem of evaluating the sum $$\sum_{n\geq0}Q(n)(\psi(n+\beta)-\psi(\beta))$$ reduces to calculating asymptotic expansions of integrals of the form $$ z \int_0^1 B(u,\beta,0)\Phi'(z u,s,\alpha)\,du, \qquad z\to1-. $$
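Although the closed form takes heavy manipulation, the series itself converges fast (the terms decay roughly like \(\log n/(32n^3)\)), so it can be checked numerically. A pure-Python sketch; the digamma implementation below, via the recurrence \(\psi(x)=\psi(x+1)-1/x\) plus a short asymptotic series, is my own choice and not part of the answer above:

```python
import math

def digamma(x):
    """psi(x) for x > 0, via psi(x) = psi(x+1) - 1/x plus an asymptotic series."""
    s = 0.0
    while x < 10.0:          # push the argument into the asymptotic regime
        s -= 1.0 / x
        x += 1.0
    # psi(x) ~ ln x - 1/(2x) - 1/(12 x^2) + 1/(120 x^4) - ...  (error ~ 1/(252 x^6))
    return s + math.log(x) - 1.0 / (2 * x) - 1.0 / (12 * x ** 2) + 1.0 / (120 * x ** 4)

def partial_sum(N):
    """Partial sum of psi(n + 5/4) / ((1 + 2n)(1 + 4n)^2) for n = 1 .. N-1."""
    return sum(digamma(n + 1.25) / ((1 + 2 * n) * (1 + 4 * n) ** 2)
               for n in range(1, N))

# Numerical value of the series, to compare against the closed form
print(partial_sum(10000))
```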
|
I can do a few
a) The velocity is positive on 0 < t < 1
This means the ball is rising during this time
b) The antiderivative is s(t) = 32t - 16t^2 + C
At s(0) we have 32(0) - 16(0)^2 + C = 0
So C = 0
c) s(1) - s(1/2) = [ 32(1) - 16(1)^2 ] - [ 32(1/2) - 16(1/2)^2 ] = [ 16] - [ 16 - 4] = 4
This represents the change in position from t = 1/2 to t = 1
[ It rises 4 ft in this interval = the displacement on this interval ]
d) At t = (1/2).....the height of the triangle formed is 32 - 32(1/2) = 32 - 16 = 16
So....the area = (1/2) base * height = (1/2) (1/2)(16) = 4
This area also represents the displacement on t =1/2 to t =1
f) s(2) - s(0) = [ 32(2) - 16(2)^2] - 0 = 0
This means that the total displacement = 0 ......it falls as far as it rises
Note that the velocity at t = 0 = 32 ft/s
And at t = 2 = -32 ft/s
This makes sense.....the terminal velocity = the beginning velocity....only in the opposite direction
A lesson in understanding definite integration
Think about what you are finding when you get the area under a curve.
In this instance, from t = 1/2 to t = 1 you found the area of a triangle, which is 4. But 4 what?
Look at the units. They are time (seconds) in the horizontal direction and velocity (feet/sec) in the vertical direction.
When you find the area you can find the units the same way. The unit of the area will be \(seconds\times \frac{feet}{sec}=feet\)
So that is why the area under the velocity curve gives displacement.
And you can find the area with integration.
NOW
Lets think about integration for a moment.
The integration sign.
\(\int\) is a stylized S and it stands for SUM
What you are doing with any definite integral is finding the sum of all the rectangles (with infinitely small width) that lie between the curve and the axis that you are integrating with respect to (usually the horizontal axis).
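The "sum of thin rectangles" idea can be made concrete: summing v(t) = 32 - 32t over many small subintervals of [1/2, 1] reproduces the displacement of 4 ft found earlier. A Python sketch (the helper name and rectangle count are my own choices):

```python
def riemann_sum(f, a, b, n=100000):
    """Left-endpoint Riemann sum of f over [a, b] using n thin rectangles."""
    h = (b - a) / n
    return sum(f(a + i * h) * h for i in range(n))

v = lambda t: 32 - 32 * t          # velocity, ft/s
s = lambda t: 32 * t - 16 * t ** 2  # antiderivative with s(0) = 0, ft

disp = riemann_sum(v, 0.5, 1.0)
print(disp, s(1.0) - s(0.5))  # both ~ 4 ft
```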
------
A lesson in understanding differentiation
With differentiation the
d stands for difference. So \(\frac{dx}{dt}=\frac{\text{difference in x}}{\text{difference in t }} \)
Since this is taken in the limit as the difference in t tends to 0, dx/dt gives the instantaneous velocity.
The gradient of the tangent to a curve IS the derivative of the curve at that point.
If you have questions then please ask them
|
These formulas are for different matrix formats of the rectangular matrix $A$.
The matrix to be (pseudo-)inverted should have full rank. (added:) If $A\in I\!\!R^{m\times n}$ is a tall matrix, $m>n$, then this means $rank(A)=n$, that is, the columns have to be linearly independent, or $A$ as a linear map has to be injective. If $A$ is a wide matrix, $m<n$, then the rows of the matrix have to be independent to give full rank. (/edit)
If full rank is a given, then you are better off simplifying these formulas using a QR decomposition of $A$ resp. $A^T$. There the $R$ factor is square and $Q$ is a narrow tall matrix with the same format as $A$ or $A^T$:
If $A$ is tall, then $A=QR$ and $A^{\oplus}_{left}=R^{-1}Q^T$
If $A$ is wide, then $A^T=QR$, $A=R^TQ^T$, and $A^{\oplus}_{right}=QR^{-T}$.
You only need an SVD if $A$ is suspected not to have the maximal rank for its format. Then a reliable rank estimation is only possible by comparing the magnitudes of the singular values of $A$. The difference shows up in $A^{\oplus}$ having a very large number or a zero as a singular value where $A$ has a very small singular value.
Added, since Wikipedia is curiously silent about this: Numerically, you first compute or let a library compute the SVD $A=U\Sigma V^T$ where $Σ=diag(σ_1,σ_2,\dots,σ_r)$ is the diagonal matrix of singular values, ordered in decreasing size $σ_1\ge σ_2\ge\dots\ge σ_r$.
Then you estimate the effective rank by looking for the smallest $k$ with for instance $σ_{k+1}<10^{-8}σ_1$ or as another strategy, $σ_{k+1}<10^{-2}σ_k$, or a combination of both. The factors defining what is "small enough" are a matter of taste and experience.
With this estimated effective rank $k$ you compute $$Σ^⊕=diag(σ_1^{-1},σ_2^{-1},\dots,σ_k^{-1},0,\dots,0)$$ and $$A^⊕=VΣ^⊕U^T.$$
Note how the singular values in $Σ^⊕$ and thus $A^⊕$ are increasing in this form, that is, truncating at the effective rank is a very sensitive operation, differences in this estimation lead to wildly varying results for the pseudo-inverse.
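The SVD recipe above can be sketched as follows; the tolerance $10^{-8}\sigma_1$ implements the first rank-estimation strategy mentioned, and NumPy is an assumed dependency:

```python
import numpy as np

def pinv_svd(A, rel_tol=1e-8):
    """Pseudoinverse via SVD with effective-rank truncation.

    Singular values below rel_tol * sigma_1 are treated as zero, which
    is the first "small enough" strategy described in the text."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int(np.sum(s >= rel_tol * s[0]))   # estimated effective rank
    s_plus = np.zeros_like(s)
    s_plus[:k] = 1.0 / s[:k]               # invert only the kept values
    return Vt.T @ (s_plus[:, None] * U.T)  # V Sigma^+ U^T

# Rank-deficient example: the third column is the sum of the first two.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
A_plus = pinv_svd(A)
```

The result satisfies the Moore-Penrose identities $A A^{\oplus} A = A$ and $A^{\oplus} A A^{\oplus} = A^{\oplus}$.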
|
SageMath
Sage is a program for numerical and symbolic mathematical computation. It is meant to provide an alternative to commercial programs such as Maple, Matlab, and Mathematica. It is maintained in the community repository.
Installation
To install binaries from community repository:
# pacman -S sage-mathematics
Usage
Sage-Commandline
Once installed, users should be able to start the sage commandline in bash.
$ sage
Math can then be typed at this commandline.
sage: 2+2 4
Note, however, that the CLI is not very comfortable for certain purposes. For instance, if you plot a function,
sage: plot(sin,(x,0,10))
this is opened in a browser window. In these cases, you can start the notebook server by
sage: notebook()
Sage-Notebook
To start the Sage-Notebook server from bash, without going through Sage-Commandline:
$ sage -n
The notebook will be accessible in the browser at http://localhost:8000 , and will require you to log in. However, if you only intend to run the server for personal use, and not across the internet, the login will be an annoyance.
You can instead start Sage notebook without requiring login, and have it automatically pop up in a browser.
$ BROWSER="chromium" sage -c "notebook(require_login=false,open_viewer=true)"
More detailed documentation is available for the notebook() command.
Cantor
You can install it with:
$ pacman -S kdeedu-cantor
or along with the 'kde' or 'kdeedu' groups.
Documentation
For local Documentation, one can compile it into html or even pdfs. Apparently, this needs to be executed as root (perhaps unsafely):
# sage -docbuild reference html
This builds HTML docs for the whole tree "reference" (which is a lot; expect many hours). An option is to build a smaller part of the documentation tree, but you would need to know what it is you want. Until then, you might consider just browsing the reference online. One place this can be found is at http://www.sagemath.org/doc/ .
Optional additions SageTeX
If you have installed Texlive on your system, you may be interested in using SageTeX, a package that makes possible the inclusion of sage code in LaTeX files. Starting from version 4.6.1-3, Texlive is made aware of SageTeX automagically, and you can start using it straight away.
As a simple example, here is how you include a Sage 2D plot in your latex document (assuming you use pdfLaTeX):
include the sagetex package in the preamble of your document with the usual \usepackage{sagetex}
create a sagesilent environment in which you insert your sage code:
\begin{sagesilent}
dob(x) = sqrt(x^2-1)/(x * arctan(sqrt(x^2-1)))
dpr(x) = sqrt(x^2-1)/(x * log( x + sqrt(x^2-1)))
p1 = plot(dob,(x,1,10),color='blue')
p2 = plot(dpr,(x,1,10),color='red')
ptot=p1+p2
ptot.axes_labels(['$\\xi$','$\\frac{R_h}{\\max(a,b)}$'])
\end{sagesilent}
create the plot e.g. inside a float environment:
\begin{figure}
\begin{center}
\sageplot[width=\linewidth]{ptot}
\end{center}
\end{figure}
compile your document with the following procedure:
$ pdflatex <doc.tex>
$ sage <doc.sage>
$ pdflatex <doc.tex>
you can then have a look at your output document.
The full documentation of SageTeX is available on CTAN.
Troubleshooting
If your Texlive installation does not find the sagetex package, you can try the following procedure (do it as root, or with sudo, or use a local folder):
Copy the files to the texmf directory:
# cp /opt/sage/local/share/texmf/tex/* /usr/share/texmf/tex/
Refresh Texlive:
# texhash /usr/share/texmf/
texhash: Updating /usr/share/texmf/.//ls-R...
texhash: Done.
Additional Packages
If you need to compile your own custom packages in addition to the standard ones, you might need to install aur/sage-mathematics-spkgs so that you will have the spkgs of the original packages.
|
In classical electromagnetism, the electric potential (a scalar quantity denoted by Φ (Phi) or V, and also called the electric field potential or the electrostatic potential) at a point of space is the amount of electric potential energy that a unitary point charge would have when located at that point. The electric potential of a point may also be defined as the work done in carrying a unit positive charge from infinity to that point.
The electric potential at a point is equal to the electric potential energy (measured in joules) of any charged particle at that location divided by the charge (measured in coulombs) of the particle. Since the charge of the test particle has been divided out, the electric potential is a "property" related only to the electric field itself and not the test particle. The electric potential can be calculated at a point in either a static (time-invariant) electric field or in a dynamic (varying with time) electric field at a specific time, and has the units of joules per coulomb (J C⁻¹), or volts (V).
There is also a generalized electric scalar potential that is used in electrodynamics when time-varying electromagnetic fields are present. This generalized electric potential cannot be simply interpreted as the ratio of potential energy to charge, however.
Introduction
Objects may possess a property known as an electric charge. An electric field exerts a force on charged objects. If the charged object has a positive charge, the force will be in the direction of the electric field vector at that point. The force will be in the opposite direction if the charge is negative. The magnitude of the force is given by the quantity of the charge multiplied by the magnitude of the electric field vector. A net force acting on an object will cause it to accelerate, as explained by Classical mechanics which explores concepts such as force, energy, potential etc. The electric potential (or simply potential) at a point in an electric field is defined as the work done in moving a unit positive charge from infinity to that point. The electric potential at infinity is assumed to be zero.
Force and potential energy are directly related. As an object moves in the direction that the force accelerates it, its potential energy decreases. For example, the gravitational potential energy of a cannonball at the top of a hill is greater than at the base of the hill. As the object falls, that potential energy decreases and is translated to motion, or inertial (kinetic) energy.
For certain force fields, it is possible to define the "potential" of a field such that the potential energy of an object due to a field depends only on the position of the object with respect to the field. Those forces must affect objects depending only on the intrinsic properties of the object (e.g., mass or charge) and the position of the object, and obey certain other mathematical rules.
Two such force fields are the gravitational force field (gravity) and the electric field in the absence of time-varying magnetic fields. The potential of an electric field at a point is called the electric potential. The synonymous term "electrostatic potential" is also in common use.
The electric potential and the magnetic vector potential together form a four vector, so that the two kinds of potential are mixed under Lorentz transformations.
Electrostatics
The electric potential at a point $r$ in a static electric field $\mathbf{E}$ is given by the line integral
\[ V_\mathbf{E} = - \int_C \mathbf{E} \cdot \mathrm{d} \boldsymbol{\ell}, \]
where $C$ is an arbitrary path connecting the point with zero potential to $r$. When the curl $\nabla \times \mathbf{E}$ is zero, the line integral above does not depend on the specific path $C$ chosen but only on its endpoints. In this case, the electric field is conservative and determined by the gradient of the potential:
\[ \mathbf{E} = - \mathbf{\nabla} V_\mathbf{E}. \]
Then, by Gauss's law, the potential satisfies Poisson's equation:
\[ \mathbf{\nabla} \cdot \mathbf{E} = \mathbf{\nabla} \cdot \left( - \mathbf{\nabla} V_\mathbf{E} \right) = -\nabla^2 V_\mathbf{E} = \rho / \varepsilon_0, \]
where $\rho$ is the total charge density (including bound charge) and $\nabla\cdot$ denotes the divergence.
The concept of electric potential is closely linked with potential energy. A test charge $q$ has an electric potential energy $U_\mathbf{E}$ given by
\[ U_\mathbf{E} = q\,V. \]
The potential energy, and hence also the electric potential, is only defined up to an additive constant: one must arbitrarily choose a position where the potential energy and the electric potential are zero.
These equations cannot be used if the curl $\nabla \times \mathbf{E} \neq 0$, i.e., in the case of a nonconservative electric field (caused by a changing magnetic field; see Maxwell's equations). The generalization of electric potential to this case is described below.
Electric potential due to a point charge
The electric potential created by a charge $Q$ is $V = Q/(4\pi\varepsilon_0 r)$; different values of $Q$ yield different values of the electric potential $V$.
The electric potential created by a point charge $Q$, at a distance $r$ from the charge (relative to the potential at infinity), can be shown to be
\[ V_\mathbf{E} = \frac{1}{4 \pi \varepsilon_0} \frac{Q}{r}, \]
where $\varepsilon_0$ is the electric constant (permittivity of free space). This is known as the Coulomb potential.
The electric potential due to a system of point charges is equal to the sum of the point charges' individual potentials. This fact simplifies calculations significantly, since addition of potential (scalar) fields is much easier than addition of the electric (vector) fields.
The equations given above for the electric potential (and all the equations used here) are in the forms required by SI units. In some other (less common) systems of units, such as CGS-Gaussian, many of these equations would be altered.
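As a sketch of the superposition principle in SI units (NumPy and all names here are my own illustrative choices):

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity in F/m

def potential(point, charges, positions):
    """Electric potential V at `point` (SI units) by superposition:
    V = sum_i Q_i / (4 pi eps0 r_i), relative to zero at infinity."""
    point = np.asarray(point, dtype=float)
    total = 0.0
    for q, pos in zip(charges, positions):
        r = np.linalg.norm(point - np.asarray(pos, dtype=float))
        total += q / (4.0 * np.pi * EPS0 * r)
    return total

# A 1 nC point charge seen from 1 m away: roughly 8.99 V.
v_single = potential([1.0, 0.0, 0.0], [1e-9], [[0.0, 0.0, 0.0]])
```

Because the potential is a scalar, the contribution of each charge is simply added; no vector components need to be tracked.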
Generalization to electrodynamics
When time-varying magnetic fields are present (which is true whenever there are time-varying electric fields and vice versa), it is not possible to describe the electric field simply in terms of a scalar potential $V$, because the electric field is no longer conservative: $\textstyle\int_C \mathbf{E}\cdot \mathrm{d}\boldsymbol{\ell}$ is path-dependent because $\mathbf{\nabla} \times \mathbf{E} \neq \mathbf{0}$ (Faraday's law of induction).
Instead, one can still define a scalar potential by also including the magnetic vector potential $\mathbf{A}$. In particular, $\mathbf{A}$ is defined to satisfy
\[ \mathbf{B} = \mathbf{\nabla} \times \mathbf{A}, \]
where $\mathbf{B}$ is the magnetic field. Because the divergence of the magnetic field is always zero due to the absence of magnetic monopoles, such an $\mathbf{A}$ can always be found. Given this, the quantity
\[ \mathbf{F} = \mathbf{E} + \frac{\partial\mathbf{A}}{\partial t} \]
is a conservative field by Faraday's law, and one can therefore write
\[ \mathbf{E} = -\mathbf{\nabla}V - \frac{\partial\mathbf{A}}{\partial t}, \]
where $V$ is the scalar potential defined by the conservative field $\mathbf{F}$.
The electrostatic potential is simply the special case of this definition where $\mathbf{A}$ is time-invariant. On the other hand, for time-varying fields,
\[ -\int_a^b \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} \neq V_{(b)} - V_{(a)}, \]
unlike in electrostatics.
Units
The SI unit of electric potential is the volt (in honour of Alessandro Volta), which is why a difference in electric potential between two points is known as voltage. Older units are rarely used today. Variants of the centimeter gram second system of units included a number of different units for electric potential, including the abvolt and the statvolt.
Galvani potential versus electrochemical potential
Inside metals (and other solids and liquids), the energy of an electron is affected not only by the electric potential, but also by the specific atomic environment that it is in. When a voltmeter is connected between two different types of metal, it measures not the electric potential difference, but the potential difference corrected for the different atomic environments.[1] The quantity measured by a voltmeter is called the electrochemical potential or Fermi level, while the pure unadjusted electric potential is sometimes called the Galvani potential. The terms "voltage" and "electric potential" are somewhat ambiguous in that, in practice, they can refer to either of these in different contexts.
References
^ Bagotskii, Vladimir Sergeevich (2006). Fundamentals of Electrochemistry. p. 22.
Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall.
Jackson, John David (1999). Classical Electrodynamics (3rd ed.). John Wiley & Sons.
Wangsness, Roald K. (1986). Electromagnetic Fields (2nd ed.). Wiley.
|
In this paper, the authors detail the methods to calculate the stream function $\psi$ from velocity data in the case of the global ocean (i.e., multiply connected domains).
The general idea is to use velocity to calculate the vorticity $\zeta$ and, with that, calculate the stream function by solving the Poisson equation: $$\nabla^2\psi=\zeta$$ The difficulty of the problem is to specify the boundary conditions. As we do not want a flow across the boundary, we require $\psi=\mu_k$ on all the coastlines, with the value of $\mu_k$ being a constant to be determined for each different coastline (i.e., each island/continent).
To do this one can use the so-called Godfrey island rule, which follows from requiring that the circulation around each island be constant. At the end of the day, one should solve a system of equations that relates the $\mu_k$ to the derivatives of $\psi$ normal to the coastlines (eq. 19 in the paper):
$$\sum_k\mu_k\int_{\partial I_k}\partial_n\psi_j~ds+\int_{\partial I_k}\partial_n\psi_0~ds=\int_{\partial I_k}\mathbf{m}\cdot ds$$
BUT
Shouldn't the normal derivative be zero in order to have no-flow?
Probably not, but if it were, this would imply that all the $\mu_k$ would be zero and one could avoid solving the system altogether...
What am I missing?
Side questions:
1- How can one calculate the normal derivative from numerical data when the coastline is "segmented" and the normal vector is not well defined? For example in this case:
______|xxxx|___xxxxxxxxxxx
2- Does anyone know of a freely-available code for solving this problem? I am currently struggling with the normal derivatives...
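Not the full island problem, but as a starting point, here is a minimal finite-difference sketch of the core Poisson step $\nabla^2\psi=\zeta$ with $\psi=0$ on a single outer boundary, i.e., the simply connected case where the $\mu_k$ questions disappear. NumPy is assumed, and the manufactured vorticity field is purely illustrative:

```python
import numpy as np

def solve_poisson(zeta, h, n_iter=5000):
    """Jacobi iteration for  laplacian(psi) = zeta  on a square grid with
    psi = 0 on the whole boundary (one coastline, mu = 0).  The island
    constants mu_k of the multiply connected problem would enter here as
    nonzero Dirichlet values on interior boundaries."""
    psi = np.zeros_like(zeta)
    for _ in range(n_iter):
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                                  + psi[1:-1, 2:] + psi[1:-1, :-2]
                                  - h * h * zeta[1:-1, 1:-1])
    return psi

n = 41
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
# Manufactured check: psi = sin(pi x) sin(pi y)  gives  zeta = -2 pi^2 psi.
psi_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
psi = solve_poisson(-2.0 * np.pi ** 2 * psi_exact, h)
```

Any production solver would replace the slow Jacobi sweep with a direct or multigrid Poisson solver, but the boundary-value structure is the same.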
|
As part of solving a problem I came across a summation which I'm having some difficulty simplifying. I'm not sure if it would even simplify into something nice.
$\sum\limits_{i=0}^{p-1} {ip^e \choose k}$
where $p$ is prime, $0\leq e\in\mathbb{Z}$, and $k\in\mathbb{Z}$, $0\leq k\leq p^e(p-1)$.
I'm aware that when $e=0$ this is just the hockey-stick identity, but working with the general case has been unfruitful.
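A quick numerical check of the $e=0$ reduction to the hockey-stick identity; the helper name is mine:

```python
from math import comb

def lhs(p, e, k):
    """The sum  sum_{i=0}^{p-1} C(i * p**e, k)  from the question."""
    return sum(comb(i * p ** e, k) for i in range(p))

# For e = 0 the sum is sum_{i=0}^{p-1} C(i, k) = C(p, k+1), the
# hockey-stick identity; for larger e no equally simple closed form
# seems to fall out of small numerical experiments.
check = lhs(7, 0, 3) == comb(7, 4)
```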
|
A conservative functor $\mathsf{Top} \to \mathsf{Set}$ has been described in the comments. In a similar vein, for schemes we can take $X \mapsto |X| \amalg \mathcal{P}(\mathrm{Stalks}(X))$. This should also work for manifolds. Actually, for manifolds the much more formal $X \mapsto \coprod_{Y ~ \mathrm{conn}} \mathrm{Hom}(Y,X)$ works too, where the coproduct is over connected $Y$. So there are more categories conservative over $\mathsf{Set}$ than you might think.
It might be interesting to ask which categories admit a representable conservative functor to $\mathsf{Set}$. Or at least "familially representable": a set of objects $S_\alpha$ such that $\coprod_\alpha \mathrm{Hom}(S_\alpha,-)$ is conservative is called a strong generator for a category. Categories with strong generators are too general to admit a good classification, but I'm pretty sure that topological spaces and schemes don't have strong generators (although manifolds do: the connected manifolds). Also, see the nlab article for other sorts of "generators" (the nlab prefers the term "separator"). Generating hypotheses are important auxiliary hypotheses in theorems which often rule out categories of a "topological" nature.
One question which has received attention in the categorical literature is when a category admits a
faithful functor to $\mathsf{Set}$, or is "concrete". Famously, the homotopy category of topological spaces does not. Necessary and sufficient conditions are known characterizing when a category is concrete -- if the category has finite limits, then the condition is equivalent to being regularly well-powered (i.e. every object has a small set of regular subobjects). But this question doesn't really get at the distinction you want: the usual functor $\mathsf{Top} \to \mathsf{Set}$ is faithful, for example.
Stronger notions which may be related to the distinction you want include accessible categories, topological categories, and algebraic categories.
As to the edit, if $\mathbb{T}$ is a first-order theory, I can think of several reasonable categories of models to consider:
$\mathrm{Mod}(\mathbb{T})$, the category of models and homomorphisms. In this case, it seems to me that one easy criterion for the forgetful functor to be conservative is that there be no relation symbols (except equality), but only function symbols (including constants) in the language. The converse -- if the forgetful functor is conservative then $\mathbb{T}$ can be axiomatized over a language with no relation symbols -- might be true, I don't know.
EDIT Actually, this converse probably fails. For example, let $\mathbb{T}$ be the theory with two binary relations $R,S$ and the axiom $\forall x,y \, R(x,y) \Leftrightarrow \neg S(x,y)$. Then the forgetful functor is conservative, but it seems very unlikely that this theory can be axiomatized in a signature with no relations.
$\mathrm{Elem}(\mathbb{T})$, the category of models and elementary embeddings. In this case, it seems to me that the forgetful functor to $\mathsf{Set}$ is always conservative.
For just about any reasonable class of formulas $\Phi$, the category $\Phi-\mathrm{Elem}(\mathbb{T})$, the category of models and homomorphisms which preserve the satisfaction of formulas in $\Phi$. $\mathrm{Mod}(\mathbb{T})$ is the case when $\Phi$ is just the atomic formulas (those built up from variables, function symbols, and relation symbols, but no logical connectives or quantifiers). As long as $\Phi$ contains the atomic formulas and their negations (so that the morphisms of $\mathrm{Mod}(\mathbb{T})$ are all embeddings, or at least strong homomorphisms), I believe that the forgetful functor to $\mathsf{Set}$ is conservative.
As for higher-order logic, I really don't know because I'm not familiar enough with it.
|
Difference between revisions of "Talk:Absolute continuity"
(what is the problem?)
:: OK, I found a way around. But there must be some bug: it seems that whenever I write the symbol "bigger" then things gets messed up (now even on THIS page). [[User:Camillo.delellis|Camillo]] 10:57, 10 August 2012 (CEST)
Revision as of 13:06, 10 August 2012
Could I suggest using $\lambda$ rather than $\mathcal L$ for Lebesgue measure since
- it is very commonly used, almost standard
- it would be consistent with the notation for a general measure, $\mu$
- calligraphic is being used already for $\sigma$-algebras
--Jjg 12:57, 30 July 2012 (CEST)
Between the metric setting and References I would like to type the following lines. But for some reason which is mysterious to me, any time I try, the page comes out a mess... Camillo 10:45, 10 August 2012 (CEST)
if for every $\varepsilon$ there is a $\delta > 0$ such that, for any $a_1<b_1<a_2<b_2<\ldots < a_n<b_n \in I$ with $\sum_i |a_i -b_i| <\delta$, we have \[ \sum_i d (f (b_i), f(a_i)) <\varepsilon\, . \] The absolute continuity guarantees the uniform continuity. As for real valued functions, there is a characterization through an appropriate notion of derivative.

Theorem 1 A continuous function $f$ is absolutely continuous if and only if there is a function $g\in L^1_{loc} (I, \mathbb R)$ such that \begin{equation}\label{e:metric} d (f(b), f(a))\leq \int_a^b g(t)\, dt \qquad \forall a<b\in I\, \end{equation} (cp. with ). This theorem motivates the following

Definition 2 If $f:I\to X$ is absolutely continuous and $I$ is compact, the metric derivative of $f$ is the function $g\in L^1$ with the smallest $L^1$ norm such that \ref{e:metric} holds (cp. with ).

OK, I found a way around. But there must be some bug: it seems that whenever I write the symbol "bigger" then things get messed up (now even on THIS page). Camillo 10:57, 10 August 2012 (CEST)

But I did not understand what is the problem. Messed up? On this page? Where? And what was the way around? --Boris Tsirelson 13:06, 10 August 2012 (CEST)

How to Cite This Entry:
Absolute continuity.
Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Absolute_continuity&oldid=27472
|
Simple question. When are we allowed to exchange limits and integrals? I'm talking about situations like$$\lim_{\varepsilon\to0^+} \int_{-\infty}^\infty dk f(k,\varepsilon) \overset{?}{=} \int_{-\infty}^\infty dk\lim_{\varepsilon\to0^+} f(k,\varepsilon).$$Everyone refers to either
dominated convergence theorem or monotone convergence theorem but I'm not sure if I understand how exactly one should go about applying it. Both theorems are about sequences and I don't see how that relates to integration in practice. Help a physicist out :)
The statement of the dominated convergence theorem (DCT) is as follows:
"Discrete" DCT. Suppose $\{f_n\}_{n=1}^\infty$ is a sequence of (measurable) functions such that $|f_n| \le g$ for some integrable function $g$ and all $n$, and $\lim_{n\to\infty}f_n = f$ pointwise almost everywhere. Then, $f$ is an integrable function and $\int |f-f_n| \to 0$. In particular, $\lim_{n\to\infty}\int f_n = \int f$ (by the triangle inequality). This can be written as $$ \lim_{n\to\infty}\int f_n = \int \lim_{n\to\infty} f_n.$$
(The statement and conclusion of the monotone convergence theorem are similar, but it has a somewhat different set of hypotheses.)
As you note, the statements of these theorems involve
sequences of functions, i.e., a $1$-discrete-parameter family of functions $\{f_n\}_{n=1}^\infty$. To apply these theorems to a $1$-continuous-parameter family of functions, say $\{f_\epsilon\}_{0<\epsilon<\epsilon_0}$, one typically uses a characterization of limits involving a continuous parameter in terms of sequences:
Proposition. If $f$ is a function, then $$\lim_{\epsilon\to0^+}f(\epsilon) = L \iff \lim_{n\to\infty}f(a_n) = L\quad \text{for $\mathbf{all}$ sequences $a_n\to 0^+$.}$$
With this characterization, we can formulate a version of the dominated convergence theorem involving continuous-parameter families of functions (note that I use quotations to title these versions of the DCT because these names are not standard as far as I know):
"Continuous" DCT. Suppose $\{f_\epsilon\}_{0<\epsilon<\epsilon_0}$ is a $1$-continuous-parameter family of (measurable) functions such that $|f_\epsilon| \le g$ for some integrable function $g$ and all $0<\epsilon<\epsilon_0$, and $\lim_{\epsilon\to0^+}f_\epsilon=f$ pointwise almost everywhere. Then, $f$ is an integrable function and $\lim_{\epsilon\to 0^+}\int f_\epsilon = \int f$. This can be written as $$ \lim_{\epsilon\to0^+}\int f_\epsilon = \int \lim_{\epsilon\to0^+} f_\epsilon.$$
The way we use the continuous DCT in practice is by picking an
arbitrary sequence $\pmb{a_n\to 0^+}$ and showing that the hypotheses of the "discrete" DCT are satisfied for this arbitrary sequence $a_n$, using only the assumption that $a_n\to 0^+$ and properties of the family $\{f_\epsilon\}$ that are known to us.
Let's look at it in a sample case. We want to prove by DCT that $$\lim_{\varepsilon\to0^+} \int_0^\infty e^{-y/\varepsilon}\,dy=0$$
This is the case if and only if for all sequences $\varepsilon_n\to 0^+$ it holds $$\lim_{n\to\infty}\int_0^\infty e^{-y/\varepsilon_n}\,dy=0$$
And now you can use the DCT on each of these sequences. Of course, the limiting function will always be the zero function, and you may take $e^{-y}$ as the dominating function (valid once $\varepsilon_n\le 1$).
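A numerical illustration of this sample case, truncating the improper integral at a finite y_max, which is harmless here since the tail is exponentially small; NumPy is assumed:

```python
import numpy as np

def truncated_integral(eps, y_max=50.0, n=200001):
    """Approximate  int_0^infty exp(-y/eps) dy  with the trapezoid rule
    on [0, y_max]; the neglected tail is exponentially small for eps <= 1."""
    y = np.linspace(0.0, y_max, n)
    f = np.exp(-y / eps)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

# The exact value is eps, which indeed tends to 0 as eps -> 0+, in
# agreement with the pointwise limit of the integrand (0 for y > 0).
values = [truncated_integral(eps) for eps in (1.0, 0.1, 0.01)]
```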
|
Sorry for the late response. Here's a proof of the open mapping theorem assuming the maximum modulus principle.
First, we need the "minimum modulus principle". That is, if $f$ is a non-constant analytic function on an open connected set $D \subset \mathbb{C}$, and $f$ has no zeroes in $D$, then $| f |$ cannot attain a minimum in $D$. The proof follows trivially by applying the maximum modulus principle to the function $1/f$, which is analytic on $D$.
Now suppose $D \subset \mathbb{C}$ is open and connected, and $f$ is a non-constant analytic function on $D$. Let $U \subset D$ be open, and let $w_0 \in f(U)$, say $w_0=f(z_0)$ with $z_0 \in U$. We must show that there is a disc centered at $w_0$ which is contained in $f(U)$.
Choose $t>0$ so that $\overline {D_t(z_0)} \subset U$ and $f(z) \neq w_0$ for any $z \in \overline {D_t(z_0)}$ other than $z_0$. Let $m=\inf \{|f(z)-w_0| : |z-z_0|=t \} > 0 $. Suppose $|w-w_0| < m/3$, and that there is no $z \in U$ such that $f(z)=w$. Then the function $g(z)=f(z)-w$ is analytic, non-constant, and has no zeroes in the open connected set $D_t(z_0)$, so the minimum modulus principle shows that $g$ cannot attain a minimum modulus in $D_t(z_0)$. However, $g$ does attain a minimum modulus in the compact set $ \overline {D_t(z_0)} $, so this minimum modulus must occur on the boundary circle defined by $|z-z_0|=t$. But if $|z-z_0|=t$, then
$|g(z)|=|f(z)-w| \geq |f(z)-w_0| - |w_0-w| \geq 2m/3 $, and
$|g(z_0)| = |w_0-w| < m/3 < 2m/3$.
This gives a contradiction since $z_0$ is obviously in the interior of the disc in question. Therefore $f(z)=w$ for some $z \in D_t(z_0)$, and $D_{m/3}(w) \subset f(U)$, showing that $f(U)$ is open and proving the theorem.
|
LaTeX supports many worldwide languages by means of some special packages. In this article is explained how to import and use those packages to create documents in
German.
German language has some special characters. For this reason the preamble of your file must be modified accordingly to support these characters and some other features.
\documentclass{article}
%encoding
%--------------------------------------
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
%--------------------------------------
%German-specific commands
%--------------------------------------
\usepackage[ngerman]{babel}
%--------------------------------------
%Hyphenation rules
%--------------------------------------
\usepackage{hyphenat}
\hyphenation{Mathe-matik wieder-gewinnen}
%--------------------------------------
\begin{document}
\tableofcontents
\vspace{2cm} %Add a 2cm space
\begin{abstract}
Dies ist eine kurze Zusammenfassung der Inhalte des in deutscher Sprache verfassten Dokuments.
\end{abstract}
\section{Einleitendes Kapitel}
Dies ist der erste Abschnitt. Hier können wir einige zusätzliche Elemente hinzufügen und alles wird korrekt geschrieben und umgebrochen werden. Falls ein Wort für eine Zeile zu lang ist, wird \texttt{babel} versuchen je nach Sprache richtig zu trennen.
\section{Eingabe mit mathematischer Notation}
In diesem Abschnitt ist zu sehen, was mit Macros, die definiert worden, geschieht.
\[ \lim x = \theta + 152383.52 \]
\end{document}
There are three packages in this document related to the encoding and the special characters. These packages will be explained in the next sections.
If you are looking for instructions on how to use more than one language in a single document, for instance English and German, see the International language support article.
Modern computer systems allow you to input letters of national alphabets directly from the keyboard. In order to handle a variety of input encodings used for different groups of languages and/or on different computer platforms, LaTeX employs the inputenc package to set up input encoding. In this case the package properly displays characters in the German alphabet. To use this package add the next line to the preamble of your document:
\usepackage[utf8]{inputenc}
The recommended input encoding is utf-8. You can use other encodings depending on your operating system.
For proper LaTeX document generation you must also choose a font encoding that supports the specific characters of the German language; this is accomplished by the fontenc package:
\usepackage[T1]{fontenc}
Even though the default encoding works well for German, using this specific encoding will avoid glitches that occur if you copy text with certain special characters from the generated PDF. The default LaTeX encoding is OT1.
To extend the default LaTeX capabilities, for proper hyphenation and translation of the names of document elements, import the babel package for the German language:
\usepackage[ngerman]{babel}
As you may see in the example in the introduction, instead of "abstract" and "Contents" the German words "Zusammenfassung" and "Inhaltsverzeichnis" are used.
The new orthographic rules approved in 1998 are supported by babel via the ngerman parameter; the german parameter supports the old orthography.
Sometimes for formatting reasons some words have to be broken up into syllables separated by a - (hyphen) to continue the word on a new line. For example, Mathematik could become Mathe-matik. The package babel, whose usage was described in the previous section, usually does a good job breaking up words correctly, but if this is not the case you can use a couple of commands in your preamble.
\usepackage{hyphenat} \hyphenation{Mathe-matik wieder-gewinnen}
The first command imports the package hyphenat and the second line is a list of space-separated words with defined hyphenation rules. On the other hand, if you want a word not to be broken automatically, use the {\nobreak word} command within your document.
Commands enabled for the German language
Command : Description
"a : produces the character ä; can be used with upper-case and lower-case vowels.
"s and "z : produce the German character ß; works on upper-case and lower-case.
"ck : for ck to be hyphenated as k-k.
"ff : for ff to be hyphenated as ff-f; this is also implemented for l, m, n, p, r and t.
"| : disables the ligature at this point.
"- : an explicit hyphen sign that allows hyphenation in the rest of the word.
"` : left German double quotes, „
"' : right German double quotes, “
"< : French left double quotes, «
"> : French right double quotes, »
For more information see
|
How to Model Rotating Machinery in 3D
Electrical machines are an important pillar in modern industrial society. Among the different types of electrical machines, rotating machines such as generators and motors take up a central role. The
Rotating Machinery, Magnetic physics interface in COMSOL Multiphysics is designed specifically for modeling these systems. Follow along as we explore how to model rotating machinery and detail best practices for working with this feature.
The Geometry of a Rotating Machine
In any rotating magnetic machine, there are two parts: the
stator and the rotor, separated by an air gap that enables the rotor's rotation. The Rotating Machinery, Magnetic interface uses the moving mesh approach to model this rotation, since a single fixed finite element mesh cannot represent the relative motion of the two parts.

Geometry of a DC commutated motor that includes two permanent magnets and a rotating winding.
The machine’s geometry is cut (usually along the air gap) into two parts: one containing the stator, and one containing the rotor. The two parts are then meshed separately. During simulation, the part containing the stator remains stationary while the part containing the rotor moves. The two parts with the corresponding meshes are always in contact at the cut boundary.
The geometry must include the air region between the magnets. Red represents a possible choice for the cut boundary.
By default, the last step in a geometry sequence is to finalize by forming a union, uniting all of the geometrical objects and meshing them as a single object. To mesh the two parts separately, the objects must be finalized by forming an assembly. Using unions and other operations, create a single geometry object for the stationary part and another one for the rotating part. Then, choose
Form Assembly in the finalization node of the geometry sequence. During finalization, an identity pair is automatically created under Definitions, identifying the common (contacting) boundaries of the two objects. A close-up of the DC motor’s mesh. The rotating and stationary parts are meshed separately, as indicated by the different positions of the mesh nodes on the two sides. The boundaries highlighted in blue are collected in an identity pair. During rotation, the meshes slide on each other, remaining in contact at the pair.
Watch a video to learn more about using Form Assembly in rotating machinery models.
We can now define the dynamics of the system using the
Rotating Machinery, Magnetic interface. Use the Prescribed Rotation feature to specify an angle of rotation (which can be time dependent) or the Prescribed Rotational Velocity feature to enter a constant angular velocity. After one of these features is applied, the COMSOL Multiphysics software will enable the moving mesh for the selected domains and set up the appropriate transformations of the electromagnetic field. The Prescribed Rotation or Prescribed Rotational Velocity feature must be applied to the rotating part containing the rotor.
What happens at the cut? Physically, the electromagnetic field is continuous in the air gap, assuming a homogeneous material. In contrast to other interior boundaries, continuity of the fields is not automatically imposed across the pair. To enforce this condition, use the
Continuity pair feature on the identity pair.

The Mixed Formulation
The
Rotating Machinery, Magnetic interface solves Maxwell’s equations to compute the distribution of the electromagnetic field. Most quantities of interest (e.g., the applied torque) can be computed once the fields are known. In time-dependent analyses, the interface applies the quasi-static approximation, which neglects the displacement current density, or equivalently assumes that capacitive effects in the machine are negligible. With this approximation, all of the currents in the machine are either externally applied (i.e., through an excited winding) or are eddy currents induced in the machine’s conductive parts. The nonconductive parts, like the air gap, do not carry any current density.
There are two approaches used in this interface to solve Maxwell’s equations: the
vector potential formulation and the scalar potential formulation. In the former approach, a vector field, \mathbf{A} (the magnetic vector potential), is introduced and the approach defines the electric field and the magnetic flux density as

\begin{align}
\mathbf{E} &= -\frac{\partial \mathbf{A}}{\partial t} \\
\mathbf{B} &= \nabla \times \mathbf{A}
\end{align}
With these definitions, the \mathbf{B} and \mathbf{E} fields automatically fulfill two of Maxwell's equations: Faraday's law and the magnetic flux conservation law (or magnetic Gauss' law). These are written as

\begin{align}
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} \\
\nabla \cdot \mathbf{B} &= 0
\end{align}
The equation to be solved is Ampère's law (with the displacement current neglected under the quasi-static approximation):

\nabla \times \mathbf{H} = \mathbf{J}
The vector potential formulation is used in the
Magnetic Fields physics interface.
The scalar potential formulation is only applicable in regions where the electric current density is zero. In this case, a scalar field V_\textrm{m} (the magnetic scalar potential, not to be confused with the electric potential) is introduced, and the approach defines the magnetic field as the negative gradient of this potential. With this definition, Ampère's law is automatically fulfilled and the magnetic flux conservation law is solved. This formulation is used in the
Magnetic Fields, No Currents physics interface.
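In symbols, and assuming a linear material with permeability $\mu$, the scalar potential formulation just described reads:

```latex
\begin{align}
\mathbf{H} &= -\nabla V_\textrm{m}, \\
\nabla \cdot \mathbf{B} = \nabla \cdot (\mu \mathbf{H}) &= 0
\quad\Longrightarrow\quad \nabla \cdot (\mu \nabla V_\textrm{m}) = 0.
\end{align}
```

Ampère's law \nabla \times \mathbf{H} = \mathbf{0} then holds identically in the current-free region, because the curl of a gradient vanishes; only the flux conservation equation remains to be solved.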
Compared to the vector potential formulation, the scalar potential formulation introduces fewer degrees of freedom and leads to an “easier” problem to solve. The downside is, of course, that it can only be used in the absence of currents. Normally, this condition would restrict the applicability to special cases, like stationary studies of permanent magnets. But, thanks to the quasi-static approximation, this formulation can also be applied to nonconductive regions in time-dependent analyses.
In the case of 3D models, the scalar potential approach offers another important advantage. When used with a pair feature such as the
Continuity feature, this formulation ensures a more accurate coupling of the magnetic flux density — a quantity that is central within the modeling of magnetic machines.
These two formulations can also be used together by combining the vector potential formulation for conductive or current-carrying domains and the scalar potential formulation for the air gap and nonconductive domains. Referred to as
mixed formulation, this approach is particularly useful in 3D models due to an increased accuracy of the pair coupling given by the scalar formulation. In 2D models, for in-plane magnetic fields, the discretization scheme used for the vector potential is similar to that used for the scalar potential. Thus, in 2D in-plane cases, using the mixed formulation is not necessary.
By default, the
Rotating Machinery, Magnetic interface applies the Ampère's Law feature (that is, the vector potential formulation) to all domains, as it is the most general formulation. Apply the Magnetic Flux Conservation feature (which implements the scalar potential formulation) to the current-free domains, such as the air gap and other nonconductive regions, overwriting Ampère's Law. The appropriate conditions will be imposed at the interface between the scalar and the vector potential regions by the Mixed Formulation Boundary feature. Note that the Continuity pair feature couples the dependent variables on the two sides of the pair, so make sure that the same formulation is used on either side. For improved numerical stability, a Gauge Fixing for A-Field feature can be applied on all of the vector potential domains, as is often done in the Magnetic Fields interface.

The Ampère's Law feature is applied only to the inner portion of the rotating part, where there is a current-carrying winding. Note that the selected region is smaller than the entire rotating part, which extends to the cut boundary. For increased accuracy, the scalar potential formulation should be used near the pair condition.
Using the mixed formulation is quite simple and straightforward, but keep in mind the mathematical background of the formulations and their limitations. The most important condition on the applicability, and the one most prone to causing errors, is that the scalar potential can only represent an irrotational (curl-free) magnetic field. In practice, there cannot be closed curves in the scalar potential region that completely enclose ("chain") a current.
The reason for this condition derives from the definition of the scalar potential and from Maxwell’s equations. In regions where the scalar potential formulation is used, the integral of the magnetic field along a closed curve is always zero, since the field is the gradient of the potential. At the same time, from Ampère’s law, we know that the integral of the magnetic field along a closed curve must be equal to the total current chained by the curve. Consequently, there is no solution (no possible configuration of the potential) unless the chained current is exactly zero. If we try to solve a problem that does not respect this condition in COMSOL Multiphysics, the solver will not converge. The figure below illustrates this concept, where vector potential regions are represented in blue and scalar potential regions are bounded in gray.
A closed curve in the scalar potential region “chains” a vector potential region that can carry a current (the current return path is on the outside of the geometry). This model may not have a solution.
The figures below represent valid geometries in which scalar potential regions are simply connected, meaning that they do not have vector potential “holes” going all the way through.
Relative Motion and Frames
In a rotating machine, the relative motion of the stator and rotor is central to the machine’s operation. Electromagnetic problems involving bodies in relative motion are not trivial — in fact, over a hundred years ago, questions about this topic sparked the development of the theory of relativity.
Typically, the first step in solving such a problem is to select a frame to use when formulating equations. A
frame is simply a choice of a coordinate system and axes for each point in space. A natural choice is to select a fixed Cartesian coordinate system, sometimes called the “laboratory” frame and referred to as the spatial frame in COMSOL Multiphysics. In this frame, the stationary part is fixed while the rotating part moves.
Another possible choice is to apply a Cartesian coordinate system at each point in space, as done for the spatial frame, but then let the coordinate system follow the movement of the point as it rotates. In this frame, the material constituting the machine is always stationary (the frame itself moves with it), so the frame is called the
material frame. In the stationary part of the machine, the spatial and material frames coincide, since there is no movement. Meanwhile, in the rotating part, the material frame rotates with respect to the spatial frame. Both of these frame choices are equivalent in the sense that they provide the same results, as long as the proper transformations are applied.
By default, the coordinates of the material frame are uppercase letters (X, Y, Z), while the coordinates of the spatial frame are lowercase (x, y, z). The names of the coordinates denote the components of a vector in a certain frame; for example, the electric field components are Ex, Ey, Ez in the spatial frame and EX, EY, EZ in the material frame.
The problem is automatically formulated and solved by the physics in the material frame. For postprocessing, it is often interesting to look at the variables and fields in the spatial frame, as these are quantities seen by an observer at rest with respect to the stator. For this reason, the physics automatically transforms and defines all of the vector fields in the spatial frame. Spatial and material variables are identified in the expression list by the frame in parentheses, as shown in the figure below.
Vector quantities are defined with components in both the spatial frame and the material frame.
Most vector quantities are merely rotated when transformed from the material to the spatial frame, and their norms are invariant. An important exception occurs for the electromagnetic field, particularly the electric field, which transforms according to the Lorentz transformation rules. For nonrelativistic velocities, the fields in the two frames are related by the equations

\begin{align}
\mathbf{B}_\textrm{material} &= \mathbf{B}_\textrm{spatial} \\
\mathbf{E}_\textrm{material} &= \mathbf{E}_\textrm{spatial} + \mathbf{v} \times \mathbf{B}_\textrm{spatial}
\end{align}
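As a quick numerical illustration of the nonrelativistic field transformation above (the vectors here are arbitrary made-up values, not COMSOL output):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def e_material(e_spatial, v, b_spatial):
    """E_material = E_spatial + v x B_spatial (nonrelativistic)."""
    vxb = cross(v, b_spatial)
    return tuple(e + w for e, w in zip(e_spatial, vxb))

# A point moving at 1 m/s along x through a 1 T field along z:
# even with zero spatial E-field, the material frame sees an E-field.
e_mat = e_material((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))  # (0, -1, 0)
```

This is exactly why the rotor magnets, stationary in their own material frame, see no significant induced electric field even though the spatial frame does.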
Let’s begin by looking at the geometry of a 2D generator. In the figure below, the red line indicates the separation between the rotating and the stationary parts. The darker domains depict the permanent magnets in the rotor, while lighter domains indicate iron that can be saturated, and the copper domains represent the generator’s windings. The white region identifies air.
The electric field in the material frame is the field “seen” by the conductive material, driving the current density. In general, it is different from the electric field in the spatial frame, as illustrated below.
Left: The out-of-plane component of the electric field in the spatial frame during the rotation (in V/m). The magnets in the rotor move with respect to an observer in the laboratory frame, so there is an induced electric field. Right: The out-of-plane component of the electric field in the material frame (in V/m). Because the magnets are stationary in the rotating part's frame, there is no significant induced electric field. The electric field in the stationary part is the same in the material frame and the spatial frame.

Setting Up the Solvers
The solver set-up must be tailored to the desired simulation. A
Stationary study can be used to model the rotating machine’s behavior in stationary conditions in which the rotor is fixed and transient effects have decayed. Instead, a Time Dependent step can be used to study what happens during rotation.
When using the
Time Dependent step, it is important to specify the correct initial values that correspond to the physical situation under investigation. If this is the first step in the study, the initial values for the fields are taken from the Initial Value feature (by default, zero). Alternatively, a Stationary step can be solved before the Time Dependent step in order to provide a nonzero initial value for the transient simulation.
In general, a Stationary step is added if excitations are "already active" (e.g., the permanent magnets in the generator), as opposed to excitations that are "turned on" at the beginning of the transient analysis. In models featuring both forms of excitation, like the DC commutated motor, it is important to disable the features responsible for the transient excitation in the Stationary step, at least if the simulation is designed to model behavior when the transient excitation is "turned on".

Summary
An advanced topic by nature, the modeling of rotating machinery can be quite challenging. Here, we have presented some of the concepts involved in the modeling of a rotating magnetic machine as well as the procedures and best practices to follow when working with this interesting application. The Rotating Machinery, Magnetic interface and the Magnetic Fields interface, which constitute the core of the functionality, are powerful tools for analyzing and optimizing these intricate systems.
In a future blog post, we will explore the role of sector symmetry, in addition to these techniques, in modeling rotating machinery in 3D. Stay tuned!
Further Steps

Editor's note: We published a follow-up blog post on this topic on 2/18/16. Read about the key concepts behind modeling 3D rotating machines here.
|
I speak of Math Stackexchange frequently for two reasons: because it is fantastically interesting and because I waste inordinate amounts of time on it. But I would like to again share some of the more interesting things from the exchange here.
Firstly, in my last post on factoring, I spoke of Sophie Germain's identity. I've had a case of Mom's Corollary* with this: a question was recently asked on MathSE to "prove that $x^4 + 4$ is composite for positive integer x." How is this done? In one step, as $x^4 + 4 = (x^2 + 2x + 2)(x^2 - 2x + 2)$. There is the minor task of recognizing that for integers $x > 1$, $x^2 - 2x + 2 > 1$, so the factorization is nontrivial (at $x = 1$ the second factor is $1$, and indeed $1^4 + 4 = 5$ is prime).
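The identity is easy to confirm by brute force; a small Python check (function name mine):

```python
def sophie_germain_factors(x):
    """Return the two factors in x^4 + 4 = (x^2 + 2x + 2)(x^2 - 2x + 2)."""
    return x * x + 2 * x + 2, x * x - 2 * x + 2

# The identity holds for every x; the factorization is nontrivial once x > 1
# (at x = 1 the second factor is 1, and 1^4 + 4 = 5 is prime).
for x in range(1, 1000):
    a, b = sophie_germain_factors(x)
    assert a * b == x ** 4 + 4
```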
Some may be thinking, "What is Mom's Corollary?" Mom's Corollary is a situation named by my high school English teacher, Dr. Covel. It is astounding how often one comes across a new concept right after one has learnt it. In other words, when your mother tells you something, it's surprising how often her advice will come up within the next 3 days. When it does, it's a Mom's Corollary case.
Secondly, there was a question based on an old GRE question.
“A total of x feet of fencing is to form 3 sides of a level rectangular yard. What is the maximum area in terms of x?”
This is not a hard question, except that it defies our normal idea of associating optimal areas with squares and circles. But instead of taking the typical optimization route, the MathSE user Jonas Meyer gives a solution that allows our intuition to soar. The idea is to 'place a mirror' next to the missing side of the rectangular yard. Then the problem becomes to maximize the area in terms of 2x, and to translate it back to the 1x case. I love it when people see such symmetries and shortcuts in problems. (It's now a square, which is super handy.)
Thirdly, I learned of a certain paper with some really interesting identities. It is largely known that $\displaystyle \int _0 ^{\infty} \frac{\sin{(x)}}{x} \mathrm{d} x = \frac{\pi}{2}$. It is not as well known that $\displaystyle \int _0 ^{\infty} \left( \frac{ \sin{(x)} }{x} \right) ^2 \mathrm{d}x = \frac{\pi}{2}$ as well. But this paper references the following, absolutely nonintuitive to me, fact:

$\displaystyle \int _0 ^{\infty} \frac{ \sin{(x)} } {x} \mathrm{d} x = \int _0 ^{\infty} \left( \frac{ \sin{(x)} }{x} \right) ^2 \mathrm{d}x = \frac{\pi}{2} = \sum_{n = 1} ^ {\infty} \frac{\sin{( n)} } {n} + \frac{1}{2} = \sum_{n = 1} ^ {\infty} \left( \frac{\sin{( n)} } {n} \right) ^2 + \frac{1}{2}.$

And therefore, also that:

$\displaystyle \int _{-\infty} ^{\infty} \frac{ \sin{(x)} } {x} \mathrm{d} x = \int _{-\infty} ^{\infty} \left( \frac{ \sin{(x)} }{x} \right) ^2 \mathrm{d}x = \sum_{n = -\infty } ^ {\infty} \frac{\sin{( n)} } {n} = \sum_{n = -\infty } ^ {\infty} \left( \frac{\sin{( n)} } {n} \right) ^2 = \pi.$
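The sum identities can be sanity-checked numerically with partial sums in Python (the truncation point is arbitrary):

```python
import math

def partial_sum(f, n_max):
    """Partial sum of f(n) for n = 1, ..., n_max."""
    return sum(f(n) for n in range(1, n_max + 1))

N = 200_000
s1 = partial_sum(lambda n: math.sin(n) / n, N) + 0.5         # approaches pi/2
s2 = partial_sum(lambda n: (math.sin(n) / n) ** 2, N) + 0.5  # approaches pi/2
```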
Finally, I happened to read a post on whether $\infty$ is an odd or an even number. Okay, this is silly, and not really relevant to understanding the concept of $\infty$. But instead of the "infinity is not a number" response, one of my favorite responses was along the following lines: an even number is a number that can be paired off into equal subdivisions. So in a sense, infinity is even, as it can certainly be paired off (associate $n$ with $n + 1$). Of course, it is also the case that $1 + \omega = \omega$.
Not-so-secretly, this was all just a ploy to give myself a bit more time before I finish the next post in the factorization series. That will come later.
|
This blog post is an excerpt of my ebook Modern R with the tidyverse, which you can read for free here. This is taken from Chapter 7, which deals with statistical models. In the text below, I explain what hyper-parameters are, and as an example I run a ridge regression using the {glmnet} package. The book is still being written, so comments are more than welcome!
Hyper-parameters are parameters of the model that cannot be directly learned from the data. A linear regression does not have any hyper-parameters, but a random forest, for instance, has several. You might have heard of ridge regression, lasso and elasticnet. These are extensions to linear models that avoid over-fitting by penalizing large models. These extensions of the linear regression have hyper-parameters that the practitioner has to tune. There are several ways one can tune these parameters: for example, by doing a grid search, a random search over the grid, or using more elaborate methods. To introduce hyper-parameters, let's get to know ridge regression, also called Tikhonov regularization.
Ridge regression is used when the data you are working with has a lot of explanatory variables, or when there is a risk that a simple linear regression might overfit the training data because, for example, your explanatory variables are collinear. If you train a linear model and then notice that it generalizes very badly to new, unseen data, it is very likely that the linear model you trained overfits the data. In this case, ridge regression might prove useful. The way ridge regression works might seem counter-intuitive; it boils down to fitting a worse model to the training data, but in return, this worse model will generalize better to new data.
The closed form solution of the ordinary least squares estimator is defined as:
\[ \widehat{\beta} = (X'X)^{-1}X'Y \]
where \(X\) is the design matrix (the matrix made up of the explanatory variables) and \(Y\) is the dependent variable. For ridge regression, this closed form solution changes a little bit:
\[ \widehat{\beta} = (X'X + \lambda I_p)^{-1}X'Y \]
where \(\lambda \in \mathbb{R}\) is a hyper-parameter and \(I_p\) is the identity matrix of dimension \(p\) (\(p\) is the number of explanatory variables). The formula above is the closed form solution to the following optimisation program:

\[ \min_{\beta} \sum_{i=1}^n \left(y_i - \sum_{j=1}^px_{ij}\beta_j\right)^2 \]
such that:
\[ \sum_{j=1}^p(\beta_j)^2 < c \]
for any strictly positive \(c\).
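Although the chapter works in R, the closed form formula itself is easy to check numerically. Below is a minimal pure-Python sketch for two standardized features and no intercept (the toy data is made up, not the Housing data): with \(\lambda = 0\) it reproduces ordinary least squares, while a large \(\lambda\) visibly shrinks the coefficients.

```python
import random

def ridge_closed_form_2d(xs, ys, lam):
    """beta = (X'X + lam*I)^(-1) X'y for p = 2 features, solved by Cramer's rule."""
    a = sum(x1 * x1 for x1, _ in xs) + lam
    b = sum(x1 * x2 for x1, x2 in xs)
    d = sum(x2 * x2 for _, x2 in xs) + lam
    r1 = sum(x1 * y for (x1, _), y in zip(xs, ys))
    r2 = sum(x2 * y for (_, x2), y in zip(xs, ys))
    det = a * d - b * b
    return ((d * r1 - b * r2) / det, (a * r2 - b * r1) / det)

random.seed(12345)
xs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
ys = [2.0 * x1 - 1.0 * x2 + random.gauss(0, 0.1) for x1, x2 in xs]

beta_ols = ridge_closed_form_2d(xs, ys, 0.0)     # close to the true (2, -1)
beta_ridge = ridge_closed_form_2d(xs, ys, 50.0)  # shrunk toward zero
```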
The glmnet() function from the {glmnet} package can be used for ridge regression, by setting the alpha argument to 0 (setting it to 1 would do LASSO, and setting it to a number between 0 and 1 would do elasticnet). But in order to compare linear regression and ridge regression, let me first divide the data into a training set and a testing set. I will be using the Housing data from the {Ecdat} package:
library(tidyverse)
library(Ecdat)
library(glmnet)
index <- 1:nrow(Housing)
set.seed(12345)
train_index <- sample(index, round(0.90*nrow(Housing)), replace = FALSE)
test_index <- setdiff(index, train_index)
train_x <- Housing[train_index, ] %>% select(-price)
train_y <- Housing[train_index, ] %>% pull(price)
test_x <- Housing[test_index, ] %>% select(-price)
test_y <- Housing[test_index, ] %>% pull(price)
I do the train/test split this way because glmnet() requires a design matrix as input, and not a formula. Design matrices can be created using the model.matrix() function:
train_matrix <- model.matrix(train_y ~ ., data = train_x)
test_matrix <- model.matrix(test_y ~ ., data = test_x)
To run an unpenalized linear regression, we can set the penalty to 0:
model_lm_ridge <- glmnet(y = train_y, x = train_matrix, alpha = 0, lambda = 0)
The model above provides the same result as a linear regression. Let’s compare the coefficients between the two:
coef(model_lm_ridge)
## 13 x 1 sparse Matrix of class "dgCMatrix"## s0## (Intercept) -3247.030393## (Intercept) . ## lotsize 3.520283## bedrooms 1745.211187## bathrms 14337.551325## stories 6736.679470## drivewayyes 5687.132236## recroomyes 5701.831289## fullbaseyes 5708.978557## gashwyes 12508.524241## aircoyes 12592.435621## garagepl 4438.918373## prefareayes 9085.172469
and now the coefficients of the linear regression (because I provide a design matrix, I have to use lm.fit() instead of lm(), which requires a formula, not a matrix):
coef(lm.fit(x = train_matrix, y = train_y))
## (Intercept) lotsize bedrooms bathrms stories ## -3245.146665 3.520357 1744.983863 14336.336858 6737.000410 ## drivewayyes recroomyes fullbaseyes gashwyes aircoyes ## 5686.394123 5700.210775 5709.493884 12509.005265 12592.367268 ## garagepl prefareayes ## 4439.029607 9085.409155
As you can see, the coefficients are the same. Let's compute the RMSE for the unpenalized linear regression:
preds_lm <- predict(model_lm_ridge, test_matrix)
rmse_lm <- sqrt(mean((preds_lm - test_y)^2))
The RMSE for the linear unpenalized regression is equal to 14463.08.
Let’s now run a ridge regression, with
lambda equal to 100, and see if the RMSE is smaller:
model_ridge <- glmnet(y = train_y, x = train_matrix, alpha = 0, lambda = 100)
and let’s compute the RMSE again:
preds <- predict(model_ridge, test_matrix)
rmse <- sqrt(mean((preds - test_y)^2))
The RMSE for the linear penalized regression is equal to 14460.71, which is smaller than before. But which value of lambda gives the smallest RMSE? To find out, one must run the model over a grid of lambda values and pick the model with the lowest RMSE. This procedure is available in the cv.glmnet() function, which picks the best value for lambda:
best_model <- cv.glmnet(train_matrix, train_y)
# lambda that minimises the MSE
best_model$lambda.min
## [1] 66.07936
According to cv.glmnet(), the best value for lambda is 66.0793576. In the next section, we will implement cross validation ourselves, in order to find the hyper-parameters of a random forest.
|
Prove that $$f(x,y)=\begin{cases}\frac{x|y|}{\sqrt{x^2+y^2}} & \text{ if }(x,y) \ne (0,0)\\ 0 & \text{ otherwise }\end{cases}$$ is differentiable at $(0,0)$.
Proof.
We know that $f$ is differentiable at $(0,0)$ if all partial derivatives exist and are continuous at $(0,0)$. We have $$\frac{\partial f}{\partial x}(0,0)=\lim_{h\to 0}\frac{f(h,0)}{h}=0, \qquad \frac{\partial f}{\partial y}(0,0)=\lim_{h\to 0}\frac{f(0,h)}{h}=0.$$ So the partial derivatives exist at the origin.
I am now stuck at proving the continuity of the partial derivatives at $(0,0)$. Can anyone help me with that?
|
2003.
At that time, the particle was conjectured to be so heavily bound that it would be a tachyon, \(m^2\lt 0\). I actually think that composite tachyons can't exist in tachyon-free theories, can they? (You better believe that such a tachyonic particle is impossible because such a man-made Cosmos-eating tachyonic toplet would be even worse than an Earth-eating strangelet LOL.)
The zodiac, a similarly strange bound state of 12 particles.
Unlike my numerologically driven weakly bound states of new particles, they propose that the particle could be a heavily bound state of 12 top quarks in total.
More precisely, they say that there should be 6 top quarks and 6 top antiquarks in the beast. The number 6 is preferred because all \(2\times 3 = 6\) arrangements of the spin-and-color are represented – both for quarks and antiquarks. So this complete list could potentially make a particle that is as stable as the atom of helium; or the helium-4 nucleus (the alpha-particle). The whole low-lying "shell" is occupied in all these cases!
The binding energy could come from the exchange of the virtual Higgs quanta. Note that for the odd messenger spins, \(J=1,3,5,\dots\), i.e. for electromagnetism, the like charges repel. For the \(J=2\) gravity, the like charges (positive masses) attract. For \(J=0\), the like charges must attract, too. A closer analysis of the signs in the Dirac fermionic bilinears implies that the opposite sources of the Higgs field actually attract as well – so the "sign of the top quark" is ignored. An ironic side effect of this rule is that when a top quark-antiquark pair is created, the total field they produce jumps discontinuously. But unlike the electric charge, the "charge sourcing the Higgs field" isn't conserved, so this jump isn't contradicting anything.
Twelve top quarks have the mass of \(12\times 173\GeV=2076\GeV\), so you need the interaction energy \(-1326\GeV\) to get down to \(750\GeV\). There are \(12\times 11/2=66\) pairs of "tops" (or antitops) in the proposed bound state. If each of them contributes \(-20\GeV\) on average, you will be fine. But do they contribute \(-20\GeV\) in such bound states? Cannot someone just calculate these things, e.g. with some lattice QCD methods? Cannot one see this \(-20\GeV\) in the toponium?
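The arithmetic in this paragraph is easy to check (values as quoted above):

```python
TOP_MASS = 173              # GeV, the top-quark mass used above
total = 12 * TOP_MASS       # 2076 GeV for twelve constituents
binding = total - 750       # 1326 GeV of binding energy required
pairs = 12 * 11 // 2        # 66 distinct pairs among twelve particles
per_pair = binding / pairs  # roughly 20 GeV of binding per pair, as stated
```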
Both authors claim that \(pp\to SS\) where \(S\) is their 12-particle bound state has the cross section of 0.2 pb and 2 pb at \(8\TeV\) and \(13\TeV\), respectively, which seem good enough. The dominant decay modes should be (in this order) \(S\to t\bar t,gg,hh,W^+W^-,ZZ,\) and \(\gamma\gamma\). Given the low status of the diphoton, that doesn't look too good, does it? It is pretty hard to imagine how this complicated beast decays at all – twelve particles have to be liquidated almost simultaneously. That only occurs in some very high order, doesn't it? I am actually surprised by the high production cross section for the same reason.
But the simplicity makes the proposal attractive even if the absence of the Beyond the Standard Model physics could be disappointing at the end.
|
The abraces package can be used for this: swapping/mixing of the brace directions:
\documentclass{article}
\usepackage{abraces,mathtools}% http://ctan.org/pkg/{abraces,mathtools}
\begin{document}
\[
\setbox9=\hbox{$(X \circ X)$}
\operatorname{min} \big|
\mathrlap{\hspace{.5\wd9}\mathclap{\aunderbrace[L1U1R]{\scriptstyle\phantom{\text{Hadamard product}}}_{\text{Hadamard product}}}}
(X \circ X) - (1 \ 2 \ 3 \ \cdots \ M )\big|^2
\]
\[
\operatorname{min} \big|
\underbrace{(X \circ X)}_{\mathclap{\text{Hadamard product}}}
- (1 \ 2 \ 3 \ \cdots \ M )\big|^2
\]
\end{document}
The use of
\box9 is just for finding the correct width of
(X \circ X). That is, some box movement is required in order to place the
\aunderbrace at the correct location. The second option looks better though.
You could use an
\overbrace as well. And, using
\big (and friends) instead of
\left...
\right allows for a better appearance in terms of the absolute delimiters.
|
In this topic:
Dxxxx n+ n- model_name [TEMP=local_temp]
.model model_name SRDIO ( parameters )
Name   Description                                      Units   Default
CJO    Zero bias junction capacitance                   F       0.0
EG     Energy gap                                       eV      1.11
FC     Forward bias depletion capacitance coefficient           0.5
IS     Saturation current                               A       1E-15
MJ     Grading coefficient                                      0.5
N      Forward emission coefficient                             1.0
RS     Series resistance                                Ω       0
TNOM   Parameter measurement temperature                °C      27
TT     Diffusion transit time                           s       5e-6
TAU    Minority carrier lifetime                        s       1e-5
VJ     Built-in potential                               V       1
XTI    Saturation current temperature exponent                  3
The model is based on the paper "A Simple Diode Model with Reverse Recovery" by Peter Lauritzen and Cliff Ma. (See references). The model's governing equations are quite simple and are as follows:
\[ i_d = \frac{q_e - q_m}{TT} \]
\[ \frac{dq_m}{dt} + \frac{q_m}{TAU} - \frac{q_e - q_m}{TT} = 0 \]
\[ q_e = IS \cdot TAU \cdot \left(\exp\left(\frac{v_d}{N\cdot V_t}\right)-1\right) \]
In addition the model uses the standard SPICE equations for junction capacitance and temperature dependence of IS.
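To see the reverse-recovery behavior these governing equations produce, here is a rough explicit-Euler integration in Python. The drive waveform, step size, and thermal voltage \(V_t \approx 25.85\,\mathrm{mV}\) are my own illustrative choices; the parameter values are the defaults from the table above.

```python
import math

IS, N, VT = 1e-15, 1.0, 0.02585  # saturation current, emission coeff., thermal voltage
TT, TAU = 5e-6, 1e-5             # diffusion transit time, minority-carrier lifetime

def simulate(vd_of_t, t_end, dt):
    """Integrate dq_m/dt = (q_e - q_m)/TT - q_m/TAU and record i_d(t)."""
    qm, t, history = 0.0, 0.0, []
    while t < t_end:
        vd = vd_of_t(t)
        qe = IS * TAU * (math.exp(vd / (N * VT)) - 1.0)
        i_d = (qe - qm) / TT
        qm += dt * (i_d - qm / TAU)
        history.append((t, i_d))
        t += dt
    return history

# Forward-bias the diode at 0.7 V, then reverse it at t = 20 us: the stored
# charge q_m drives a transient negative (reverse-recovery) current.
history = simulate(lambda t: 0.7 if t < 20e-6 else -5.0, 40e-6, 1e-8)
i_forward = history[1900][1]  # during forward conduction (positive)
i_reverse = history[2050][1]  # just after the reversal (negative)
```

Note that this sketch omits the junction capacitance, series resistance, and temperature dependence that the full model includes.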
Peter O. Lauritzen, Cliff L. Ma,
A Simple Diode Model with Reverse Recovery, IEEE Transactions on Power Electronics, Vol. 6, No 2, pp 188-191, April 1991.
|
Consider Helly Theorem, taken from notes by Igor Pak:
Let $X_1, \dots, X_n \subset {\mathbb{R}}^2$ be convex regions in the plane such that any triple intersects: $X_i \cap X_j \cap X_k \neq \varnothing$. Then there is a point in all the sets, $X_1 \cap \dots \cap X_n \neq \varnothing$.
This result is not obvious (although Pak's proof is short). However, in every explicit collection of sets I build in which any three intersect, the total intersection is clear. How about this simpler result, also from Pak's book:
Let $P_1, \dots, P_n \in {\mathbb R}^2$ be rectangles with sides parallel to the coordinate axes, such that any two intersect each other. Then all the rectangles have a nonempty intersection.
By Helly's theorem, we only need $n = 3$. What happens if we don't use Helly's theorem and try to prove this result directly?
Let $[x_1, x_1']\times [y_1, y_1], \dots, [x_n, x_n']\times [y_n, y_n'] \subseteq {\mathbb R}^2$ be rectangles in the plane, sides parallel to the $x,y$-axes, such that any two intersect; in terms of intervals this means that for all $i,j$:
$x_i \le x_j'$ and $x_j \le x_i'$, and likewise $y_i \le y_j'$ and $y_j \le y_i'$.
So $\max_i (x_i) \le \min_i (x_i')$ and $\max_i (y_i) \le \min_i (y_i')$, and $[\max_i (x_i) , \min_i (x_i')] \times [\max_i (y_i) , \min_i( y_i')]$ is a nonempty rectangle contained in every $P_i$.
Here, it wasn't hard to find that intersection point even without the reduction from Helly's theorem.
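The argument above is constructive, so it is easy to turn into a few lines of code. A small sketch (the tuple representation of rectangles is my own choice for illustration):

```python
def common_intersection(rects):
    """Common intersection of axis-parallel rectangles given as
    (x, x2, y, y2) tuples, via the max/min argument above."""
    x_lo = max(r[0] for r in rects)
    x_hi = min(r[1] for r in rects)
    y_lo = max(r[2] for r in rects)
    y_hi = min(r[3] for r in rects)
    if x_lo <= x_hi and y_lo <= y_hi:
        return (x_lo, x_hi, y_lo, y_hi)
    return None   # some pair of x- or y-projections is disjoint
```

If the rectangles pairwise intersect, the returned rectangle is nonempty, exactly as in the proof.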
What kind of interesting collections of convex sets result in non-trivial uses of Helly's theorem?
|
Given positive integers $a$, $m$ and $n$, let $s_{a(m)}(n)$ denote the sum of the reciprocals of the prime numbers less than or equal to $n$ which are congruent to $a$ modulo $m$.
Is there an integer $n$ such that $s_{1(3)}(n) > s_{2(3)}(n)$?
For small $n$, the function $s_{2(3)}$ is clearly ahead -- for example it exceeds $1$ already at $n = 59$, while for $s_{1(3)}$ this takes until $n = 3560503$.
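These small values are easy to reproduce with a sieve; a short sketch (the function name `s` simply mirrors the notation $s_{a(m)}(n)$ above):

```python
def s(a, m, n):
    """Sum of reciprocals of the primes p <= n with p ≡ a (mod m)."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # cross out multiples of p
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(1.0 / p for p in range(2, n + 1) if sieve[p] and p % m == a)
```

For example, s(2, 3, 59) is the first partial sum exceeding 1, matching the observation above.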
Even if the answer is no, are there still other residue classes $a(m)$ and $b(m)$ such that the function $$ f_{a(m),b(m)}: \mathbb{N} \rightarrow \mathbb{R}, \ \ n \mapsto s_{a(m)}(n) - s_{b(m)}(n)$$ has infinitely many sign changes?
Does $f_{a(m),b(m)}(n)$ converge for $n \rightarrow \infty$, and if so, is there anything known about the constants $c_{a(m),b(m)} := \lim_{n \rightarrow \infty} f_{a(m),b(m)}(n)$?
|
A random picture of intersecting D-branes
Alternatively, if that bump were real, it could have been a sign of compositeness, a heavy scalar (instead of a spin-one boson), or a triboson pretending to be a diboson. However, on Sunday, six string phenomenologists proposed a much more exciting explanation:
Why would such an ambitious conclusion follow from such a seemingly innocent bump on the road? We need just a little bit of patience to understand this point.
They agree with the defenders of the left-right-symmetric explanation of the bump that the particle that decays in order to manifest itself as the bump is a new spin-one boson, namely a \(Z'\). But its corresponding \(U(1)_a\) symmetry may be anomalous: there may exist a mixed anomaly in the triangle\[
U(1)_a SU(2)_L SU(2)_L
\] with two copies of the regular electroweak \(SU(2)\) gauge group. An anomaly in the gauge group would mean that the field theory is inconsistent. In the characteristic field theory constructions, the right multiplicities and charges of the spectrum are needed to cancel the anomaly. However, string theory has one more trick that may cancel gauge anomalies. It's a trick that actually launched the First Superstring Revolution in 1984.
It's the Green-Schwarz mechanism.
In 1984, Green and Schwarz figured out how the anomaly works in type I superstring theory with the \(SO(32)\) gauge group – it is given by a hexagon diagram in \(d=10\), much like the triangle in \(d=4\) – but the same trick may apply even after compactification. The new spin-one gauge field is told to transform surprisingly nontrivially under a gauge invariance of a seemingly independent field, a two-index field, and the hexagon is then cancelled against a 2+4 tree diagram with the exchange of the two-index field.
In the \(d=4\) case, we may see that this Green-Schwarz mechanism makes the previously anomalous \(U(1)_a\) gauge boson massive – and the "Stückelberg" mass is just an order of magnitude or so lower than the string scale (which they therefore assume to be \(M_s\approx 20\TeV\)). This is normally viewed as an extremely high energy scale which is why these possibilities don't enter the conventional quantum field theoretical models.
But string theory may also be around the corner – in the case of some stringy braneworld models, particularly the intersecting braneworlds. In these braneworlds, which are very concrete stringy realizations of the "old large dimensions" paradigm, the Standard Model fields live on stacks of branes: they take the form of open strings whose basic duty is to stay attached to a D-brane. Some string modes (particles) live near the intersections of the D-brane stacks because one of their endpoints is attached to one stack and the other to the other stack, and the strings always want to be short, not to carry insanely high energy.
To make the story short, the anomaly-producing triangle diagram may also be interpreted as the Feynman diagram for a decay of the new \(Z'\) boson of the \(U(1)_a\) groups into two \(SU(2)_L\) gauge bosons. When the latter pair is decomposed into the basis of the usual particles we know, the decays may be\[
\eq{
Z' &\to W^+ W^-,\\
Z' &\to Z^0 Z^0,\\
Z' &\to Z^0 \gamma
}
\] All these three decays are made unavoidable in the Green-Schwarz-mechanism-based models – and the relative branching ratios are pretty much given. Note that \(W^0\equiv W_3\) is a mixture of \(Z^0\) and \(\gamma\) so all three pairs created from \(Z^0\) and \(\gamma\) would be possible but the Landau-Yang theorem implies that the \(\gamma\gamma\) decay of \(Z'\) is forbidden (the rate is zero) for symmetry reasons.
Their storyline is so predictive that they may tell you that the new coupling constant is \(g_a\approx 0.36\), too.
So if their explanation is right, the bump near \(2\TeV\) will be growing – it may already be growing now: the first Run II results will be announced on EPS-HEP in Vienna, a meeting that starts tomorrow (follow the conference website)! Only about 1 inverse femtobarn of \(13\TeV\) data has been accumulated in 2015 so far – much less than 20-30/fb at \(8\TeV\) in 2012. And if the authors of the paper discussed here are right, one more thing is true. The decay channel \(Z\gamma\) of the new particle will soon be detected as well – and it will be a smoking gun for low-scale string theory!
No known consistent field theory predicts a nonzero \(Z\gamma\) decay rate of the new massive gauge boson. The string-theoretical Green-Schwarz mechanism mixes what looks like a field-theoretical tree-level diagram with a one-loop diagram. Their being on equal footing implies that the regular QFT-like perturbation theory breaks down and instead, there is a hidden loop inside a vertex of the would-be tree-level diagram. This loop can't be expanded in terms of regular particles in a loop, however: it implies some stringy compositeness of the particles and processes.
A smoking gun. This particular one is a smoking gun of something other than string theory, however.
This sounds too good to be true but it may be true. I still think it's very unlikely but these smart authors obviously think it's a totally sensible scenario. It's hard to figure out whether they really impartially believe that these low-scale intersecting braneworlds are likely; or whether their belief mostly boils down to wishful thinking.
If these ideas were right, we could observe megatons of stringy physics with finite-price colliders!
|
Revista Matemática Iberoamericana, Volume 27, Number 3 (2011), 733-750. Finiteness of endomorphism algebras of CM modular abelian varieties. Abstract
Let $A_f$ be the abelian variety attached by Shimura to a normalized newform $f\in S_2(\Gamma_1(N))^{\operatorname{new}}$. We prove that for any integer $n > 1$ the set of pairs of endomorphism algebras $\big( \operatorname{End}_{\overline{\mathbb{Q}}}(A_f) \otimes \mathbb{Q}, \operatorname{End}_\mathbb{Q}(A_f) \otimes \mathbb{Q} \big)$ obtained from all normalized newforms $f$ with complex multiplication such that $\dim A_f=n$ is finite. We determine that this set has exactly 83 pairs for the particular case $n=2$ and show all of them. We also discuss a conjecture related to the finiteness of the set of number fields $\operatorname{End}_\mathbb{Q}(A_f) \otimes \mathbb{Q}$ for the non-CM case.
Article information
Source: Rev. Mat. Iberoamericana, Volume 27, Number 3 (2011), 733-750.
Dates: First available in Project Euclid: 9 August 2011
Permanent link to this document: https://projecteuclid.org/euclid.rmi/1312906776
Mathematical Reviews number (MathSciNet): MR2895332
Zentralblatt MATH identifier: 1256.11037
Citation
González, Josep. Finiteness of endomorphism algebras of CM modular abelian varieties. Rev. Mat. Iberoamericana 27 (2011), no. 3, 733--750. https://projecteuclid.org/euclid.rmi/1312906776
|
A search is reported for massive resonances decaying into a quark and a vector boson (W or Z), or two vector bosons (WW, WZ, or ZZ). The analysis is performed on an inclusive sample of multijet events corresponding to an integrated luminosity of 19.7 inverse femtobarns, collected in proton-proton collisions at a centre-of-mass energy of 8 TeV with the CMS detector at the LHC. The search uses novel jet-substructure identification techniques that provide sensitivity to the presence of highly boosted vector bosons decaying into a pair of quarks. Exclusion limits are set at a confidence level of 95% on the production of: (i) excited quark resonances q* decaying to qW and qZ for masses less than 3.2 TeV and 2.9 TeV, respectively, (ii) a Randall-Sundrum graviton G[RS] decaying into WW for masses below 1.2 TeV, and (iii) a heavy partner of the W boson W' decaying into WZ for masses less than 1.7 TeV. For the first time mass limits are set on W' to WZ and G[RS] to WW in the all-jets final state. The mass limits on q* to qW, q* to qZ, W' to WZ, G[RS] to WW are the most stringent to date. A model with a "bulk" graviton G[Bulk] that decays into WW or ZZ bosons is also studied.
We present the first measurement of the ratio of branching fractions R = B(t->Wb)/B(t->Wq) from ppbar collisions at sqrt(s) = 1.8 TeV. The data set corresponds to 109 pb^-1 of data recorded by the Collider Detector at Fermilab during the 1992-1995 Tevatron run. We measure R = 0.94 +0.31/-0.24 (stat+syst), or R > 0.61 (0.56) at 90 (95)% C.L., in agreement with the standard model predictions. This measurement yields a limit on the Cabibbo-Kobayashi-Maskawa quark mixing matrix element |Vtb| under the assumption of three-generation unitarity.
We present results of searches for diphoton resonances produced both inclusively and also in association with a vector boson (W or Z) using 100 pb^{-1} of p\bar p collisions using the CDF detector. We set upper limits on the product of cross section times branching ratio for both p\bar p\to\gamma\gamma + X and p\bar p\to\gamma\gamma + W/Z. Comparing the inclusive production to the expectations from heavy sgoldstinos we derive limits on the supersymmetry-breaking scale sqrt{F} in the TeV range, depending on the sgoldstino mass and the choice of other parameters. Also, using a NLO prediction for the associated production of a Higgs boson with a W or Z boson, we set an upper limit on the branching ratio for H\to\gamma\gamma. Finally, we set a lower limit on the mass of a `bosophilic' Higgs boson (e.g. one which couples only to \gamma, W, and Z bosons with standard model couplings) of 82 GeV/c^2 at 95% confidence level.
We present results from a measurement of double diffraction dissociation in $\bar pp$ collisions at the Fermilab Tevatron collider. The production cross section for events with a central pseudorapidity gap of width $\Delta\eta^0>3$ (overlapping $\eta=0$) is found to be $4.43\pm 0.02\,\text{(stat)}\pm 1.18\,\text{(syst)}$ mb [$3.42\pm 0.01\,\text{(stat)}\pm 1.09\,\text{(syst)}$ mb] at $\sqrt{s}=1800$ [630] GeV. Our results are compared with previous measurements and with predictions based on Regge theory and factorization.
We present the first model-independent measurement of the helicity of $W$ bosons produced in top quark decays, based on a 1 fb$^{-1}$ sample of candidate $t\bar{t}$ events in the dilepton and lepton plus jets channels collected by the D0 detector at the Fermilab Tevatron $p\bar{p}$ Collider. We reconstruct the angle $\theta^*$ between the momenta of the down-type fermion and the top quark in the $W$ boson rest frame for each top quark decay. A fit of the resulting $\cos\theta^*$ distribution finds that the fraction of longitudinal $W$ bosons is $f_0 = 0.425 \pm 0.166 \hbox{(stat.)} \pm 0.102 \hbox{(syst.)}$ and the fraction of right-handed $W$ bosons is $f_+ = 0.119 \pm 0.090 \hbox{(stat.)} \pm 0.053 \hbox{(syst.)}$, which is consistent at the 30% C.L. with the standard model.
We present a measurement of the top quark pair production cross section in ppbar collisions at sqrt(s)=1.96 TeV using 318 pb^{-1} of data collected with the Collider Detector at Fermilab. We select ttbar decays into the final states e nu + jets and mu nu + jets, in which at least one b quark from the t-quark decays is identified using a secondary vertex-finding algorithm. Assuming a top quark mass of 178 GeV/c^2, we measure a cross section of 8.7 +-0.9 (stat) +1.1-0.9 (syst) pb. We also report the first observation of ttbar with significance greater than 5 sigma in the subsample in which both b quarks are identified, corresponding to a cross section of 10.1 +1.6-1.4(stat)+2.0-1.3 (syst) pb.
We present a measurement of the ttbar cross section using high-multiplicity jet events produced in ppbar collisions at sqrt{s}=1.96 TeV. These data were recorded at the Fermilab Tevatron collider with the D0 detector. Events with at least six jets, two of them identified as b jets, were selected from a 1 fb-1 data set. The measured cross section, assuming a top quark mass of 175 GeV/c^2, is 6.9 \pm 2.0 pb, in agreement with theoretical expectations.
A measurement of the inclusive WW+WZ diboson production cross section in proton-proton collisions is reported, based on events containing a leptonically decaying W boson and exactly two jets. The data sample, collected at sqrt(s) = 7 TeV with the CMS detector at the LHC, corresponds to an integrated luminosity of 5.0 inverse femtobarns. The measured value of the sum of the inclusive WW and WZ cross sections is sigma(pp to WW+WZ) = 68.9 +/- 8.7 (stat.) +/- 9.7 (syst.) +/- 1.5 (lum.) pb, consistent with the standard model prediction of 65.6 +/- 2.2 pb. This is the first measurement of WW+WZ production in pp collisions using this signature. No evidence for anomalous triple gauge couplings is found and upper limits are set on their magnitudes.
The top-quark pair production cross section in 7 TeV center-of-mass energy proton–proton collisions is measured using data collected by the CMS detector at the LHC. The measurement uses events with one jet identified as a hadronically decaying τ lepton and at least four additional energetic jets, at least one of which is identified as coming from a b quark. The analyzed data sample corresponds to an integrated luminosity of 3.9 fb^-1 recorded by a dedicated multijet plus hadronically decaying τ trigger. A neural network has been developed to separate the top-quark pairs from the W+jets and multijet backgrounds. The measured cross section is consistent with the standard model predictions.
A search is performed for pair-produced spin-3/2 excited top quarks $t^*\bar{t}^*$, each decaying to a top quark and a gluon. The search uses data collected with the CMS detector from pp collisions at a center-of-mass energy of $\sqrt{s}$ = 8 TeV, selecting events that have a single isolated muon or electron, an imbalance in transverse momentum, and at least six jets, of which one must be compatible with originating from the fragmentation of a b quark. The data, corresponding to an integrated luminosity of 19.5 fb^-1, show no significant excess over standard model predictions, and provide a lower limit of 803 GeV at 95% confidence on the mass of the spin-3/2 $t^*$ quark in an extension of the Randall-Sundrum model, assuming a 100% branching fraction of its decay into a top quark and a gluon. This is the first search for a spin-3/2 excited top quark performed at the LHC.
A search is presented for particle dark matter produced in association with a pair of top quarks in pp collisions at a centre-of-mass energy of $ \sqrt{s}=8 $ TeV. The data were collected with the CMS detector at the LHC and correspond to an integrated luminosity of 19.7 fb$^{−1}$. This search requires the presence of one lepton, multiple jets, and large missing transverse energy. No excess of events is found above the SM expectation, and upper limits are derived on the production cross section. Interpreting the findings in the context of a scalar contact interaction between fermionic dark matter particles and top quarks, lower limits on the interaction scale are set. These limits are also interpreted in terms of the dark matter-nucleon scattering cross sections for the spin-independent scalar operator and they complement direct searches for dark matter particles in the low mass region.
This paper presents distributions of topological observables in inclusive three- and four-jet events produced in pp collisions at a centre-of-mass energy of 7 $\,\text {TeV}$ with a data sample collected by the CMS experiment corresponding to a luminosity of 5.1 $\,\text {fb}^{-1}$ . The distributions are corrected for detector effects, and compared with several event generators based on two- and multi-parton matrix elements at leading order. Among the considered calculations, MadGraph interfaced with pythia6 displays the overall best agreement with data.
A search for new resonances decaying to WW, ZZ, or WZ is presented. Final states are considered in which one of the vector bosons decays leptonically and the other hadronically. Results are based on data corresponding to an integrated luminosity of 19.7 inverse femtobarns recorded in proton-proton collisions at $\sqrt{s}$ = 8 TeV with the CMS detector at the CERN LHC. Techniques aiming at identifying jet substructures are used to analyze signal events in which the hadronization products from the decay of highly boosted W or Z bosons are contained within a single reconstructed jet. Upper limits on the production of generic WW, ZZ, or WZ resonances are set as a function of the resonance mass and width. We increase the sensitivity of the analysis by statistically combining the results of this search with a complementary study of the all-hadronic final state. Upper limits at 95% confidence level are set on the bulk graviton production cross section in the range from 700 to 10 femtobarns for resonance masses between 600 and 2500 GeV, respectively. These limits on the bulk graviton model are the most stringent to date in the diboson final state.
We report a precise measurement of the weak mixing angle from the ratio of neutral-current to charged-current inclusive cross sections in deep-inelastic neutrino-nucleon scattering. The data were gathered at the CCFR neutrino detector in the Fermilab quadrupole-triplet neutrino beam, with neutrino energies up to 600 GeV. Using the on-shell definition, $\sin^2\theta_W \equiv 1 - \frac{M_W^2}{M_Z^2}$, we obtain $\sin^2\theta_W = 0.2218 \pm 0.0025\,({\rm stat.}) \pm 0.0036\,({\rm exp.\ syst.}) \pm 0.0040\,({\rm model})$.
|
Quick definition of the Laplacian matrix on a triangular mesh. I give an in depth explanation here.
What is commonly called the Laplacian matrix \( \mathbf L \) in the literature of geometry processing is: $$ \mathbf L_{ij} = \left \{ \begin{matrix} w_{ij} = \frac{1}{2} \left( \cot \alpha_{ij} + \cot \beta_{ij} \right) & \text{ if } j \text{ adjacent to } i \\ -\sum\limits_{{j \in \mathcal N(i)}} { w_{ij} } & \text{ when } i = j \\ 0 & \text{ otherwise } \\ \end{matrix} \right . $$
- \( \mathbf L \in \mathbb R^{n \times n} \) with \( n \) the number of vertices of the mesh
- \( \mathbf L_{ij} \) is a single element of the matrix
- the row \( i \) and column \( j \) of the matrix represent vertex indices as well
- \( {j \in \mathcal N(i)} \) is the list of vertices directly adjacent to the vertex \( i \)
- each row of \( \mathbf L \) contains the weights of the vertices adjacent to \( i \)
- \( \mathbf L \) is symmetric; with this sign convention \( -\mathbf L \) is positive semi-definite
On the other hand the
Laplacian *operator* is defined as \( \mathbf {\Delta f} = {\mathbf M}^{-1} \mathbf L \) with the Mass matrix \( \mathbf M \), a diagonal matrix that stores the cell area (blue area on the figure) of each vertex:
$$
\mathbf M^{-1} = \begin{bmatrix} \frac{1}{A_0} & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \frac{1}{A_n}\\ \end{bmatrix} $$ $$A_i = \frac{1}{3} \sum\limits_{{T_j \in \mathcal N(i)}} {\operatorname{area}(T_j)}$$
- \( T_j \in \mathcal N(i) \): the list of triangles adjacent to \( i \)
- \( A_i \) can also be computed with the mixed Voronoi area.

Code
See the [ C++ code ] to build the Laplacian matrix with cotan weights (the get_laplacian() procedure).
If your mesh is represented with a half-edge data structure (each vertex knows its direct neighbours), the pseudo code to compute \( \mathbf L \) is:
// angle(i,j,t) -> angle at the vertex opposite to the edge (i,j)
for(int i : vertices) {
    sum = 0;
    for(int j : one_ring(i)) {
        for(int t : triangle_on_edge(i,j)) {
            w = 0.5 * cot(angle(i,j,t)); // 1/2 factor from the definition
            L(i,j) += w;                 // accumulate both triangles sharing (i,j)
            sum += w;
        }
    }
    L(i,i) = -sum; // diagonal: minus the sum over the whole one-ring
}
On the other hand, the Laplacian \( \mathbf L \) may be built by summing contributions triangle by triangle; this way only the list of triangles is needed:

for(triangle t : triangles) {
    for(edge (i,j) : t) {
        w = 0.5 * cot(angle(i,j,t));
        L(i,j) += w;
        L(j,i) += w;
        L(i,i) -= w;
        L(j,j) -= w;
    }
}
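For reference, here is a self-contained NumPy sketch of the triangle-based construction above (dense matrices for clarity; assumes a manifold triangle mesh), which also accumulates the barycentric vertex areas \( A_i \):

```python
import numpy as np

def cotan_laplacian(V, F):
    """Dense cotangent Laplacian L and barycentric vertex areas A for a
    triangle mesh: V is an (n, 3) float array, F a list of index triples."""
    n = len(V)
    L = np.zeros((n, n))
    A = np.zeros(n)
    for tri in F:
        for k in range(3):
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            # cot of the angle at vertex o, which faces the edge (i, j)
            u, v = V[i] - V[o], V[j] - V[o]
            w = 0.5 * np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            L[i, j] += w
            L[j, i] += w
            L[i, i] -= w
            L[j, j] -= w
        # each vertex of the triangle receives one third of its area
        e1, e2 = V[tri[1]] - V[tri[0]], V[tri[2]] - V[tri[0]]
        A[tri] += 0.5 * np.linalg.norm(np.cross(e1, e2)) / 3.0
    return L, A
```

By construction each row of L sums to zero and L is symmetric, which is a convenient sanity check on any implementation.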
No comments
|
Bob "Hey Alice, let's play rock-paper-scissors :) I will always play paper."
Alice "What if you don't play paper?"
Bob "If I don't play paper, you win if it's a draw [under the original rules] and you draw if you lose [under the original rules]. If I win you have to obey my order."
Alice "What if I get a draw?"
Bob "Then you will have to do me a little favour :P"
Alice "What favour?"
Bob:
The payout table is as follows
It turned out that Alice had scissors and Bob had rock --- a draw under the modified rules --- with consequences equivalent to a loss, as Bob defined them. The analysis from Alice: based on random choices there is a 2/3 chance to win with rock or scissors, but Bob claimed to play paper, so she'd better play scissors. Bob concluded that the risk for Alice is invariant despite the change of rules:
--------------------------------------------------
Consider the following payout matrix.
$P_1 = \begin{bmatrix}1&0&1\\1&1&0\\-1&1&0\end{bmatrix}$, $P_2 = \begin{bmatrix}1&-1&1\\1&1&-1\\-1&1&-1\end{bmatrix}$
Where $P_1$ is the matrix Alice thought she was playing and $P_2$ is the real one. The matrix $P$ is defined as follows: $[P]_{ij}$ is the amount you win if you play the $j$-th option (column) and the opponent plays the $i$-th option (row). (Some textbooks swap the roles of you and your opponent, but it's just a matter of perspective on whether you, the analyzer, are taking part in the game.)
By strong duality we know that the optimal strategies for Bob and Alice have the same expectation. Now suppose Alice plays the three choices with probability vector $\vec{x}$ under payout matrix $P$. Then the expected payout for each of Bob's choices is given by $P\vec{x}$. As the payer, Bob of course wants to minimize what he pays. Then we can set up the following linear program for Alice: maximize her gain given that Bob minimizes it among his three choices.
$\max (\min ([P\vec{x}]_i)) ~~~\text{subject to}~~~\sum [\vec{x}]_i = 1, \vec{x}\geq \vec{0}$.
And by adding dummy variable we have:
$\max z ~~~\text{subject to}~~~P\vec{x} - z\vec{b}\geq \vec{0},~\sum [\vec{x}]_i = 1,~\vec{x}\geq \vec{0}$
Where $\vec{b} = (1,1,1)^T$. This will be the optimal strategy for Alice.
Using $P_1$ we have $\vec{x}^* = (0,0.5,0.5)^T$ with expected gain $0.5$. Using $P_2$ we have the same optimal vector, but the expected gain is zero. It can be concluded that Alice had the perception that she had an advantage in the game, while actually she did not.
The matrix with four $-1$s and five $1$s turns out to be a fair game (mathematically).
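The linear program above can be solved directly; here is a sketch with scipy.optimize.linprog (the helper name game_value is my own, for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def game_value(P):
    """Alice's optimal mixed strategy for payout matrix P:
    maximize z subject to P x >= z*1, sum(x) = 1, x >= 0."""
    m, n = P.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                # linprog minimizes, so minimize -z
    A_ub = np.hstack([-P, np.ones((m, 1))])     # -P x + z <= 0
    b_ub = np.zeros(m)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)]   # x >= 0, z free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

# The two matrices from the analysis above
P1 = np.array([[1, 0, 1], [1, 1, 0], [-1, 1, 0]], dtype=float)
P2 = np.array([[1, -1, 1], [1, 1, -1], [-1, 1, -1]], dtype=float)
```

Solving gives value $0.5$ for $P_1$ and value $0$ for $P_2$, matching the analysis.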
-----------------------------------------------
Gambling at high stakes is of course a totally different game. There are not enough rounds to apply the central limit theorem, so the results do not converge and one cannot simply play the strategy that is optimal in expectation. In these games all factors, including psychological status, must be considered, and here Alice obviously did not have a good enough mind to do such analysis, so she eventually lost the game.
Can you think of other non-trivial matrices which are fair as well?
|
In the sequence 1,2,2,4,8,32,256...
each term (starting from the third term) is the product of the two terms before it. For example, the seventh term is 256 , which is the product of the fifth term (8) and the sixth term (32). This sequence can be continued forever, though the numbers very quickly grow enormous! (For example, the 14'th term is close to some estimates of the number of particles in the observable universe.) What is the last digit of the term of the sequence
I assume he/she means the 14th term. If so, the first 14 terms are:

1, 2, 2, 4, 8, 32, 256, 8192, 2097152, 17179869184, 36028797018963968, 618970019642690137449562112, 22300745198530623141535718272648361505980416, 13803492693581127574869511724554050904902217944340773110325048447598592
This is simple pattern recognition:
the first few terms are 1, 2, 2, 4, 8, 32, 256, 8192, 2097152...
Notice how the last digits form a pattern: 2, 2, 4, 8, 2, 6, 2, 2, 4, 8, 2, 6, 2, ...

The pattern is 2 -> 2 -> 4 -> 8 -> 2 -> 6, so it repeats every 6 terms. Assuming you are trying to find the 14th term, divide 14 by 6 and take the remainder, which is 2.

So look at the 2nd term in the pattern, which is 2. But remember that the first term of the sequence is 1 and is not part of the pattern, so you have to count back a term.

So the answer is 2.
In the sequence 1,2,2,4,8,32,256... each term (starting from the third term) is the product of the two terms before it. For example, the seventh term is 256 , which is the product of the fifth term (8) and the sixth term (32). This sequence can be continued forever, though the numbers very quickly grow enormous! (For example, the 14'th term is close to some estimates of the number of particles in the observable universe.) What is the last digit of the term of the sequence
\(\begin{array}{|lcll|} \hline a_1 &=& 1 \\ a_2 &=& 2\\ a_3 = a_2\cdot a_1 &=& a_2^1 \\ a_4 = a_3\cdot a_2 = a_2^1\cdot a_2 &=& a_2^2 \\ a_5 = a_4\cdot a_3 = a_2^2\cdot a_2^1 &=& a_2^3 \\ a_6 = a_5\cdot a_4 = a_2^3\cdot a_2^3 &=& a_2^5 \\ a_7 = a_6\cdot a_5 = a_2^5\cdot a_2^3 &=& a_2^8 \\ a_8 = a_7\cdot a_6 = a_2^8\cdot a_2^5 &=& a_2^{13}\\ a_9 = a_8\cdot a_7 = a_2^{13}\cdot a_2^8 &=& a_2^{21} \\ \ldots \\ \hline \end{array} \)
\(\begin{array}{|rcll|} \hline && \text{Let } \mathcal{F}_n \text{ denote the Fibonacci numbers}\\\\ \mathcal{F}_1 &=& 1\\ \mathcal{F}_2 &=& 1\\ \mathcal{F}_3 &=& 2\\ \mathcal{F}_4 &=& 3\\ \mathcal{F}_5 &=& 5\\ \mathcal{F}_6 &=& 8\\ \mathcal{F}_7 &=& 13\\ \mathcal{F}_8 &=& 21\\ \mathcal{F}_9 &=& 34\\ \mathcal{F}_{10} &=& 55\\ \mathcal{F}_{11} &=& 89\\ \mathcal{F}_{12} &=& 144\\ \mathcal{F}_{13} &=& 233\\ \mathcal{F}_{14} &=& 377\\ \ldots \\ \hline \end{array}\)
\(\begin{array}{|rcll|} \hline a_n &=& a_2^{\mathcal{F}_{n-1}} \quad & | \quad a_2 = 2 \\ a_n &=& 2^{\mathcal{F}_{n-1}} \\ \hline \end{array} \)
If n = 14:
\(\begin{array}{|rcll|} \hline a_{14} &=& 2^{\mathcal{F}_{14-1}} \\ a_{14} &=& 2^{\mathcal{F}_{13}} \quad & | \quad \mathcal{F}_{13} = 233 \\ a_{14} &=& 2^{233} \\\\ && \text{ The last digit of the term } a_{14} \\ && 2^{233} \pmod{10} \\ &\equiv & 2^{13\cdot 17+12} \pmod{10}\\ &\equiv & \left(2^{13}\right)^{17} 2^{12} \pmod{10} \quad & | \quad 2^{13} \equiv 2 \pmod{10} \\ &\equiv & 2^{17}\cdot 2^{12} \pmod{10} \\ &\equiv & 2^{29} \pmod{10} \\ &\equiv & 2^{13\cdot 2+3} \pmod{10} \\ &\equiv & \left(2^{13}\right)^{2} 2^{3} \pmod{10} \quad & | \quad 2^{13} \equiv 2 \pmod{10} \\ &\equiv & 2^2\cdot 2^3 \pmod{10} \\ &\equiv & 2^{5} \pmod{10} \\ &\equiv & 32 \pmod{10} \\ &\mathbf{\equiv} & \mathbf{ 2 \pmod{10}} \\ \hline \end{array}\)
\(\text{ The last digit of the term $a_{14}$ is $\mathbf{2}$}\)
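Both answers are easy to verify mechanically; a short sketch using the closed form \(a_n = 2^{\mathcal{F}_{n-1}}\) together with modular exponentiation, plus a brute-force cross-check against the sequence itself:

```python
def last_digit(n):
    """Last decimal digit of a_n = 2^(F_(n-1)) for n >= 3, with F_1 = F_2 = 1."""
    a, b = 1, 1                # F_1, F_2
    for _ in range(n - 3):
        a, b = b, a + b        # afterwards b = F_(n-1)
    return pow(2, b, 10)       # modular exponentiation, no huge numbers needed

# Brute-force cross-check: build the sequence directly
terms = [1, 2]
while len(terms) < 14:
    terms.append(terms[-1] * terms[-2])
```

Here last_digit(14) returns 2 and terms[13] equals 2^233, matching both solutions above.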
|
Given: $L_1, L_2$ are regular languages. We define the language $L$.
$L = \{uw \mid \exists v \in \Sigma^*, uv \in L_1, vw \in L_2\}$.
I need to prove that $L$ is a regular language using only Closure Properties.
Given a language $K \subseteq A^*$ and a word $v\in A^*$, recall that $$ v^{-1}K = \{ u \in A^* \mid vu \in K\} \quad \text{and} \quad Kv^{-1} = \{ u \in A^* \mid uv \in K\} $$ If $K$ is regular, then $v^{-1}K$ and $Kv^{-1}$ are regular. Furthermore, there are only finitely many sets of the form $v^{-1}K$ (respectively $Kv^{-1}$). Now your language can be written as $$ L = \bigcup_{v\in A^*} (L_1v^{-1}) (v^{-1}L_2) $$ and hence $L$ is regular: although the union ranges over infinitely many $v$, only finitely many distinct terms occur.
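The finiteness of the left quotients is concrete at the DFA level: reading $v$ from the start state of a DFA for $K$ and restarting there yields a DFA for $v^{-1}K$, so there are at most $|Q|$ distinct quotients. A sketch with a hypothetical three-state DFA for $K = \{$ words over $\{a,b\}$ ending in $ab \}$:

```python
def left_quotient_start(delta, start, v):
    """State reached after reading v; the same DFA restarted here
    recognizes the left quotient v^{-1}K."""
    q = start
    for ch in v:
        q = delta[(q, ch)]
    return q

# Hypothetical DFA for K = { words over {a, b} ending in "ab" }:
# state '0' = no progress, '1' = last letter was a, '2' = accept
delta = {('0', 'a'): '1', ('0', 'b'): '0',
         ('1', 'a'): '1', ('1', 'b'): '2',
         ('2', 'a'): '1', ('2', 'b'): '0'}
```

Every word $v$ lands in one of only three states, so $K$ has at most three distinct left quotients.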
|
Find all functions $f:\mathbb{Z} \to \mathbb{Z}$ such that $f(0)=1$ and $$f(f(n))=f(f(n+2)+2)=n \quad \forall n \in \mathbb{Z}.$$
My approach: Plugging in some values, it is not hard to see that $f(n)=1-n$ satisfies the given relation. I claim that $f(k)=1-k$ for all $k \in \mathbb{Z}$.
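As a quick numerical sanity check (not a proof, of course) that the candidate $f(n)=1-n$ satisfies both conditions:

```python
def f(n):
    return 1 - n   # candidate solution

assert f(0) == 1
for n in range(-1000, 1001):
    assert f(f(n)) == n            # first condition
    assert f(f(n + 2) + 2) == n    # second condition
```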
I just cannot see a way to use the relation and induct on $k$ to prove my hypothesis. Am I missing something obvious? Please help as I am new to functional equations. Also please share some online resources to solve functional equations as I am preparing for Olympiads.
Thank you.
|
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Linear and non-linear flow modes in Pb-Pb collisions at $\sqrt{s_{\rm NN}} =$ 2.76 TeV
(Elsevier, 2017-10)
The second and the third order anisotropic flow, $V_{2}$ and $V_3$, are mostly determined by the corresponding initial spatial anisotropy coefficients, $\varepsilon_{2}$ and $\varepsilon_{3}$, in the initial density ...
Measurement of deuteron spectra and elliptic flow in Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV at the LHC
(Springer, 2017-10)
The transverse momentum ($p_{\rm T}$) spectra and elliptic flow coefficient ($v_2$) of deuterons and anti-deuterons at mid-rapidity ($|y|<0.5$) are measured with the ALICE detector at the LHC in Pb-Pb collisions at ...
Measuring K$^0_{\rm S}$K$^{\rm \pm}$ interactions using Pb-Pb collisions at ${\sqrt{s_{\rm NN}}=2.76}$ TeV
(Elsevier, 2017-11)
We present the first ever measurements of femtoscopic correlations between the K$^0_{\rm S}$ and K$^{\rm \pm}$ particles. The analysis was performed on the data from Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV measured ...
Charged-particle multiplicity distributions over a wide pseudorapidity range in proton-proton collisions at √s = 0.9, 7, and 8 TeV
(Springer, 2017-12-09)
We present the charged-particle multiplicity distributions over a wide pseudorapidity range (−3.4<η<5.0 ) for pp collisions at √s=0.9,7 , and 8 TeV at the LHC. Results are based on information from the Silicon Pixel Detector ...
|
Here are some notes for my talk
Finding Congruent Numbers, Arithmetic Progressions of Squares, and Triangles (an invitation to analytic number theory), which I’m giving on Tuesday 26 February at Macalester College.
The slides for my talk are available here.
The overarching idea of the talk is to explore the deep relationship between
right triangles with rational side lengths and area $n$, three-term arithmetic progressions of squares with common difference $n$, and rational points on the elliptic curve $Y^2 = X^3 - n^2 X$.
If one of these exists, then all three exist, and in fact there are one-to-one correspondences between each of them. Such an $n$ is called a
congruent number.
By understanding this relationship, we also describe the ideas and results in the paper A Shifted Sum for the Congruent Number Problem, which I wrote jointly with Tom Hulse, Chan Ieong Kuan, and Alex Walker.
Towards the end of the talk, I say that in practice, the best way to decide if a (reasonably sized) number is congruent is through elliptic curves. Given a computer, we can investigate whether the number $n$ is congruent through a computer algebra system like sage.
For the rest of this note, I’ll describe how one can use sage to determine whether a number is congruent, and how to use sage to add points on elliptic curves to generate more triangles corresponding to a particular congruent number.
Firstly, one needs access to sage. It’s free to install, but it’s quite large. The easiest way to begin using sage immediately is to use cocalc.com, a free interface to sage (and other tools) that was created by William Stein, who also created sage.
In a sage session, we can create an elliptic curve through
> E6 = EllipticCurve([-36, 0])
> E6
Elliptic Curve defined by y^2 = x^3 - 36*x over Rational Field
More generally, to create the curve corresponding to whether or not $n$ is congruent, you can use
> n = 6  # (or anything you want)
> E = EllipticCurve([-n**2, 0])
We can ask sage whether our curve has many rational points by asking it to (try to) compute the rank.
> E6.rank()
1
If the rank is at least $1$, then there are infinitely many rational points on the curve and $n$ is a congruent number. If the rank is $0$, then $n$ is not congruent.
For the curve $Y^2 = X^3 - 36 X$ corresponding to whether $6$ is congruent, sage returns that the rank is $1$. We can ask sage to try to find a rational point on the elliptic curve through
> E6.point_search(10)
[(-3 : 9 : 1)]
The 10 in this code is a limit on the complexity of the point. The precise definition isn't important — using $10$ is a reasonable limit for us.
We see that this outputs something. When sage examines the elliptic curve, it uses the equation $Y^2 Z = X^3 - 36 X Z^2$ — it turns out that in many cases, it's easier to perform computations when every term is a polynomial of the same degree. The coordinates it's giving us are of the form $(X : Y : Z)$, which looks a bit odd. We can ask sage to return just the XY coordinates as well.
> Pt = E6.point_search(10)[0]  # The [0] means to return the first element of the list
> Pt.xy()
(-3, 9)
In my talk, I describe a correspondence between points on elliptic curves and rational right triangles. In the talk, it arises as the choice of coordinates. But what matters for us right now is that the correspondence taking a point $(x, y)$ on an elliptic curve to a triangle $(a, b, c)$ is given by
$$(x, y) \mapsto \Big( \frac{n^2-x^2}{y}, \frac{-2 \cdot x \cdot n}{y}, \frac{n^2 + x^2}{y} \Big).$$
We can write a sage function to perform this map for us, through
> def pt_to_triangle(P):
>     x, y = P.xy()
>     return (36 - x**2)/y, (-2*x*6/y), (36 + x**2)/y
> pt_to_triangle(Pt)
(3, 4, 5)
This returns the $(3, 4, 5)$ triangle!
Of course, we knew this triangle the whole time. But we can use sage to get more points. A very cool fact is that rational points on elliptic curves form a group under a sort of addition — we can add points on elliptic curves together and get more rational points. Sage is very happy to perform this addition for us, and then to see what triangle results.
> Pt2 = Pt + Pt
> Pt2.xy()
(25/4, -35/8)
> pt_to_triangle(Pt2)
(7/10, 120/7, -1201/70)
Another rational triangle with area $6$ is the $(7/10, 120/7, 1201/70)$ triangle. (You might notice that sage returned a negative hypotenuse, but it’s the absolute values that matter for the area). After scaling this to an integer triangle, we get the integer right triangle $(49, 1200, 1201)$ (and we can check that the squarefree part of the area is $6$).
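As a quick sanity check (plain Python, my own addition, independent of sage), we can confirm that $(49, 1200, 1201)$ is a right triangle whose area has squarefree part $6$:

```python
a, b, c = 49, 1200, 1201
assert a * a + b * b == c * c   # it is a right triangle

area = a * b // 2               # 29400
n, d = area, 2
while d * d <= n:               # strip all square factors from the area
    while n % (d * d) == 0:
        n //= d * d
    d += 1
print(n)  # 6, so the triangle corresponds to the congruent number 6
```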
Let’s do one more.
> Pt3 = Pt + Pt + Pt
> Pt3.xy()
(-1587/1369, -321057/50653)
> pt_to_triangle(Pt3)
(-4653/851, -3404/1551, -7776485/1319901)
That’s a complicated triangle! It may be fun to experiment some more — the triangles rapidly become very, very complicated. In fact, it was very important to the main result of our paper that these triangles become so complicated so quickly!
|
To amplify Yehuda Lindell's answer even within a single family of groups, here is an example of a 2046-bit modulus $p$ for which discrete logs in $(\mathbb Z/p\mathbb Z)^\times$, as in standard modular multiplication Diffie–Hellman, are extraordinarily easy to compute:
0x2465a7bd85011e1c9e0527929fff268c82ef7efa416863baa5acdb0971dba0ccac3ee4999345029f2cf810b99e406aac5fce5dd69d1c717daea5d18ab913f456505679bc91c57d46d9888857862b36e2ede2e473c1f0ab359da25271affe15ff240e299d0b04f4cd0e4d7c0e47b1a7ba007de89aae848fd5bdcd7f9815564eb060ae14f19cb50c291f0bbd8ed1c4c7f8fc5fba51662001939b532d92dac844a8431d400c832d039f5f900b278a75219c2986140c79045d7759540854c31504dc56f1df5eebe7bee447658b917bf696d6927f2e2428fbeb340e515cb9835d63871be8bbe09cf13445799f2e67788151571a93b4c1eee55d1b9072e0b2f5c4607f.
2046-bit moduli should be secure, right? No, not at all. First of all, it helps if they're prime (or at least if you don't know their prime factorization, but for public DH or DSA parameters, you would like to know this sort of thing about them). This number $p$ is not prime; rather, it is the product of the first 232 prime numbers except 2. So, given fixed $h$, to solve $$g^x \equiv h \pmod p$$ for $x$, it suffices to compute the discrete logs $x_3$, $x_5$, $x_7$,
etc., satisfying
\begin{align*} g^{x_3} &\equiv h \pmod 3, \\ g^{x_5} &\equiv h \pmod 5, \\ &\vdots\end{align*}
independently, and then assemble them into a solution for $x$ with the Chinese remainder theorem:
\begin{align*} x &\equiv x_3 \pmod{\phi(3)}, \\ x &\equiv x_5 \pmod{\phi(5)}, \\ &\vdots\end{align*}
where $\phi(3) = 3 - 1$, $\phi(5) = 5 - 1$,
etc., since they're all prime.
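A small Python sketch (my own illustration, not from the original answer) of the same idea on a toy modulus: solve tiny discrete logs prime by prime, then recombine with a CRT taken over the (possibly non-coprime) orders of $g$:

```python
from math import gcd

def order(g, q):
    # multiplicative order of g modulo the prime q
    t, k = g % q, 1
    while t != 1:
        t = t * g % q
        k += 1
    return k

def dlog_small(g, h, q):
    # brute-force x with g^x = h (mod q); trivial when q is small
    t = 1
    for x in range(q):
        if t == h % q:
            return x
        t = t * g % q
    raise ValueError("no solution")

def crt(residues, moduli):
    # CRT allowing non-coprime moduli (the system must be consistent)
    x, m = 0, 1
    for r, q in zip(residues, moduli):
        d = gcd(m, q)
        if (r - x) % d:
            raise ValueError("inconsistent system")
        t = (r - x) // d * pow(m // d, -1, q // d) % (q // d)
        x += m * t
        m = m // d * q
        x %= m
    return x

primes = [3, 5, 7, 11]
p = 3 * 5 * 7 * 11                            # smooth composite modulus, as above
g, secret = 2, 37
h = pow(g, secret, p)
xs = [dlog_small(g, h, q) for q in primes]    # easy dlogs modulo each small prime
x = crt(xs, [order(g, q) for q in primes])    # recombine modulo the orders
assert pow(g, x, p) == h
```

With $g = 2$ and secret exponent $37$, the recovered $x$ is again $37$, since $37 < \mathrm{lcm}(2,4,3,10) = 60$; each per-prime brute force touches at most $11$ values.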
This algorithm is so cheap that you could do it with
pen and paper using schoolbook long division and schoolbook multiplication in a few hours. But the easiness of this algorithm means nothing about how easy or hard it is to compute discrete logs in $(\mathbb Z/q\mathbb Z)^\times$ where $q$ is the RFC 3526 Group #14 modulus
0xffffffffffffffffc90fdaa22168c234c4c6628b80dc1cd129024e088a67cc74020bbea63b139b22514a08798e3404ddef9519b3cd3a431b302b0a6df25f14374fe1356d6d51c245e485b576625e7ec6f44c42e9a637ed6b0bff5cb6f406b7edee386bfb5a899fa5ae9f24117c4b1fe649286651ece45b3dc2007cb8a163bf0598da48361c55d39a69163fa8fd24cf5f83655d23dca3ad961c62f356208552bb9ed529077096966d670c354e4abc9804f1746c08ca18217c32905e462e36ce3be39e772c180e86039b2783a2ec07a28fb5c55df06f4c52c9de2bcbf6955817183995497cea956ae515d2261898fa051015728e5a8aacaa68ffffffffffffffff.
The same algorithm using the Chinese remainder theorem doesn't work because $q$ is prime, so you can't solve the problem independently modulo all its factors and then recombine the results into a solution.
The choice of field, curve shape, curve parameters,
etc., has the same kind of impact on the ease or difficulty of computing elliptic-curve discrete logs as the choice of modulus has on the ease or difficulty of computing modular multiplication discrete logs.
|
I'm studying for qualifying exams and ran into this problem.
Show that if $\{a_n\}$ is a nonincreasing sequence of positive real numbers such that $\sum_n a_n$ converges, then $\lim_{n \rightarrow \infty} n a_n = 0$.
Using the definition of the limit, this is equivalent to showing
\begin{equation} \forall \varepsilon > 0 \; \exists n_0 \text{ such that } |n a_n| < \varepsilon \; \forall n > n_0 \end{equation}
or
\begin{equation} \forall \varepsilon > 0 \; \exists n_0 \text{ such that } a_n < \frac{\varepsilon}{n} \; \forall n > n_0 \end{equation}
Basically, the terms must eventually decay faster than any fixed multiple of the harmonic sequence $1/n$. Thanks, I'm really stuck on this seemingly simple problem!
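Not a proof, but a quick numerical illustration (my own) of the dichotomy: for the convergent series with $a_n = 1/n^2$ we see $n a_n \to 0$, while for the divergent harmonic series $a_n = 1/n$ the product $n a_n$ stays at $1$:

```python
# a_n = 1/n^2 (summable): n*a_n = 1/n -> 0.  a_n = 1/n (not summable): n*a_n = 1.
for n in (10, 100, 1000, 10000):
    print(n, n * (1 / n**2), n * (1 / n))
```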
|
We present an algebraic multigrid preconditioner which uses only the graphs of system matrices. Some elementary coarsening rules are stated, from which an advancing front algorithm for the selection of coarse grid nodes is derived. This technique can be applied to linear Lagrange-type finite element discretizations; for higher-order elements an extension of the multigrid algorithm is provided. Both two- and three-dimensional second order elliptic problems can be handled. Numerical experiments show that the resulting convergence acceleration is comparable to classical geometric multigrid.
Our focus is on Maxwell's equations in the low frequency range; two specific applications we aim at are time-stepping schemes for eddy current computations and the stationary double-curl equation for time-harmonic fields. We assume that the computational domain is discretized by triangles or tetrahedrons; for the finite element approximation we choose N\'{e}d\'{e}lec's $H(curl)$-conforming edge elements of the lowest order. For the solution of the arising linear equation systems we devise an algebraic multigrid preconditioner based on a spatial component splitting of the field. Mesh coarsening takes place in an auxiliary subspace, which is constructed with the aid of a nodal vector basis. Within this subspace coarse grids are created by exploiting the matrix graphs. Additionally, we have to cope with the kernel of the $curl$-operator, which comprises a considerable part of the spectral modes on the grid. Fortunately, the kernel modes are accessible via a discrete Helmholtz decomposition of the fields; they are smoothed by additional algebraic multigrid cycles. Numerical experiments are included in order to assess the efficacy of the proposed algorithms.
KASKADE 3.0 User's Guide (1995)
KASKADE 3.x was developed for the solution of partial differential equations in one, two, or three space dimensions. Its object-oriented implementation concept is based on the programming language C++. Adaptive finite element techniques are employed to provide solution procedures of optimal computational complexity. This implies a posteriori error estimation, local mesh refinement and multilevel preconditioning. The program was designed both as a platform for further developments of adaptive multilevel codes and as a tool to tackle practical problems. Up to now we have implemented scalar problem types like stationary or transient heat conduction. The latter one is solved with the Rothe method, enabling adaptivity both in space and time. Some nonlinear phenomena like obstacle problems or two-phase Stefan problems are incorporated as well. Extensions to vector-valued functions and complex arithmetic are provided. This report helps to work with KASKADE. Especially we
- study a set of examples,
- explain how to define a user's problem and
- introduce a graphical user interface.
We are extending this guide continuously. The latest version is available by network.
KASKADE 3.0 was developed for the solution of partial differential equations in one, two, or three space dimensions. Its object-oriented implementation concept is based on the programming language C++. Adaptive finite element techniques are employed to provide solution procedures of optimal computational complexity. This implies a posteriori error estimation, local mesh refinement and multilevel preconditioning. The program was designed both as a platform for further developments of adaptive multilevel codes and as a tool to tackle practical problems. Up to now we have implemented scalar problem types like stationary or transient heat conduction. The latter one is solved with the Rothe method, enabling adaptivity both in space and time. Some nonlinear phenomena like obstacle problems or two-phase Stefan problems are incorporated as well. Extensions to vector-valued functions and complex arithmetic are provided. We have implemented several iterative solvers for both symmetric and unsymmetric systems together with multiplicative and additive multilevel preconditioners. Systems arising from the nonlinear problems can be solved with recently developed monotone multigrid methods.
The focus of this paper is on the efficient solution of boundary value problems involving the double-curl operator. Those arise in the computation of electromagnetic fields in various settings, for instance when solving the electric or magnetic wave equation with implicit timestepping, when tackling time-harmonic problems or in the context of eddy-current computations. Their discretization is based on N\'ed\'elec's $H(curl;\Omega)$-conforming edge elements on unstructured grids. In order to capture local effects and to guarantee a prescribed accuracy of the approximate solution, adaptive refinement of the grid controlled by a posteriori error estimators is employed. The hierarchy of meshes created through adaptive refinement forms the foundation for the fast iterative solution of the resulting linear systems by a multigrid method. The guiding principle underlying the design of both the error estimators and the multigrid method is the separate treatment of the kernel of the curl-operator and its orthogonal complement. Only on the latter do we have proper ellipticity of the problem. Yet, exploiting the existence of computationally available discrete potentials for edge element spaces, we can switch to an elliptic problem in potential space to deal with the nullspace of curl. Thus both cases become amenable to strategies of error estimation and multigrid solution developed for second order elliptic problems. The efficacy of the approach is confirmed by numerical experiments which cover several model problems and an application to waveguide simulation.
Hyperthermia Treatment Planning in Clinical Cancer Therapy: Modelling, Simulation and Visualization (1997)
The speaker and his co-workers in Scientific Computing and Visualization have established a close cooperation with medical doctors at the Rudolf-Virchow-Klinikum of the Humboldt University in Berlin on the topic of regional hyperthermia. In order to permit a patient-specific treatment planning, a special software system (HyperPlan) has been developed. A mathematical model of the clinical system (radio frequency applicator with 8 antennas, water bolus, individual patient body) involves Maxwell's equations in inhomogeneous media and a so-called bio-heat transfer PDE describing the temperature distribution in the human body. The electromagnetic field and the thermal phenomena need to be computed at a speed suitable for the clinical environment. An individual geometric patient model is generated as a quite complicated tetrahedral "coarse" grid (several thousands of nodes). Both Maxwell's equations and the bio-heat transfer equation are solved on that 3D-grid by means of adaptive multilevel finite element methods, which automatically refine the grid where necessary in view of the required accuracy. Finally, optimal antenna parameters for the applicator are determined. All steps of the planning process are supported by powerful visualization methods. Medical images, contours, grids, simulated electromagnetic fields and temperature distributions can be displayed in combination. A number of new algorithms and techniques had to be developed and implemented. Special emphasis has been put on advanced 3D interaction methods and user interface issues.
A widely used approach for the computation of time-harmonic electromagnetic fields is based on the well-known double-curl equation for either $\vec E$ or $\vec H$. An appealing choice for finite element discretizations are edge elements, the lowest order variant of a $H(curl)$-conforming basis. However, the large nullspace of the curl-operator gives rise to serious drawbacks. It comprises a considerable part of all spectral modes on the finite element grid, polluting the solution with non-physical contributions and causing the deterioration of standard iterative solvers. We tackle these problems by a nested multilevel algorithm. After every V-cycle in the $H(curl)$-conforming basis, the non-physical contributions are removed by a projection scheme. It requires the solution of Poisson's equation in the nullspace, which can be carried out efficiently by another multilevel iteration. The whole procedure yields convergence rates independent of the refinement level of the mesh. Numerical examples demonstrate the efficiency of the method.
After a short summary on therapy planning and the underlying technologies we discuss quantitative medicine by giving a short overview on medical image data, summarizing some applications of computer based treatment planning, and outlining requirements on medical planning systems. Then we continue with a description of our medical planning system HyperPlan. It supports typical working steps in therapy planning, like data acquisition, segmentation, grid generation, numerical simulation and optimization, accompanying these with powerful visualization and interaction techniques.
Electromagnetic phased arrays for regional hyperthermia -- optimal frequency and antenna arrangement (2000)
In this paper we investigate the effects of the three-dimensional arrangement of antennas and frequency on temperature distributions that can be achieved in regional hyperthermia using an electromagnetic phased array. We compare the results of power-based and temperature-based optimization. Thus we are able to explain the discrepancies between previous studies favouring more antenna rings on the one hand and more antennas per ring on the other hand. We analyze the sensitivity of the results with respect to changes in amplitudes and phases as well as patient position. This analysis can be used for different purposes. First, it provides additional criteria for selecting the optimal frequency. Second, it can be used for specifying the required phase and amplitude accuracy for a real phased array system. Furthermore, it may serve as a basis for technological developments in order to reduce both types of sensitivities described above.
|
Brazilian Journal of Probability and Statistics
Braz. J. Probab. Stat., Volume 30, Number 1 (2016), 127-144.
Assigning probabilities to hypotheses in the context of a binomial distribution
Abstract
Given is the outcome $s$ of $S\sim{\mathrm{B}}(n,p)$ ($n$ known, $p$ fully unknown) and two numbers $0<a\leq b<1$. Required are probabilities $\alpha_{<}(s)$, $\alpha_{a,b}(s)$, and $\alpha_{>}(s)$ of the hypotheses $\mathrm{H}_{<}$: $p<a$, $\mathrm{H}_{a,b}$: $a\leq p\leq b$, and $\mathrm{H}_{>}$: $p>b$, such that their sum is equal to 1. The degenerate case $a=b(=c)$ is of special interest. A method, optimal with respect to a class of functions, is derived under Neyman–Pearsonian restrictions, and applied to a case from medicine.
Article information
Source: Braz. J. Probab. Stat., Volume 30, Number 1 (2016), 127-144.
Dates: Received: January 2013. Accepted: September 2014. First available in Project Euclid: 19 January 2016.
Permanent link to this document: https://projecteuclid.org/euclid.bjps/1453211806
Digital Object Identifier: doi:10.1214/14-BJPS264
Mathematical Reviews number (MathSciNet): MR3453518
Zentralblatt MATH identifier: 1381.62046
Citation
Albers, Casper J.; Kardaun, Otto J. W. F.; Schaafsma, Willem. Assigning probabilities to hypotheses in the context of a binomial distribution. Braz. J. Probab. Stat. 30 (2016), no. 1, 127--144. doi:10.1214/14-BJPS264. https://projecteuclid.org/euclid.bjps/1453211806
|
In my last post, I mentioned I would post my article proper on WordPress. Someone then told me about latex2wp, a python script that will translate a tex file into something postable on WordPress. So I did it, and it works pretty well! Other than changing references (removing them) and a few stylistic things here and there, and any \begin{align} type environments, it works perfectly.
So here it is:
In this, I present a method of quickly counting the number of lattice points below a quadratic of the form $latex {y = \frac{a}{q}x^2}$. In particular, I show that if we know the number of lattice points in the interval $latex {[0,q-1]}$, then we have a closed form for the number of lattice points in any interval $latex {[hq, (h+1)q - 1]}$. This method was inspired by the collaborative Polymath4 Finding Primes Project, and in particular the guidance of Dr. Croot from Georgia Tech.
1. Intro
Suppose we have the quadratic $latex {f(x) = \frac{p}{q}x^2}$. In short, we separate the lattice points into regions and find a relationship between the number of lattice points in one region with the number of lattice points in other regions. Unfortunately, the width of each region is $latex {q}$, so that this does not always guarantee much time-savings. The motivating problem, from the Polymath4 project, involves sums of the form
$latex \displaystyle \sum_{d \leq x \leq m} \left\lfloor \frac{N}{x} \right\rfloor \ \ \ \ \ (1)$
In particular, suppose we write $latex {x = d + n}$, so that we have $latex {\left\lfloor \dfrac{N}{d + n} \right\rfloor}$. Then, expanding $latex {\dfrac{N}{d+n}}$ like $latex {\dfrac{1}{x}}$, we see that
$latex \displaystyle \frac{N}{d+n} = \frac{N}{d} - \frac{N}{d^2} (n - d) + O\left(\frac{N}{d^3} \cdot (n-d)^2 \right) \ \ \ \ \ (2)$
$latex \displaystyle \sum \left\lfloor \frac{N}{d+n} \right\rfloor = \sum \left\lfloor \frac{N}{d} - \frac{N}{d^2} (n - d) + O\left(\frac{N}{d^3} \cdot (n-d)^2 \right) \right\rfloor \ \ \ \ \ (3)$
Now, I make a great, largely unfounded leap. This is
almost like a quadratic, so what if it were? And then, what if that quadratic were tremendously simple, with no constant nor linear term, and with the only remaining term having a rational coefficient? Then what could we do?
2. The Method
$latex \displaystyle \left\lfloor \frac{a}{q} (x+q)^2 \right\rfloor = \left\lfloor \frac{a}{q} (x^2 + 2xq + q^2) \right\rfloor = \left\lfloor \frac{a}{q}x^2 \right\rfloor + 2ax + aq \ \ \ \ \ (4)$
$latex \displaystyle \sum_{x=0}^{q-1} \left\lfloor\frac{a}{q}x^2\right\rfloor = \sum_{x=q}^{2q-1} \left\lfloor \frac{a}{q}x^2 \right\rfloor – \sum_{x=0}^{q-1} (2ax + aq) \ \ \ \ \ (5)$
Now I adopt the notation $latex {S_{a,b} := \sum_{x = a}^b \left\lfloor \frac{a}{q}x^2 \right\rfloor}$, so that we can rewrite equation 5 as
$latex \displaystyle S_{0,q-1} = S_{q, 2q-1} – \sum_0^{q-1} (2ax + aq) $
$latex \displaystyle S_{0,q-1} = S_{q, 2q-1} – a(q-1)(q) – aq^2 \ \ \ \ \ (6)$
$latex \displaystyle S_{0,q-1} = S_{hq, (h+1)q-1} – \sum_0^{q-1}(2ahx + ahq) \ \ \ \ \ (7)$
$latex \displaystyle \lambda S_{0,q-1} = \sum_{h=1}^\lambda \left( S_{hq, (h+1)q - 1} - h\sum_0^{q-1}(2ax + aq) \right) = S_{q, (\lambda + 1)q-1} - \sum_{h=1}^{\lambda} h \left(\sum_0^{q-1} (2ax + aq) \right) = S_{q, (\lambda + 1)q-1} - \frac{\lambda (\lambda +1)}{2}\left[aq(q-1) + aq^2\right] \ \ \ \ \ (8)$
So, in short, if we know the number of lattice points under the parabola on the interval $latex {[0,q-1]}$, then we know in $latex {O(1)}$ time the number of lattice points under the parabola on an interval $latex {[0,(\lambda + 1)q-1]}$.
Unfortunately, when I have tried to take this method back to the Polymath4-type problem, I haven't yet been able to rein in the error terms. But I suspect that there is more to be done using this method.
|
The original article rightfully neglects the cost of DES computations (there are fewer than $2^{90}$ of them) and everything except memory accesses to its
Table 1 and Table 2. I go one step further: considering that Table 1 is initialized only once and then read-only, it could be in ROM, and I neglect all except the accesses to Table 2. The attack requires an expected $2^{88}$ random writes and as many random reads to Table 2, organized as $2^{25}$ words of 124 bits each.
The cheap PC that I bought today (of May 2012) came with 4 GByte of DDR3 DRAM, as a single 64-bit-wide DIMM with 16 DRAM chips each $2^{28}\cdot 8$-bit, costing about \$1 per chip in volume. Bigger chips exist: my brand new 32-GByte server uses 64 chips each $2^{29}\cdot 8$-bit, and these are becoming increasingly common (though price per bit is still higher than for the mainstream $2^{28}\cdot 8$-bit chips).
Two mainstream $2^{28}\cdot 8$-bit chips hold one instance of
Table 2, and one 124-bit word can be accessed as 8 consecutive 8-bit locations in each of the two chips simultaneously (consecutive accesses are like 15 times faster than random accesses). One $2^{29}\cdot 8$-bit chip would be slightly slower.
Assuming DDR3-1066 with 7-cycle latency (resp. DDR3-1333 with 9-cycle latency), 8 consecutive accesses require at least $(7\cdot 2+7)/1066\approx 0.020$ µs (resp. $(9\cdot 2+7)/1333\approx 0.019$ µs). This is a decimal order of magnitude less than considered in the original article. For each instance of
Table 2, that is 0.5 GByte, we can perform at most $365\cdot 86400\cdot 10^6/0.019/2\approx 2^{49.6}$ read+write accesses per year to Table 2 using mainstream DRAM. Thus with $n$ GByte of mainstream DRAM, and unless I err somewhere, the expected duration is $2^{37.4}/n$ years.
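The arithmetic above can be reproduced in a few lines (my numbers, mirroring the DDR3-1333 estimate):

```python
import math

us_per_year = 365 * 86400 * 10**6         # microseconds in one year
burst_us = (9 * 2 + 7) / 1333             # DDR3-1333, 9-cycle latency: ~0.019 us per burst
rw_per_year = us_per_year / burst_us / 2  # read+write pairs per Table 2 instance per year
print(round(math.log2(rw_per_year), 1))   # 49.6

# 2^88 expected read+write pairs; 1 GByte holds two 0.5-GByte Table 2 instances
years_with_1_gbyte = 2**88 / (rw_per_year * 2)
print(round(math.log2(years_with_1_gbyte), 1))  # 37.4, i.e. 2^37.4 / n years with n GByte
```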
Based on press releases of a serious reference, there are fewer than $2^{31}$ PCs around, and assuming that my cheap PC is representative, that's $2^{33}$ GByte. Another way to look at that is that each 0.25-GByte chip costs about \$1; and the DRAM revenue in 2011 is less than \$$2^{35}$, thus enough for $2^{33}$ GByte (but notice that most of the revenue is from chips that are not optimized for cost per bit). I'll guesstimate all the RAM ever built is equivalent to at most $2^{35}$ GByte of mainstream DRAM for the purpose of the attack.
Thus at the end of the day, my answer is: the attack in the original article, updated to use all the RAM chips ever built by mankind to mid 2012 at the maximum of their potential, has an expected duration of
at least 5 years; or equivalently has odds at best 20% to succeed in one year.
Update: as noted by the authors of the original article, "the execution time is not particularly sensitive to the number of plaintext/ciphertext pairs $n$ (provided that $n$ is not too small) because as $n$ increases, the number of operations required for the attack ($2^{120-\log_2 n}$) decreases, but memory requirements increase, and the number of machines that can be built with a fixed amount of money decreases". By the same argument, our required amount of RAM is not much changed if we get more known plaintext/ciphertext pairs.
|
I think I understand the mechanism of water vapor feedback in climate change pretty well: a (theoretical) temperature increase due to some isolated factor (e.g. increase in $CO_2$ concentration) causes a shift in water vapor saturation density and an additional evaporation rate. The consequential shift in actual water vapor concentration causes additional absorption together with isotropic re-emission, slowing down the radiation transport. So the theoretical temperature rise $\Delta T_{nofb}$ without consideration of feedback is "amplified" by some factor $\beta$, giving the real temperature rise, with feedback
$\Delta T_{wfb} = \beta \Delta T_{nofb}$
Of course I know that this argument has to be replaced by solving a differential equation in a more rigorous treatment. There is no "before" and "after", there is only the combined effect.
But what if we consider the "theoretical" temperature change due to the change in solar irradiation from orbital eccentricity? Shouldn't this be amplified by the same water vapor feedback mechanism as well?
Solar irradiation varies over the year between roughly $1320$ $W/m^2$ (aphelion) and $1410$ $W/m^2$ (perihelion), a swing of about $90$ $W/m^2$. According to Stefan-Boltzmann's Law in differential form we would have a heating of the ground by direct absorption of sunlight
$\frac{dP}{P}=4 \frac{dT}{T}$
or
$\frac{\Delta P}{P}\approx 4 \frac{\Delta T}{T}$
So I would expect the temperature change due to yearly changes in solar irradiation (but without the effect of water vapor feedback) to be about
$\Delta T \approx \frac{T}{4} \frac{\Delta P}{P} = \frac{290 K}{4} \frac{90 W/m^2 }{1366 W/m^2}\approx 4.8 K$
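Plugging the numbers in (a back-of-the-envelope check of the estimate above):

```python
T = 290.0    # representative surface temperature, K
P = 1366.0   # mean solar irradiance, W/m^2
dP = 90.0    # perihelion-to-aphelion swing, W/m^2

dT = T / 4 * dP / P   # differential Stefan-Boltzmann: dT = (T/4) * dP/P
print(round(dT, 1))   # 4.8 K, before any feedback
```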
The most popular answer to this question shows a plot that seems to indicate that the order of magnitude is indeed like that: the global line shows a variation of about $3.8$ $K$. I don't know if this was derived from measurements, but I assume that it is somehow substantiated.
But if we now adopt the assumption that these ~5 Kelvin get amplified by the water vapor feedback as well, and take into account that this amplification is said to be in the order of magnitude ~2 in the context of $CO_2$ entry by mankind, I would expect the amplified yearly temperature change due to the orbital eccentricity of earth to be $~10$ $K$.
Is the linked answer wrong, or is there some flaw in the argument?
|
Let $K$ be a number field with integral basis $\{\omega_1,\ldots,\omega_n\}$. The affine variety $A_K$ defined by $$ N_{K/{\mathbb Q}}(X_1 \omega_1 + \ldots + X_n \omega_n) = 1 $$ is an algebraic group, the group structure coming from multiplication of units with norm $1$; in fact, $A_K$ is a norm-1 torus. For pure cubic extensions ${\mathbb Q}(\sqrt[3]{m})$ with $m \not \equiv \pm 1 \bmod 9$, the unit variety is defined by $$ X_1^3 + mX_2^3 + m^2X_3^3 - 3mX_1X_2X_3 = 1, $$ for example.
The affine part of the variety $A_K$ is smooth; the affine part of its reduction modulo $p$ is smooth if and only if $p \nmid \Delta$, where $\Delta$ denotes the discriminant of $K$.
For each prime $p \nmid \Delta$ let $N_r$ denote the number of ${\mathbb F}_q$-rational points on $A_K$, where $q = p^r$. Define the Hasse-Weil zeta function $$ Z_p(T) = \exp\bigg( \sum_{r=1}^\infty N_r \frac{T^r}r \bigg). $$ This zeta function has the following properties:
The zeta function $Z_p(T)$ is a rational function of $T$; the degrees of numerator and denominator are equal. More exactly, $Z_p(T)$ can be written in the form $$ Z_p(T) = \begin{cases} \frac{P_0(T) P_2(T) \cdots P_{n-1}(T)}{P_1(T)P_3(T) \cdots P_{n}(T)} & \text{ if $n$ is odd}, \\ \frac{P_1(T)P_3(T) \cdots P_{n-1}(T)}{P_0(T) P_2(T) \cdots P_{n}(T)} & \text{ if $n$ is even}, \end{cases} $$ where $P_j(T)$ is a product of terms of the form $1 - \zeta p^{j}T$ for suitable roots of unity $\zeta$. The actual factors $P_j(T)$ essentially depend only on the prime ideal factorization of $p$ in $K$.
Moreover, $Z_p(\infty) = \lim_{T \to \infty} Z_p(T)$ exists and satisfies $Z_p(\infty) = \epsilon_p$, where $\epsilon_p = \chi(p)/p$ for Pell conics (unit varieties for quadratic extensions) and $\epsilon_p = \pm 1$ in general.
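For Pell conics the point counts behind $N_r$ can be checked by brute force. The sketch below (my own sanity check, not taken from the question) uses the standard fact that $x^2 - dy^2 = 1$ has $p - \chi(d)$ affine points over ${\mathbb F}_p$, where $\chi$ is the quadratic character mod $p$:

```python
def chi(a, p):
    # quadratic character (Legendre symbol) of a modulo an odd prime p
    t = pow(a % p, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def pell_points(d, p):
    # affine points on the Pell conic x^2 - d*y^2 = 1 over F_p
    return sum((x * x - d * y * y) % p == 1
               for x in range(p) for y in range(p))

for p in (5, 7, 11, 13):
    assert pell_points(2, p) == p - chi(2, p)
print(pell_points(2, 7), chi(2, 7))  # 6 1
```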
The zeta function $Z_p(T)$ admits a functional equation of the form $$ Z_p\Big(\epsilon_p \frac1{p^nT}\Big) = \eta_p Z_p(T)^{(-1)^n} $$ for some constant $\eta_p$ depending only on $p$.
The global zeta function $Z_K(s)$ is constructed as follows: set $L_p(s) = P_{n-1}(p^{-s})$ and $Z(s) = \prod_p L_p(s)$. Then, up to Euler factors at the ramified primes, $Z(s) = \zeta_K(s+n-2)/\zeta(s+n-2)$, where $\zeta_K$ is the Dedekind zeta function of $K$.
My impression is that the case of Pell conics is slightly different from the general case because Pell conics are smooth even at infinity.
I am unaware of almost any of the results on norm-1 tori obtained in the last 30 years, and my main question is:
Is all of this a special case of known results on algebraic tori, and if yes, what are the relevant references?
BTW, the rationality and the functional equation seem to be known in quite general situations. So a more precise question would be whether these unit varieties have received any special attention.
|
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
|
I gave an introduction to sage tutorial at the University of Warwick Computational Group seminar today, 2 November 2017. Below is a conversion of the sage/jupyter notebook I based the rest of the tutorial on. I said many things which are not included in the notebook, and during the seminar we added a few additional examples and took extra consideration to a few different calls. But for reference, the notebook is here.
The notebook itself (as a jupyter notebook) can be found and viewed on my github (link to jupyter notebook). When written, this notebook used a Sage 8.0.0.rc1 backend kernel and ran fine on the standard Sage 8.0 release, though I expect it to work with any recent official version of sage. The last cell requires an active notebook to be seen (or some way to export jupyter widgets to standalone javascript or something; this either doesn’t yet exist, or I am not aware of it).
I will also note that I converted the notebook for display on this website using jupyter’s nbconvert package. I have some CSS and syntax coloring set up that affects the display.
Good luck learning sage, and happy hacking.
Sage¶
Sage (also known as SageMath) is a general purpose computer algebra system written on top of the python language. In Mathematica, Magma, and Maple, one writes code in the mathematica-language, the magma-language, or the maple-language. Sage is python.
But no python background is necessary for the rest of today’s guided tutorial. The purpose of today’s tutorial is to give an indication about how one really uses sage, and what might be available to you if you want to try it out.
I will spoil the surprise by telling you upfront the two main points I hope you’ll take away from this tutorial.
1. With tab-completion and documentation, you can do many things in sage without ever having done them before.
2. The ecosystem of libraries and functionality available in sage is tremendous, and (usually) pretty easy to use.

Lightning Preview¶
Let’s first get a small feel for sage by seeing some standard operations and what typical use looks like through a series of trivial, mostly unconnected examples.
# Fundamental manipulations work as you hope
2+3
5
You can also subtract, multiply, divide, exponentiate…
>>> 3-2
1
>>> 2*3
6
>>> 2^3
8
>>> 2**3 # (also exponentiation)
8
There is an order of operations, but these things work pretty much as you want them to work. You might try out several different operations.
Sage includes a lot of functionality, too. For instance,
factor(-1008)
-1 * 2^4 * 3^2 * 7
list(factor(1008))
[(2, 4), (3, 2), (7, 1)]
In the background, Sage is actually calling on pari/GP to do this factorization. Sage bundles lots of free and open source math software within it (which is why it’s so large), and provides a common access point. The great thing here is that you can often use sage without needing to know much pari/GP (or other software).
Sage knows many functions and constants, and these are accessible.
sin(pi)
0
exp(2)
e^2
Sage tries to internally keep expressions in exact form. To present approximations, use
N().
N(exp(2))
7.38905609893065
pi
pi
N(pi)
3.14159265358979
You can ask for a number of digits in the approximation by giving a
digits keyword to
N().
N(pi, digits=60)
3.14159265358979323846264338327950288419716939937510582097494
There are some benefits to having smart representations. In many computer algebra systems, computing (sqrt(2))^2 doesn’t give you back 2, due to finite floating point arithmetic. But in sage, this problem is sometimes avoided.
sqrt(2)
sqrt(2)
sqrt(2)**2
2
Of course, there are examples where floating point arithmetic gets in the way.
In sage/python, integers have unlimited digit length. Real precision arithmetic is a bit more complicated, which is why sage tries to keep exact representations internally. We don’t go into tracking digits of precision in sage, but it is usually possible to prescribe levels of precision.
Sage is written in python. You can use python functions in sage. [You can also import python libraries in sage, which extends sage’s reach significantly]. The
range function in python counts up to a given number, starting at 0.
range(16)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
A = matrix(4,4, range(16))
A
[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]
B = matrix(4,4, range(-5, 11))
B
[-5 -4 -3 -2] [-1 0 1 2] [ 3 4 5 6] [ 7 8 9 10]
Sage can be smart about using the same operators in different contexts. (i.e. sage is very polymorphic). Multiplying, adding, subtracting, and dividing matrices works as expected.
A*B
[ 26 32 38 44] [ 42 64 86 108] [ 58 96 134 172] [ 74 128 182 236]
There are two sorts of ways that functions are called in sage. For some functions, you create the object (in this case, the matrix A), type a dot
., and then call the function.
A.charpoly()
x^4 - 30*x^3 - 80*x^2
There are some top-level functions as well.
factor(A.charpoly())
x^2 * (x^2 - 30*x - 80)
Sometimes you start with an object, such as a matrix, and you wonder what you can do with it. Sage has very good tab-completion and introspection in its notebook interface.
Try typing
A.
and hit
<Tab>. Sage should display a list of things it thinks it can do to the matrix A.
Warning¶
Note that on CoCalc or external servers, tab completion sometimes has a small delay.
A.
Some of these are more meaningful than others, but you have a list of options. If you want to find out what an option does, like
A.eigenvalues(), then type
A.eigenvalues?
and hit enter.
A.eigenvalues?
A.eigenvalues()
[0, 0, -2.464249196572981?, 32.46424919657298?]
If you’re really curious about what’s going on, you can type
A.eigenvalues??
which will also show you the implementation of that functionality. (You usually don’t need this).
A.eigenvalues??
There is a lot of domain-specific functionality within sage as well. We won’t dwell too much on any particular functionality in this tutorial, but for example sage knows about elliptic curves.
E = EllipticCurve([1,2,3,4,5])
E
Elliptic Curve defined by y^2 + x*y + 3*y = x^3 + 2*x^2 + 4*x + 5 over Rational Field
# Tab complete me to see what's available
E.
E.conductor()
10351
E.rank()
1
Sage knows about complex numbers as well. Use
i or
I to mean a $\sqrt{-1}$.
(1+2*I) * (pi - sqrt(5)*I)
(2*I + 1)*pi - (I - 2)*sqrt(5)
c = 1/(sqrt(3)*I + 3/4 + sqrt(29)*2/3)
Sage tries to keep computations in exact form. So
c is stored with perfect representations of square roots.
c
12/(8*sqrt(29) + 12*I*sqrt(3) + 9)
But we can have sage give numerical estimates of objects by calling
N() on them.
N(c)
0.198754342458965 - 0.0793188720015053*I
N(c, 20) # Keep 20 "bits" of information
0.19875 - 0.079319*I
As one more general topic before we jump into a few deeper examples, sage is also very good at representing objects in latex. Use
latex(<object>) to give a latex representation.
latex(c)
\frac{12}{8 \, \sqrt{29} + 12 i \, \sqrt{3} + 9}
latex(E)
y^2 + x y + 3 y = x^{3} + 2 x^{2} + 4 x + 5
latex(A)
\left(\begin{array}{rrrr} 0 & 1 & 2 & 3 \\ 4 & 5 & 6 & 7 \\ 8 & 9 & 10 & 11 \\ 12 & 13 & 14 & 15 \end{array}\right)
You can have sage print the LaTeX version in the notebook by using
pretty_print
pretty_print(A)
H = DihedralGroup(6)
H.list()
[(), (1,6)(2,5)(3,4), (1,2,3,4,5,6), (1,5)(2,4), (2,6)(3,5), (1,3,5)(2,4,6), (1,4)(2,3)(5,6), (1,6,5,4,3,2), (1,4)(2,5)(3,6), (1,2)(3,6)(4,5), (1,5,3)(2,6,4), (1,3)(4,6)]
a = H[1]
a
(1,6)(2,5)(3,4)
a.order()
2
b = H[2]
b
(1,2,3,4,5,6)
a*b
(2,6)(3,5)
for elem in H:
    if elem.order() == 2:
        print elem
(1,6)(2,5)(3,4) (1,5)(2,4) (2,6)(3,5) (1,4)(2,3)(5,6) (1,4)(2,5)(3,6) (1,2)(3,6)(4,5) (1,3)(4,6)
# Or, in the "pythonic" way
elements_of_order_2 = [elem for elem in H if elem.order() == 2]
elements_of_order_2
[(1,6)(2,5)(3,4), (1,5)(2,4), (2,6)(3,5), (1,4)(2,3)(5,6), (1,4)(2,5)(3,6), (1,2)(3,6)(4,5), (1,3)(4,6)]
rand_elem = H.random_element()
rand_elem
(1,3)(4,6)
G_sub = H.subgroup([rand_elem])
G_sub
Subgroup of (Dihedral group of order 12 as a permutation group) generated by [(1,3)(4,6)]
# Explicitly using elements of a group
H("(1,2,3,4,5,6)") * H("(1,5)(2,4)")
(1,4)(2,3)(5,6)

Exercises¶
The real purpose of these exercises is to show you that it’s often possible to use tab-completion to quickly find out what is and isn’t possible to do within sage.
1. What things does sage know about this subgroup? Can you find the cardinality of the subgroup? (Note that the subgroup is generated by a random element, and your subgroup might be different than your neighbor’s.)
2. Can you list all subgroups of the dihedral group in sage?
3. Sage knows other groups as well. Create a symmetric group on 5 elements. What does sage know about that? Can you verify that S5 isn’t simple?
4. Create some cyclic groups.

Changing the Field¶
It’s pretty easy to work over different fields in sage as well. I show a few examples of this below.
# It may be necessary to use `reset('x')` if x has otherwise been defined
K.<alpha> = NumberField(x**3 - 5)
K
Number Field in alpha with defining polynomial x^3 - 5
alpha
alpha
alpha**3
5
(alpha+1)**3
3*alpha^2 + 3*alpha + 6
GF?
F7 = GF(7)
a = 2/5
a
2/5
F7(a)
6
var('x')
x
eqn = x**3 + sqrt(2)*x + 5 == 0
a = solve(eqn, x)[0].rhs()
a
-1/2*(I*sqrt(3) + 1)*(1/6*sqrt(8/3*sqrt(2) + 225) - 5/2)^(1/3) + 1/6*sqrt(2)*(-I*sqrt(3) + 1)/(1/6*sqrt(8/3*sqrt(2) + 225) - 5/2)^(1/3)
latex(a)
-\frac{1}{2} \, {\left(i \, \sqrt{3} + 1\right)} {\left(\frac{1}{6} \, \sqrt{\frac{8}{3} \, \sqrt{2} + 225} - \frac{5}{2}\right)}^{\frac{1}{3}} + \frac{\sqrt{2} {\left(-i \, \sqrt{3} + 1\right)}}{6 \, {\left(\frac{1}{6} \, \sqrt{\frac{8}{3} \, \sqrt{2} + 225} - \frac{5}{2}\right)}^{\frac{1}{3}}}
pretty_print(a)
# Also RR, CC
QQ
Rational Field
K.<b> = QQ[a]
K
Number Field in a with defining polynomial x^6 + 10*x^3 - 2*x^2 + 25
a.minpoly()
x^6 + 10*x^3 - 2*x^2 + 25
K.class_number()
5

Exercise¶
Sage tries to keep the same syntax even across different applications. Above, we factored a few integers. But we may also try to factor over a number field. You can factor 2 over the Gaussian integers by:
1. Create the Gaussian integers. The constructor ZZ[I] works.
2. Get the Gaussian integer 2 (which is programmatically different than the typical integer 2), by something like ZZ[I](2).
3. factor that 2.
# Let's declare that we want x and y to mean symbolic variables
x = 1
y = 2
print(x+y)
reset('x')
reset('y')
var('x')
var('y')
print(x+y)
3
x + y
solve(x^2 + 3*x + 2, x)
[x == -2, x == -1]
solve(x^2 + y*x + 2 == 0, x)
[x == -1/2*y - 1/2*sqrt(y^2 - 8), x == -1/2*y + 1/2*sqrt(y^2 - 8)]
# Nonlinear systems with complicated solutions can be solved as well
var('p,q')
eq1 = p+1==9
eq2 = q*y+p*x==-6
eq3 = q*y**2+p*x**2==24
s = solve([eq1, eq2, eq3, y==1], p,q,x,y)
s
[[p == 8, q == -26, x == (5/2), y == 1], [p == 8, q == 6, x == (-3/2), y == 1]]
s[0]
[p == 8, q == -26, x == (5/2), y == 1]
latex(s[0])
\left[p = 8, q = \left(-26\right), x = \left(\frac{5}{2}\right), y = 1\right]
# We can also do some symbolic calculus
f = x**2 + 2*x + 1
f
x^2 + 2*x + 1
diff(f, x)
2*x + 2
integral(f, x)
1/3*x^3 + x^2 + x
F = integral(f, x)
F(x=1)
7/3
diff(sin(x**3), x)
3*x^2*cos(x^3)
# Compute the 4th derivative
diff(sin(x**3), x, 4)
81*x^8*sin(x^3) - 324*x^5*cos(x^3) - 180*x^2*sin(x^3)
# We can try to foil sage by giving it a hard integral
integral(sin(x)/x, x)
-1/2*I*Ei(I*x) + 1/2*I*Ei(-I*x)
f = sin(x**2)
f
sin(x^2)
# And sage can give Taylor expansions
f.taylor(x, 0, 20)
1/362880*x^18 - 1/5040*x^14 + 1/120*x^10 - 1/6*x^6 + x^2
f(x,y)=y^2+1-x^3-x
contour_plot(f, (x,-pi,pi), (y,-pi,pi))
contour_plot(f, (x,-pi,pi), (y,-pi,pi), colorbar=True, labels=True)
# Implicit plots
f(x,y) = -x**3 + y**2 - y + x + 1
implicit_plot(f(x,y)==0,(x,0,2*pi),(y,-pi,pi))
Exercises¶

1. Experiment with the above examples by trying out different functions and plots.
2. Sage can do partial fractions for you as well. To do this, you first define the function you want to split up. Suppose you call it f. Then you use f.partial_fraction(x). Try this out.
3. Sage can also create 3d plots. Create one. Start by looking at the documentation for plot3d.
Of the various math software, sage+python provides my preferred plotting environment. I have used sage to create plots for notes, lectures, classes, experimentation, and publications. You can quickly create good-looking plots. For example, I used sage/python extensively in creating this note for my students on Taylor Series (which is a classic “hard topic” that students have lots of questions about, at least in the US universities I’m familiar with. To this day, about 1/6 of the traffic to my website is to see that page).
As a non-trivial example, I present the following interactive plot.
@interact
def g(f=sin(x), c=0, n=(1..30),
      xinterval=range_slider(-10, 10, 1, default=(-8,8), label="x-interval"),
      yinterval=range_slider(-50, 50, 1, default=(-3,3), label="y-interval")):
    x0 = c
    degree = n
    xmin,xmax = xinterval
    ymin,ymax = yinterval
    p = plot(f, xmin, xmax, thickness=4)
    dot = point((x0,f(x=x0)), pointsize=80, rgbcolor=(1,0,0))
    ft = f.taylor(x, x0, degree)
    pt = plot(ft, xmin, xmax, color='red', thickness=2, fill=f)
    show(dot + p + pt, ymin=ymin, ymax=ymax, xmin=xmin, xmax=xmax)
    html('$f(x)\;=\;%s$'%latex(f))
    html('$P_{%s}(x)\;=\;%s+R_{%s}(x)$'%(degree,latex(ft),degree))
Additional Resources and Comments¶
There are a variety of tutorials and resources for learning more about sage. I list several here.
1. Sage provides some tutorials of its own. These include its Guided Tour and the Standard Sage Tutorial. The Standard Sage Tutorial is designed to take 2-4 hours to work through, and afterwards you should have a pretty good sense of the sage environment.
2. PREP Tutorials are a set of tutorials created in a program sponsored by the Mathematical Association of America, aimed at working with university students with sage. These tutorials are designed for people both new to sage and to programming.
See also the main sage website.
It isn’t necessary to know python to use sage, but a heavy sage user will benefit significantly from learning some python. Conversely, sage is very easy to use if you know python.
|
At a recent colloquium at the University of Warwick, the fact that
\begin{equation}\label{question} \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = 2 \end{equation} came up. Although this was mentioned only in passing, John Cremona asked — how do you prove that?
It almost fails a heuristic check, as one can quickly check that
\begin{equation}\label{similar} \sum_ {n \geq 1} \frac{n}{2^n} = 2, \end{equation} which is surprisingly similar to \eqref{question}. I wish I knew more examples of pairs with a similar flavor.
[Edit: Note that an addendum to this note has been added here. In it, we see that there is a way to shortcut the “hard part” of the long computation.]

The right way
Shortly afterwards, Adam Harper and Samir Siksek pointed out that this can be determined from Lambert series, and in fact that Hardy and Wright include a similar exercise in their book. This proof is delightful and short.
The idea is that, by expanding the denominator in power series, one has that
\begin{equation} \sum_{n \geq 1} a(n) \frac{x^n}{1 - x^n} \notag = \sum_{n \geq 1} a(n) \sum_{m \geq 1} x^{mn} = \sum_{n \geq 1} \Big( \sum_{d \mid n} a(d) \Big) x^n, \end{equation} where the inner sum is over the divisors $d$ of $n$. This all converges beautifully for $\lvert x \rvert < 1$.
Applied to \eqref{question}, we find that
\begin{equation} \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} \notag = \sum_{n \geq 1} \varphi(n) \frac{2^{-n}}{1 - 2^{-n}} = \sum_{n \geq 1} 2^{-n} \sum_{d \mid n} \varphi(d), \end{equation} and as \begin{equation} \sum_{d \mid n} \varphi(d) = n, \notag \end{equation} we see that \eqref{question} can be rewritten as \eqref{similar} after all, and thus both evaluate to $2$.
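Both \eqref{question} and \eqref{similar} are also easy to check numerically. Here is a short Python sketch (my own check, not part of the argument) that sieves Euler's totient and sums the two series exactly with rationals:

```python
from fractions import Fraction

# Sieve Euler's totient up to N, then compare partial sums of the two
# series against 2; the tails are O(N/2^N), so N = 200 is more than enough.
N = 200
phi = list(range(N + 1))
for p in range(2, N + 1):
    if phi[p] == p:                  # p is prime: phi[p] untouched so far
        for k in range(p, N + 1, p):
            phi[k] -= phi[k] // p    # multiply by (1 - 1/p)

lambert_sum = sum(Fraction(phi[n], 2**n - 1) for n in range(1, N + 1))
plain_sum = sum(Fraction(n, 2**n) for n in range(1, N + 1))

assert abs(lambert_sum - 2) < Fraction(1, 10**50)
assert abs(plain_sum - 2) < Fraction(1, 10**50)
```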
That’s a nice derivation using a series that I hadn’t come across before. But that’s not what this short note is about. This note is about evaluating \eqref{question} in a different way, arguably the wrong way. But it’s a wrong way that works out in a nice way that at least one person finds appealing.
The wrong way
We will use Mellin inversion — this is essentially Fourier inversion, but in a change of coordinates.
Let $f$ denote the function
\begin{equation} f(x) = \frac{1}{2^x - 1}. \notag \end{equation} Denote by $f^*$ the Mellin transform of $f$, \begin{equation} f^*(s) := \mathcal{M} [f(x)] (s) := \int_0^\infty f(x) x^s \frac{dx}{x} = \frac{1}{(\log 2)^s} \Gamma(s)\zeta(s),\notag \end{equation} where $\Gamma(s)$ and $\zeta(s)$ are the Gamma function and the Riemann zeta function.
For a general nice function $g(x)$, its Mellin transform satisfies
\begin{equation} \mathcal{M}[g(nx)] (s) = \int_0^\infty g(nx) x^s \frac{dx}{x} = \frac{1}{n^s} \int_0^\infty g(x) x^s \frac{dx}{x} = \frac{1}{n^s} g^*(s).\notag \end{equation} Further, the Mellin transform is linear. Thus \begin{equation}\label{mellinbase} \mathcal{M}\Big[\sum_{n \geq 1} \varphi(n) f(nx)\Big] (s) = \sum_{n \geq 1} \frac{\varphi(n)}{n^s} f^*(s) = \sum_{n \geq 1} \frac{\varphi(n)}{n^s} \frac{\Gamma(s) \zeta(s)}{(\log 2)^s}. \end{equation}
The Euler phi function $\varphi(n)$ is multiplicative and nice, and its Dirichlet series can be rewritten as
\begin{equation} \sum_{n \geq 1} \frac{\varphi(n)}{n^s} \notag = \frac{\zeta(s-1)}{\zeta(s)}. \end{equation} Thus the Mellin transform in \eqref{mellinbase} can be written as \begin{equation} \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). \notag \end{equation}
By the fundamental theorem of Mellin inversion (which is analogous to Fourier inversion, but again in different coordinates), the inverse Mellin transform will return the original function. The inverse Mellin transform of a function $h(s)$ is defined to be
\begin{equation} \mathcal{M}^{-1}[h(s)] (x) \notag := \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} x^s h(s) ds, \end{equation} where $c$ is taken so that the integral converges beautifully, and the integral is over the vertical line with real part $c$. I’ll write $(c)$ as a shorthand for the limits of integration. Thus \begin{equation}\label{mellininverse} \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1} = \frac{1}{2\pi i} \int_{(3)} \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1) x^{-s} ds. \end{equation}
We can now describe the end goal: evaluate \eqref{mellininverse} at $x=1$, which will recover the value of the original sum in \eqref{question}.
How can we hope to do that? The idea is to shift the line of integration arbitrarily far to the left, pick up the infinitely many residues guaranteed by Cauchy’s residue theorem, and to recognize the infinite sum as a classical series.
The integrand has residues at $s = 2, 0, -2, -4, \ldots$, coming from the zeta function ($s = 2$) and the Gamma function (all the others). Note that there aren’t poles at the negative odd integers $s$: the poles of $\Gamma(s)$ there are cancelled by the zeroes of $\zeta(s-1)$ at the negative even integers.
Recall, $\zeta(s)$ has residue $1$ at $s = 1$ and $\Gamma(s)$ has residue $(-1)^n/{n!}$ at $s = -n$. Then shifting the line of integration and picking up all the residues reveals that
\begin{equation} \sum_{n \geq 1} \frac{\varphi(n)}{2^{n} - 1} \notag =\frac{1}{\log^2 2} + \zeta(-1) + \frac{\zeta(-3)}{2!} \log^2 2 + \frac{\zeta(-5)}{4!} \log^4 2 + \cdots \end{equation}
The zeta function at negative integers has a very well-known relation to the Bernoulli numbers,
\begin{equation}\label{zeta_bern} \zeta(-n) = - \frac{B_{n+1}}{n+1}, \end{equation} where Bernoulli numbers are the coefficients in the expansion \begin{equation}\label{bern_gen} \frac{t}{1 - e^{-t}} = \sum_{m \geq 0} B_m \frac{t^m}{m!}. \end{equation} Many general proofs for the values of $\zeta(2n)$ use this relation and the functional equation, as well as a computation of the Bernoulli numbers themselves. Another important aspect of Bernoulli numbers that is apparent through \eqref{zeta_bern} is that $B_{2n+1} = 0$ for $n \geq 1$, lining up with the trivial zeroes of the zeta function.
Translating the zeta values into Bernoulli numbers, we find that
\eqref{question} is equal to \begin{align} &\frac{1}{\log^2 2} - \frac{B_2}{2} - \frac{B_4}{2! \cdot 4} \log^2 2 - \frac{B_6}{4! \cdot 6} \log^4 2 - \frac{B_8}{6! \cdot 8} \log^6 2 - \cdots \notag \\ &= -\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!}. \label{recog} \end{align} This last sum is excellent, and can be recognized.
For a general exponential generating series
\begin{equation} F(t) = \sum_{m \geq 0} a(m) \frac{t^m}{m!},\notag \end{equation} we see that \begin{equation} \frac{d}{dt} \frac{1}{t} F(t) \notag =\sum_{m \geq 0} (m-1) a(m) \frac{t^{m-2}}{m!}. \end{equation} Applying this to the series defining the Bernoulli numbers from \eqref{bern_gen}, we find that \begin{equation} \frac{d}{dt} \frac{1}{t} \frac{t}{1 - e^{-t}} \notag = - \frac{e^{-t}}{(1 - e^{-t})^2}, \end{equation} and also that \begin{equation} \frac{d}{dt} \frac{1}{t} \frac{t}{1 - e^{-t}} \notag =\sum_{m \geq 0} (m-1) B_m \frac{t^{m-2}}{m!}. \end{equation} This is exactly the sum that appears in \eqref{recog}, with $t = \log 2$.
Putting this together, we find that
\begin{equation} \sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!} \notag = - \frac{e^{-\log 2}}{(1 - e^{-\log 2})^2} = - \frac{1/2}{(1/2)^2} = -2. \end{equation} Thus \eqref{question}, which is the negative of this sum, really is equal to $2$, as we had sought to show.
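As a final numerical sanity check (my own, in the Bernoulli convention of \eqref{bern_gen}, so that $B_1 = +1/2$), one can compute the Bernoulli numbers from the usual recurrence and verify that the truncated sum really approaches $-e^{-t}/(1-e^{-t})^2 = -2$ at $t = \log 2$:

```python
from fractions import Fraction
import math

def bernoulli(n_max):
    """Bernoulli numbers B_0..B_n_max for the generating function
    t/(1 - e^{-t}) used above (so B_1 = +1/2): compute the standard
    t/(e^t - 1) numbers by recurrence and flip the sign of B_1."""
    B = [Fraction(0)] * (n_max + 1)
    B[0] = Fraction(1)
    for n in range(1, n_max + 1):
        # sum_{k=0}^{n} C(n+1, k) B_k = 0 for n >= 1
        s = sum(Fraction(math.comb(n + 1, k)) * B[k] for k in range(n))
        B[n] = -s / (n + 1)
    B[1] = Fraction(1, 2)  # switch to the t/(1 - e^{-t}) convention
    return B

B = bernoulli(40)
t = math.log(2)
# truncated sum_{m} (m-1) B_m t^{m-2} / m!  (radius of convergence 2*pi)
series = sum((m - 1) * float(B[m]) * t**(m - 2) / math.factorial(m)
             for m in range(41))
closed_form = -math.exp(-t) / (1 - math.exp(-t))**2  # equals -2 at t = log 2

assert abs(series - closed_form) < 1e-12
assert abs(-series - 2) < 1e-12
```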
|
Homework Statement
Two identical uniform triangular metal plates are held together by light rods. Calculate the x coordinate of the centre of mass of the two-plate object, given that the mass per unit area of the plates is 1.4 g/cm² and the total mass is 25.2 g.

Homework Equations
-
Not sure what I went wrong here, anyone can help me out on this? Thanks.
EDIT: Reformatted my request.
Diagram:
So as far as I know to calculate the center of mass for x, I have to use the following equation:
COM(x):
##\frac{1}{M}\int x dm##
And I also figured that to find center of mass, I will have to sum the mass of the 2 plates by 'cutting' them into stripes, giving me the following formula:
##dm = \mu * dx * y## where ##\mu## is the mass per unit area.
So subbing in the above equation into the first, I get:
##\frac{1}{M}\int x (\mu * dx *y) ##
##\frac{\mu}{M}\int xy dx##
Since the 2 triangles are identical, I can assume triangle on the left has equation ##y = 1/4x +4##
This is the part where I'm not sure. Do I calculate each triangle's centre of mass, sum them, and divide by 2? Or am I supposed to use another method?
Regardless, supposing I am correct:
COM for right triangle:
##\frac{\mu}{M}\int_{4}^{16}x(\frac{1}{4}x+4) dx## = 8 (expected)
COM for left triangle:
##\frac{\mu}{M}\int_{-11}^{1}x(-\frac{1}{4}x+4) dx## = 5.63...
Total COM = ##(8+5.63)/2## which is wrong :(
Thanks
|
Practice Paper 2 Question 1
Sketch the function \(\displaystyle f(x)=\min_{t\le x} t^2\) for all real \(x\).
Hints

Hint 1: What is the minimum value of \(t^2\) when \(t\) varies between \(-\infty\) and \(-3\)?
Hint 2: How about between \(-\infty\) and \(-1\)?
Hint 3: How about between \(-\infty\) and \(+1\)?

Solution
We are looking for the minimum value of \(t^2\) for \(t\in(-\infty,x]\). This is \(x^2\) when \(x<0\), and \(0\) when \(x\ge0\).
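A quick brute-force check of this piecewise formula (my own illustration, not part of the official solution): approximate the running minimum by sampling \(t\) on a large interval ending at \(x\).

```python
# f(x) = min over t <= x of t^2: approximate by sampling t on [t_min, x]
# and compare with the closed form x^2 for x < 0, 0 for x >= 0.
def f(x, steps=4000, t_min=-100.0):
    ts = [t_min + (x - t_min) * k / steps for k in range(steps + 1)]
    return min(t * t for t in ts)

def closed_form(x):
    return x * x if x < 0 else 0.0

for x in [-3.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]:
    assert abs(f(x) - closed_form(x)) < 1e-2
```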
If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
|
I’m in San Diego, and it’s charming here. (It’s certainly much nicer outside than the feet of snow in Boston. I’ve apparently brought some British rain with me, though).
Today I give a talk on counting lattice points on one-sheeted hyperboloids. These are the shapes described by
$$ X_1^2 + \cdots + X_{d-1}^2 = X_d^2 + h,$$ where $h > 0$ is a positive integer. The question is: how many lattice points $x$ are on such a hyperboloid with $| x |^2 \leq R$; or equivalently, how many lattice points are on such a hyperboloid and contained within a ball of radius $\sqrt R$ centered at the origin?
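To make the counting problem concrete, here is a small brute-force sketch (my own illustration, not from the talk) for the case $d = 3$: it counts integer points on $X_1^2 + X_2^2 = X_3^2 + h$ lying in the ball of radius $\sqrt R$.

```python
import math

def hyperboloid_count(h, R):
    """Count integer (X1, X2, X3) with X1^2 + X2^2 = X3^2 + h
    and X1^2 + X2^2 + X3^2 <= R, by brute force."""
    bound = math.isqrt(R)
    count = 0
    for x3 in range(-bound, bound + 1):
        target = x3 * x3 + h
        for x1 in range(-bound, bound + 1):
            rem = target - x1 * x1
            if rem < 0:
                continue
            x2 = math.isqrt(rem)
            if x2 * x2 != rem:
                continue
            for s in {x2, -x2}:  # a set, so x2 = 0 is not double-counted
                if x1 * x1 + s * s + x3 * x3 <= R:
                    count += 1
    return count
```

For example, with $h = 1$ and $R = 2$ the only points are $(\pm 1, 0, 0)$ and $(0, \pm 1, 0)$, so `hyperboloid_count(1, 2)` returns 4.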
I describe my general approach of transforming this into a question about the behavior of modular forms, and then using spectral techniques from the theory of modular forms to understand this behavior. This becomes a question of understanding the shifted convolution Dirichlet series
$$ \sum_{n \geq 0} \frac{r_{d-1}(n+h)r_1(n)}{(2n + h)^s}.$$ Ultimately this comes from the modular form $\theta^{d-1}(z) \overline{\theta(z)}$, where $$ \theta(z) = \sum_{m \in \mathbb{Z}} e^{2 \pi i m^2 z}.$$
Here are the slides for this talk. Note that this talk is based on chapter 5 of my thesis, and a preprint of this chapter will (hopefully) soon be ready for submission and appear on the arXiv.
|
Fifteen hep-ph papers co-written by Dr *ang today
By December 21st, 43 hep-ph papers on the diphoton resonance seen at the LHC have been written (and released in three packages). Eight days later, the terrain is very different. After another package on Dec 22, the total number has jumped to 72 (like the number of virgins who wait for a terrorist in the Islamic hell), then to 80 (Dec 23), 89 (Dec 24), 98 (Dec 25), and by today, the total has reached 118 unless I have overlooked some papers.
At this rate, the number of papers on the \(750\GeV\) resonance may surpass 750 by April when the new collisions may start to show that all of this activity was focusing on a mirage – or not.
A Honda CB 750 redesigned by the holographic hammer. Motorbikes with 750 cc (cm³) completely dominate the Google Images search for "750".
Because it's Tuesday today – a strong day – and we've had the Christmas break, there are many papers on the arXiv. Hep-ph shows 99 entries including 44 newly posted papers. It's a lot but what I find even more amazing – a sign of the Asian century that many people expect – is that 15 newly posted hep-ph papers were written or co-written by Dr *ang. It's close to the number of papers written by authors with any name posted on an average day. ;-)
Using the arXiv's ID [1]-[44] for the new hep-ph papers, we may see papers with the following authors:
[3] Wang+Wang+1
[5] Wang+2
[22] Tang+Wang+4
[25] Tang+1
[26] Kang+2
[27] Wang
[28] Zhang
[31] Wang+Zhang+1
[32] Wang+Wang+Zhang+Yang+1
[33] Zhang+2
[34] Zhang+4
[38] Zhang+2
[40] Huang+Wang+7
[42] Huang+5
[43] Zhang+2

OK, your playful humble correspondent has spent about 5 minutes with this stuff. As Madonna sang and it rang true, the list is no prang (such as the spang Monty Python sketch) even if it makes your head bang once you hang the papers on the wall. I guess that some playful readers will appreciate this research that sheds completely new light on Yang-Mills theory – which suddenly looks like a single Asian-underrepresented projection of physics insights among a billion of similar insights. ;-)
Many of the papers seems very good but my overall impression still is that China is much more likely to beat the West in the quantity rather than the quality.
The 20 papers on the diphoton resonance are
5 11 18 19 22 24 25 28 31-34 37-44

The largest group of papers wants to explain the excess by models adding vector-like quarks and perhaps also leptons (aside from a new bosonic field); with seven papers [18], [24], [25], [28], [32], [33], [44] in just one day, you could almost say that the need for new vector-like particles in the explanation of the bump is almost a "consensus" in the literature. These new fermions may run in the loops which are the triangular Feynman diagrams with the new resonance \(S\) as well as two gluons \(gg\) (production) or two photons \(\gamma\gamma\) (final state) attached to the vertices.
Well, such a "consensus" based on counting shouldn't ever be taken too seriously and this "consensus" has important loopholes, too. The paper [40], one of the *ang* papers, proposes to add only two new bosonic (scalar) fields, \(H'\) at \(750\GeV\) and \(s\) below \(2.6\GeV\). The latter, light scalar is pair-produced and decays to a pair of photons. In total, \(H'\) decays to four photons\[
H'\to ss\to \gamma\gamma\gamma\gamma
\] but the pairs of photons coming from each \(s\) are flying in nearly identical directions (they are "collimated") and can't be distinguished from single photons at the LHC.
Also, [19] by Dermíšek and 3 non-Czech, Asian authors proposes a model with no new \(750\GeV\) boson at all. They claim that a bump-like profile may be obtained from a direct \(gg\to \gamma\gamma\) one-loop box diagram if a \(375\GeV\) particle runs in the loop.
But let me return to the vector-like fermions combined with a new boson for a while.
Vector-like fermions refer to full Dirac spinors with uniform couplings to bosonic fields. This condition means that the left-handed and right-handed two-component spinors carry the same charges and representations under non-Abelian groups as well so you don't need any \(\gamma_5\) in between the Dirac spinors to write the interactions with the gauge (vector) fields. This is to be contrasted with the \(V-A\) (vector minus axial vector) interactions of the chiral fermions in the electroweak theory.
The most explicit and elegant models involving new vector-like fermions are those working in the NMSSM, the next-to-minimal supersymmetric standard model, the "more natural" cousin of MSSM where the \(\mu\) coefficient of the Higgs-up-Higgs-down bilinear term in the superpotential is promoted to a whole new chiral superfield with a vev.
In fact, amusingly enough, both NMSSM papers explaining the diphoton excess with vector-like fermions belong to the list of the *ang* papers at the top. You need to solve a simple one-minute exercise in comparative literature to see that despite the similarities in the content and the author names, the two groups of authors are disjoint:
[25] NMSSM extended with vector-like particles and the diphoton excess on the LHC, by Yi-Lei Tang, Shou-hua Zhu
[32] Interpreting 750 GeV Diphoton Resonance in the NMSSM with Vector-like Particles, by Fei Wang, Wenyu Wang, Lei Wu, Jin Min Yang, Mengchao Zhang
The PhD degree in comparative literature should be either automatically granted along with particle physics PhDs or required as a prerequisite before the physics PhD defense. ;-)
The NMSSM paper [25] postulates that the vector-like fermions transform in\[
{\bf 5}\oplus \bar{\bf 5}\quad {\rm or}\quad {\bf 10}\oplus \bar{\bf 10}
\] of an \(SU(5)\) grand unified gauge group. People generally prefer full representations of the \(SU(5)\) because that's how the SUSY-GUT gauge coupling unification may remain accurate. The first among the two direct sums arises from a \({\bf 10}\) of the \(SO(10)\) grand unified group which arguably makes it more likely, especially because this representation arises from the decomposition of a standard \(E_6\) GUT family\[
{\bf 27} = {\bf 16}\oplus {\bf 10}\oplus {\bf 1}
\] under the \(SO(10)\) subgroup. So the \(E_6\)-based grand unification could naturally have the particles required to explain the three well-known Standard Model generations of fermions as well as the new vector-like fermions needed to explain the diphoton excess. Such an explanation sounds elegant and thrifty, but so do the explanations based on the sgoldstino (or the radion and a few others).
While the paper [25] spends more time with the representation theory of the new fermions, the paper [32] spends more time with the new bosons. There are two new bosons nearly degenerate around \(750\GeV\), as in other models.
Aside from resonances supplemented with vector-like fermions, there are some papers about isolated creatures. Jihn Kim [37] (not to be confused with John Kom or Jahn Kam) discusses the heavy axion theory – or, more precisely, the axizilla. An axizilla is an off-spring of Godzilla and a Betsy Devine's detergent. Additional papers such as [5], [11], [22] try to explain the diphoton anomaly along with the dark matter. [33] uses extra dimensions and resembles the previously discussed radion models.
Even more stringy models than ever before
I want to start with Jonathan Heckman's explanation of the diphoton resonance using the F-theory model building although it appeared in the previous package of hep-ph explanations of the bump. Jonathan works with a type IIB model with intersecting D7-branes. The Standard Model fields live at the intersections of the D7-brane stacks – its fields arise from the 7-7 open strings. Just to be sure, the two digits in 7-7 indicate that both endpoints of the open string terminate on a D7-brane.
However, F-theory generically predicted possible new fields arising from the open 3-7 strings stretched between the Standard-Model-carrying D7-branes and some new "probe" D3-branes that are flying around (those D3-branes are ordinary spacetime-filling but points in the extra 6 dimensions). Heckman identifies the new \(750\GeV\) resonance with the scalar measuring the transverse distance of the D3-brane point from the locus of the D7-branes. And the new 3-7 strings give rise to the "messenger" states that play the same role as the vector-like fermions discussed in the models above.
But the 3-7 strings are not exactly "a stringy embedding of the same" vector-like fermions. Heckman is a bit ambiguous about the mass of the messenger 3-7 strings – anything between the LHC and GUT scale seems possible a priori and the gauge coupling unification may be preserved for all those options. For example, he can calculate that the effect of his 3-7 open strings is 2.2 times larger than the effect of the \({\bf 5}\oplus \bar{\bf 5}\) vector-like fermions discussed above. The fact that 2.2 is greater than 1 is encouraging because one seems to need a bit stronger signal than the vector-like models typically give. But the "integrating out" of the new messenger fields produces the same kind of a cubic interaction between the new scalar resonance and two gluons (or two photons) as the vector-like models.
Most importantly, Jonathan mentions that these extra 3-7 strings were viewed as a "redundant exotic prediction" of many F-theory models. In this sense, people may have been more self-confident about this piece and formulated it as a prediction of F-theory model building – a prediction that could be confirmed by the possible discovery in 2016. Of course, the bold prediction wasn't made in this way because the people weren't "quite" certain about their F-theory models.
Postdictions always look less impressive because one may adjust his explanations after the fact. However, from a logical perspective, it's obvious that the chronology of the discoveries and human statements is a matter of history and sociology, not pure scientific evidence, so if one assumes that the scientists are almost totally objective, postdictions should matter almost as much as "true" predictions.
Another stringy scenario of the diphoton resonance was discussed by Mirjam Cvetič and two co-authors who have surveyed 89,964 quiver diagrams arising from a class of type II string braneworlds. It's quite a comprehensive analysis of a large number of possibilities. The anti-string crackpots love to talk about a large number of possibilities as a flaw – and perhaps string theorists' personal failure. But it is not really a flaw, the number of models is whatever it is, it is a piece of knowledge to be learned, and technically powerful string theorists may sometimes analyze 89,964 models in one paper. In this sense, they may be doing 100,000 times more work for the same salary than a narrow-minded non-stringy phenomenologist. My main point is that the number of a priori possible detailed models going beyond the Standard Model is large – whether you think in terms of string theory or not – and one simply has to live with the fact and the research must adapt to this fact accordingly. Good theorists simply can't be stuck at a random small place (one tree) of this large forest; they must preserve their ability to see the whole forest or at least a non-negligible fraction of it! One random tree in a forest with zillions of trees is a very specific ("predictive") object but it is very unlikely that it is the right one. Before we climb to a specific tree, we should spend time with efforts to study the whole forest and comparisons of trees and their groups so that we only spend time by climbing the promising enough trees (or with methods to climb groups of trees simultaneously).
Finally, a new paper proposes that the bump is a closed string. Note that Heckman's scalar identified with the bump was a 3-3 open string. They investigate the possibility of describing the Standard Model as a stringy braneworld in the large extra dimensions (ADD) paradigm. When they adjust the string scale to be as low as allowed by the existing LHC exclusion limit – the string scale has to be above \(7\TeV\) or so – they find out that it is indeed possible to explain the diphoton bump as a closed string excitation (one freed from the D-branes, living in the bulk) that nevertheless couples to the gauge kinetic terms strongly enough.
The new particle of mass \(750\GeV\) could very well be the second example of a particle (after the graviton; and the first massive one) that is liberated from the D-branes on which we are stuck.
You see that the number of qualitatively different, interesting ideas proposed to explain the \(750\GeV\) diphoton resonance has become rather large. They differ in the technical preferences – what sort of particles, forces, representations, interactions, extra symmetries, SUSY breaking if any, extra dimensions, or their shapes are more natural. One of them could be right which would be a total revolution.
But more conceptually, the papers also differ in the depth to which the authors are willing to analyze more far-reaching hypothetical consequences. Some (intrinsically bottom-up, bound to experiments) authors only discuss a possible addition of a couple of new fields to the effective field theory, thus superseding the Standard Model. Others (authors who are naturally top-down theorists) aren't ashamed to link the "modest" new signal to SUSY breaking if not extra dimensions if not the whole structure of a string theory scenario with Hagedorn towers and geometries of D-branes predicted along the way.
The particular scenarios sketched by the latter, top-down authors may be slightly less likely than the particular extensions of the effective field theory considered by the first, bottom-up group. But they are much more interesting and potentially predictive. That's why I am much more excited about them and I am surely not the only one. And at the end, if a model is found to fulfill some Planck-scale consistency criteria, such a model may become more likely than a "simple" effective field theory, too.
In fact, the comparison of the two groups is ironic. The haters of science often like to pick string theorists (and, to a lesser extent, grand unification and supersymmetry model builders) as people who make no predictions and only describe what has already been seen. But if you compare the predictive power of the papers about the diphoton bump, you may see that the truth is closer to the opposite one. The mundane bottom-up model builders are only adding new fields and interactions once they are forced to do so by the observations at the LHC. They never really predict anything truly new simply because it isn't really possible to make predictions about more accurate, higher-energy-scale effective field theories from a lower-energy effective field theory.
On the other hand, extra-dimensional, supersymmetric, and especially stringy models have a very rich yet rigid structure with numerous interesting "qualitatively new" ingredients that go beyond the conventional effective field theories and that's why these models have the potential to produce "more original" but also "more constrained and unambiguous" predictions than the mundane bottom-up builders. No one can be sure that truly convincing characteristic signs of string physics will be discovered in 2016. But it isn't quite excluded, either. And if that happened, it would be a stunner, indeed.
The discovery of an otherwise "generic" new particle species beyond the Standard Model, another particle that "no one has ordered", would be fascinating but it would be vastly less fascinating than the discovery of some physical phenomena that would overthrow the business-as-usual routine based on the effective quantum field theories with some bosonic and fermionic fields and their Lagrangian-given interactions.
From some viewpoints, the continuation of the Lagrangian-based QFT model building is terribly boring even if it happened to go beyond the Standard Model by Summer 2016. Signs of string theory physics would be way cooler than that, of course. And an experimental discovery of stringy, beyond-QFT physics would arguably be the greatest event in physics of all time. This conditional proposition is no hype; it's a cold fact.
Many authors pull the definitions of the raising and lowering (or ladder) operators out of their butt with no attempt at motivation. This is pointed out nicely in [1] by Eli along with one justification based on factoring the Hamiltonian.
Reference [2] is a small exception to the usual presentation. In that text, these operators are defined as usual with no motivation. However, after the utility of these operators has been shown, the raising and lowering operators show up in a context that does provide that missing motivation as a side effect.
It doesn’t look like the author was trying to provide a motivation, but it can be interpreted that way.
When seeking the time evolution of Heisenberg-picture position and momentum operators, we will see that those solutions can be trivially expressed using the raising and lowering operators. No special tools nor black magic is required to find the structure of these operators. Unfortunately, we must first switch to both the Heisenberg picture representation of the position and momentum operators, and also employ the Heisenberg equations of motion. Neither of these last two fits into the standard narrative of most introductory quantum mechanics treatments. We will also see that these raising and lowering “operators” could also be introduced in classical mechanics, provided we were attempting to solve the SHO system using the Hamiltonian equations of motion.
I’ll outline this route to finding the structure of the ladder operators below. Because these are encountered trying to solve the time evolution problem, I’ll first show a simpler way to solve that problem. Because that simpler method depends a bit on lucky observation and is somewhat unstructured, I’ll then outline a more structured procedure that leads to the ladder operators directly, also providing the solution to the time evolution problem as a side effect.
The starting point is the Heisenberg equations of motion. For a time independent Hamiltonian \( H \), and a Heisenberg operator \( A^{(H)} \), those equations are
\begin{equation}\label{eqn:harmonicOscDiagonalize:20}
\ddt{A^{(H)}} = \inv{i \Hbar} \antisymmetric{A^{(H)}}{H}. \end{equation}
Here the Heisenberg operator \( A^{(H)} \) is related to the Schrodinger operator \( A^{(S)} \) by
\begin{equation}\label{eqn:harmonicOscDiagonalize:60}
A^{(H)} = U^\dagger A^{(S)} U, \end{equation}
where \( U \) is the time evolution operator. For this discussion, we need only know that \( U \) commutes with \( H \), and do not need to know the specific structure of that operator. In particular, the Heisenberg equations of motion take the form
\begin{equation}\label{eqn:harmonicOscDiagonalize:80}
\begin{aligned} \ddt{A^{(H)}} &= \inv{i \Hbar} \antisymmetric{A^{(H)}}{H} \\ &= \inv{i \Hbar} \antisymmetric{U^\dagger A^{(S)} U}{H} \\ &= \inv{i \Hbar} \lr{ U^\dagger A^{(S)} U H - H U^\dagger A^{(S)} U } \\ &= \inv{i \Hbar} \lr{ U^\dagger A^{(S)} H U - U^\dagger H A^{(S)} U } \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{A^{(S)}}{H} U. \end{aligned} \end{equation}
The Hamiltonian for the harmonic oscillator, with Schrodinger-picture position and momentum operators \( x, p \) is
\begin{equation}\label{eqn:harmonicOscDiagonalize:40}
H = \frac{p^2}{2m} + \inv{2} m \omega^2 x^2, \end{equation}
so the equations of motions are
\begin{equation}\label{eqn:harmonicOscDiagonalize:100}
\begin{aligned} \ddt{x^{(H)}} &= \inv{i \Hbar} U^\dagger \antisymmetric{x}{H} U \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{x}{\frac{p^2}{2m}} U \\ &= \inv{2 m i \Hbar} U^\dagger \lr{ i \Hbar \PD{p}{p^2} } U \\ &= \inv{m } U^\dagger p U \\ &= \inv{m } p^{(H)}, \end{aligned} \end{equation}
and
\begin{equation}\label{eqn:harmonicOscDiagonalize:120} \begin{aligned} \ddt{p^{(H)}} &= \inv{i \Hbar} U^\dagger \antisymmetric{p}{H} U \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{p}{\inv{2} m \omega^2 x^2 } U \\ &= \frac{m \omega^2}{2 i \Hbar} U^\dagger \lr{ -i \Hbar \PD{x}{x^2} } U \\ &= -m \omega^2 U^\dagger x U \\ &= -m \omega^2 x^{(H)}. \end{aligned} \end{equation}
In the Heisenberg picture the equations of motion are precisely those of classical Hamiltonian mechanics, except that we are dealing with operators instead of scalars
\begin{equation}\label{eqn:harmonicOscDiagonalize:140}
\begin{aligned} \ddt{p^{(H)}} &= -m \omega^2 x^{(H)} \\ \ddt{x^{(H)}} &= \inv{m } p^{(H)}. \end{aligned} \end{equation}
In the text the ladder operators are used to simplify the solution of these coupled equations, since they can decouple them. That’s not really required since we can solve them directly in matrix form with little work
\begin{equation}\label{eqn:harmonicOscDiagonalize:160}
\ddt{} \begin{bmatrix} p^{(H)} \\ x^{(H)} \end{bmatrix} = \begin{bmatrix} 0 & -m \omega^2 \\ \inv{m} & 0 \end{bmatrix} \begin{bmatrix} p^{(H)} \\ x^{(H)} \end{bmatrix}, \end{equation}
or, with length scaled variables
\begin{equation}\label{eqn:harmonicOscDiagonalize:180}
\begin{aligned} \ddt{} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} &= \begin{bmatrix} 0 & -\omega \\ \omega & 0 \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \\ &= -i \omega \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \\ &= -i \omega \sigma_y \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix}. \end{aligned} \end{equation}
Writing \( y = \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \), the solution can then be written immediately as
\begin{equation}\label{eqn:harmonicOscDiagonalize:200}
\begin{aligned} y(t) &= \exp\lr{ -i \omega \sigma_y t } y(0) \\ &= \lr{ \cos \lr{ \omega t } I - i \sigma_y \sin\lr{ \omega t } } y(0) \\ &= \begin{bmatrix} \cos\lr{ \omega t } & -\sin\lr{ \omega t } \\ \sin\lr{ \omega t } & \cos\lr{ \omega t } \end{bmatrix} y(0), \end{aligned} \end{equation}
or
\begin{equation}\label{eqn:harmonicOscDiagonalize:220}
\begin{aligned} \frac{p^{(H)}(t)}{m \omega} &= \cos\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} - \sin\lr{ \omega t } x^{(H)}(0) \\ x^{(H)}(t) &= \sin\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} + \cos\lr{ \omega t } x^{(H)}(0). \end{aligned} \end{equation}
This solution depends on being lucky enough to recognize that the matrix has a Pauli matrix as a factor (which squares to unity, and allows the exponential to be evaluated easily.)
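Signs in rotation solutions like this are notoriously easy to flip, so an independent numerical cross-check is worthwhile. The sketch below (plain Python, with assumed toy values \( m = \omega = 1 \) and arbitrary initial conditions) integrates the coupled equations with RK4 and compares against the closed-form rotation obtained by exponentiating the system matrix:

```python
import math

# Integrate d/dt [p/(m w), x] = [[0, -w], [w, 0]] [p/(m w), x]
# and compare with the rotation-matrix solution.
# Assumed toy values: m = w = 1; arbitrary initial conditions.
w = 1.0

def deriv(y):
    # y = [p/(m w), x]; d(p/(m w))/dt = -w x, dx/dt = w * p/(m w)
    return [-w * y[1], w * y[0]]

def rk4_step(y, h):
    add = lambda a, b, s: [a[i] + s * b[i] for i in range(2)]
    k1 = deriv(y)
    k2 = deriv(add(y, k1, h / 2))
    k3 = deriv(add(y, k2, h / 2))
    k4 = deriv(add(y, k3, h))
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

p_init, x_init = 0.3, 0.7
y, t, h = [p_init, x_init], 0.0, 1e-3
for _ in range(2000):          # integrate out to t = 2
    y = rk4_step(y, h)
    t += h

# closed-form rotation of the initial conditions
p_exact = math.cos(w * t) * p_init - math.sin(w * t) * x_init
x_exact = math.sin(w * t) * p_init + math.cos(w * t) * x_init
assert abs(y[0] - p_exact) < 1e-9 and abs(y[1] - x_exact) < 1e-9
```

This is only a scalar stand-in for the operator equations, but since the system is linear, the time-evolution matrix is the same in both settings.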
If we hadn’t been that observant, then the first tool we’d have used instead would have been to diagonalize the matrix. For such diagonalization, it’s natural to work in completely dimensionless variables. Such a non-dimensionalisation can be had by defining
\begin{equation}\label{eqn:harmonicOscDiagonalize:240}
x_0 = \sqrt{\frac{\Hbar}{m \omega}}, \end{equation}
and dividing the working (operator) variables through by those values. Let \( z = \inv{x_0} y \), and \( \tau = \omega t \) so that the equations of motion are
\begin{equation}\label{eqn:harmonicOscDiagonalize:260}
\frac{dz}{d\tau} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} z. \end{equation}
This matrix can be diagonalized as
\begin{equation}\label{eqn:harmonicOscDiagonalize:280}
A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = V \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix} V^{-1}, \end{equation}
where
\begin{equation}\label{eqn:harmonicOscDiagonalize:300}
V = \inv{\sqrt{2}} \begin{bmatrix} i & -i \\ 1 & 1 \end{bmatrix}. \end{equation}
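As a quick numerical aside (not part of the derivation), plain Python complex arithmetic confirms that this \( V \), together with the diagonal matrix of \( \pm i \), reproduces the system matrix:

```python
# Check that V diag(i, -i) V^{-1} = [[0, -1], [1, 0]] with 2x2
# complex matrices in plain Python (no external libraries).
s = 2 ** -0.5                       # 1/sqrt(2)
V = [[1j * s, -1j * s], [s, s]]
D = [[1j, 0], [0, -1j]]

det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[V[1][1] / det, -V[0][1] / det],
        [-V[1][0] / det, V[0][0] / det]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = matmul(matmul(V, D), Vinv)
expected = [[0, -1], [1, 0]]
assert all(abs(A[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```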
The equations of motion can now be written
\begin{equation}\label{eqn:harmonicOscDiagonalize:320}
\frac{d}{d\tau} \lr{ V^{-1} z } = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix} \lr{ V^{-1} z }. \end{equation}
This final change of variables \( V^{-1} z \) decouples the system as desired. Expanding that gives
\begin{equation}\label{eqn:harmonicOscDiagonalize:340}
\begin{aligned} V^{-1} z &= \inv{\sqrt{2}} \begin{bmatrix} -i & 1 \\ i & 1 \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{x_0 m \omega} \\ \frac{x^{(H)}}{x_0} \end{bmatrix} \\ &= \inv{\sqrt{2} x_0} \begin{bmatrix} -i \frac{p^{(H)}}{m \omega} + x^{(H)} \\ i \frac{p^{(H)}}{m \omega} + x^{(H)} \end{bmatrix} \\ &= \begin{bmatrix} a^\dagger \\ a \end{bmatrix}, \end{aligned} \end{equation}
where
\begin{equation}\label{eqn:harmonicOscDiagonalize:n} \begin{aligned} a^\dagger &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ -i \frac{p^{(H)}}{m \omega} + x^{(H)} } \\ a &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ i \frac{p^{(H)}}{m \omega} + x^{(H)} }. \end{aligned} \end{equation}
Lo and behold, we have the standard form of the raising and lowering operators, and can write the system equations as
\begin{equation}\label{eqn:harmonicOscDiagonalize:360}
\begin{aligned} \ddt{a^\dagger} &= i \omega a^\dagger \\ \ddt{a} &= -i \omega a. \end{aligned} \end{equation}
It is actually a bit fluky that this matched exactly, since we could have chosen eigenvectors that differ by constant phase factors, like
\begin{equation}\label{eqn:harmonicOscDiagonalize:380}
V = \inv{\sqrt{2}} \begin{bmatrix} i e^{i\phi} & -i e^{i \psi} \\ e^{i\phi} & e^{i \psi} \end{bmatrix}, \end{equation}
so
\begin{equation}\label{eqn:harmonicOscDiagonalize:341}
\begin{aligned} V^{-1} z &= \frac{e^{-i(\phi + \psi)}}{\sqrt{2}} \begin{bmatrix} -i e^{i\psi} & e^{i \psi} \\ i e^{i\phi} & e^{i \phi} \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{x_0 m \omega} \\ \frac{x^{(H)}}{x_0} \end{bmatrix} \\ &= \inv{\sqrt{2} x_0} \begin{bmatrix} -i e^{-i\phi} \frac{p^{(H)}}{m \omega} + e^{-i\phi} x^{(H)} \\ i e^{-i\psi} \frac{p^{(H)}}{m \omega} + e^{-i\psi} x^{(H)} \end{bmatrix} \\ &= \begin{bmatrix} e^{-i\phi} a^\dagger \\ e^{-i\psi} a \end{bmatrix}. \end{aligned} \end{equation}
To make the resulting pairs of operators Hermitian conjugates, we’d want to constrain those constant phase factors by setting \( \phi = -\psi \). If we were only interested in solving the time evolution problem no such additional constraints are required.
The raising and lowering operators are seen to naturally occur when seeking the solution of the Heisenberg equations of motion. This is found using the standard technique of non-dimensionalisation and then seeking a change of basis that diagonalizes the system matrix. Because the Heisenberg equations of motion are identical to the classical Hamiltonian equations of motion in this case, what we call the raising and lowering operators in quantum mechanics could also be utilized in the classical simple harmonic oscillator problem. However, in a classical context we wouldn’t have a justification to call this more than a change of basis.
References
[1] Eli Lansey.
The Quantum Harmonic Oscillator Ladder Operators, 2009. URL http://behindtheguesses.blogspot.ca/2009/03/quantum-harmonic-oscillator-ladder.html. [Online; accessed 18-August-2015].
[2] Jun John Sakurai and Jim J Napolitano.
Modern Quantum Mechanics, chapter "Time Development of the Oscillator". Pearson Higher Ed, 2014.
Euclid's Lemma for Prime Divisors/General Result
Lemma
Let $p$ be a prime number.
Let $\displaystyle n = \prod_{i \mathop = 1}^r a_i$.
Then if $p \divides n$, it follows that $p \divides a_i$ for some $i \in \closedint 1 r$.
That is: $p \divides a_1 a_2 \cdots a_r \implies p \divides a_1 \lor p \divides a_2 \lor \cdots \lor p \divides a_r$
As for Euclid's Lemma for Prime Divisors, this can be verified by direct application of the general version of Euclid's Lemma for irreducible elements.
$\blacksquare$
Proof by induction:
For all $r \in \N_{>0}$, let $\map P r$ be the proposition:
$\displaystyle p \divides \prod_{i \mathop = 1}^r a_i \implies \exists i \in \closedint 1 r: p \divides a_i$
$\map P 1$ is true, as this just says $p \divides a_1 \implies p \divides a_1$.
Basis for the Induction
$\map P 2$ is the case:
$p \divides a_1 a_2 \implies p \divides a_1$ or $p \divides a_2$
which is proved in Euclid's Lemma for Prime Divisors.
This is our basis for the induction.
Induction Hypothesis
Now we need to show that, if $\map P k$ is true, where $k \ge 1$, then it logically follows that $\map P {k + 1}$ is true.
So this is our induction hypothesis:
$\displaystyle p \divides \prod_{i \mathop = 1}^k a_i \implies \exists i \in \closedint 1 k: p \divides a_i$ Then we need to show: $\displaystyle p \divides \prod_{i \mathop = 1}^{k + 1} a_i \implies \exists i \in \closedint 1 {k + 1}: p \divides a_i$ Induction Step
This is our induction step:
\(\displaystyle p \divides a_1 a_2 \ldots a_{k + 1}\)
\(\displaystyle \leadsto \quad p \divides \paren {a_1 a_2 \ldots a_k} \paren {a_{k + 1} }\)
\(\displaystyle \leadsto \quad p \divides a_1 a_2 \ldots a_k \lor p \divides a_{k + 1}\) (Basis for the Induction)
\(\displaystyle \leadsto \quad p \divides a_1 \lor p \divides a_2 \lor \ldots \lor p \divides a_k \lor p \divides a_{k + 1}\) (Induction Hypothesis)
So $\map P k \implies \map P {k + 1}$ and the result follows by the Principle of Mathematical Induction.
Therefore: $\displaystyle \forall r \in \N: p \divides \prod_{i \mathop = 1}^r a_i \implies \exists i \in \closedint 1 r: p \divides a_i$
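While the proofs above are the real content, the statement is also easy to sanity-check by brute force over a small range. The following informal Python sketch does so (the search range $1$ to $12$, the primes listed, and the choice $r = 3$ are arbitrary):

```python
from itertools import product as tuples

# For each prime p and each 3-factor product n = a1*a2*a3 that p
# divides, verify that p divides at least one individual factor.
primes = [2, 3, 5, 7, 11]
for p in primes:
    for f in tuples(range(1, 13), repeat=3):      # r = 3 factors
        n = f[0] * f[1] * f[2]
        if n % p == 0:
            assert any(a % p == 0 for a in f)
```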
$\blacksquare$
Alternatively, we can argue by contradiction.
Let $p \divides n$.
Aiming for a contradiction, suppose:
$\forall i \in \set {1, 2, \ldots, r}: p \nmid a_i$
Then, since $p$ is prime, each $a_i$ is coprime to $p$:
$\forall i \in \set {1, 2, \ldots, r}: p \perp a_i$
As a product of integers each coprime to $p$ is itself coprime to $p$:
$p \perp n$
By definition of coprime:
$p \nmid n$
This contradicts the assumption that $p \divides n$.
The result follows by Proof by Contradiction.
$\blacksquare$
Source of Name
This entry was named for Euclid.
Practice Paper 2 Question 3
Find positive integers \(a,b,c,d\) such that
\(\displaystyle a+\dfrac{1}{b+\dfrac{1}{c+\dfrac{1}{d}}}=\frac{15}{11}.\)
Hints
Hint 1: Given \(\frac{1}{x}<1\) when \(x>1,\) how does \(a+\frac{1}{x}\) relate to \(\frac{15}{11}?\)
Hint 2: With that in mind, what can you say about \(b+\frac{1}{y}?\)
Hint 3: Can you write a fraction \(\frac{m}{n}\) in a different way?
Hint 4: ... to resemble \(\frac{1}{z}\) perhaps (\(z\) may be a fraction)?
Hint 5: How about applying that approach to \(1+\frac{4}{11}\)?
Hint 6: ... and continue successively?
Solution
Let \(x=\frac{1}{b+\cdots}.\) Consider the value of \(x.\) Since \(b \geq 1\) and the numerator is \(1,\) it follows that \(x<1\). Now, consider the value of \(a\). Since \(x<1,\) \(1 \leq \frac{15}{11} < 2\) and \(a\) is integral, \(a\) must be \(1.\) So we now have \(\frac{15}{11} = 1 + \frac{4}{11}.\) Since we require the numerator of each nested fraction to be \(1,\) we can use the fact that \(\frac{m}{p} = \frac{1}{\frac{p}{m}}\) to get \(\frac{15}{11} = 1 + \frac{1}{\frac{11}{4}}.\) Using this technique successively, we can decompose the expression and get \[ \begin{align} \dfrac{15}{11} &= 1+\dfrac{4}{11} \\[2mm] &= 1+\dfrac{1}{\dfrac{11}{4}} \\[2mm] &= 1+\dfrac{1}{2+\dfrac{3}{4}} \\[2mm] &= 1+\dfrac{1}{2+\dfrac{1}{\dfrac{4}{3}}} \\[2mm] &= 1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{3}}}. \end{align} \] And hence \(a=1,b=2,c=1,d=3.\)
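The invert-and-split procedure used above is precisely the continued fraction expansion, and it can be sketched in a few lines of Python using exact rational arithmetic (the function name is our own):

```python
from fractions import Fraction

def continued_fraction(frac):
    """Expand a positive rational as [a0; a1, a2, ...] by repeatedly
    splitting off the integer part and inverting the remainder."""
    terms = []
    while True:
        a, rem = divmod(frac.numerator, frac.denominator)
        terms.append(a)
        if rem == 0:
            return terms
        frac = Fraction(frac.denominator, rem)   # invert the fractional part

print(continued_fraction(Fraction(15, 11)))  # -> [1, 2, 1, 3], i.e. a, b, c, d
```

Running it on \(\frac{15}{11}\) reproduces the answer \(a=1, b=2, c=1, d=3\) found by hand.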
If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
The hyperbolic functions are a set of functions that have many applications to mathematics, physics, and engineering. Among many other applications, they are used to describe the formation of satellite rings around planets, to describe the shape of a rope hanging from two points, and have application to the theory of special relativity. This section defines the hyperbolic functions and describes many of their properties, especially their usefulness to calculus.
These functions are sometimes referred to as the "hyperbolic trigonometric functions" as there are many, many connections between them and the standard trigonometric functions. Figure \(\PageIndex{1}\) demonstrates one such connection. Just as cosine and sine are used to define points on the circle defined by \(x^2+y^2=1\), the functions hyperbolic cosine and hyperbolic sine are used to define points on the hyperbola \(x^2-y^2=1\).
Figure \(\PageIndex{1}\): Using trigonometric functions to define points on a circle and hyperbolic functions to define points on a hyperbola. The areas of the shaded regions are included in them.
We begin with their definition.
Definition \(\PageIndex{1}\): Hyperbolic Functions
\( \cosh x = \frac{e^x+e^{-x}}2\) \( \sinh x = \frac{e^x-e^{-x}}2\) \(\tanh x = \frac{\sinh x}{\cosh x}\) \( \text{sech} x = \frac{1}{\cosh x}\) \( \text{csch} x = \frac{1}{\sinh x}\) \( \coth x = \frac{\cosh x}{\sinh x}\)
These hyperbolic functions are graphed in Figure \(\PageIndex{2}\). In the graphs of \(\cosh x\) and \(\sinh x\), graphs of \(e^x/2\) and \(e^{-x}/2\) are included with dashed lines. As \(x\) gets "large," \(\cosh x\) and \(\sinh x\) each act like \(e^x/2\); when \(x\) is a large negative number, \(\cosh x\) acts like \(e^{-x}/2\) whereas \(\sinh x\) acts like \(-e^{-x}/2\).
Notice the domains of \(\tanh x\) and \(\text{sech} x\) are \((-\infty,\infty)\), whereas both \(\coth x\) and \(\text{csch} x\) have vertical asymptotes at \(x=0\). Also note the ranges of these functions, especially \(\tanh x\): as \(x\to\infty\), both \(\sinh x\) and \(\cosh x\) approach \(e^{x}/2\), hence \(\tanh x\) approaches \(1\).
The following example explores some of the properties of these functions that bear remarkable resemblance to the properties of their trigonometric counterparts.
Pronunciation Note: "cosh" rhymes with "gosh," "sinh" rhymes with "pinch," and "tanh" rhymes with "ranch."
Figure \(\PageIndex{2}\): Graphs of the hyperbolic functions.
Example \(\PageIndex{1}\): Exploring properties of hyperbolic functions
Use Definition \(\PageIndex{1}\) to rewrite the following expressions.
\(\cosh^2 x-\sinh^2x\) \(\tanh^2 x+\text{sech}^2 x\) \(2\cosh x\sinh x\) \(\frac{d}{dx}\big(\cosh x\big)\) \(\frac{d}{dx}\big(\sinh x\big)\) \(\frac{d}{dx}\big(\tanh x\big)\) Solution \[\begin{align} \cosh^2x-\sinh^2x &= \left(\frac{e^x+e^{-x}}2\right)^2 -\left(\frac{e^x-e^{-x}}2\right)^2\\ &= \frac{e^{2x}+2e^xe^{-x} + e^{-2x}}4 - \frac{e^{2x}-2e^xe^{-x} + e^{-2x}}4\\ &= \frac44=1.\end{align}\]So \(\cosh^2 x-\sinh^2x=1\). \[\begin{align} \tanh^2 x+\text{sech}^2 x &=\frac{\sinh^2x}{\cosh^2 x} + \frac{1}{\cosh^2 x} \\ &= \frac{\sinh^2x+1}{\cosh^2 x}\qquad \text{Now use identity from #1.}\\ &= \frac{\cosh^2 x}{\cosh^2 x} = 1. \end{align}\]So \(\tanh^2 x+\text{sech}^2 x=1\). \[\begin{align} 2\cosh x\sinh x &= 2\left(\frac{e^x+e^{-x}}2\right)\left(\frac{e^x-e^{-x}}2\right) \\ &= 2 \cdot\frac{e^{2x} - e^{-2x}}4\\ &= \frac{e^{2x} - e^{-2x}}2 = \sinh (2x).\\ \end{align}\]Thus \(2\cosh x\sinh x = \sinh (2x)\). \[\begin{align} \frac{d}{dx}\big(\cosh x\big) &= \frac{d}{dx}\left(\frac{e^x+e^{-x}}2\right) \\ &= \frac{e^x-e^{-x}}2\\ &= \sinh x. \end{align}\]So \(\frac{d}{dx}\big(\cosh x\big) = \sinh x.\) \[\begin{align} \frac{d}{dx}\big(\sinh x\big) &= \frac{d}{dx}\left(\frac{e^x-e^{-x}}2\right) \\ &= \frac{e^x+e^{-x}}2\\ &= \cosh x. \end{align}\]So \(\frac{d}{dx}\big(\sinh x\big) = \cosh x.\) \[\begin{align} \frac{d}{dx}\big(\tanh x\big) &= \frac{d}{dx}\left(\frac{\sinh x}{\cosh x}\right) \\ &= \frac{\cosh x \cosh x - \sinh x \sinh x}{\cosh^2 x}\\ &= \frac{1}{\cosh^2 x}\\ &=\text{sech}^2 x. \end{align}\]So \(\frac{d}{dx}\big(\tanh x\big) = \text{sech}^2 x.\)
The following Key Idea summarizes many of the important identities relating to hyperbolic functions. Each can be verified by referring back to Definition \(\PageIndex{1}\).
Key Idea 16: Useful Hyperbolic Function Properties
Basic Identities

1. \(\cosh^2x-\sinh^2x=1\)
2. \(\tanh^2x+\text{sech}^2x=1\)
3. \(\coth^2x-\text{csch}^2x = 1\)
4. \(\cosh 2x=\cosh^2x+\sinh^2x\)
5. \(\sinh 2x = 2\sinh x\cosh x\)
6. \(\cosh^2x = \frac{\cosh 2x+1}{2}\)
7. \(\sinh^2x=\frac{\cosh 2x-1}{2}\)

Derivatives

1. \(\frac{d}{dx}\big(\cosh x\big) = \sinh x\)
2. \(\frac{d}{dx}\big(\sinh x\big) = \cosh x\)
3. \(\frac{d}{dx}\big(\tanh x\big) = \text{sech}^2 x\)
4. \(\frac{d}{dx}\big(\text{sech} x\big) = -\text{sech} x\tanh x\)
5. \(\frac{d}{dx}\big(\text{csch} x\big) = -\text{csch} x\coth x\)
6. \(\frac{d}{dx}\big(\coth x\big) = -\text{csch}^2x\)

Integrals

1. \(\int \cosh x\ dx = \sinh x+C\)
2. \(\int \sinh x\ dx = \cosh x+C\)
3. \(\int \tanh x\ dx = \ln(\cosh x) +C\)
4. \(\int \coth x\ dx = \ln|\sinh x\,|+C\)
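A few of the identities in Key Idea 16 can be spot-checked numerically with Python's math module (the sample points below are arbitrary choices):

```python
import math

# Spot-check three identities from Key Idea 16 at several sample points.
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    c, s = math.cosh(x), math.sinh(x)
    assert abs(c**2 - s**2 - 1) < 1e-9               # cosh^2 x - sinh^2 x = 1
    assert abs(math.cosh(2*x) - (c**2 + s**2)) < 1e-6  # cosh 2x = cosh^2 x + sinh^2 x
    assert abs(math.sinh(2*x) - 2*s*c) < 1e-6        # sinh 2x = 2 sinh x cosh x
print("all identities hold")
```

A numeric check like this is no substitute for the algebraic verification above, but it is a quick way to catch a mis-remembered sign.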
We practice using Key Idea 16 in the following example.
Example \(\PageIndex{2}\): Derivatives and integrals of hyperbolic functions
Evaluate the following derivatives and integrals.
\(\frac{d}{dx}\big(\cosh 2x\big)\) \(\int \text{sech}^2(7t-3)\ dt\) \( \int_0^{\ln 2} \cosh x\ dx\) Solution Using the Chain Rule directly, we have \(\frac{d}{dx} \big(\cosh 2x\big) = 2\sinh 2x\). Just to demonstrate that it works, let's also use the Basic Identity found in Key Idea 16: \(\cosh 2x = \cosh^2x+\sinh^2x\). \[\begin{align}\frac{d}{dx}\big(\cosh 2x\big) = \frac{d}{dx}\big(\cosh^2x+\sinh^2x\big) &= 2\cosh x\sinh x+ 2\sinh x\cosh x\\ &= 4\cosh x\sinh x.\end{align}\]Using another Basic Identity, we can see that \(4\cosh x\sinh x = 2\sinh 2x\). We get the same answer either way. We employ substitution, with \(u = 7t-3\) and \(du = 7dt\). Applying Key Ideas 10 and 16 we have: $$ \int\text{sech}^2 (7t-3)\ dt = \frac17\tanh (7t-3) + C.$$ $$\int_0^{\ln 2} \cosh x\ dx = \sinh x\Big|_0^{\ln 2} = \sinh (\ln 2) - \sinh 0 = \sinh(\ln 2).$$ We can simplify this last expression as \(\sinh x\) is based on exponentials: $$\sinh(\ln 2) = \frac{e^{\ln 2}-e^{-\ln 2}}2 = \frac{2-1/2}{2} = \frac34.$$

Inverse Hyperbolic Functions
Just as the inverse trigonometric functions are useful in certain integrations, the inverse hyperbolic functions are useful with others. Table \(\PageIndex{1}\) shows the restrictions on the domains to make each function one-to-one and the resulting domains and ranges of their inverse functions. Their graphs are shown in Figure \(\PageIndex{3}\).
Because the hyperbolic functions are defined in terms of exponential functions, their inverses can be expressed in terms of logarithms as shown in Key Idea 17. It is often more convenient to refer to \(\sinh^{-1}x\) than to \(\ln\big(x+\sqrt{x^2+1}\big)\), especially when one is working on theory and does not need to compute actual values. On the other hand, when computations are needed, technology is often helpful but many hand-held calculators lack a convenient \(\sinh^{-1}x\) button. (Often it can be accessed under a menu system, but not conveniently.) In such a situation, the logarithmic representation is useful. The reader is not encouraged to memorize these, but rather to know they exist and know how to use them when needed.
Table \(\PageIndex{1}\): Graphs of \(\cosh x\), \(\sinh x\) and their inverses.
Figure \(\PageIndex{3}\): Graphs of the hyperbolic functions and their inverses.
The following Key Ideas give the derivatives and integrals relating to the inverse hyperbolic functions. In Key Idea 19, both the inverse hyperbolic and logarithmic function representations of the antiderivative are given, based on Key Idea 17. Again, these latter functions are often more useful than the former. Note how inverse hyperbolic functions can be used to solve integrals we used Trigonometric Substitution to solve in Section 6.4.
Key Idea 17: Logarithmic Definitions of the Inverse Hyperbolic Functions
\(\cosh^{-1}x=\ln\big(x+\sqrt{x^2-1}\big);\ x\geq1\) \(\tanh^{-1}x = \frac12\ln\left(\frac{1+x}{1-x}\right);\ |x|<1\) \(\text{sech}^{-1}x = \ln\left(\frac{1+\sqrt{1-x^2}}x\right);\ 0<x\leq1\) \(\sinh^{-1}x = \ln\big(x+\sqrt{x^2+1}\big)\) \(\coth^{-1}x = \frac12\ln\left(\frac{x+1}{x-1}\right);\ |x|>1\) \(\text{csch}^{-1}x = \ln\left(\frac1x+\frac{\sqrt{1+x^2}}{|x|}\right);\ x\neq0\)
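These logarithmic forms can be spot-checked against Python's built-in inverse hyperbolic functions (the sample points below are arbitrary choices within each stated domain):

```python
import math

# Compare the logarithmic forms of Key Idea 17 with the library functions.
x = 1.75
assert abs(math.acosh(x) - math.log(x + math.sqrt(x*x - 1))) < 1e-12   # x >= 1
x = 0.4
assert abs(math.atanh(x) - 0.5*math.log((1 + x)/(1 - x))) < 1e-12      # |x| < 1
x = -2.3
assert abs(math.asinh(x) - math.log(x + math.sqrt(x*x + 1))) < 1e-12   # all real x
print("logarithmic forms agree")
```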
Key Idea 18: Derivatives Involving Inverse Hyperbolic Functions
\(\frac{d}{dx}\big(\cosh^{-1} x\big) = \frac{1}{\sqrt{x^2-1}};\ x>1\) \(\frac{d}{dx}\big(\sinh^{-1} x\big) = \frac{1}{\sqrt{x^2+1}}\) \(\frac{d}{dx}\big(\tanh^{-1} x\big) = \frac{1}{1-x^2};\ |x|<1\) \(\frac{d}{dx}\big(\text{sech}^{-1} x\big) = \frac{-1}{x\sqrt{1-x^2}}; 0<x<1\) \(\frac{d}{dx}\big(\text{csch}^{-1} x\big) = \frac{-1}{|x|\sqrt{1+x^2}};\ x\neq0\) \(\frac{d}{dx}\big(\coth^{-1} x\big) = \frac{1}{1-x^2};\ |x|>1\)
Key Idea 19: Integrals Involving Inverse Hyperbolic Functions
\(\int \frac{1}{\sqrt{x^2-a^2}}\ dx\) \(=\qquad \cosh^{-1}\left(\frac xa\right)+C;\ 0<a<x\) \(\quad=\ln\Big|x+\sqrt{x^2-a^2}\Big|+C\) \(\int \frac{1}{\sqrt{x^2+a^2}}\ dx\) \(=\qquad \sinh^{-1}\left(\frac xa\right)+C;\ a>0\) \(\qquad=\ln\Big|x+\sqrt{x^2+a^2}\Big|+C\) \(\int \frac{1}{a^2-x^2}\ dx\) \(=\qquad \left\{\begin{array}{ccc} \frac1a\tanh^{-1}\left(\frac xa\right)+C & & x^2<a^2 \\ \\\frac1a\coth^{-1}\left(\frac xa\right)+C & & a^2<x^2 \end{array}\right.\) \(\quad=\frac1{2a}\ln\left|\frac{a+x}{a-x}\right|+C\) \(\int \frac{1}{x\sqrt{a^2-x^2}}\ dx \) \(=\qquad -\frac1a\text{sech}^{-1}\left(\frac xa\right)+C;\ 0<x<a\) \(\quad= \frac1a \ln\left(\frac{x}{a+\sqrt{a^2-x^2}}\right)+C \) \(\int \frac{1}{x\sqrt{x^2+a^2}}\ dx\) \(=\qquad -\frac1a\text{csch}^{-1}\left|\frac xa\right| + C;\ x\neq 0,\ a>0\) \(\quad= \frac1a \ln\left|\frac{x}{a+\sqrt{a^2+x^2}}\right|+C\)
We practice using the derivative and integral formulas in the following example.
Example \(\PageIndex{3}\): Derivatives and integrals involving inverse hyperbolic functions
Evaluate the following.
\( \frac{d}{dx}\left[\cosh^{-1}\left(\frac{3x-2}{5}\right)\right]\) \( \int\frac{1}{x^2-1}\ dx\) \( \int \frac{1}{\sqrt{9x^2+10}}\ dx\) Solution Applying Key Idea 18 with the Chain Rule gives: $$\frac{d}{dx}\left[\cosh^{-1}\left(\frac{3x-2}5\right)\right] = \frac{1}{\sqrt{\left(\frac{3x-2}5\right)^2-1}}\cdot\frac35.$$
Multiplying the numerator and denominator by \((-1)\) gives: \( \int \frac{1}{x^2-1}\ dx = \int \frac{-1}{1-x^2}\ dx\). The second integral can be solved with a direct application of item #3 from Key Idea 19, with \(a=1\). Thus \[ \begin{align} \int \frac{1}{x^2-1}\ dx &= -\int \frac{1}{1-x^2}\ dx \\ &= \left\{\begin{array}{ccc} -\tanh^{-1}\left(x\right)+C & & x^2<1 \\ \\-\coth^{-1}\left(x\right)+C & & 1<x^2 \end{array}\right. \\ &=-\frac12\ln\left|\frac{x+1}{x-1}\right|+C\\ &=\frac12\ln\left|\frac{x-1}{x+1}\right|+C. \end{align}\]
We should note that this exact problem was solved at the beginning of Section 6.5. In that example the answer was given as \(\frac12\ln|x-1|-\frac12\ln|x+1|+C.\) Note that this is equivalent to the answer given in Equation \(\PageIndex{29}\), as \(\ln(a/b) = \ln a - \ln b\).
This requires a substitution, then item #2 of Key Idea 19 can be applied.
Let \(u = 3x\), hence \(du = 3dx\). We have
\[\int \frac{1}{\sqrt{9x^2+10}}\ dx = \frac13\int\frac{1}{\sqrt{u^2+10}}\ du. \] Note \(a^2=10\), hence \(a = \sqrt{10}.\) Now apply the integral rule. \[\begin{align} &= \frac13 \sinh^{-1}\left(\frac{3x}{\sqrt{10}}\right) + C \\&= \frac13 \ln \Big|3x+\sqrt{9x^2+10}\Big|+C. \end{align}\]
This section covers a lot of ground. New functions were introduced, along with some of their fundamental identities, their derivatives and antiderivatives, their inverses, and the derivatives and antiderivatives of these inverses. Four Key Ideas were presented, each including quite a bit of information.
Do not view this section as containing a source of information to be memorized, but rather as a reference for future problem solving. Key Idea 19 contains perhaps the most useful information. Know the integration forms it helps evaluate and understand how to use the inverse hyperbolic answer and the logarithmic answer.
The next section takes a brief break from demonstrating new integration techniques. It instead demonstrates a technique of evaluating limits that return indeterminate forms. This technique will be useful in Section 6.8, where limits will arise in the evaluation of certain definite integrals.
Contributors
Gregory Hartman (Virginia Military Institute). Contributions were made by Troy Siemers and Dimplekumar Chalishajar of VMI and Brian Heinold of Mount Saint Mary's University. This content is copyrighted by a Creative Commons Attribution - Noncommercial (BY-NC) License. http://www.apexcalculus.com/
Integrated by Justin Marshall.
Practice Paper 2 Question 4
Three planar regions \(A\), \(B\), \(C\) partially overlap each other, with \(|A| = 90,\) \(|B| = 90,\) \(|C| = 60\) and \(|A \cup B \cup C| = 100,\) where \(|\cdot|\) denotes the area. Find the minimum possible \(|A \cap B \cap C|\).
Related topics

Hints

Hint 1: Try to find the minimum \(|A\cap B|.\)
Hint 2: ... by considering the maximum \(|A\cup B|.\)
Hint 3: What does that minimal case imply about \(C\) in relation to \(A\) and \(B,\) given the question conditions?
Hint 4: ... more specifically, in relation to \(A\cup B,\) given \(|A \cup B \cup C| = 100?\)
Hint 5: Given \(C\) must be contained within \(A\cup B,\) what does that imply about \(C\) in relation to \(A\cap B?\)
Hint 6: ... more specifically, how must \(C\) be distributed outside of \(A\cap B\) in order to minimize the full intersection?

Solution
First consider the minimum size of \(A \cap B\). We know that \(|A \cup B|=|A|+|B|-|A\cap B|\) and since \(|A \cup B\cup C|=100\) then \(|A \cup B|\le100\) so \(|A \cap B|\ge80\). In this minimal case, \(C\) must be contained within \(A \cup B,\) because otherwise \(|A \cup B \cup C| > 100\). We want to have as much of \(C\) as possible outside of \(A\cap B,\) hence the minimal size of the intersection is \(|C| - (|A \cup B| - |A \cap B|)=40\).
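The chain of bounds in the solution can be checked with a few lines of arithmetic (this verifies the numbers only, not the geometric construction):

```python
# Arithmetic sanity check of the bound chain, using the areas from the problem.
A, B, C, union_all = 90, 90, 60, 100

min_AB = A + B - union_all         # |A ∩ B| >= 80, since |A ∪ B| <= 100
outside_AB = union_all - min_AB    # area of A ∪ B lying outside A ∩ B (minimal case)
min_ABC = C - outside_AB           # push as much of C as possible outside A ∩ B

print(min_AB, min_ABC)  # → 80 40
```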
Note: Drawing standard (circular) Venn diagrams for \(A,B,C\) to meet the above conditions is not possible, but it is possible using other shapes. Give it a try.
If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
Basic

Reflection is the return by a surface of some of the light which falls on that surface. (Other parts get absorbed or refracted).
There are two main laws (called "the laws of reflection") that play a role in this:
1. The angle of reflection of a ray of light (r) is equal to the angle of incidence (i).
2. The incident ray (I), the reflected ray (R) and the normal (NO) all lie in the same plane. In this case, it is the plane of your computer monitor.
Figure \(\PageIndex{1}\) : Law of Reflection
The angle of incidence (ION) and the angle of reflection (NOR) are both measured by reference to an imaginary line called the normal (NO), which is perpendicular to the reflecting surface (HP) at the point of incidence (O). The point of incidence is the point at which the incident ray strikes the reflecting surface.
Understanding these basic principles is important in order to understand the many gemological terms that are used to describe optical effects caused by reflection.
Amongst those effects are:
Advanced

Fresnel reflection
For light that is incident close to the normal (to about 10°), the amount of reflection can be calculated by the Fresnel equation for reflectivity.
\[\text{Amount of reflection} = \frac {(n_2 - n_1)^2}{(n_2 + n_1)^2}\]
For light traveling from air to diamond that translates to:
\[\frac {(2.417 - 1)^2} {(2.417 + 1)^2} \approx 0.17 = 17\%\]
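This computation is easy to reproduce in code; a small sketch (the function name is our own):

```python
def fresnel_reflectivity(n1: float, n2: float) -> float:
    """Normal-incidence Fresnel reflectivity between media of indices n1 and n2."""
    return ((n2 - n1) / (n2 + n1)) ** 2

# Air (n = 1) to diamond (n = 2.417): about 17% of the light is reflected.
r = fresnel_reflectivity(1.0, 2.417)
print(round(r, 3))  # → 0.172
```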
Brewster Angle
In 1812, Sir David Brewster (1781-1868) described a new phenomenon of light. He found that, when light falls on an optically denser object (like water) at a certain angle, the reflected component of the unpolarized light will be completely polarized in the plane of the surface off which it reflects. In the case of water, it would be the horizontal plane.
He also observed that the angle of the refracted ray was at 90° to the reflected ray (at this specific angle).
This angle is named
The Brewster Angle. It varies for every two materials that are in contact, relating to the refractive indices of the materials.
Figure \(\PageIndex{2}\) : Brewster Angle
When the refractive indices of both materials are known, one can calculate the Brewster angle.
\[\theta_B = \arctan \left( \frac {n_2}{n_1} \right)\]

(where \(\theta_B\) is the angle between the incident ray and the normal). The latter makes it possible to use reflection as an aid to determine the refractive index of polished stones.
As the reflected ray is completely polarized in the horizontal plane at that specific angle, one could insert a vertically oriented polarizing filter to block all the reflected light (similar to how sunglasses work). By measuring the angle of the incident light, at which point no light passes through the polarizing filter, one may calculate the refractive index.
If one were to use monochromatic light that is polarized in the vertical plane, there would be no need for a horizontal orientated polarization filter as at the Brewster Angle that light will not be reflected. This is the principle of "The Brewster Angle Meter" developed by Peter Read.
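A small sketch of the Brewster-angle calculation (the function name is our own; the refractive indices are the usual textbook values for water and diamond, taken as approximate):

```python
import math

def brewster_angle_deg(n1: float, n2: float) -> float:
    """Brewster angle in degrees from the normal, for light going from n1 into n2."""
    return math.degrees(math.atan(n2 / n1))

# Air to water (n ≈ 1.33) and air to diamond (n = 2.417).
print(round(brewster_angle_deg(1.0, 1.33), 1))   # → 53.1
print(round(brewster_angle_deg(1.0, 2.417), 1))  # → 67.5
```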
Practice Paper 2 Question 7
A ternary tree is a collection of nodes, each having 0, 1, 2 or 3 children, where one node is not the child of any node and each of the other nodes is a child of exactly one other node. What are the maximum and minimum possible depths of a ternary tree containing \(n\) nodes?
Related topics

Hints

Hint 1: What configuration should the tree have in order to have the maximum depth?
Hint 2: ... that is, in order to be as tall as possible?
Hint 3: What configuration should the tree have in order to have the minimum depth?
Hint 4: ... that is, in order to be as wide as possible?
Hint 5: For the minimum case, how many nodes are there at depth \(i\) from the top?
Hint 6: ... what can you say about the relationship between the number of nodes at depth \(i\) and depth \(i-1?\)
Hint 7: ... and what about the total sum of the nodes in this particular configuration?
Hint 8: ... an arbitrary \(n\) means the last row isn't necessarily completely filled, hence \(n\) is only upper bounded rather than equal to the above sum. How can you extract the depth \(d\) from this inequality?

Solution
Maximum depth is achieved when all nodes are in a line, so there is only one node at each particular depth. Hence the maximum depth is \(n.\)
Minimum is achieved by making the tree as wide as possible, i.e. when each node has 3 children. The maximum number of nodes at depth \(i\) is \(3^i,\) until we reach a sum of \(n.\) So, for a total depth \(d\) (filled levels at depths \(0\) through \(d-1\)) we have: \[ \begin{align} n &\leq \sum_{i=0}^{d-1}3^i \\ &= \frac{3^d - 1}{3 - 1}. \end{align} \] This rearranges to \(d \geq \log_3(2n+1).\) We want the smallest such integer (next largest integer), which means \(d= \lceil \log_3(2n+1) \rceil,\) where \(\lceil \cdot \rceil\) is the ceiling function.
Note: counting either nodes or edges would be acceptable as a correct answer.
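A small sketch (the function name is our own) that computes both extremes, counting depth in levels of nodes as the solution does; the minimum-depth loop grows the capacity \((3^d-1)/2\) in integer arithmetic, which sidesteps floating-point trouble with the logarithm:

```python
def ternary_depths(n: int) -> tuple[int, int]:
    """(min, max) depth of a ternary tree with n nodes, counting levels of nodes."""
    max_depth = n                 # all nodes arranged in a single line
    d, capacity = 0, 0
    while capacity < n:           # capacity of d full levels is (3**d - 1) // 2
        capacity = capacity * 3 + 1
        d += 1
    return d, max_depth           # d is the smallest depth whose capacity reaches n

print(ternary_depths(4))   # → (2, 4): a root plus its 3 children fit in 2 levels
print(ternary_depths(13))  # → (3, 13): 1 + 3 + 9 nodes exactly fill 3 levels
```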
I am working on a multiple part question for an introductory Real Analysis course. I have part of it done, but I have some problems.
Let $0 < y_1 < x_1$, and set $$x_{n+1}=\frac{x_n +y_n}{2}, y_{n+1}=\sqrt{x_n y_n}$$ (a) Prove that $0<y_n <x_n$ for all $n \in \mathbb{N}$
For part (a) I believe I have proven $y_n < x_n$, but I am not sure if what I have is sufficient.
$$\sqrt{x_{n-1}y_{n-1}} < \frac{x_{n-1}+y_{n-1}}{2}$$
Rearranging yields
$$0<x_{n-1}+y_{n-1}-2\sqrt{x_{n-1}y_{n-1}}$$
Factoring,
$$0 < \left ( \sqrt{x_{n-1}} - \sqrt{y_{n-1}} \right )^{2}$$
So the above inequality should always hold, as long as $x_{n-1} \neq y_{n-1}$, which I know to be the case for $n=2$.
(b) Prove that $y_n$ is increasing and bounded above, and that $x_n$ is decreasing and bounded below.
I have shown that $y_n$ is increasing. I need to know when $y_{n+1} > y_{n}$ for $n \in \mathbb{N}$. So,
$$\sqrt{x_n y_n}>y_n$$
This results in $x_n > y_n$. In other words, $y_n$ is increasing when it is less than $x_n$. I will have shown in part (a) that this is always true, and thus $y_n$ is monotone increasing. I have not been able to prove that $y_n$ is bounded above.
I have been able to prove that $x_n$ is decreasing, but not that it is bounded below. The inequality $x_{n+1} < x_n$ yields $y_n < x_n$, which I will have proven to be true for all $n \in \mathbb{N}$.
(c) Prove that $x_{n+1} - y_{n+1} < \frac{x_1 - y_1}{2^n}$ for $n \in \mathbb{N}$
I know that $x_1 > y_1$ and $x_{n+1} > y_{n+1}$. I divide the first inequality by $2^n$ which yields
$$\frac{x_1}{2^n} > \frac{y_1}{2^n}$$
I've added the inequalities but clearly it does not help
$$y_{n+1} - x_{n+1} < \frac{x_1}{2^n} - \frac{y_1}{2^n}$$
(d) Show that $\lim_{n \to \infty} x_n = \lim_{n \to \infty} y_n$
Assuming I am able to prove that $x_n$ is decreasing and bounded below, and $y_n$ is increasing and bounded above, I can use the Monotone Convergence Theorem.
$$\lim_{n \to \infty} x_n = L$$ $$\lim_{n \to \infty} y_n = W$$
$$L = \lim_{n \to \infty} x_{n+1} = \lim_{n \to \infty} \left ( \frac{x_n}{2}+\frac{y_n}{2} \right )= 1/2 \cdot \lim_{n \to \infty} x_n + 1/2 \cdot \lim_{n \to \infty} y_n $$
Which yields $L=W$. Any help with what I've missed would be great.
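Not a proof, but a quick numeric experiment with the recurrence (starting values are arbitrary, subject to $0 < y_1 < x_1$) illustrates what parts (a) through (d) claim:

```python
# Iterate the recurrence and watch both sequences squeeze together.
x, y = 5.0, 1.0
for n in range(8):
    x, y = (x + y) / 2, (x * y) ** 0.5
    assert y <= x + 1e-12      # part (a): y_n <= x_n (up to float rounding)
assert abs(x - y) < 1e-9       # parts (c)/(d): the gap collapses, the limits agree
print(x)  # ≈ 2.604, the common limit (the arithmetic-geometric mean of 5 and 1)
```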
Practice Paper 2 Question 8
A polynomial \(f(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0,\) with integer coefficients \(a_i,\) has roots at \(1, 2, 4, \ldots, 2^{n-1}.\) What possible values can \(f(0)\) take?
Related topics

Hints

Hint 1: How else can you write a polynomial when you know all of its roots?
Hint 2: ... specifically, a polynomial that has a root at \(x_0\) is divisible by the binomial \(x-x_0.\) How does this help you answer the first hint?
Hint 3: ... more specifically, \(x-x_0\) is a factor of \(f(x),\) i.e. \(f(x)=(x-x_0)g(x)\) where \(g(x)\) is a polynomial of degree \(n-1.\) How does this help you answer the first hint?
Hint 4: Finding \(f(0)\) is the same as evaluating this new expression at \(0.\) What do you obtain?
Hint 5: What manipulations can you perform on exponents when the bases are equal?
Hint 6: Specifically, try writing the product as a single number raised to a power, where the latter is an expression.
Hint 7: Looking at the expression (sum) in the power, do you notice a familiar relationship between its terms?

Solution
Seeing as the coefficient of \(x^n\) is 1, we can write \(f(x)=\prod_{i=0}^{n-1}(x - 2^i).\) Thus, \[ \begin{align} f(0)&=\prod_{i=0}^{n-1}-2^i \\&= \prod_{i=0}^{n-1}(-1)\cdot2^i \\&= (-1)^n\;2^{\sum_0^{n-1}i} \\&= (-1)^n\;2^{n(n-1)/2}. \end{align} \] There are no duplicate roots since \(f\) is of degree \(n\) and we were told it has \(n\) distinct roots.
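A brute-force check of this closed form for small \(n\), multiplying out the product form directly:

```python
# Verify f(0) = (-1)^n * 2^(n(n-1)/2) by evaluating prod (x - 2^i) at x = 0.
for n in range(1, 8):
    f0 = 1
    for i in range(n):
        f0 *= (0 - 2**i)
    assert f0 == (-1)**n * 2**(n*(n-1)//2)
print("formula verified for n = 1..7")
```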
Back Propagation: A Mathematical and Intuitive Explanation

Author: Oluwole Oyetoke (18th August, 2017)

BACK PROPAGATION
Oftentimes we hear about the backpropagation technique used during the training process of ANNs. At the snap of a finger, we can utilize backpropagation for neural net training in tools such as TensorFlow, MatConvNet, Caffe, etc. For me, I decided to go a little bit deeper to understand what really goes on underneath.
Diagram 1:Back Propagation At A Glance
The backward propagation of errors, or backpropagation, is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. After the pioneering work of Rosenblatt and others, no efficient learning algorithm for multilayer or arbitrary feedforward neural networks was known. This led some to the premature conclusion that the whole field of ANN had reached a dead-end. The rediscovery of the backpropagation algorithm in the 1980s, together with the development of alternative network topologies, led to the intense outburst of activity which put neural computing back into the mainstream of computer science. The backpropagation algorithm was a major milestone in machine learning because, before it was discovered, optimization methods were extremely unsatisfactory. One popular method was to perturb (adjust) the weights in a random, uninformed direction (increase or decrease) and see if the performance of the ANN improved. If it did not, one would attempt to either go in the other direction, reduce the perturbation size, or do a combination of both. Another attempt was to use Genetic Algorithms to evolve the performance of the neural networks. Unfortunately, in both cases, without (analytically) being informed of the correct direction, results and efficiency were suboptimal. This was where the backpropagation algorithm came into play, repeating a two-phase cycle (propagation and weight update) through which the network is able to learn on its own with fewer human trial-and-error approaches. Essentially, the back propagation process can be intuitively viewed as the steps below.
1. Feed-forward computation
2. Back propagation to the output layer
3. Back propagation to the hidden layer
4. Weight updates
When input data is presented to a Neural Network, it is propagated layer-by-layer through the network up to the output layer. This is called forward propagation. Using a cost function, the output of the network is then compared to the desired output and an error value is calculated for each of the neurons in the output layer. The error values are then propagated backwards and used to adjust the weight value of each of the neurons which make up the layers of the neural network. Backpropagation uses these error values (from each of the output neurons) to compute the gradient of the cost function with respect to all the weights in the network. With this, as the network is trained, the neurons in the intermediate layers organize themselves in such a way that the different neurons learn to recognize different characteristics of the total input space. After the training process, when a previously unseen input pattern is presented to the network, neurons in the hidden layer of the network will respond with an active output if the input contains already learned features. It is important to note that backpropagation requires a known output for every input value to the network in order to calculate the cost function's gradient. This explains why it is usually referred to as a supervised learning method.
Diagram 2 in this post shows a graphical representation of the gradient descent operation on the cost function during the back-propagation process. In summary, with the backpropagation methodology, the loss function calculates the difference between the output of the network after being fed with input training samples and the network's expected output. Note that these outputs are numeric values and are mainly dependent on the various mathematical computations within the various layers of the NN.
ANN Weight Update Computation (cost function)
In the ANN (supervised learning), the output of the network’s computation is compared to the desired output and by using a desired cost (loss) function an error value is calculated for each of the neurons in the output layer. There are different kinds of cost functions that can be used, some of which are listed below
Mean Squared Error (MSE) Cost Function
Cross Entropy Cost Function
Kullback-Leibler Divergence
Exponential Cost
Hellinger Distance
Itakura–Saito Distance
Cost functions, as the name implies, are functions which help us determine the level of divergence of an output compared to a desired output, consequently helping us draw a line of best fit through a stream of data. The cost function measures how far a mathematical hypothesis is from a set of given inputs and outputs, and we seek the hypothesis that minimizes it. (1) below shows the hypothesis formula for a linear regression with one variable, which predicts that output y is a linear function of input x.
$$h_\theta(x) = {\theta}_0 + {\theta}_1x \dots (1)$$
Where: $$h_\theta(x) = \text{hypothesis}$$

$$\theta_i = \text{parameters of the model}$$

$$x = \text{input of the model}$$
(2) below describes the hypothesis for data set with more than one input variable
$$ h_\theta(x^i) = {\theta}_0 + {\theta}_1x^i \dots(2) $$
The cost function seeks to find a valid set of 𝜃 so that h𝜃(x) is close to output y for every piece of input x supplied to the system, i.e. in a supervised learning environment, therefore minimizing the error between the hypothesis and the reality by choosing suitable 𝜃. This can be achieved by (3) below
$$ \min\limits_{\theta_0, \theta_1} J(\theta_0, \theta_1), \qquad J(\theta_0, \theta_1) = {1 \over 2} \sum_{i=1}^m {(h_\theta (x^i) - y^i)^2} \dots (3)$$
$$m = \text{number of training samples}$$
Thinking intuitively through the algorithm of this function, different values of 𝜃 which make up the hypothesis are chosen and tested in equation (3) above to derive different cost values. This explains why it is also called the squared error function. The square isn't there for no reason, as it makes the result quadratic. We know that a quadratic function, when plotted, will always have a visible 'u' shape, making it convex, with only one minimum, unlike non-quadratic graphs which could have local and global minima. This feature will come in handy should the gradient descent algorithm need to be used. For the ANN, the hypothesis h𝜃 and output y are represented by the actual output of the Neural Network (NN) and the targeted output of the network. The entire error calculation for one sweep of training data across the network (1 epoch) is realized by analysing the error/difference at the output nodes, i.e. comparing the difference in the network's output value for each of the output node(s) n against the targeted/expected output. This error is mathematically represented by (4) below
$$E_p = {1 \over 2} \sum_1^n (T_n-N_n)^2 \dots (4)$$
$$N_n = \text{network values}$$

$$T_n = \text{target values}$$
P represents the sweep number, e.g. p=1 represents the first sweep of training data across the NN. In other words, to derive the total error over the entire training iterations, equation (5) below can be used
$$ E = \sum_1^p E_p = {1 \over 2} \sum_1^p \sum_1^n ({T_n}^p - {N_n}^p)^2 \dots (5) $$
E is the total error, p ranges over all forward propagation iterations, and n is the number of output nodes. A normalized version of the total error is given by the MSE in (6) below
$$E = \sum_1^p E_p = {1 \over 2pn} \sum_1^p \sum_1^n ({N_n}^p-{T_n}^p)^2 \dots (6)$$
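Equation (6) translates directly into code; a small sketch (the function name and data layout here are illustrative assumptions, not from the post):

```python
# Normalized error of equation (6): average squared difference between
# network outputs and targets over p sweeps and n output nodes.
def mse(outputs, targets):
    """outputs, targets: equal-shaped lists of lists (one row per sweep)."""
    p, n = len(outputs), len(outputs[0])
    total = sum((o - t) ** 2
                for row_o, row_t in zip(outputs, targets)
                for o, t in zip(row_o, row_t))
    return total / (2 * p * n)

print(mse([[1.0, 0.0]], [[0.5, 0.5]]))  # → 0.125
```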
By computing the partial derivative of the total loss with respect to the individual weights at the connection between each of the individual neurons that make up the system, we can obtain how much a change in that weight value affects the total error E. The goal of backpropagation is to update each of the weights in the network so that they cause the actual output to be closer the target output, thereby minimizing the error for each output neuron and the network as a whole. However, to get the most minimal error, it is wise to also get the most minimum value from all the possible cost functions generated for a set of data. This can be gotten by using gradient descent which is a more general algorithm which can be used to minimize cost functions and even other functions. There exist also other algorithms which can be used in place of the gradient descent which are
Newton's Method: the objective of this method is to find better training directions by using the second derivatives of the loss function.
Quasi-Newton Method: this method builds up an approximation to the Hessian using only information from the first derivatives of the loss function.
Conjugate Gradient: can be regarded as something intermediate between gradient descent and Newton's method. Here, the training directions are conjugated with respect to the Hessian matrix.
Levenberg-Marquardt algorithm: also known as the damped least-squares method. It works with the gradient vector and the Jacobian matrix.
The slowest training algorithm is usually gradient descent, but it is the one requiring the least memory. On the contrary, the fastest one might be the Levenberg-Marquardt algorithm, but it usually requires a lot of memory. A good compromise might be the quasi-Newton method.
To minimize the total network output error using gradient descent, every time we want to update our weights we subtract from the current weight the derivative of the cost function with respect to that weight, scaled by a learning rate. In the case where there is more than one neural connection to the node, the weight values have to be updated simultaneously.
$$w := w - \alpha{\partial E \over \partial w} \dots (7)$$
Where: $$ w = \text{weight} $$

$$ E = \text{cost function} $$

$$ \alpha = \text{learning rate} $$
Diagram 2:Back Propagation Per Neuron
For an arbitrary node, as in the diagram above, which has an input connection weight w, net input ‘in’, an output of ‘o’, an expected target output of t, and which constitutes part of a network with an MSE of E, the differential of the MSE with respect to its input connection weight w is given by (8) below, based on the chain rule
$$ {\partial E \over \partial w}={\partial E \over \partial o} * {\partial o \over \partial in} * {\partial in \over \partial w} \dots (8) $$
In words, this can be read out as:
“How much does the total error change with respect to the output * how much does the output ‘o’ change with respect to its total net input * how much does the total net input ‘in’ change with respect to the input weight w.”
By using the quotient rule given in (9) below, the individual differentials in equation (8) above are reduced as described in (10), (11) and (12) below
$$ \frac{d}{dx}\left[ \frac{f(x)}{g(x)} \right] = {{g(x)f'(x) - f(x)g'(x)} \over {{(g(x))}^2}} \dots (9)$$
$$ {\partial E \over \partial o} = {-(t-o)} \dots (10)$$
$$ {\partial o \over \partial in} = {o(1-o)} \dots (11)$$
$${\partial in \over \partial w} = in \dots (12)$$
(8) above can be re-written as (13) below
$$ {\partial E \over \partial w} = {-(t-o) * o(1-o) * in} \dots (13)$$
Therefore to decrement the error and adjust the weight for every node, the new weight w(+1) is given by (14) below
$$w(+1) = w - \alpha (-(t-o) * o(1-o) * in) \dots (14) $$
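A minimal, literal sketch of equations (13) and (14) for a single sigmoid node with one input connection; the input signal, target and learning rate below are arbitrary illustrative choices, not values from the post:

```python
import math

def update_weight(w, x, t, lr):
    o = 1 / (1 + math.exp(-w * x))       # forward pass: sigmoid of the net input
    grad = -(t - o) * o * (1 - o) * x    # equation (13): dE/dw via the chain rule
    return w - lr * grad                 # equation (14): one gradient-descent step

w = 0.5
for _ in range(500):                     # repeated sweeps drive the output to t
    w = update_weight(w, x=1.0, t=0.9, lr=5.0)
o = 1 / (1 + math.exp(-w))
print(round(o, 2))  # → 0.9: the node has learned to reproduce the target
```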
Once a complete back propagation has been carried out and the weight adjusted, a new sample is fed to the network and its output is supervised. Effort is made to correct errors detected using the same back propagation technique until the near perfect classifier is generated. As we know, a supervised learning algorithm analyses the training data and produces an inferred function, which is called a classifier. This inferred function (made up of the network’s weights) should predict the correct output value for any valid input object.
By iterating through the network multiple times and minimizing the error effect of each input weight on the total error of the network, the learning algorithm gradually descends towards the least possible error value until convergence is reached. The learning rate controls how steep/fast the function steps down the gradients surface. It might be very difficult for the system to converge when the learning rate chosen is too large, however, on the other hand, choosing an extremely minimal learning rate would make the system take longer time to converge.
Intuitive Analysis of The Gradient Descent Operation
As was highlighted in the sections above, the MSE formula is designed with a square in it (the factor of 2 this produces on differentiation is later cancelled out by the multiplication by 0.5) so that the result of the function is quadratic. It is known that a quadratic function, when plotted, will always have a visible ‘u’ shape, making it convex, with only one minimum, unlike non-quadratic graphs which could have local and global minima. As gradient descent adjusts the weight values through the network, it systematically converges towards a set of weight values which generate the least error difference. Note that, graphically, the least error difference is found at the lowest point of the ‘u’ shaped graph. What the derivative in the gradient descent formula does is simply pick a point on the MSE plot based on the selected weight value and find the slope of the line tangent to that point. In other words, (7) above can be re-written as (15) below.
$$ w := w - \alpha \cdot (\text{positive slope}) \dots (15)$$
Therefore, at every iteration, the value of w reduces towards the minimum. At the local/global minimum, the derived slope will be zero, therefore bringing the system to convergence. There is no need to decrease the learning rate over time, because as one approaches the local minimum, the slope approaches 0 and the step becomes smaller.
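The update rule above can be sketched on a one-parameter model. The data, the model $y = w \cdot x$, and the learning rate below are illustrative choices of mine, not taken from the article:

```python
# Minimal gradient descent on the MSE of a one-parameter model y_hat = w * x.
# The data is generated from y = 3x, so the true minimum is at w = 3.
xs = [0.5, 1.0, 1.5, 2.0, 2.5]
ys = [3 * x for x in xs]

w = 0.0        # initial weight
alpha = 0.1    # learning rate

for _ in range(500):
    # dMSE/dw for MSE = 0.5 * mean((w*x - y)^2); the 0.5 cancels the 2
    grad = sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= alpha * grad   # w := w - alpha * slope, as in (15)
```

Because the slope itself shrinks to zero near the minimum, the steps vanish on their own, which is why a fixed learning rate suffices here.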
Diagram 3: Gradient Descent
It is important to note that there are many laws (algorithms) used to implement the adaptive feedback required to adjust the weights during training. Backpropagation just happens to be one of the most commonly used and best known. Training the ANN requires careful analysis to ensure that the network is not over-trained. When the system has finally been trained correctly and no further learning is needed, the weights can, if desired, be "frozen." In some systems, this finalized network is then turned into hardware so that it can be fast. Other systems don't lock themselves in but continue to learn while in production use.
Diagram 4: Supervised Training Using Back Propagation
|
While studying about Hückel theory, I got accustomed to the approximation of making the overlap matrix an identity matrix; that is making the off-diagonal elements zero as $S_{AB}= S_{BA}= 0\;;$ this implies the use of orthogonal base states of AOs. The off-diagonal elements of the Hamiltonian matrix are still taken as constants that may be non-zero: $H_{AB}= H_{BA}= \beta_{AB}$, where $\beta_{AB}$ is a negative quantity.
Then I wondered why it isn't the case that $H_{AB}= H_{BA}= 0$ strictly, as it represents the expectation value - the average energy contribution of the overlapping region of AOs $A$ and $B$. But, it would seem that overlapping is not actually possible, as is evident from the fact that $S_{AB}$ is zero.
In this question, when I asked about this, ifilot replied:
Indeed, this seems rather counterintuitive, but it is not. Another way of looking at $S_{ij}=δ_{ij}$ is saying that all atomic orbitals are orthonormal to each other. So if you would evaluate the overlap integral of two different orbitals, it would result in zero. This does not necessarily mean that evaluating the Hamiltonian integral $\langle ϕ_i|\hat H|ϕ_j\rangle$ results in zero, because
first applying the Hamiltonian operator on the wave function and then integrating might result in a non-zero outcome.
I know he is right in this point; but I'm still having trouble seeing how this is possible.
As Peter Atkins wrote in his book Elements of Physical Chemistry:
[...] The integral $H_{AB}$ depends on both $\psi_A$ and $\psi_B$, and we can interpret it as
the contribution to the energy due to the accumulation of electron density where the two atomic orbitals overlap, including, for instance, the Coulombic attraction between the extra accumulation of electron density and both nuclei.
Evidently, this phrase makes clear that $H_{AB}$ is non-zero iff there is overlap between the AOs.

So, how can $H_{AB}\ne 0$ and $S_{AB}= 0$ both be true at the same time? And what are the physical implications? While the former means there is overlap, the latter means the opposite; it seems really contradictory.
|
Practice Paper 2 Question 9
Using a fair coin we can generate random integers in \(\{1,2,3,4\}\) with equal probability by doing:
\((a)\) toss coin; if heads go to \((b),\) otherwise go to \((c)\)
\((b)\) toss coin; if heads output \(1,\) otherwise output \(2\)
\((c)\) toss coin; if heads output \(3,\) otherwise output \(4\)
By altering just one of the lines \((a),\) \((b)\) or \((c),\) we can generate random integers in \(\{1,2,3\}\) with equal probability. Identify which line and give the new version. Prove that it is correct.
Warm-up Questions

You roll a die \(3\) times. What is the probability of getting \(1,2\) and \(3\) in any order?

A geometric series has terms \(3, 6, 12, 24, 48, \ldots.\) Find an expression for the \(n^{th}\) term in the sequence, in terms of \(n.\)

Hints

Hint 1: Which of the \(3\) lines must we definitely change?
Hint 2: You are allowed to go to another step, as done in step (a).
Hint 3: ... you are also allowed to output a number for heads, and go to another step for tails.
Hint 4: If you change \((c)\) to output 3 for heads and go to another step for tails, which step should that be?
Hint 5: Focus on the possible sequences of coin tosses that will output \(1\) (for now). Notice any pattern?
Hint 6: Try expressing the probability of each sequence of coin tosses in terms of the length of the sequence.
Hint 7: Sequences are independent. What can you say about the total probability of outputting \(1?\)
Hint 8: What about the probabilities of outputting \(2\) and \(3?\)

Solution
Since we are not outputting \(4,\) we must change the second part of \((c)\) to go to another step instead of outputting \(4.\) Going to any step other than \((a)\) would make the probabilities of \(1, 2, 3\) unequal (this is easy to verify). Hence the only viable option is to go to \((a)\) instead of outputting \(4:\)
\((c)\) toss coin, if heads output \(3\) otherwise go to \((a)\)
To prove the probabilities are indeed equal, let's consider first the probability \(P(1)\) of outputting \(1,\) which we get when we flip \(HH\) or \(TTHH\) or \(TTTTHH\) and so on. The probability of flipping any particular sequence of heads or tails of length \(k\) is \(\big(\frac{1}{2}\big)^k.\) Therefore \(P(1)\) is just the sum of the even powers of \(\frac{1}{2}\) (infinite geometric series): \[ P(1) = \sum_{n=1}^{\infty}\frac{1}{4^n} = \frac{1/4}{1-1/4} = \frac{1}{3}. \] The probabilities for \(2\) and \(3\) are also \(\frac{1}{3}\) by the same argument: the sequences outputting \(2\) are \(HT, TTHT, \ldots\) and those outputting \(3\) are \(TH, TTTH, \ldots,\) each again summing to \(\frac{1}{3}.\)
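The modified procedure can also be checked by simulation. The sketch below (with an arbitrary seed and trial count of my choosing) follows steps (a)-(c) literally:

```python
import random

def draw(rng):
    """Follow the modified steps: (a) heads -> (b), tails -> (c);
    (b) heads -> output 1, tails -> output 2;
    (c) heads -> output 3, tails -> back to (a)."""
    while True:                      # each pass through the loop is step (a)
        if rng.random() < 0.5:       # heads at (a): go to (b)
            return 1 if rng.random() < 0.5 else 2
        if rng.random() < 0.5:       # tails at (a), heads at (c)
            return 3
        # tails at (c): go back to (a)

rng = random.Random(0)               # fixed seed for reproducibility
N = 300_000
counts = {1: 0, 2: 0, 3: 0}
for _ in range(N):
    counts[draw(rng)] += 1
```

Each empirical frequency should come out close to \(\frac{1}{3}.\)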
|
Piezoelectric Materials: Understanding the Standards
Standards form an integral part of the work we do as engineers, providing a common language for communicating complex information. But standards committees are not omnipotent and sometimes revised standards are not universally adopted. This has happened in the case of the standards for piezoelectric materials, particularly for quartz. This blog post explains the multiple standards used to describe piezoelectrics in literature. Although the particular focus of this post is on quartz, the standards described apply for any piezoelectric material.
Piezoelectric Materials
Piezoelectric materials become electrically polarized when strained. From a microscopic perspective, the displacement of charged atoms within the crystal unit cell (when the solid is deformed) produces a net electric dipole moment within the medium. In certain crystal structures, this combines to give an average macroscopic dipole moment and a corresponding net electric polarization. This effect, known as the direct piezoelectric effect, is always accompanied by the inverse piezoelectric effect, in which the solid becomes strained when placed in an electric field.
Several material properties must be defined in order to fully characterize the piezoelectric effect within a given material. The relationship between the material polarization and its deformation can be defined in two ways: the strain-charge or the stress-charge form. Different sets of material properties are required for each of these equation forms.
To complicate things further, there are two standards used in the literature: the IEEE 1978 Standard and the IRE 1949 standard, and the material properties take different forms within the two standards. IEEE actually revised the 1978 standard in 1987, but this version of the standard contained a number of errors and was subsequently withdrawn. Confused yet? I was when I first started reading the literature!
Today’s blog post describes in detail the different equation forms and standards, with a focus on the particular case of quartz — the material that causes the most confusion. In both academia and industry, the quartz material properties are commonly defined within the older 1949 IRE standard. Meanwhile, other materials are now almost always defined using the 1978 IEEE standard. To make matters worse, it is not common to indicate which standard is being employed when specifying the material properties.
Two Equation Forms: The Strain-Charge and the Stress-Charge Form
The coupling between the structural and electrical domains can be expressed with stress as the independent variable, using the permittivity at constant stress (the strain-charge form), or with strain as the independent variable, using the permittivity at constant strain (the stress-charge form). The two forms are given below.
Strain-Charge Form
The strain-charge form is written as:
\begin{array}{l}
\bf{S}=s_E \bf{T}+d^T \bf{E} \\[3mm]
\bf{D}=d \bf{T}+\epsilon_0 \epsilon_{rT} \bf{E}
\end{array}
where S is the strain, T is the stress, E is the electric field, and D is the electric displacement field. The material parameters s_E, d, and ε_rT correspond to the material compliance, coupling properties, and relative permittivity at constant stress; ε_0 is the permittivity of free space. These quantities are tensors of rank 4, 3, and 2, respectively. The tensors, however, are highly symmetric for physical reasons, so they can be represented as matrices within an abbreviated subscript notation, which is usually more convenient. In the literature, the Voigt notation is almost always used.
Within this notation, the above two equations can be written as:
\begin{array}{l}
\left(
\begin{array}{l} S_{xx} \\ S_{yy} \\ S_{zz} \\ S_{yz} \\ S_{xz} \\ S_{xy} \\ \end{array} \right) = \left( \begin{array}{llllll}s_{E11} & s_{E12} &s_{E13} &s_{E14} &s_{E15} &s_{E16}\\ s_{E21} & s_{E22} &s_{E23} &s_{E24} &s_{E25} &s_{E26}\\ s_{E31} & s_{E32} &s_{E33} &s_{E34} &s_{E35} &s_{E36}\\ s_{E41} & s_{E42} &s_{E43} &s_{E44} &s_{E45} &s_{E46}\\s_{E51} & s_{E52} &s_{E53} &s_{E54} &s_{E55} &s_{E56}\\s_{E61} & s_{E62} &s_{E63} &s_{E64} &s_{E65} &s_{E66}\\\end{array} \right)\left( \begin{array}{l}T_{xx} \\ T_{yy} \\ T_{zz} \\ T_{yz} \\ T_{xz} \\ T_{xy} \\ \end{array} \right) + \left( \begin{array}{lll} d_{11} & d_{21} & d_{31} \\ d_{12} & d_{22} & d_{32} \\ d_{13} & d_{23} & d_{33} \\ d_{14} & d_{24} & d_{34} \\ d_{15} & d_{25} & d_{35} \\ d_{16} & d_{26} & d_{36} \\ \end{array} \right) \left( \begin{array}{l} E_{x} \\ E_{y} \\ E_{z} \\ \end{array} \right) \\ \left( \begin{array}{l} D_{x} \\ D_{y} \\ D_{z} \\ \end{array} \right) = \left( \begin{array}{llllll} d_{11} & d_{12} &d_{13} & d_{14} & d_{15} & d_{16}\\ d_{21} & d_{22} &d_{23} & d_{24} & d_{25} & d_{26}\\ d_{31} & d_{32} &d_{33} & d_{34} & d_{35} & d_{36}\\ \end{array} \right)\left( \begin{array}{l} T_{xx} \\ T_{yy} \\ T_{zz} \\ T_{yz} \\ T_{xz} \\ T_{xy} \\ \end{array} \right) + \epsilon_0 \left( \begin{array}{lll} \epsilon_{rT11} & \epsilon_{rT12} & \epsilon_{rT13} \\ \epsilon_{rT21} & \epsilon_{rT22} & \epsilon_{rT23} \\ \epsilon_{rT31} & \epsilon_{rT32} & \epsilon_{rT33} \\
\end{array}
\right) \left( \begin{array}{l} E_{x} \\ E_{y} \\ E_{z} \\ \end{array} \right) \\ \end{array}

Stress-Charge Form
The stress-charge form is as follows:
\begin{array}{l}
\bf{T}=c_E \bf{S}-e^T \bf{E} \\[3mm]
\bf{D}=e \bf{S}+\epsilon_0 \epsilon_{rS} \bf{E}
\end{array}
The material parameters c_E, e, and ε_rS correspond to the material stiffness, coupling properties, and relative permittivity at constant strain; ε_0 is the permittivity of free space. Once again, these quantities are tensors of rank 4, 3, and 2, respectively, but can be represented using the abbreviated subscript notation.
Using the Voigt notation and writing out the components gives:
\begin{array}{l}
\left(
\begin{array}{l} T_{xx} \\ T_{yy} \\ T_{zz} \\ T_{yz} \\ T_{xz} \\ T_{xy} \\ \end{array} \right) = \left( \begin{array}{llllll}c_{E11} & c_{E12} &c_{E13} &c_{E14} &c_{E15} &c_{E16}\\ c_{E21} & c_{E22} &c_{E23} &c_{E24} &c_{E25} &c_{E26}\\ c_{E31} & c_{E32} &c_{E33} &c_{E34} &c_{E35} &c_{E36}\\ c_{E41} & c_{E42} &c_{E43} &c_{E44} &c_{E45} &c_{E46}\\c_{E51} & c_{E52} &c_{E53} &c_{E54} &c_{E55} &c_{E56}\\c_{E61} & c_{E62} &c_{E63} &c_{E64} &c_{E65} &c_{E66}\\\end{array} \right)\left( \begin{array}{l}S_{xx} \\ S_{yy} \\ S_{zz} \\ S_{yz} \\ S_{xz} \\ S_{xy} \\ \end{array} \right) + \left( \begin{array}{lll} e_{11} & e_{21} & e_{31} \\ e_{12} & e_{22} & e_{32} \\ e_{13} & e_{23} & e_{33} \\ e_{14} & e_{24} & e_{34} \\ e_{15} & e_{25} & e_{35} \\ e_{16} & e_{26} & e_{36} \\ \end{array} \right) \left( \begin{array}{l} E_{x} \\ E_{y} \\ E_{z} \\ \end{array} \right) \\ \left( \begin{array}{l} D_{x} \\ D_{y} \\ D_{z} \\ \end{array} \right) = \left( \begin{array}{llllll} e_{11} & e_{12} &e_{13} & e_{14} & e_{15} & e_{16}\\ e_{21} & e_{22} &e_{23} & e_{24} & e_{25} & e_{26}\\ e_{31} & e_{32} &e_{33} & e_{34} & e_{35} & e_{36}\\ \end{array} \right)\left( \begin{array}{l} S_{xx} \\ S_{yy} \\ S_{zz} \\ S_{yz} \\ S_{xz} \\ S_{xy} \\ \end{array} \right) + \epsilon_0 \left( \begin{array}{lll} \epsilon_{rS11} & \epsilon_{rS12} & \epsilon_{rS13} \\ \epsilon_{rS21} & \epsilon_{rS22} & \epsilon_{rS23} \\ \epsilon_{rS31} & \epsilon_{rS32} & \epsilon_{rS33} \\
\end{array}
\right) \left( \begin{array}{l} E_{x} \\ E_{y} \\ E_{z} \\ \end{array} \right) \\ \end{array}
The matrices defined in the above equations are the key material properties that need to be defined for a piezoelectric material. Note that for many materials, a number of the elements in each of the matrices are zero and several others are related, as a result of the crystal symmetry.
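The two forms carry the same information: the standard conversion is $c_E = s_E^{-1}$, $e = d\,c_E$, and $\epsilon_0\epsilon_{rS} = \epsilon_0\epsilon_{rT} - d\,e^T$. The sketch below checks this numerically with randomly generated, non-physical placeholder matrices (illustrative only, not real material data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder material data (illustrative, not a real material):
A = rng.standard_normal((6, 6))
sE = A @ A.T + 6 * np.eye(6)             # symmetric positive definite compliance
d = 1e-12 * rng.standard_normal((3, 6))  # small coupling matrix
eps0 = 8.854e-12                         # permittivity of free space
eps_rT = np.diag([4.0, 4.0, 4.5])        # relative permittivity, constant stress

# Strain-charge -> stress-charge conversion
cE = np.linalg.inv(sE)                   # stiffness
e = d @ cE                               # stress-charge coupling
eps_rS = eps_rT - (d @ e.T) / eps0       # relative permittivity, constant strain

# Consistency check for an arbitrary stress T and field E
T = rng.standard_normal(6)
E = rng.standard_normal(3)
S = sE @ T + d.T @ E                     # strain-charge form
D = d @ T + eps0 * eps_rT @ E
T2 = cE @ S - e.T @ E                    # stress-charge form, same physical state
D2 = e @ S + eps0 * eps_rS @ E
```

Both forms reproduce the same stress and electric displacement, as they must.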
Using the international notation for describing crystal symmetry, the symmetry group of quartz is Trigonal 32. The nonzero matrix elements take different values within different standards, which can result in confusion when specifying the material properties for a simulation, especially for quartz, where two different standards are commonly employed.
Finally, there is another complication in the case of quartz. Quartz crystals do not have symmetry planes parallel to the vertical axis. Correspondingly, they occur in two types: left- or right-handed (this is known as enantiomorphism). Each of these enantiomorphic forms results in different signs for particular elements in the material property matrices.
The material property matrices appropriate for quartz and other Trigonal 32 materials are shown below. Note that the symmetry relationships between elements in the matrix hold irrespective of the standard used or whether the material is right- or left-handed.
\begin{array}{cc}
\left(
\begin{array}{cccccc} c_{E11} & c_{E12} &c_{E13} & c_{E14} & 0 & 0\\ c_{E12} & c_{E11} &c_{E13} & -c_{E14} &0 & 0\\ c_{E13} & c_{E13} &c_{E33} & 0 & 0 & 0\\ c_{E14} & -c_{E14} & 0 & c_{E44} & 0 & 0 \\ 0 & 0 & 0 & 0 & c_{E44} & c_{E14}\\ 0 & 0 & 0 & 0 & c_{E14} & \frac{1}{2}\left(c_{E11}-c_{E12}\right)\\ \end{array} \right) & \left( \begin{array}{cccccc} s_{E11} & s_{E12} &s_{E13} & s_{E14} & 0 & 0\\ s_{E12} & s_{E11} &s_{E13} & -s_{E14} &0 & 0\\ s_{E13} & s_{E13} &s_{E33} & 0 & 0 & 0\\ s_{E14} & -s_{E14} & 0 & s_{E44} & 0 & 0 \\ 0 & 0 & 0 & 0 & s_{E44} & 2 s_{E14}\\ 0 & 0 & 0 & 0 & 2 s_{E14} & 2\left(s_{E11}-s_{E12}\right)\\ \end{array} \right) \\
\left(
\begin{array}{cccccc} e_{11} &-e_{11} & 0 & e_{14} & 0 & 0 \\ 0 & 0 & 0 & 0 & -e_{14} & -e_{11}\\ 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right) & \left( \begin{array}{cccccc} d_{11} & -d_{11} & 0 & d_{14} & 0 & 0 \\ 0 & 0 & 0 & 0 & -d_{14} & -2d_{11} \\ 0 & 0 & 0 & 0 & 0 & 0\\ \end{array} \right)
\\
\left(
\begin{array}{ccc} \epsilon_{rS11} & 0 & 0 \\ 0 & \epsilon_{rS11} & 0 \\ 0 & 0 & \epsilon_{rS33} \\ \end{array} \right) & \left( \begin{array}{ccc} \epsilon_{rT11} & 0 & 0 \\ 0 & \epsilon_{rT11} & 0 \\ 0 & 0 & \epsilon_{rT33} \\ \end{array} \right) \\
\end{array}
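The stiffness pattern above is easy to assemble programmatically. A minimal sketch follows; the numeric values are approximate literature values for quartz in GPa and are purely illustrative, and in particular the sign of c_E14 depends on the standard and handedness, as discussed next:

```python
import numpy as np

def trigonal32_stiffness(c11, c12, c13, c14, c33, c44):
    """Assemble the 6x6 stiffness matrix for symmetry class Trigonal 32
    (e.g. quartz) from its six independent constants."""
    c66 = 0.5 * (c11 - c12)   # not independent: fixed by the symmetry
    return np.array([
        [c11,  c12,  c13,  c14,  0.0,  0.0],
        [c12,  c11,  c13, -c14,  0.0,  0.0],
        [c13,  c13,  c33,  0.0,  0.0,  0.0],
        [c14, -c14,  0.0,  c44,  0.0,  0.0],
        [0.0,  0.0,  0.0,  0.0,  c44,  c14],
        [0.0,  0.0,  0.0,  0.0,  c14,  c66],
    ])

# Approximate quartz constants in GPa (illustrative; check your standard
# and handedness before using the sign of c14 in a real model)
C = trigonal32_stiffness(86.8, 7.0, 11.9, -18.0, 105.8, 58.2)
```

The assembled matrix is symmetric, and the symmetry relations between entries hold whichever standard supplied the numbers.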
Two Standards: 1949 IRE and 1978 IEEE
Having defined a set of material properties in terms of matrices that operate on the different components of the stress or the strain in the x,y,z axes system, all that remains is to define a consistent set of axes to use when writing down the material properties.
Correspondingly, all of the standards define a consistent set of axes for each of the relevant crystal classes. Unfortunately, in the particular case of quartz, subsequent standards have not used the same sets of axes, and the adoption of the most recent standard has not been widespread. Therefore, it is important to understand exactly which standard a given set of material properties is defined in.
The two relevant standards are:
The IEEE 1978 standard: This is usually employed for materials other than quartz in most of the literature. Sometimes, it is used to specify the quartz material properties; for example, B. A. Auld’s book Acoustic Fields and Waves in Solids employs this standard.
The IRE 1949 standard: This is usually used for the material properties of quartz in the literature.
The orientation of the axes set with the crystal can be determined by specifying the orientation with respect to the atoms in the unit cell of the crystal (which is not that helpful in practice) or by specifying the orientation with respect to the crystal forms. A crystal form is a set of crystal faces or planes that are related by symmetry. Particular forms commonly appear in crystal specimens found in rocks and are used to identify different minerals.
The Quartz Page has a series of helpful figures for identifying the common crystal forms, termed m, r, s, x, and z, as well as a further page specifying the Miller indices of the corresponding planes. Since the standards typically use crystal forms to orientate the axes, this approach is adopted in the figure below, which shows the two axes sets that relate to the 1978 and 1949 standards. Note that both left- and right-handed quartz are shown in the figure.
As a result of the different crystal axes, the signs of the material properties for both right- and left-handed quartz can change depending on the particular standard employed. The table below summarizes the different signs that occur for the quartz material properties:
(Table: the signs of the material properties s_E14, c_E14, d_11, d_14, e_11, and e_14 for right- and left-handed quartz under the IRE 1949 and IEEE 1978 standards.)

Two Definitions for the Crystal Cut
Usually, piezoelectrics, such as quartz, are supplied in thin wafers that have been cut at a particular angle with respect to the crystallographic axes. Both the 1949 and 1978 standards use the same system to define the orientation of a crystal cut: the orientation of the cut, with respect to the crystal axes, is specified by a series of rotations, using notation that takes the form illustrated below:
Diagram showing how a GT cut plate of quartz is defined in the IEEE 1978 standard. The crystal shown is right-handed quartz.
The first two letters of the notation given in the brackets describe the orientation of the thickness and length of the plate that is being cut from the crystal. From the figure on the left, it is clear that the thickness direction (t) is aligned with the Y-axis and the length direction (l) is aligned with the X-axis. The plate also has a third dimension, its width (w). After the first two letters, a series of rotations are defined about the edges of the plate.
In the example above, the first rotation is about the l-axis, with an angle of -51°. The negative angle means that the rotation takes place in the opposite direction to a right-handed rotation about the axis. Finally, an additional rotation about the resulting t-axis is defined, with an angle (in a right-handed sense) of -45°.
Most practical cuts use one or two rotations, but it is possible to have up to three rotations within the standard, allowing for completely arbitrary plate orientations.
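A single rotation of the plate axes can be sketched with a Rodrigues rotation. The helper names below are my own, and the AT-cut angle of -35.25° uses the IEEE 1978 sign convention discussed later:

```python
import math

def rotate(v, axis, angle_deg):
    """Rodrigues rotation of vector v about a unit axis,
    right-handed sense, angle in degrees."""
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    dot = sum(a * b for a, b in zip(axis, v))
    cross = (axis[1]*v[2] - axis[2]*v[1],
             axis[2]*v[0] - axis[0]*v[2],
             axis[0]*v[1] - axis[1]*v[0])
    return tuple(v[i]*c + cross[i]*s + axis[i]*dot*(1 - c) for i in range(3))

# (YXl) plate: thickness t along the Y-axis, length l along the X-axis
t_axis = (0.0, 1.0, 0.0)
l_axis = (1.0, 0.0, 0.0)

# AT cut, (YXl) -35.25 deg in the IEEE 1978 convention:
# rotate the thickness direction about the l-axis by -35.25 degrees
t_rot = rotate(t_axis, l_axis, -35.25)
```

The rotated thickness direction remains a unit vector and makes an angle of 35.25° with the Y-axis, as expected.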
Note that since the crystallographic axes are defined differently in the 1949 and the 1978 standards, the crystal cut definitions differ between the two. A common cut for quartz plates is the AT cut, which is defined in the two standards in the following manner:
Standard
AT Cut Definition
1949 IRE
(YXl) 35.25°
1978 IEEE
(YXl) -35.25°
The figure below shows how the two alternative definitions of the AT cut correspond to the two alternative definitions of the axes employed in the standards.
The AT cut of quartz is defined as (YXl) 35.25° in the IRE 1949 standard and (YXl) -35.25° in the IEEE 1978 standard. The figure shows the cut defined in a right-handed crystal of quartz. The reason for the difference between the standards is related to the different conventions for the orientation of the crystallographic axes. In the IRE 1949 standard, the rotation occurs in a positive or right-handed sense about the l-axis (which in this case is aligned with the X-axis). As a result of the different axes set employed in the IEEE 1978 standard, the rotation corresponds to a negative angle in this standard.

Next Steps
We have now seen how the two different standards result in different definitions of the material properties and different definitions of the crystal cuts.
In a follow-up blog post, we will explore how to set up a COMSOL Multiphysics model using the two standards. COMSOL Multiphysics provides material properties for quartz using both of the available standards, so it is possible to set up a model using whichever standard you are most familiar with. Stay tuned for that.
Editor’s note: We published a follow-up blog post on this topic on 1/27/16. Read about applying the standards in your COMSOL Multiphysics models here.
|
The final subject we shall consider is the convergence of Fourier series. I shall show two examples, closely linked, but with radically different behaviour.
A square wave,
\(f(x)= 1\) for \(-\pi < x < 0\); \(f(x)= -1\) for \(0 < x < \pi\).
a triangular wave,
\(g(x)= \pi/2+x\) for \(-\pi < x < 0\); \(g(x)= \pi/2-x\) for \(0 < x < \pi\).
Note that \(f\) is the derivative of \(g\).
It is not very hard to find the relevant Fourier series, \[\begin{aligned} f(x) & = & -\frac{4}{\pi} \sum_{m=0}^\infty \frac{1}{2m+1} \sin (2m+1) x,\\ g(x) & = & \frac{4}{\pi} \sum_{m=0}^\infty \frac{1}{(2m+1)^2} \cos (2m+1) x.\end{aligned}\] Let us compare the partial sums, where we let the sum in the Fourier series run from \(m=0\) to \(m=M\) instead of \(m=0\ldots\infty\). We note a marked difference between the two cases. The convergence of the Fourier series of \(g\) is uneventful, and after a few steps it is hard to see a difference between the partial sums, as well as between the partial sums and \(g\). For \(f\), the square wave, we see a surprising result: even though the approximation gets better and better in the (flat) middle, there is a finite (and constant!) overshoot near the jump. The area of this overshoot becomes smaller and smaller as we increase \(M\). This is called the Gibbs phenomenon (after its discoverer). It can be shown that for any function with a discontinuity such an effect is present, and that the size of the overshoot only depends on the size of the discontinuity! A final, slightly more interesting version of this picture is shown in Fig. \(\PageIndex{3}\).
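The overshoot is easy to observe numerically. The sketch below sums the series for the square wave that equals \(+1\) on \((0,\pi)\) (i.e. \(-f\)); the truncation order and sample grid are arbitrary choices:

```python
import math

def square_partial(x, M):
    """Partial sum (m = 0..M) of the square-wave Fourier series, +1 on (0, pi)."""
    return (4 / math.pi) * sum(math.sin((2*m + 1) * x) / (2*m + 1)
                               for m in range(M + 1))

M = 50
xs = [0.0005 * k for k in range(1, 2001)]      # sample the interval (0, 1]
peak = max(square_partial(x, M) for x in xs)   # Gibbs overshoot near the jump
```

The peak stays near 1.18 (about 9% of the jump of size 2) however large M is taken, while away from the jump the partial sums settle to 1.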
|
Have there been examples of seemingly long standing hard problems, answered quite easily possibly with tools existing at the time the problems were made? More modern examples would be nice. An example could be Hilbert's basis theorem. Another could be Dwork's p-adic technique for rationality of zeta functions from algebraic varieties.
I'm sure there are loads. One that comes to mind is the Seven Bridges of Königsberg problem.
Basically, the problem asked: is there a path which you can walk such that you cross every bridge once and only once. (you can't do things like walk halfway onto a bridge and walk back).
It was kinda known, well... conjectured, that there was no solution to this problem, basically because no one had found a solution and no one could prove why or why not.
Until Euler of course. As usual, it is always Euler who saves the day. He showed that there was no answer to this problem, as in, it was impossible to construct a path such that you cross every bridge just once.
The justification ended up being really simple: First, you have to put the problem in terms of the areas in town as opposed to the bridges. From the illustration, you can see that you can split the town into 4; the top, the bottom, the right and the center. Also, you have to realize that in order not to reuse a bridge, every region in town must have an even number of bridges (an entry bridge and an exit bridge). If you don't have an even number of bridges, you get stuck in one of the regions (As in, if there's just 1 bridge, you use that bridge and then get stuck. If there are 3 bridges, you go in through one, go out from the second, and then get stuck upon using the third). So, if you look closely, the regions have all odd numbers of bridges connecting them. (3 connecting to the top region, 5 connecting to the center, 3 connecting to the right and 3 connecting to the bottom)
Therefore, from this you get the idea of Eulerian cycles, and from there loads of properties of graphs, and then graph theory was born. And if it were not for graph theory we'd be lost when it comes to the telephone network and the internet. So, yeah, a long-standing problem with a simple solution and with even more long-standing benefits.
Acknowledgment: About the problem, another possibility would be if you had only 2 regions with an odd number of bridges connecting them, and all the rest must have an even number of bridges. The logic behind this would be that one of those two regions would be your "starting" point and the other would be you "ending" point or finish line. In this case, you would get an Eulerian path instead of a cycle
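The degree-counting argument is short enough to write down directly. The region labels below are my own naming of the four land masses:

```python
from collections import Counter

# The seven bridges of Königsberg between the four regions:
# N (north bank), S (south bank), E (east island), C (central island)
bridges = [("C", "N"), ("C", "N"), ("C", "S"), ("C", "S"),
           ("C", "E"), ("N", "E"), ("S", "E")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd_regions = [r for r, deg in degree.items() if deg % 2 == 1]
has_euler_cycle = len(odd_regions) == 0       # every region must be even
has_euler_path = len(odd_regions) in (0, 2)   # at most two odd regions (ends)
```

All four regions have odd degree (3, 3, 3, and 5), so neither an Eulerian path nor a cycle exists.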
I don't know how old or how famous Tarski's Unique Square Root Problem was when the young László Lovász solved it. Certainly Tarski was very famous, and I'm sure he thought the problem was quite hard. Lovász's solution, while brilliant, was surely much shorter and simpler than Tarski or anyone had expected.
Tarski's Problem. If two finite structures (groups, rings, lattices, graphs, whatever) have isomorphic squares, must they be isomorphic? (The square of a structure $A$ is the direct product $A\times A$.)
Lovász solved Tarski's problem in the affirmative with a clever counting argument, which I will try to reproduce since I'm having trouble finding a good reference. I assume that all structures are finite structures of a fixed type, e.g., finite groups. Let $A,B$ be two structures such that $A\times A\cong B\times B$.
For any structures $X$ and $Y$, let $h(X,Y)$ be the number of homomorphisms, and $i(X,Y)$ the number of injective homomorphisms, from $X$ to $Y$.
Claim 1. For any structure $X$, we have $h(X,A)=h(X,B)$.
Proof. $h(X,A)^2=h(X,A^2)=h(X,B^2)=h(X,B)^2$, so $h(X,A)=h(X,B)$.
Claim 2. For any structure $X$, we have $i(X,A)=i(X,B)$.
Waving of Hands. We can use the inclusion-exclusion principle to express $i(X,A)$ in terms of values of $h(Y,A)$ where $Y$ ranges over quotients of $X$. (I think I could write a correct proof of this, but it would take some laborious typing. Anyway, it's obvious, isn't it?) Since $h(Y,A)=h(Y,B)$ by Claim 1, it follows that $i(X,A)=i(X,B)$.
Now we have $i(A,B)=i(B,B)\ge1$ and $i(B,A)=i(A,A)\ge1$, i.e., we have injective homomorphisms from $A$ to $B$ and from $B$ to $A$. Since the structures are finite, it follows from this that they are isomorphic.
I learned this argument by word of mouth, attributed to Lovász, several decades ago. This paper (which I haven't read) seems to be the source: L. Lovász, Operations with structures, Acta Math. Acad. Sci. Hungar. 18 (1967) 321-328; see Theorem 4.1 on p. 327, which appears to be a generalized version of the Unique Square Root Theorem.
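The key identity $h(X, A\times A) = h(X,A)^2$ behind Claim 1 can be checked by brute force on small graphs, using the direct product (a pair of vertices is an edge iff both coordinates are edges). This is a toy illustration of my own, not Lovász's proof:

```python
from itertools import product

def count_homs(nX, eX, nA, eA):
    """Count graph homomorphisms from X (nX vertices, edge list eX) into A."""
    edgesA = {frozenset(e) for e in eA}
    return sum(
        all(frozenset((f[u], f[v])) in edgesA for u, v in eX)
        for f in product(range(nA), repeat=nX)
    )

def direct_product(nA, eA, nB, eB):
    """Direct product: (a1,b1) ~ (a2,b2) iff a1~a2 in A and b1~b2 in B."""
    edgesA = {frozenset(e) for e in eA}
    edgesB = {frozenset(e) for e in eB}
    idx = {(a, b): a * nB + b for a in range(nA) for b in range(nB)}
    edges = [(idx[(a1, b1)], idx[(a2, b2)])
             for (a1, b1) in idx for (a2, b2) in idx
             if idx[(a1, b1)] < idx[(a2, b2)]
             and frozenset((a1, a2)) in edgesA
             and frozenset((b1, b2)) in edgesB]
    return nA * nB, edges

K3 = (3, [(0, 1), (1, 2), (0, 2)])   # triangle
n2, e2 = direct_product(*K3, *K3)    # K3 x K3
```

For instance, a single edge has 6 homomorphisms into the triangle and 36 = 6² into its square.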
I like the establishment of the independence of Euclid's 5th postulate from the others for the winner in this category. It's arguably the first long-standing problem of pure mathematics, and maybe the oldest example we have of a mathematical community posing a problem to itself and investigating it collaboratively. Perhaps some leniency is required here as to how accessible the "methods" by which it was eventually resolved really would have been to the Greeks, but in the spirit of positive historico-imaginationism I'm going to say they could have done it. I have no doubt Archimedes could have accomplished much of the necessary editing and improvements in rigor to Euclid's Elements, had he been of a mindset and motivation to do so. The harder ingredient is a brilliant mathematician with enough childish instinct to go the distance and say "I don't care if Euclid's axioms are self-evident truths of your world. This is MY world where dragons are real, my sword is magic, and parallel lines can intersect each other." Who knows. Had the Fates granted Plato a star-student interested in mathematics (instead of the metaphysician he got in Aristotle)....
Steiner-Lehmus theorem is a good one: Any triangle that has two equal angle bisectors (each measured from a vertex to the opposite side) is isosceles.
To quote from "Geometry Revisited", by Coxeter and Greitzer: This theorem was sent to the great Swiss geometer Jacob Steiner in 1840 by C.L. Lehmus (whose name would otherwise have been forgotten long ago) with a request for a pure geometrical proof. Steiner gave a fairly complicated proof which inspired many other people to look for simpler methods. Papers on the Steiner-Lehmus theorem appeared in various journals in 1842, 1844, 1848, almost every year from 1854 till 1864, and with a good deal of regularity during the next hundred years.
The proof turns out to be straightforward when converted to contrapositive form, as devised by two English engineers, Gilbert and Macdonnell, in 1963: they showed that (see the diagram below) if in $\Delta ABC,\ B \neq C,$ then $BM \neq CN.$ The proof is given in Geometry Revisited; this is just an outline, but it rests on an easy lemma that if a triangle has two different angles, the smaller angle has the longer internal bisector.
(Diagram: triangle $ABC$ with the angle bisectors $BM$ and $CN$.)
One direct proof involves solving a complicated algebraic equation based on the formula for the length of a triangle's angular bisector. See below.
$$\text{The equation } BM=CN\text{, after squaring both bisector lengths, implies}\\ca\left[1-\left(\frac b{c+a}\right)^2\right]=ab\left[1-\left(\frac c{a+b}\right)^2\right]$$
To summarize, the theorem had a simple proof available using methods of the time (19th century), but was seen as difficult all the way up to well into the 20th century!
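The algebraic route is easy to test numerically with the bisector-length formula $BM^2 = ca\,[1-(b/(c+a))^2]$; the helper names and the two example triangles below are arbitrary choices of mine:

```python
def bisector_sq(adj1, adj2, opp):
    """Squared length of the internal bisector from the vertex whose
    adjacent sides are adj1, adj2 and whose opposite side is opp."""
    return adj1 * adj2 * (1 - (opp / (adj1 + adj2)) ** 2)

def bm_cn(a, b, c):
    """BM^2 from B (adjacent c, a; opposite b) and CN^2 from C
    (adjacent a, b; opposite c), for sides a = BC, b = CA, c = AB."""
    return bisector_sq(c, a, b), bisector_sq(a, b, c)

bm_iso, cn_iso = bm_cn(3.0, 4.0, 4.0)   # isosceles: b = c
bm_sca, cn_sca = bm_cn(4.0, 5.0, 6.0)   # scalene: b != c
```

The bisectors agree exactly in the isosceles case and differ markedly in the scalene one, as the theorem predicts.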
From least likely to fit your criteria to most, in my opinion:
Halting problem; special cases of Fermat's Last Theorem ($n = 3$, $n = 4$, ...). These problems weren't around for very long, but their solutions are fairly easy to understand after you've seen them.

Polynomial-time primality. This problem has been around forever, but the solution is only "relatively" easy; as in, it's simple for those who study number-theoretic algorithms, though it's not introductory material. It is fairly modern, though.

Sum of inverse squares: $\sum_{k = 1}^{\infty} {\frac 1 {k^2}} = \text{???}$
Definitely a winner here. It had been known for a long time that the sum converged to something finite, but no one had any clue what. The solution evaded the best mathematicians for generations until Euler came down from heaven and told us.
Edit: From "A History of PI" by Petr Beckmann
$\begin{align} \sin(x) &= x - \frac {x^3} {3!} + \frac {x^5} {5!} - \frac {x^7} {7!} \cdots \\ &= x \cdot (1 - \frac {x^2} {3!} + \frac {x^4} {5!} - \frac {x^6} {7!} \cdots)\end{align}$
Let $y = x^2$, and consider $\sin(x) = 0$ for $x \ne 0$ (dividing the series by $x$):
$0 = 1 - \frac y {3!} + \frac {y^2} {5!} - \frac {y^3} {7!} \cdots \tag{P}$
We know roots of $\sin(x)$ are $0, \pm \pi, \pm 2 \pi, \cdots$, so the roots of $(P)$ are $(\pi)^2, (2 \pi)^2, (3 \pi)^2 \cdots$.
By theory of equations, the sum of the inverse of the roots of $(P)$ is the negative of the linear coefficient. So $$\frac 1 {\pi^2} + \frac 1 {(2 \pi)^2} + \frac 1 {(3 \pi)^2} + \cdots = \frac 1 {3!}$$
$$\frac 1 {1^2} + \frac 1 {2^2} + \frac 1 {3^2} + \cdots = \frac {\pi ^2} 6$$
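The closed form is easy to corroborate numerically, using the tail bound $\sum_{k>N} 1/k^2 < 1/N$:

```python
from math import pi

# Partial sums of sum_{k>=1} 1/k^2 approach pi^2/6.
def basel_partial(n):
    return sum(1.0 / k**2 for k in range(1, n + 1))

target = pi**2 / 6
for n in (10, 100, 1000):
    # The tail satisfies sum_{k>n} 1/k^2 < integral_n^inf dx/x^2 = 1/n.
    assert abs(basel_partial(n) - target) < 1.0 / n
```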
There is a property of finite polynomials, that if:
$$ 0 = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n = a_n(x - r_1)(x - r_2)\cdots(x - r_n)$$
then
$$-\frac {a_1} {a_0} = \frac 1 {r_1} + \frac 1 {r_2} + \cdots + \frac 1 {r_n}$$
It is a major skip in reasoning to assume that this is true for infinite polynomials, but I think this is what the author was getting at.
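The reciprocal-root identity (with the minus sign written out: $-a_1/a_0 = \sum_i 1/r_i$) checks out on a small example:

```python
# For p(x) = 6 - 5x + x^2 = (x - 2)(x - 3), the sum of reciprocal roots
# equals -a1/a0: 1/2 + 1/3 = 5/6.
a0, a1 = 6.0, -5.0
roots = (2.0, 3.0)
assert abs(sum(1.0 / r for r in roots) - (-a1 / a0)) < 1e-12
```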
Zhang's recent result on the boundedness of prime gaps uses surprisingly simple ideas that were available for a long time before.
The Sylvester-Gallai theorem was posed by Sylvester in 1893 and first proved by Melchior in 1941 (who was apparently even unaware of Sylvester's problem). Several more proofs followed and all (including Melchior's) are fairly elementary.
$\zeta(3)$
I do not know how long it took for Apéry to prove that $\zeta(3)$ is irrational. However, as Alf van der Poorten recounted in his Mathematical Intelligencer article,
Anyhow, I considered ‘A Proof that Euler Missed’ a racy title. It arose after Cohen’s report at Helsinki, with someone (specifically, an old friend I was sitting next to) sourly commenting ‘A victory for the French peasant . . . ’. To this Nick Katz retorted: ‘No __ ! No! This is marvellous! It is something Euler could have done.’
Peaucellier's Linkage
A charming example of an elegant solution to an engineering problem is Peaucellier's Linkage to convert circular motion to linear motion, using circular inversion. James Watt had created a linkage to convert circular motion to nearly linear motion, but it wasn't perfect. Peaucellier, a French army officer, found a method that was perfect. Körner, in his book Fourier Analysis, recounts that Sylvester showed Lord Kelvin a model of it, and:
[H]e 'nursed it as if it had been his own child, and when a motion was made to relieve him of it, replied "No! I have not had nearly enough of it—it is the most beautiful thing I have ever seen in my life"'.
The primality of Fermat numbers. That conjecture was a great failure, albeit the numbers quickly became too big to check by hand.
Van der Waerden's conjecture on the permanent of a doubly stochastic matrix, proved independently by Falikman and Egorychev. It turned out to be an easy consequence of an inequality which had been known for a long time.
The Dinitz conjecture maybe doesn't belong here because it wasn't all that famous, but it was an open problem for over ten years and the proof was really, really trivial.
Finite field Kakeya "conjecture"
Let $\mathbb{F}_q$ be the finite field with $q$ elements and $A\subset \mathbb{F}_q^n$ be a set which contains a line in every possible direction. Then there exists constant $c>0$ (depending only on $n$) such that $$|A|\geq cq^n.$$
Dvir's proof of this is extremely elementary and easy, but nevertheless ingenious. The whole paper is 6 pages long, but one could easily fit the whole argument in less than a page. See here.
Modern example: the polynomial-time constructive proof of the Lovász Local Lemma (LLL), given by Moser and Tardos in 2010. Problem's history: Lovász, along with Erdős, introduced the LLL in their seminal 1975 paper "Problems and results on 3-chromatic hypergraphs and some related questions". That paper only gave existential results. In 1991, Beck gave an algorithmic approach to the LLL, but his algorithmic results fell a constant short of the LLL's existential results.

In 2010, Moser and Tardos proposed an algorithm whose results match the LLL's existential results.
A good example is the proof of the Corona theorem by Thomas Wolff in 1979, while a graduate student at Berkeley. The theorem states that the algebra $H^{\infty}(\mathbb{D})$ has no corona; equivalently, if $f_1,...,f_N\in H^{\infty}(\mathbb{D})$ and $\max_{1\leq j\leq N}|f_j(z)|\geq \delta >0~\forall z\in\mathbb{D}$, then $\exists g_1,...,g_N\in H^{\infty}(\mathbb{D})$ such that \begin{equation*} 1=\sum^{N}_{j=1}f_j(z)g_j(z). \end{equation*}
Despite links with BMO, the problem had remained open for years. Then Wolff provided a proof that is remarkably simple. He summarised his result in a lemma based on bounded solutions of the equation $\overline{\partial}u=f$, and followed Hörmander's suggestion that one should solve the equations in a nonanalytic way and then make them analytic.
For more reading, see the references:
Bounded Analytic Functions by John Garnett
Geometric Function Theory: Explorations in Complex Analysis by Steven Krantz
Another one I like is the algebraic Kakeya conjecture, again due to Thomas Wolff. In an attempt to solve the full Kakeya conjecture, Wolff suggested an algebraic variant (I think the hope was that it could be used to further the full conjecture). It states that for a finite field $\mathbb{F}$ with $E\subset \mathbb{F}^n$ containing a line in every direction, $E$ has cardinality at least $c_n|\mathbb{F}|^n,~c_n>0$. Zeev Dvir (a computer scientist at Princeton) gave a remarkably simple proof of it.
|
I'll give you the numbers. I'm breaking this up into 3 different terms. There's atmospheric drag, what I'll call the "hover" term, and the gravitational potential climb. I will more or less assume a flight directly up. You're welcome to use whatever term for velocity you want, as none of them will be representative. I'll take the Shuttle's speed at halfway to max Q. This is 1000 ft/s, or about 300 m/s.
You would think atmospheric drag would be very difficult. It's actually not. In any case, you would probably use the v^2 relationship for drag. But if you think about where that comes from, it basically assumes that all the air in front of you is accelerated to the speed of your craft (modulo the drag coefficient's departure from unity). So for a good approximation, just take the mass-thickness (I call it mu) of the entire atmosphere, and multiply by the craft's velocity.
Also, I'll use the numbers for Falcon 9, which is a diameter of 3.66 meters and launch mass of 333,400 kg. Yes, a lot of these numbers change over the course of the flight, but in ways that are fairly obvious if you changed this to do numerical integration.
$$\Delta V(\text{drag}) = \tfrac{1}{2} \mu C_d A v / M = (0.5)(10 \text{ tonnes}/\text{m}^2)(0.5)\,\pi\,(3.66/2 \text{ m})^2\,(300 \text{ m/s})/(333.4 \text{ tonnes}) \approx 23.7 \text{ m/s}$$
Wow. That is not much. Maybe velocity should be higher. But still, out of 10 km/s total, this is a tiny amount. Atmospheric drag complicates launches, but not much because of its Delta v value.
Next, the "hover" term. This represents the gravity drag. Again, I'm forced to assume a pretty much upward launch. I'll also compare sea-level to Mt. Everest, at a height of 8,848 m. Not that you'd set up a launchpad there, but we need this to answer the question.
$$\Delta V = g h / v = (9.8 \text{ m/s}^2)(8848 \text{ m})/(300 \text{ m/s}) \approx 289 \text{ m/s}$$
Now this is much more significant. This isn't all of the gravity drag either. It's still sucking your delta v budget after you're out of the atmosphere, until you get to full orbital velocity.
Let's move on to the gravitational potential itself.
$$\Delta V = \sqrt{g h} = \sqrt{(9.8 \text{ m/s}^2)(8848 \text{ m})} \approx 294.5 \text{ m/s}$$
The sum of all these is a ballpark estimate of the benefit you would get from changing your launch location from sea-level to Mt. Everest. Honestly though, you save a comparable amount just by moving it down to the equator, where the Earth's rotation gives you a bigger boost.
Anyway, this is roughly 607 m/s out of a total budget of 10 km/s, so less than 10%. By the rocket equation, this can still make a difference. But then again, actual costs are complicated.
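These back-of-the-envelope numbers are easy to redo in code. A minimal sketch using the same rough inputs quoted above (not flight data):

```python
from math import pi, sqrt

mu = 10_000.0            # kg/m^2, mass-thickness of the whole atmosphere (10 t/m^2)
Cd = 0.5                 # drag coefficient
A = pi * (3.66 / 2)**2   # Falcon 9 frontal area, m^2
M = 333_400.0            # launch mass, kg
v = 300.0                # representative velocity, m/s
g = 9.8                  # m/s^2
h = 8_848.0              # height of Mt. Everest, m

dv_drag  = 0.5 * mu * Cd * A * v / M   # drag term
dv_hover = g * h / v                   # "hover" (gravity drag) term
dv_climb = sqrt(g * h)                 # gravitational potential climb

print(round(dv_drag, 1), round(dv_hover, 1), round(dv_climb, 1))
# drag ~ 23.7 m/s, hover ~ 289.0 m/s, climb ~ 294.5 m/s
```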
|
In Chapter 12 we’ll study partial differential equations that arise in problems of heat conduction, wave propagation, and potential theory. The purpose of this chapter is to develop tools required to solve these equations. In this section we consider the following problems, where \(\lambda\) is a real number and \(L>0\):
Problem 1: \(y''+\lambda y=0,\quad y(0)=0,\quad y(L)=0\)

Problem 2: \(y''+\lambda y=0,\quad y'(0)=0,\quad y'(L)=0\)

Problem 3: \(y''+\lambda y=0,\quad y(0)=0,\quad y'(L)=0\)

Problem 4: \(y''+\lambda y=0,\quad y'(0)=0,\quad y(L)=0\)

Problem 5: \(y''+\lambda y=0,\quad y(-L)=y(L), \quad y'(-L)=y'(L)\)
In each problem the conditions following the differential equation are called boundary conditions. Note that the boundary conditions in Problem 5, unlike those in Problems 1-4, don’t require that \(y\) or \(y'\) be zero at the boundary points, but only that \(y\) have the same value at \(x=\pm L\), and that \(y'\) have the same value at \(x=\pm L\). We say that the boundary conditions in Problem 5 are periodic.
Obviously, \(y\equiv0\) (the trivial solution) is a solution of Problems 1-5 for any value of \(\lambda\). For most values of \(\lambda\), there are no other solutions. The interesting question is this:
For what values of \(\lambda\) does the problem have nontrivial solutions, and what are they?
A value of \(\lambda\) for which the problem has a nontrivial solution is an eigenvalue of the problem, and the nontrivial solutions are \(\lambda\)-eigenfunctions, or eigenfunctions associated with \(\lambda\). Note that a nonzero constant multiple of a \(\lambda\)-eigenfunction is again a \(\lambda\)-eigenfunction.
Problems 1-5 are called eigenvalue problems. Solving an eigenvalue problem means finding all its eigenvalues and associated eigenfunctions. We’ll take it as given here that all the eigenvalues of Problems 1-5 are real numbers. This is proved in a more general setting in Section 13.2.
[thmtype:11.1.1] Problems \(1\)–\(5\) have no negative eigenvalues. Moreover\(,\) \(\lambda=0\) is an eigenvalue of Problems \(2\) and \(5,\) with associated eigenfunction \(y_0=1,\) but \(\lambda=0\) isn’t an eigenvalue of Problems \(1,\) \(3,\) or \(4\).
We consider Problems 1-4, and leave Problem 5 to you (Exercise [exer:11.1.1]). If \(y''+\lambda y=0\), then \(y(y''+\lambda y)=0\), so
\[\int_0^L y(x)(y''(x)+\lambda y(x))\,dx=0;\]
therefore,
\[\label{eq:11.1.1} \lambda\int_0^L y^2(x)\,dx=-\int_0^L y(x)y''(x)\, dx.\]
Integration by parts yields
\[\label{eq:11.1.2} \begin{array}{rcl} \int_0^L y(x)y''(x)\, dx &=& \ y(x)y'(x)\Big|_0^L-\int_0^L (y'(x))^2\,dx \\ &=&\ y(L)y'(L)-y(0)y'(0)-\int_0^L (y'(x))^2\,dx. \end{array}\]
However, if \(y\) satisfies any of the boundary conditions of Problems 1-4, then
\[y(L)y'(L)-y(0)y'(0)=0;\]
hence, Equation \ref{eq:11.1.1} and Equation \ref{eq:11.1.2} imply that
\[\lambda\int_0^L y^2(x)\,dx=\int_0^L (y'(x))^2\, dx.\]
If \(y\not\equiv0\), then \(\int_0^L y^2(x)\,dx>0\). Therefore \(\lambda\ge0\) and, if \(\lambda=0\), then \(y'(x)=0\) for all \(x\) in \((0,L)\) (why?), and \(y\) is constant on \((0,L)\). Any constant function satisfies the boundary conditions of Problem 2, so \(\lambda=0\) is an eigenvalue of Problem 2 and any nonzero constant function is an associated eigenfunction. However, the only constant function that satisfies the boundary conditions of Problems \(1\), \(3\), or \(4\) is \(y\equiv0\). Therefore \(\lambda=0\) isn’t an eigenvalue of any of these problems.
[example:11.1.1] Solve the eigenvalue problem
\[\label{eq:11.1.3} y''+\lambda y=0,\quad y(0)=0,\quad y(L)=0.\]
From Theorem [thmtype:11.1.1], any eigenvalues of Equation \ref{eq:11.1.3} must be positive. If \(y\) satisfies Equation \ref{eq:11.1.3} with \(\lambda>0\), then
\[y=c_1\cos\sqrt\lambda\, x+c_2\sin\sqrt\lambda\, x,\]
where \(c_1\) and \(c_2\) are constants. The boundary condition \(y(0)=0\) implies that \(c_1=0\). Therefore \(y=c_2\sin\sqrt\lambda\, x\). Now the boundary condition \(y(L)=0\) implies that \(c_2\sin\sqrt\lambda\, L=0\). To make \(c_2\sin\sqrt\lambda\, L=0\) with \(c_2\ne0\), we must choose \(\sqrt\lambda=n\pi/L\), where \(n\) is a positive integer. Therefore \(\lambda_n=n^2\pi^2/L^2\) is an eigenvalue and
\[y_n=\sin{n\pi x\over L}\]
is an associated eigenfunction.
For future reference, we state the result of Example [example:11.1.1] as a theorem.
[thmtype:11.1.2] The eigenvalue problem
\[y''+\lambda y=0,\quad y(0)=0,\quad y(L)=0\]
has infinitely many positive eigenvalues \(\lambda_n=n^2\pi^2/L^2\), with associated eigenfunctions
\[y_n=\sin {n\pi x\over L},\quad n=1,2,3,\dots.\]
There are no other eigenvalues.
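The theorem is easy to spot-check numerically. The sketch below verifies the boundary values and the finite-difference residual of \(y''+\lambda_n y\) for a few \(n\), with an arbitrary choice \(L=2\):

```python
from math import sin, pi

L = 2.0
H = 1e-4  # finite-difference step

def residual(n, x):
    """Central-difference residual of y'' + lambda_n*y for y = sin(n*pi*x/L)."""
    y = lambda t: sin(n * pi * t / L)
    lam = (n * pi / L) ** 2
    ypp = (y(x + H) - 2 * y(x) + y(x - H)) / H**2
    return ypp + lam * y(x)

for n in (1, 2, 5):
    assert abs(sin(n * pi * 0 / L)) < 1e-12   # y(0) = 0
    assert abs(sin(n * pi * L / L)) < 1e-9    # y(L) = 0 (up to rounding)
    for x in (0.3, 1.1, 1.7):
        assert abs(residual(n, x)) < 1e-4     # y'' + lambda_n*y ~ 0
```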
We leave it to you to prove the next theorem about Problem 2 by an argument like that of Example [example:11.1.1] (Exercise [exer:11.1.17]).
[thmtype:11.1.3] The eigenvalue problem
\[y''+\lambda y=0,\quad y'(0)=0,\quad y'(L)=0\]
has the eigenvalue \(\lambda_0=0\), with associated eigenfunction \(y_0=1\), and infinitely many positive eigenvalues \(\lambda_n=n^2\pi^2/L^2\), with associated eigenfunctions
\[y_n=\cos {n\pi x\over L},\quad n=1,2,3,\dots.\]
There are no other eigenvalues.
[example:11.1.2] Solve the eigenvalue problem
\[\label{eq:11.1.4} y''+\lambda y=0,\quad y(0)=0,\quad y'(L)=0.\]
From Theorem [thmtype:11.1.1], any eigenvalues of Equation \ref{eq:11.1.4} must be positive. If \(y\) satisfies Equation \ref{eq:11.1.4} with \(\lambda>0\), then
\[y=c_1\cos\sqrt\lambda\, x+c_2\sin\sqrt\lambda\, x,\]
where \(c_1\) and \(c_2\) are constants. The boundary condition \(y(0)=0\) implies that \(c_1=0\). Therefore \(y=c_2\sin\sqrt\lambda\, x\). Hence, \(y'=c_2\sqrt\lambda\cos\sqrt\lambda\,x\) and the boundary condition \(y'(L)=0\) implies that \(c_2\cos\sqrt\lambda\,L=0\). To make \(c_2\cos\sqrt\lambda\,L=0\) with \(c_2\ne0\) we must choose
\[\sqrt\lambda={(2n-1)\pi\over2L},\]
where \(n\) is a positive integer. Then \(\lambda_n=(2n-1)^2\pi^2/4L^2\) is an eigenvalue and
\[y_n=\sin{(2n-1)\pi x\over2L}\]
is an associated eigenfunction.
For future reference, we state the result of Example [example:11.1.2] as a theorem.
[thmtype:11.1.4] The eigenvalue problem
\[y''+\lambda y=0,\quad y(0)=0,\quad y'(L)=0\]
has infinitely many positive eigenvalues \(\lambda_n=(2n-1)^2\pi^2/4L^2,\) with associated eigenfunctions
\[y_n=\sin{(2n-1)\pi x\over2L},\quad n=1,2,3,\dots.\]
There are no other eigenvalues.
We leave it to you to prove the next theorem about Problem 4 by an argument like that of Example [example:11.1.2] (Exercise [exer:11.1.18]).
[thmtype:11.1.5] The eigenvalue problem
\[y''+\lambda y=0,\quad y'(0)=0,\quad y(L)=0\]
has infinitely many positive eigenvalues \(\lambda_n=(2n-1)^2\pi^2/4L^2,\) with associated eigenfunctions
\[y_n=\cos{(2n-1)\pi x\over2L},\quad n=1,2,3,\dots.\]
There are no other eigenvalues.
[example:11.1.3] Solve the eigenvalue problem
\[\label{eq:11.1.5} y''+\lambda y=0,\quad y(-L)=y(L),\quad y'(-L)=y'(L).\]
From Theorem [thmtype:11.1.1], \(\lambda=0\) is an eigenvalue of Equation \ref{eq:11.1.5} with associated eigenfunction \(y_0=1\), and any other eigenvalues must be positive. If \(y\) satisfies Equation \ref{eq:11.1.5} with \(\lambda>0\), then
\[\label{eq:11.1.6} y=c_1\cos\sqrt\lambda\, x+c_2\sin\sqrt\lambda\, x,\]
where \(c_1\) and \(c_2\) are constants. The boundary condition \(y(-L)=y(L)\) implies that
\[\label{eq:11.1.7} c_1\cos(-\sqrt\lambda\,L)+c_2\sin(-\sqrt\lambda\,L)=c_1\cos \sqrt\lambda\,L+c_2\sin \sqrt\lambda\,L.\]
Since
\[\label{eq:11.1.8} \cos(-\sqrt\lambda\,L)=\cos \sqrt\lambda\,L\mbox {\quad and \quad}\sin(-\sqrt\lambda\,L)=-\sin \sqrt\lambda\,L,\]
Equation \ref{eq:11.1.7} implies that
\[\label{eq:11.1.9} c_2\sin \sqrt\lambda\,L=0.\]
Differentiating Equation \ref{eq:11.1.6} yields
\[y'=\sqrt\lambda\left(-c_1\sin\sqrt\lambda x+c_2\cos\sqrt\lambda x\right).\]
The boundary condition \(y'(-L)=y'(L)\) implies that
\[-c_1\sin(-\sqrt\lambda\,L)+c_2\cos(-\sqrt\lambda\,L)=-c_1\sin \sqrt\lambda\,L+c_2\cos \sqrt\lambda\,L,\]
and Equation \ref{eq:11.1.8} implies that
\[\label{eq:11.1.10} c_1\sin \sqrt\lambda\,L=0.\]
Equation \ref{eq:11.1.9} and Equation \ref{eq:11.1.10} imply that \(c_1=c_2=0\) unless \(\sqrt\lambda =n\pi /L\), where \(n\) is a positive integer. In this case Equation \ref{eq:11.1.9} and Equation \ref{eq:11.1.10} both hold for arbitrary \(c_1\) and \(c_2\). The eigenvalue determined in this way is \(\lambda_n=n^2\pi^2/L^2\), and each such eigenvalue has the linearly independent associated eigenfunctions
\[\cos {n\pi x\over L} \mbox{\quad and \quad} \sin{ n\pi x\over L}.\]
For future reference we state the result of Example [example:11.1.3] as a theorem.
[thmtype:11.1.6] The eigenvalue problem
\[y''+\lambda y=0,\quad y(-L)=y(L),\quad y'(-L)=y'(L),\]
has the eigenvalue \(\lambda_0=0\), with associated eigenfunction \(y_0=1\) and infinitely many positive eigenvalues \(\lambda_n=n^2\pi^2/L^2,\) with associated eigenfunctions
\[y_{1n}=\cos {n\pi x\over L} \mbox{\quad and \quad} y_{2n}=\sin {n\pi x\over L},\quad n=1,2,3,\dots.\]
There are no other eigenvalues.
We say that two integrable functions \(f\) and \(g\) are orthogonal on an interval \([a,b]\) if
\[\int_a^bf(x)g(x)\,dx=0.\]
More generally, we say that the functions \(\phi_1\), \(\phi_2\), …, \(\phi_n\), …(finitely or infinitely many) are orthogonal on \([a,b]\) if
\[\int_a^b\phi_i(x)\phi_j(x)\,dx=0\mbox{\quad whenever \quad} i\ne j.\]
The importance of orthogonality will become clear when we study Fourier series in the next two sections.
[example:11.1.4] Show that the eigenfunctions
\[\label{eq:11.1.11} 1,\, \cos{\pi x\over L},\, \sin{\pi x\over L}, \, \cos{2\pi x\over L}, \, \sin{2\pi x\over L},\dots, \cos{n\pi x\over L}, \, \sin{n\pi x\over L},\dots\]
of Problem 5 are orthogonal on \([-L,L]\).
We must show that
\[\label{eq:11.1.12} \int_{-L}^L f(x)g(x)\,dx=0\]
whenever \(f\) and \(g\) are distinct functions from Equation \ref{eq:11.1.11}. If \(r\) is any nonzero integer, then
\[\label{eq:11.1.13} \int_{-L}^L\cos{r\pi x\over L}\,dx ={L\over r\pi}\sin{r\pi x\over L}\bigg|_{-L}^L=0\]
and
\[\int_{-L}^L\sin{r\pi x\over L}\,dx =-{L\over r\pi}\cos{r\pi x\over L}\bigg|_{-L}^L=0.\]
Therefore Equation \ref{eq:11.1.12} holds if \(f\equiv1\) and \(g\) is any other function in Equation \ref{eq:11.1.11}.
If \(f(x)=\cos m\pi x/L\) and \(g(x)=\cos n\pi x/L\) where \(m\) and \(n\) are distinct positive integers, then
\[\label{eq:11.1.14} \int_{-L}^L f(x)g(x)\,dx=\int_{-L}^L\cos{m\pi x\over L} \cos{n\pi x\over L}\,dx.\]
To evaluate this integral, we use the identity
\[\cos A\cos B={1\over2}[\cos(A-B)+\cos(A+B)]\]
with \(A=m\pi x/L\) and \(B=n\pi x/L\). Then Equation \ref{eq:11.1.14} becomes
\[\int_{-L}^L f(x)g(x)\,dx={1\over2}\left[\int_{-L}^L\cos{(m-n)\pi x\over L}\,dx +\int_{-L}^L\cos{(m+n)\pi x\over L}\,dx\right].\]
Since \(m-n\) and \(m+n\) are both nonzero integers, Equation \ref{eq:11.1.13} implies that the integrals on the right are both zero. Therefore Equation \ref{eq:11.1.12} is true in this case.
If \(f(x)=\sin m\pi x/L\) and \(g(x)=\sin n\pi x/L\) where \(m\) and \(n\) are distinct positive integers, then
\[\label{eq:11.1.15} \int_{-L}^L f(x)g(x)\,dx=\int_{-L}^L\sin{m\pi x\over L} \sin{n\pi x\over L}\,dx.\]
To evaluate this integral, we use the identity
\[\sin A\sin B={1\over2}[\cos(A-B)-\cos(A+B)]\]
with \(A=m\pi x/L\) and \(B=n\pi x/L\). Then Equation \ref{eq:11.1.15} becomes
\[\int_{-L}^L f(x)g(x)\,dx={1\over2}\left[\int_{-L}^L\cos{(m-n)\pi x\over L}\,dx -\int_{-L}^L\cos{(m+n)\pi x\over L}\,dx\right]=0.\]
If \(f(x)=\sin m\pi x/L\) and \(g(x)=\cos n\pi x/L\) where \(m\) and \(n\) are positive integers (not necessarily distinct), then
\[\int_{-L}^L f(x)g(x)\,dx=\int_{-L}^L\sin{m\pi x\over L} \cos{n\pi x\over L}\,dx=0\]
because the integrand is an odd function and the limits are symmetric about \(x=0\).
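These orthogonality relations can be confirmed numerically with a simple Simpson's-rule integrator (an illustrative check with \(L=1.5\)):

```python
from math import cos, sin, pi

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

L = 1.5
# Distinct eigenfunctions integrate to ~0 over [-L, L]:
assert abs(simpson(lambda x: cos(2*pi*x/L) * cos(3*pi*x/L), -L, L)) < 1e-8
assert abs(simpson(lambda x: sin(pi*x/L) * sin(2*pi*x/L), -L, L)) < 1e-8
assert abs(simpson(lambda x: sin(2*pi*x/L) * cos(2*pi*x/L), -L, L)) < 1e-8
# A function against itself gives L (full periods of cos^2 average to 1/2):
assert abs(simpson(lambda x: cos(2*pi*x/L)**2, -L, L) - L) < 1e-8
```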
|
Is the function $f:\Bbb R \rightarrow \Bbb R$ defined as $f(x)=\sin(x^2)$, for all $x\in\Bbb R$, periodic?
Here's my attempt to solve this:
Let's assume that it is periodic. For a function to be periodic, it must satisfy $f(x)=f(T+x)$ for all $x\in\Bbb R$, so it must satisfy the relation for $x=0$ as well. So we get that $T^2=k\pi \iff T=\sqrt{k\pi}$, $k\in\Bbb N$ (since $T$ must be positive, we remove the $-\sqrt{k\pi}$ solution).
So what now? I tried taking $x=\sqrt\pi$ and using the $T$ I found, and I get this: $$ \sin\pi=\sin((\sqrt{k\pi}+\sqrt\pi)^2)\iff 0=\sin(\pi(\sqrt k+1)^2)\iff k+2\sqrt k+1=l $$ for some $l\in\Bbb Z$. Is this enough for a contradiction? The left side is irrational unless $k$ is a perfect square, which doesn't happen for every $k$, while the right hand side is always an integer. Or am I still missing some steps?
Thanks.
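One way to see the non-periodicity concretely: the positive zeros of $\sin(x^2)$ sit at $x_k=\sqrt{k\pi}$, and their spacing shrinks toward zero, whereas a nonconstant periodic function's zero set repeats with a fixed spacing pattern. A quick numerical illustration (a sketch, not a proof):

```python
from math import pi, sqrt

# Positive zeros of sin(x^2) and the gaps between consecutive zeros.
zeros = [sqrt(k * pi) for k in range(1, 50)]
gaps = [b - a for a, b in zip(zeros, zeros[1:])]

# The gaps are strictly decreasing and tend to 0, so no period can exist.
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))
assert gaps[-1] < 0.15
```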
|
A description of what I'm attempting:
Given a generic vector $\vec{r}$ and a point in space $D$, I want to build a reference frame centred in $\vec{r}= (x,y,z)$, with one axis directed along the direction from $\vec{r}$ to $D$. Calling this last vector $\vec{L} = \vec{D} - \vec{r}$, the direction of reference is simply $\hat{L} = \frac{\vec{L}}{|\vec{L}|} = \frac{(D_x-x,D_y-y,D_z-z)}{\sqrt{(D_x-x)^2+(D_y-y)^2+(D_z-z)^2}}$.
At this point, I define two generic unit vectors: $\hat{V} = \frac{(V_x,V_y,V_z)}{|\vec{V}|}$ and $\hat{T} = \frac{(T_x,T_y,T_z)}{|\vec{T}|}$
Taking $\hat{L}$, $\hat{V}$, $\hat{T}$ to correspond to the directions in Cartesian space $(i,j,k)$, their mutual orthogonality is given by: $\hat{L} \times \hat{V} = \hat{T}$ and $\hat{L} \times \hat{T} = -\hat{V}$
Taken together, the two cross products yield six equations in six unknowns: $V_x,V_y,V_z,T_x,T_y,T_z$
What I would like to obtain via Mathematica is the expression of the unknowns above as a function of $x,y,z,D_x,D_y,D_z$.
I first tried this:
Solve[{Cross[L, V] == T, Cross[L, T] == -V}, {Vx,Vy,Vz,Tx,Ty,Tz}]
but this command takes ages and after a full night of running, it wasn't done.
I also looked at the functions JordanDecomposition and NDSolve, but they both appear to serve for numerical evaluations, whereas I need the symbolic expressions.
I thought of writing the six equations into matrix form (possibly there's a Mathematica function to do this) and then diagonalise this matrix.
Is there a way to do this with Mathematica?
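One observation worth noting: the system is underdetermined. If $\hat L$ is a unit vector, then $\hat L \times \hat V = \hat T$ and $\hat L \times \hat T = -\hat V$ hold for any unit $\hat V$ perpendicular to $\hat L$ (since $\hat L \times (\hat L \times \hat V) = -\hat V$ when $\hat L \cdot \hat V = 0$), which may be why Solve grinds without finishing. A direct construction avoids symbolic solving entirely; here is a plain-Python numerical sketch of that construction (with made-up sample points, not Mathematica code):

```python
from math import sqrt

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def normalize(u):
    n = sqrt(sum(c * c for c in u))
    return tuple(c / n for c in u)

# Direction from r to D (illustrative coordinates).
r, D = (1.0, 2.0, 0.5), (4.0, -1.0, 3.0)
Lhat = normalize(tuple(d - c for d, c in zip(D, r)))

# Pick any helper vector not parallel to Lhat, then build V and T directly.
helper = (0.0, 0.0, 1.0) if abs(Lhat[2]) < 0.9 else (1.0, 0.0, 0.0)
V = normalize(cross(Lhat, helper))   # unit vector perpendicular to Lhat
T = cross(Lhat, V)                   # completes the right-handed triad

# Check both defining relations: L x V = T and L x T = -V.
assert all(abs(a - b) < 1e-12 for a, b in zip(cross(Lhat, V), T))
assert all(abs(a + b) < 1e-12 for a, b in zip(cross(Lhat, T), V))
```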
|
I’ve had a number of requests for hints and/or solutions to several of the homework problems for tomorrow. So here we go.
17.5 #30 This question asks us to compute the gravitational flux through a cylinder \(S\) from the gravitational field
\[ \mathbf{F} = - G m \frac{\mathbf{e}_r}{r^2}. \] Here, \(S\) is the cylinder of radius \(R\) whose axis of symmetry is the \(z\)-axis with \(a \leq z \leq b\), and \(r = \sqrt{x^2 + y^2 + z^2}\) is the distance from \((x, y, z)\) to the origin.
Recall that \(\mathbf{e}_r(x, y, z)\) is the unit vector that points in the direction \(\langle x, y, z \rangle\). Therefore, we can write
\[ \mathbf{F}(x, y, z) = - G m \frac{\langle x, y, z \rangle}{(x^2 + y^2 + z^2)^{3/2}}. \] In general, if \(H(u, v)\) is a parametrization of a surface \(S\), we compute the flux of \(\mathbf{F}\) through \(S\) to be \[ \iint_S \mathbf{F} \cdot d\mathbf{S} = \iint_D \mathbf{F}(H(u, v)) \cdot \mathbf{n}(u, v)\, du\, dv \] where \[ \mathbf{n}(u, v) = \frac{\partial H}{\partial u} \times \frac{\partial H}{\partial v} \] is the normal vector for the surface \(S\). For this problem, we cannot parametrize the cylinder with a single equation: we must parametrize the top, sides, and bottom separately. Hence, we must compute 3 integrals to determine the flux.
The sides of the cylinder can be parametrized in cylindrical coordinates by
\[ H(\theta, z) = (R \cos \theta, R \sin \theta, z) \] where \(0 \leq \theta \leq 2 \pi\) and \(a \leq z \leq b\). You should check that this parametrization gives a normal vector \(\mathbf{n}(\theta, z) = \langle R \cos \theta, R \sin \theta, 0 \rangle\). Therefore, the contribution of the flux from the sides of the cylinder is \[ \iint_{\mathrm{sides}} \mathbf{F} \cdot d\mathbf{S} = - G m\int_a^b \int_0^{2 \pi} \frac{R^2}{(R^2 + z^2)^{3/2}}\, d\theta\, dz. \] You can compute the outer integral using the trig substitution \(z = R \tan u\).
For the top of the cylinder, we must parametrize the disk of radius \(R\) parallel to the \(xy\)–plane centered at \((0, 0, b)\). Again, we can use polar coordinates to parametrize this region as
\[ H(r, \theta) = (r \cos \theta, r \sin \theta, b) \] for \(0 \leq r \leq R\) and \(0 \leq \theta \leq 2 \pi\). This parametrization gives a normal vector of \(\mathbf{n}(r, \theta) = \langle 0, 0, r \rangle\). Notice that this is the upward pointing normal vector, which is what we want since we need an outward pointing normal for the entire cylinder \(S\). Therefore, the integral for the flux through the top face of the cylinder is \[ \iint_{\mathrm{top}} \mathbf{F} \cdot d\mathbf{S} = - G m \int_0^R \int_0^{2 \pi} \frac{b r}{(b^2 + r^2)^{3/2}}\, d\theta\, dr. \] This integral is easily computed using a \(u\) substitution.
The flux through the bottom face of the cylinder is almost identical to the top face except that \(z = a\) on the bottom face, and we must take the downward facing normal vector \(\mathbf{n}(r, \theta) = \langle 0, 0, -r \rangle\). When computing the integral for the bottom face, you will get a term involving \(\sqrt{a^2}\). Be careful that you simplify \(\sqrt{a^2} = |a|\), and not just \(a\), for this will make a difference (in fact the crucial difference) when \(a\) is negative instead of positive.
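A useful sanity check on these three integrals: for an inverse-square field, Gauss's law says the total outward flux is \(-4\pi G m\) whenever the origin lies inside the cylinder (\(a < 0 < b\)). The sketch below verifies this numerically for the illustrative values \(G = m = R = 1\), \(a = -1\), \(b = 2\); the \(\theta\) integrals just contribute a factor of \(2\pi\), leaving 1D integrals:

```python
from math import pi

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

G = m = 1.0
R, a, b = 1.0, -1.0, 2.0   # cylinder radius and z-extent; origin is inside

side   = -G*m * 2*pi * simpson(lambda z: R**2 / (R**2 + z**2)**1.5, a, b)
top    = -G*m * 2*pi * simpson(lambda r: b * r / (b**2 + r**2)**1.5, 0.0, R)
# Bottom face uses the downward normal; the extra signs combine to a*r/(...):
bottom = -G*m * 2*pi * simpson(lambda r: -a * r / (a**2 + r**2)**1.5, 0.0, R)

total = side + top + bottom
assert abs(total + 4 * pi) < 1e-8   # Gauss: flux = -4*pi*G*m, origin enclosed
```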
18.1 #40 This problem asks you to show that when \(\mathbf{F} = \nabla \phi\), we have \(\mathrm{curl}_z(\mathbf{F}^*) = \Delta \phi\). Recall that for \(\mathbf{F} = \langle F_1, F_2 \rangle\)

\(\mathbf{F}^* = \langle - F_2, F_1 \rangle\)

\(\mathrm{curl}_z(\mathbf{F}) = \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\)

\(\Delta \phi = \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2}\).
Using these definitions and \(\mathbf{F} = \nabla \phi = \langle \partial \phi / \partial x, \partial \phi / \partial y\rangle\), we can compute
\[ \mathrm{curl}_z(\mathbf{F}^*) = \mathrm{curl}_z( \langle - \partial \phi / \partial y, \partial \phi / \partial x \rangle) = \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2}, \] which is exactly what we wanted to show.
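A quick finite-difference check of this identity, for an arbitrary smooth test potential (the hypothetical choice \(\phi = \sin(x)\,y^2\)):

```python
from math import sin

def phi(x, y):
    return sin(x) * y**2   # arbitrary smooth test potential

h = 1e-4

def d_dx(f, x, y):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def d_dy(f, x, y):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# F = grad(phi), so F* = <-dphi/dy, dphi/dx>.
def Fstar1(x, y): return -d_dy(phi, x, y)
def Fstar2(x, y): return  d_dx(phi, x, y)

x0, y0 = 0.7, 0.3
curlz = d_dx(Fstar2, x0, y0) - d_dy(Fstar1, x0, y0)   # curl_z(F*)
laplacian = -sin(x0) * y0**2 + 2 * sin(x0)            # exact phi_xx + phi_yy
assert abs(curlz - laplacian) < 1e-5
```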
|
Sampling from Phase Space Distributions in 3D Charged Particle Beams
In the previous installment of this series, we explained two concepts needed to model the release and propagation of real-world charged particle beams. We first introduced probability distribution functions in a purely mathematical sense and then discussed a specific type of distribution — the transverse phase space distribution of a charged particle beam in 2D. Now, let’s combine what we’ve learned and find out how to sample the initial positions and velocities of 3D beam particles from this distribution.
Reviewing 2D Phase Space Distributions and Ellipses
To start, let’s briefly review phase space distributions and ellipses in 2D, both of which are fully explained in the previous post in the Phase Space Distributions in Beam Physics series. The particles in real-world nonlaminar charged particle beams occupy a region in phase space that is often elliptical in shape. The equation for this phase space ellipse in 2D depends on the beam emittance ε and Twiss parameters,
(1)

$$\gamma x^2 + 2\alpha x x' + \beta x'^2 = \varepsilon$$

where x and x' are the transverse position and inclination angle of the particle, respectively. The Twiss parameters are further related by the Courant-Snyder condition,
(2)

$$\beta\gamma - \alpha^2 = 1$$
The actual positions of particles in the ellipse can vary. Two of the most common distributions of phase space density are a uniform density within the ellipse and a Gaussian distribution with a maximum at the ellipse's center, both of which are illustrated below. The blue curve in each case is the phase space ellipse described in Eq.(1), where ε is the 4-rms transverse emittance. For the Gaussian distribution, note that some particles still lie outside the ellipse. Since the Gaussian distribution decreases gradually without reaching exactly zero, there is always a chance that a few particles will lie outside the ellipse, no matter how large it is drawn. When using the 4-rms emittance to define the ellipse in Eq.(1), about 86% of the particles lie inside the ellipse.

Comparison of a uniform and Gaussian distribution.
Let’s consider a simpler case in which the probability of finding a particle at any point in phase space is constant inside the ellipse and zero outside of it. For this problem, substituting Eq.(2) into Eq.(1) and solving for x' yields

(3)

$$x' = -\frac{\alpha x}{\beta} \pm \frac{\sqrt{\varepsilon \beta - x^2}}{\beta}$$
The probability distribution function is then

(4)

$$f(x, x') = \left\{\begin{array}{cc} C & -\frac{\alpha x}{\beta} -\frac{\sqrt{\varepsilon \beta -x^2}}{\beta} < x' < -\frac{\alpha x}{\beta} + \frac{\sqrt{\varepsilon \beta -x^2}}{\beta}\\ 0 & \textrm{otherwise} \end{array}\right.$$
where the constant C depends on the size of the ellipse. The probability g(x) of the particle having a given x-coordinate is

$$g(x) = \int_{-\infty}^{\infty} f(x, x')\,dx'$$

Considering the locations where Eq.(3) can take on real values, we get

$$g(x) = \left\{\begin{array}{cc} 2C \frac{\sqrt{\varepsilon \beta -x^2}}{\beta} & -\sqrt{\varepsilon \beta} < x < \sqrt{\varepsilon \beta}\\ 0 & \textrm{otherwise} \end{array}\right.$$
Or, more simply,

(5)

$$g(x) \propto \sqrt{\varepsilon \beta - x^2}$$
Suppose we have a population of model particles that we want to sample using the probability distribution function given by Eq.(4). More specifically, we’d like to first sample the initial transverse positions of the particles according to Eq.(5) and then assign appropriate inclination angles so that the particles lie within the phase space ellipse. One way to accomplish this is to compute a cumulative distribution function starting from Eq.(5) and then use the method of inverse random sampling. Another possible method is using Eq.(5) to define the density of particles, which we can enter directly into the Inlet and Release features in the particle tracing interfaces. In this case, the normalization is done automatically.

Screenshot showing how to input the particle density in the Inlet feature.
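Outside of any particular software, the uniform-ellipse case is also easy to sample directly: the Twiss ellipse is the image of the unit disk under a linear map, so drawing points uniformly on the disk and mapping them gives a uniform fill. A minimal Python sketch with illustrative parameter values (not tied to any particular beam):

```python
import random
from math import sqrt

def sample_uniform_ellipse(eps, alpha, beta, n):
    """Sample n points (x, x') uniformly inside the Twiss ellipse
    gamma*x^2 + 2*alpha*x*x' + beta*x'^2 = eps, gamma = (1 + alpha^2)/beta."""
    pts = []
    while len(pts) < n:
        u, v = random.uniform(-1, 1), random.uniform(-1, 1)
        if u*u + v*v <= 1.0:                  # rejection: uniform on unit disk
            x  = sqrt(eps * beta) * u         # linear map: disk -> ellipse
            xp = sqrt(eps / beta) * (v - alpha * u)
            pts.append((x, xp))
    return pts

eps, alpha, beta = 1e-6, 0.5, 2.0   # illustrative values
gamma = (1 + alpha**2) / beta       # Courant-Snyder condition
pts = sample_uniform_ellipse(eps, alpha, beta, 5000)

# Every sample lies inside the ellipse:
assert all(gamma*x*x + 2*alpha*x*xp + beta*xp*xp <= eps * (1 + 1e-12)
           for x, xp in pts)
```

The map works because substituting it into the ellipse equation reduces the left side to eps*(u² + v²), so the unit disk maps exactly onto the ellipse interior, and a linear map preserves uniformity.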
Still, the most convenient approach is using the Particle Beam feature available in the Charged Particle Tracing physics interface. The Particle Beam feature automatically distributes the particles in phase space, allowing you to specify the location of the beam center, emittance, and Twiss parameters.

Screenshot showing how to input the particle density in the Particle Beam feature.

Simulating Charged Particle Beams in 3D
So far, we’ve only considered charged particle beams as idealized sheet beams where the out-of-plane (y) component of the transverse position and velocity can be ignored. However, real beams propagate in 3D space and only extend a finite distance in both transverse directions. Thus, in order to get a complete picture of a beam, we must consider two orthogonal transverse directions x and y as well as the inclination angles x' = v_x/v_z and y' = v_y/v_z.

Particle beam propagating in 3D space.
The reason why simulating the release of particle beams in 3D is more complicated than in 2D is that the degrees of freedom for the two transverse directions are often coupled in real-world beams. For example, suppose two particles are released at the same transverse position, i.e., the same x- and y-coordinates. Let’s say that one of these particles has a very large inclination angle in the x direction (x') and the other particle has a very small inclination angle in the x direction. The particle with the large inclination angle in the x direction is more likely to have a small inclination angle in the y direction and vice versa. Hence, we can’t just sample from two different distributions for x' and y' because the value of each one affects the probability distribution of the other.
To phrase this problem in a more abstract sense: Instead of considering the two transverse directions as separate 2D phase space ellipses, we actually need to think about the transverse particle motion using distributions of phase space in four dimensions! Since we’re used to seeing objects only in 2D or 3D, distributions with more than three space dimensions are much harder to visualize.
This is where the Particle Beam feature is most useful. It includes settings for sampling the initial particle positions and inclination angles from a variety of built-in 4D transverse phase space distributions. Some common distributions are the Kapchinskij-Vladimirskij (KV) distribution, waterbag distribution, parabolic distribution, and Gaussian distribution. First, let’s consider the simplest distribution, the KV distribution, and then visualize the other distributions in this group.
Mathematically, the KV distribution considers the beam particles to be uniformly distributed on an infinitesimally thin, 4D hyperellipsoid in phase space. It’s written as
\left(\frac{x}{r_x} \right)^2
+\left(\frac{r_x x' -r'_x x}{\varepsilon_x} \right)^2
+\left(\frac{y}{r_y} \right)^2
+\left(\frac{r_y y' -r'_y y}{\varepsilon_y} \right)^2 = 1
where
$r_x$ and $r_y$ are the maximum extents of the beam in the x and y directions, $\varepsilon_x$ and $\varepsilon_y$ are the beam emittances associated with the two transverse directions, and $r'_x$ and $r'_y$ are the inclination angles at the edge of the beam envelope.
Because it is more difficult to visualize 4D probability distribution functions than functions of lower dimensions, it is often convenient to visualize the distribution indirectly by plotting its projection onto lower dimensions. An interesting property of the KV distribution is that its projection onto any 2D plane is an ellipse of uniform density. The projections onto six such planes are shown below. The projections of the 4D hyperellipsoid onto the
x-x’ and y-y’ planes are tilted because nonzero values have been specified for the Twiss parameter α in each transverse direction.

[Figure: The KV distribution projected onto six 2D planes.]
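To make the construction concrete, here is a minimal sampling sketch (my own illustration, not from the post; the function name and parameter values are invented). Because the KV distribution is uniform on the surface of the 4D hyperellipsoid, we can draw points uniformly on the unit 3-sphere and map them onto the tilted ellipsoid using the envelope parameters defined above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative envelope parameters (assumed values, not from the post)
rx, ry = 2e-3, 1e-3          # maximum extents r_x, r_y [m]
ex, ey = 1e-6, 1e-6          # emittances eps_x, eps_y [m*rad]
rpx, rpy = 0.5e-3, -0.5e-3   # envelope slopes r'_x, r'_y [rad]

def sample_kv(n):
    """Sample n particles from the KV distribution: points uniformly
    distributed on the surface of the 4D phase-space hyperellipsoid."""
    # Uniform points on the unit 3-sphere: normalize 4D Gaussian vectors
    u = rng.standard_normal((n, 4))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    # Map the normalized coordinates onto the tilted hyperellipsoid:
    # each term of the KV equation equals one component of u
    x = rx * u[:, 0]
    xp = (ex * u[:, 1] + rpx * x) / rx
    y = ry * u[:, 2]
    yp = (ey * u[:, 3] + rpy * y) / ry
    return x, xp, y, yp

x, xp, y, yp = sample_kv(10_000)
# Every sampled particle lies exactly on the hyperellipsoid surface
s = (x/rx)**2 + ((rx*xp - rpx*x)/ex)**2 + (y/ry)**2 + ((ry*yp - rpy*y)/ey)**2
print(np.allclose(s, 1.0))  # → True
```

Each sampled particle satisfies the hyperellipsoid equation exactly, which is a handy self-check; the waterbag, parabolic, and Gaussian distributions fill the interior rather than the surface, so they need additional radial weighting.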
Compare the distributions shown above to the following alternatives.
The waterbag, parabolic, and Gaussian distributions projected onto six 2D planes.
We see that the projection onto any 2D plane forms an ellipse-shaped distribution in all cases, but the ellipses are only uniformly filled in the KV distribution.
Concluding Thoughts on Modeling Charged Particle Beams
Even as this blog series on modeling charged particle beams comes to a close, we have only scratched the surface of the intricate and highly technical field of beam physics. While we’ve discussed transverse phase space distributions in 3D, we haven’t examined longitudinal emittance or the related phenomenon of bunching. We also haven’t categorized the phenomena that cause emittance to increase, decrease, or remain constant as the beam propagates.
This series is meant to be an introduction to the way in which random or pseudorandom sampling from probability distribution functions plays an important role in capturing the real-world physics of high-energy ion and electron beams. For a more comprehensive overview of beam physics, references 1-3 provide an excellent starting point. More technical details about each of the 4D transverse phase space distributions described above, including algorithms for sampling pseudorandom numbers from these distributions, can be found in references 4-7. To learn more about how these concepts apply in the COMSOL Multiphysics® software, browse the resources featured below or contact us for guidance.
Other Posts in This Series

- Sampling Random Numbers from Probability Distribution Functions
- Phase Space Distributions and Emittance in 2D Charged Particle Beams

References

1. Humphries, Stanley. Principles of Charged Particle Acceleration. Courier Corporation, 2013.
2. Humphries, Stanley. Charged Particle Beams. Courier Corporation, 2013.
3. Davidson, Ronald C., and Hong Qin. Physics of Intense Charged Particle Beams in High Energy Accelerators. Imperial College Press, 2001.
4. Lund, Steven M., Takashi Kikuchi, and Ronald C. Davidson. "Generation of initial Vlasov distributions for simulation of charged particle beams with high space-charge intensity." UCRL-JRNL-229998 (2007).
5. Lund, Steven M., Takashi Kikuchi, and Ronald C. Davidson. "Generation of initial kinetic distributions for simulation of long-pulse charged particle beams with high space-charge intensity." Physical Review Special Topics — Accelerators and Beams 12, no. 11 (2009): 114801.
6. Batygin, Y. K. "Particle distribution generator in 4D phase space." Computational Accelerator Physics, vol. 297, no. 1, pp. 419-426. AIP Publishing, 1993.
7. Batygin, Y. K. "Particle-in-cell code BEAMPATH for beam dynamics simulations in linear accelerators and beamlines." Nuclear Instruments and Methods in Physics Research Section A 539, no. 3 (2005): 455-489.
|
I understand that $f(x)$ must be linear with a first derivative equal to a constant. I'm just not sure how I can use the mean value property of integrals to show something about $f''(x)$. The hint on this question is to use the fundamental theorem of calculus or Jensen's inequality.
Assuming $f$ is integrable and twice differentiable (otherwise your statement about average value doesn't make sense, nor your final statement*), $$\int_a^bf(x)\,\mathrm dx=(b-a)f\left(\cfrac{a+b}{2}\right)$$
Differentiate both sides w.r.t $\,b$, using the Leibniz integral rule (derived from fundamental theorem of calculus) for the LHS:
$$f(b)=\frac{b}{2}f'\left(\cfrac{a+b}{2}\right)+f\left(\cfrac{a+b}{2}\right)-\frac{a}{2}f'\left(\cfrac{a+b}{2}\right)$$
Now set $b=0$ and $a=2x$:
$$f(0)=f(x)-xf'(x)$$
Differentiate both sides w.r.t $x$:
$$0=f'(x)-f'(x)-xf''(x)$$
so $f''(x)=0$ for all $x\neq 0$.
Thus we have proven the function is linear everywhere except possibly at $0$. Since $f'$ is constant on $(0,\infty)$ and constant on $(-\infty,0)$, and $f'(0)$ exists, $f'(0)$ must equal each of these constants (a derivative cannot have jump discontinuities, by Darboux's theorem), and thus $f''(0)=0$ as well.
*I'm not sure if you're able to prove that $f$ has to be twice differentiable.
I will assume that $f$ is locally integrable for an obvious reason. Our aim is to prove that $f$ is linear, which is then enough to conclude that $f$ is twice-differentiable with $f'' \equiv 0$.
Let $x < y$ and $0 < \lambda < 1$ be arbitrary. Set
$$ c = \lambda x + (1-\lambda) y, \qquad a = 2x - c, \qquad b = 2y - c. $$
Then $a < x < c < y < b$ and
$$ \frac{a+b}{2} = (1-\lambda)x + \lambda y, \qquad \frac{a+c}{2} = x, \qquad \frac{c+b}{2} = y. $$
So it follows that
\begin{align*} f((1-\lambda)x+\lambda y) &= \frac{\int_{a}^{b} f(t) \, \mathrm{d}t}{b-a} \\ &= \frac{\int_{a}^{c} f(t) \, \mathrm{d}t + \int_{c}^{b} f(t) \, \mathrm{d}t}{b-a} \\ &= \frac{(c-a)f(x) + (b-c)f(y)}{b-a} \\ &= (1-\lambda) f(x) + \lambda f(y). \end{align*}
This proves that $f$ is linear.
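The algebra in this argument can be checked mechanically (a SymPy sketch of my own, not part of the answer): the constructed points really satisfy the midpoint identities, and the coefficients in the final step really are the convex weights $1-\lambda$ and $\lambda$.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')

# The construction from the argument above
c = lam*x + (1 - lam)*y
a = 2*x - c
b = 2*y - c

# Midpoint identities used to apply the mean value property
assert sp.simplify((a + b)/2 - ((1 - lam)*x + lam*y)) == 0
assert sp.simplify((a + c)/2 - x) == 0
assert sp.simplify((c + b)/2 - y) == 0

# Convex weights appearing in the final step
assert sp.simplify((c - a)/(b - a) - (1 - lam)) == 0
assert sp.simplify((b - c)/(b - a) - lam) == 0
print("all identities hold")
```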
|
Show that $F(\alpha)=F(\beta)$ if $\alpha$ and $\beta$ are roots of the same irreducible polynomial $g$ over field $F$
I defined a map $\psi: F(\alpha) \to F(\beta)$ by sending $p(\alpha)$ to $p(\beta)$. This map is clearly onto. If $p_1(\alpha)=p_2(\alpha)$, then by minimality of the polynomial $g$, we must have that $g$ divides $p_1-p_2$ and hence $p_1(\beta)=p_2(\beta)$. The same argument backward gives that this map is one-one. It is also easy to see that $\psi$ is a homomorphism. Thus, $\psi$ is an isomorphism.
I showed that both these fields are isomorphic. Is this what the question wants me to show? Is there any other way to do it?
Thanks for the help!!
|
Plugging these values in, we find
$$2t\sum_{n=0}^{\infty}n(n-1)a_nt^{n-2}+(1-2t)\sum_{n=0}^{\infty}na_nt^{n-1}-\sum_{n=0}^{\infty}a_nt^n = 0$$
The next step is to sort things so we have matching powers of $t$. For some particular $k$, what's the coefficient of $t^k$?
To do this, we first distribute those polynomials in $t$ across the sums:
$$\sum_{n=0}^{\infty}2n(n-1)a_nt^{n-1}+\sum_{n=0}^{\infty}na_nt^{n-1}-\sum_{n=0}^{\infty}2na_nt^{n}-\sum_{n=0}^{\infty}a_nt^n = 0$$
Next, we shift indices so that all of the sums match up. Let $k=n-1$ in the first and second sums, and $k=n$ in the third and fourth:
$$\sum_{k=-1}^{\infty}2(k+1)ka_{k+1}t^k+\sum_{k=-1}^{\infty}(k+1)a_{k+1}t^k-\sum_{k=0}^{\infty}2ka_kt^k-\sum_{k=0}^{\infty}a_kt^k = 0$$
And now that they're all lined up, we recombine the sums. Since the $k=-1$ terms are both zero, we may drop them and start every sum at $k=0$. They all share the factor $t^k$, so
$$\sum_{k=0}^{\infty}\left[2(k+1)ka_{k+1}+(k+1)a_{k+1}-2ka_k-a_k\right]t^k = 0$$
$$\sum_{k=0}^{\infty}\left[(k+1)(2k+1)a_{k+1}-(2k+1)a_k\right]t^k = 0$$
Now, the coefficient $(2k+1)\left[(k+1)a_{k+1}-a_k\right]$ of $t^k$ must vanish for each $k\ge 0$, so we must have $a_k=(k+1)a_{k+1}$. That leads to the familiar exponential series $a_n=\frac{a_0}{n!}$.
Now, there's a nagging problem - we only got a one-dimensional solution space. This is a second order equation - where's the other solution? What happened is that the differential equation is singular - that leading $2t$ coefficient of $y''$ is zero at $t=0$. There's another solution, but it doesn't have a power series at zero because it isn't differentiable there.
Can we find that other solution with this method? Well, look at what happens if we don't require the exponents $k$ to be integers. We can solve the system "backwards" to get ever-increasing values of $a_k$ as $k\to-\infty$, but that just leads to a divergent series. We only get a convergent series if the process stops somewhere.
When does it stop? If we're working with integer $k$, it stops when we hit $k=0$. The next step $k=1$ multiplies the coefficient by zero. This is, of course, the solution we've already found.
Our other chance is that $2k+1$ coefficient. Whatever $a_{1/2}$ is, $a_{-1/2}$ can be anything. If we set $a_{-1/2}=0$, then $a_{-3/2}=0$ and so on.
So, that's the alternative: the other solution is of the form $$y=a_{\frac12}t^{\frac12}+a_{\frac32}t^{\frac32}+\cdots+a_{n+\frac12}t^{n+\frac12}+\cdots$$Can you find that series, too?
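As a sanity check (my own SymPy sketch, not part of the answer): the integer-power branch really sums to \(e^t\), and running the same recurrence \(a_{k+1}=a_k/(k+1)\) from the free coefficient \(a_{1/2}\) produces a half-power series whose residual in the equation consists only of the first truncated power, as it should.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
ode = lambda y: 2*t*sp.diff(y, t, 2) + (1 - 2*t)*sp.diff(y, t) - y

# Integer-power branch: a_n = a_0/n! sums to e^t, an exact solution
assert sp.simplify(ode(sp.exp(t))) == 0

# Half-integer branch: run the recurrence a_{k+1} = a_k/(k+1)
# starting from the free coefficient a_{1/2} = 1
N = 8
a, k, y2 = sp.Integer(1), sp.Rational(1, 2), sp.Integer(0)
for _ in range(N):
    y2 += a * t**k
    a, k = a/(k + 1), k + 1

# Only the first truncated power t^(N - 1/2) survives in the residual
res = sp.expand(ode(y2))
c = sp.simplify(res / t**(N - sp.Rational(1, 2)))
print(c.free_symbols == set())  # → True
```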
|
Problem # 8
projectile motion problems figure 2
A ball is kicked at an angle θ = 45°. It is intended that the ball lands in the back of a moving truck which has a trunk of length L = 2.5 m. If the initial horizontal distance from the back of the truck to the ball, at the instant of the kick, is do = 5 m, and the truck moves directly away from the ball at velocity V = 9 m/s (as shown), what is the maximum and minimum velocity vo so that the ball lands in the trunk. Assume that the initial height of the ball is equal to the height of the ball at the instant it begins to enter the trunk.
You need to check my answer very carefully before you accept it. There is a lot of room for errors .
That should have been 17.1 m/s and 18.8 m/s
OK....this is kinda long....hang in there and we'll see if I get the same answer as Melody.
The ball's x and y positions are given by
x=xo + vox t = voxt = .707 v t (1) (since xo=0, cos 45° ≈ .707, and there is no x deceleration)
y= yo + voy t - 1/2 a t^2 = .707v t - 1/2 a t^2 (2)
The position of the two ends of the truck bed are given by
5m + 9 (t) and 7.5 + 9 (t)
Now...when the ball lands in the truck bed y will = 0, so
y=0= .707v t - 1/2 a t^2 (3)
and the 'x' values for the ball and truck bed will be equal
.707 v t = 5+ 9 t substitute THIS value of .707v t in to (3) and solve for t
0= 5 + 9t - 1/2 (9.8) t^2 results in t = 2.28 sec (using quadratic formula)
THEN the back of the truck bed will be at 5 + 9t = 25.55 m
(the front wil be 25.55+2.5=28.05m)
Now solve (1) for the ball velocity .707v(2.28)=25.55 results in v = 15.85 m/s
to land on the BACK of the truck bed.
Repeating some of this for the FRONT of the truck bed:
.707 v (2.28) = 28.05, which gives v = 17.4 m/s
Velocity must be 15.85-17.4 m/s (NOT QUITE THE SAME AS MELODY'S ANSWER)
Ooops.....
is correct....I constrained the motion of the truck for the second part (shown in THIS color above) ....here is the CORRECTED version for the second part (it is like the first part) GingerAle
For the FRONT of the truck bed, we have .707 v t = 7.5 + 9t
Substitute THIS value in to equation (3) and solve for t
0= 7.5 + 9t - 1/2 (9.8)t^2 results in t= 2.459 sec (using quadratic formula)
Then the FRONT of the truck bed will be at 7.5 + 9t = 29.63 m
Now solve (1) for the ball velocity .707 v (2.459) = 29.63 m results in v = 17.04 m/s
to land on the FRONT of the truck bed
SOOOOO Velocity must be between 15.85-17.04 m/s Taa-Daa !
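The two quadratic solves above can be packaged into a few lines of Python (a sketch of my own; it uses g = 9.8 m/s² and the exact value of cos 45° rather than the rounded .707, which is why the lower bound comes out 15.82 rather than 15.83–15.85):

```python
import math

g = 9.8                     # m/s^2
theta = math.radians(45)    # launch angle
d0, L, V = 5.0, 2.5, 9.0    # initial gap [m], trunk length [m], truck speed [m/s]

def launch_speed(d):
    """Speed v0 so the ball lands on a trunk point initially d metres away.
    Flight time from 0 = d + V t - (g/2) t^2, then v0 from the range."""
    t = (V + math.sqrt(V**2 + 2*g*d)) / g   # positive quadratic root
    x = d + V*t                              # landing distance
    return x / (math.cos(theta) * t)

v_min = launch_speed(d0)       # ball just reaches the back of the trunk
v_max = launch_speed(d0 + L)   # ball just reaches the front of the trunk
print(round(v_min, 2), round(v_max, 2))  # → 15.82 17.04
```

This reproduces the thread's conclusion that any launch speed between roughly 15.8 and 17.0 m/s lands the ball in the trunk.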
(Thanx everyone for pointing out the errors of my ways....hopefully I will not burn in hell forever)
Hi EP
It is good that we have done it different ways. One (or both) of us has probably made a little mistake as the answers should be closer. Still, the asker probably wanted one way or the other, now they have both. :)
No, you won’t burn in heII forever, just for 2.459 seconds in purgatory.
Hi Melody,
My jest was directed to EP, for his final (appended) remark, in the post above yours.
Purgatory is commonly regarded as a cleansing by way of painful temporal punishment, which, like the eternal punishment of h**l, is associated with the idea of fire. https://en.wikipedia.org/wiki/Purgatory#Pain_and_fire
Like any religious belief, individuals will adjust their viewpoints according to their own personality and perspective. So for many, such a place is an environment for prayer and meditation, while waiting for god to take them to heaven.
Personally, despite being raised as Catholic, I could never embrace any of the tenets of Catholicism (or any other religion). My first public demonstration of my contrary nature was in the fourth grade, after a visiting Brother gave a lecture (well couched in Catholic dogma) on creationism vs. evolution.
The teacher called on the students to ask questions or offer comments. Very few volunteered, so the teacher called on students individually. “Ginger, you usually have something to say.” I stood, which we were required to do when addressing the teacher or class, “You are too late,” I said. “I’ve already read Darwin’s book. God may have created the heavens and Earth, but I know I’m a chimp, just a new and improved version.”
After my comment, I started scurrying around the classroom and jumping on the tables while making chimp noises. I would have swung from the rafters if there were any. About a third of the class—mostly boys joined in the spectacle. I jumped on my best (boy) friend’s back; he could easily carry me around, and this continued for a few minutes, until the teacher scolded us back to our seats. During the commotion, I heard the Brother say to our teacher, “Your class room has become purgatory.” I remember thinking, “it may be purgatory for you, but it’s a little bit of heaven for us.”
Amazingly, we were not disciplined for this, although we did get a lecture on proper classroom behavior. After this, my classmates –usually boys– often brought me bananas. These banana gifts continued into high school. My chimp persona continues to this day.
Sister Alice, my mathematics teacher for 7th, 8th, and the first half of 9th grade, would often “condemn” her students (sometimes individually) to additional time in purgatory for poor math scores on tests. Her comments never came across as any kind of jest or levity. I would often intentionally “solve” problems incorrectly using various absurd methods, just to listen to her howl. My math grades were poor anyway, and by doing this, I rarely scored higher than D’s.
After one particular midterm exam, Sister Alice returned my test, saying, “Ginger, you are not going to purgatory, you’re going straight to heII!” To which I replied, “Oh, I’ll keep your company then, and you can teach me the correct way to solve these.”
My class thought this was funny, but Sister Alice didn’t. I received an immediate detention—the parochial school’s version of purgatory. I spent the time drawing Sister Alice roasting in heII, with demons stabbing her with pitchforks, inscribed with errant mathematical equations. I titled it
Saint Alice Skips Purgatory.
Sister Alice was also the art teacher for the secondary grades, and she never cared much for my art, despite the fact that at least one of my art works always received honorable mention in every regional competition I entered—including one titled
Beyond Purgatory, based on the one I did in detention. The Mother Superior (principal) confiscated the original—not because of the subject matter, but because detention was supposed to be a time of repentance and meditation. When she confiscated my drawing, she was doing her best to suppress her laughter.
The Mother Superior was usually “no nonsense” when it came to running the school, but she did have a sense of humor. Years later, after her funeral mass, someone told me she had my drawing framed, and she would hang it in her quarters for the week preceding
All Saints' Day (Halloween).
To me this honor far exceeded all the gold/blue, silver/red, and bronze/white ribbons I’d won for any art competition.
GA
Both methods are correct.
Melody has made a small numerical error in solving the last quadratic for reaching the rear of the truck. The value under the square root sign should be 601 rather than 761.
The other differences are due to small differences used for the numerical approximations used for 1/root2. (I get values for velocity between 15.83 and 17.04 m/s).
This is an interesting question.
Here’s a composite graph of the parabolas x=time for y=0 for the back and front of the truck’s bed, and the corresponding velocities for these range times.
\(y=5+9t-\dfrac{1}{2}\cdot (9.8)t^2\\ y=7.5+9t-\dfrac{1}{2}\cdot (9.8)t^2\\ y=\text {velocity for time (t) to range target}\\ y=\frac{\left(t\cdot9.81\right)}{2\cdot\sin\left(45\right)}\\\)
---------
In EP's solution for the front of the truck bed, he adjusted the velocity by constraining the time component. In this case, with the increase in velocity the ball will pass over the front of the truck bed at time = 2.284, but it does not intersect y=0, here. In range equations, both time and velocity are functions of distance and changing one affects the other.
The ball with a velocity of 17.4 m/s will cross y=0 in (17.4*2*sin(45)/9.81) = 2.51 s at (17.4^2/9.81) = 30.86 meters. The target point of the truck will be at 7.5+(9*2.51) = 30.08 meters. So, the ball will land on the hood or the windshield.
GA
|
I'm studying about the reflection principle of the brownian motion, and I found that this result is a direct consequence of this principle:
Let $B_t$ a brownian motion, then for every $a \in \mathbb{R} \ $,
$$\mathbb{P}(\lim_{t \to \infty} \sup_{s\in [0,t]} B_s > a) = 1$$
I'm trying to prove this statement using the reflection principle but I'm totally lost. I can't see how those results are related.
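Not a proof, but the claim is easy to illustrate numerically (a Monte Carlo sketch of my own; the function name and parameters are invented): for large \(T\), nearly every simulated path's running maximum exceeds \(a\), consistent with \(\sup_{s\ge 0} B_s = \infty\) almost surely.

```python
import numpy as np

rng = np.random.default_rng(1)

def running_max_exceeds(a, T=10_000.0, dt=1.0, trials=200):
    """Fraction of simulated Brownian paths whose running maximum
    exceeds a by time T (an illustration, not a proof)."""
    n = int(T / dt)
    # Brownian increments over each time step of length dt
    steps = rng.normal(0.0, np.sqrt(dt), size=(trials, n))
    paths = np.cumsum(steps, axis=1)
    return np.mean(paths.max(axis=1) > a)

print(running_max_exceeds(5.0))  # close to 1, and tends to 1 as T grows
```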
|
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
|
Answer
False. Change to: For $x=\displaystyle \frac{1}{3},\quad\frac{3x+7}{3x+10}=\frac{8}{11}$
Work Step by Step
If this were true, it would be true for any x, provided it doesn't yield 0 in the denominator. Try x=0: $LHS=\displaystyle \frac{0+7}{0+10}=\frac{7}{10}\neq\frac{8}{11}$ so, the statement is false. It is true only when $x=\displaystyle \frac{1}{3},$ $LHS=\displaystyle \frac{3\cdot\frac{1}{3}+7}{3\cdot\frac{1}{3}+10}=\frac{1+7}{1+10}=\frac{8}{11}$
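Both evaluations can be double-checked with exact rational arithmetic (a quick sketch using Python's fractions module):

```python
from fractions import Fraction

def value(x):
    # evaluates (3x + 7) / (3x + 10) exactly
    return (3*x + 7) / (3*x + 10)

print(value(Fraction(1, 3)))  # → 8/11
print(value(Fraction(0)))     # → 7/10
```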
|
In Exercises 1-13, find all \((x_0,y_0)\) for which Theorem 2.3.1 implies that the initial value problem \(y'=f(x,y),\ y(x_0)=y_0\) has (a) a solution and (b) a unique solution on some open interval that contains \(x_0\).
[exer:2.3.1] \( {y'={x^2+y^2 \over \sin x}}\)
[exer:2.3.2] \( {y'={e^x+y \over x^2+y^2}}\)
[exer:2.3.3] \(y'= \tan xy\)
[exer:2.3.4] \( {y'={x^2+y^2 \over \ln xy}}\)
[exer:2.3.5] \(y'= (x^2+y^2)y^{1/3}\)
[exer:2.3.6] \(y'=2xy\)
[exer:2.3.7] \( {y'=\ln(1+x^2+y^2)}\)
[exer:2.3.8] \( {y'={2x+3y \over x-4y}}\)
[exer:2.3.9] \( {y'=(x^2+y^2)^{1/2}}\)
[exer:2.3.10] \(y' = x(y^2-1)^{2/3}\)
[exer:2.3.11] \(y'=(x^2+y^2)^2\)
[exer:2.3.12] \(y'=(x+y)^{1/2}\)
[exer:2.3.13] \( {y'={\tan y \over x-1}}\)
[exer:2.3.14] Apply Theorem 2.3.1 to the initial value problem \[y'+p(x)y = q(x), \quad y(x_0)=y_0\] for a linear equation, and compare the conclusions that can be drawn from it to those that follow from Theorem 2.1.2.
[exer:2.3.15]
(a) Verify that the function \[y = \left\{ \begin{array}{cl} (x^2-1)^{5/3}, & -1 < x < 1, \\[6pt] 0, & |x| \ge 1, \end{array} \right.\] is a solution of the initial value problem \[y'={10\over 3}xy^{2/5}, \quad y(0)=-1\] on \((-\infty,\infty)\). (b) Verify that if \(\epsilon_i=0\) or \(1\) for \(i=1\), \(2\) and \(a\), \(b>1\), then the function \[y = \left\{ \begin{array}{cl} \epsilon_1(x^2-a^2)^{5/3}, & - \infty < x < -a, \\[6pt] 0, & -a \le x \le -1, \\[6pt] (x^2-1)^{5/3}, & -1 < x < 1, \\[6pt] 0, & 1 \le x \le b, \\[6pt] \epsilon_2(x^2-b^2)^{5/3}, & b < x < \infty, \end{array} \right.\] is a solution of the initial value problem of (a) on \((-\infty,\infty)\).
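The first of these verifications can be spot-checked numerically on the interior interval \((-1,1)\) (a quick sketch of my own, written with real cube roots so the fractional powers are well defined for negative \(y\)):

```python
import numpy as np

# On (-1, 1) the claimed solution y = (x^2 - 1)^{5/3} is negative, so we
# write it with real cube roots; its 2/5 power is the real root (y^2)^{1/5}.
x = np.linspace(-0.9, 0.9, 401)
y = -np.cbrt((1 - x**2)**5)                 # (x^2 - 1)^{5/3} for |x| < 1
dy = np.gradient(y, x, edge_order=2)        # numerical derivative of y
rhs = (10/3) * x * np.cbrt((1 - x**2)**2)   # (10/3) x y^{2/5}, real branch
print(y[200], np.max(np.abs(dy - rhs)) < 1e-3)  # → -1.0 True
```

The midpoint value confirms the initial condition \(y(0)=-1\), and the derivative matches the right-hand side to the accuracy of the finite-difference scheme.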
[exer:2.3.16] Use the ideas developed in Exercise 2.3.15 to find infinitely many solutions of the initial value problem \[y'=y^{2/5}, \quad y(0)=1\] on \((-\infty,\infty)\).
[exer:2.3.17] Consider the initial value problem \[y' = 3x(y-1)^{1/3}, \quad y(x_0) = y_0. \eqno{\rm (A)}\]
(a) For what points \((x_0,y_0)\) does Theorem 2.3.1 imply that (A) has a solution? (b) For what points \((x_0,y_0)\) does Theorem 2.3.1 imply that (A) has a unique solution on some open interval that contains \(x_0\)?
[exer:2.3.18] Find nine solutions of the initial value problem \[y'=3x(y-1)^{1/3}, \quad y(0)=1\]that are all defined on \((-\infty,\infty)\) and differ from each other for values of \(x\) in every open interval that contains \(x_0=0\).
[exer:2.3.19] From Theorem 2.3.1, the initial value problem \[y'=3x(y-1)^{1/3}, \quad y(0)=9\] has a unique solution on an open interval that contains \(x_0=0\). Find the solution and determine the largest open interval on which it is unique.
[exer:2.3.20]
From Theorem 2.3.1, the initial value problem \[y'=3x(y-1)^{1/3}, \quad y(3)=-7 \eqno{\rm (A)}\] has a unique solution on some open interval that contains \(x_0=3\). (a) Determine the largest such open interval, and find the solution on this interval. (b) Find infinitely many solutions of (A), all defined on \((-\infty,\infty)\).
[exer:2.3.21] Prove:
(a) If \[f(x,y_0) = 0,\quad a < x < b, \eqno{\rm (A)}\] then \(y\equiv y_0\) is a constant solution of \(y'=f(x,y)\) on \((a,b)\). (b) If \(f\) and \(f_y\) are continuous on an open rectangle that contains \((x_0,y_0)\) and (A) holds, no solution of \(y'=f(x,y)\) other than \(y\equiv y_0\) can equal \(y_0\) at any point in \((a,b)\).
|
Based on this question: What is the homology groups of the torus with a sphere inside?
I'm trying to find the fundamental group of this space using the Seifert–van Kampen theorem. If $U$ is the torus and $V$ is the sphere, then $U\cap V$ is the circle, thus we have the following fundamental groups:
$\pi_1(U)=\mathbb Z\times\mathbb Z$
$\pi_1(V)=0$
$\pi_1(U\cap V)=\mathbb Z$
If we use the group presentation notation we have:
$\pi_1(U)=\langle\alpha,\beta\mid \alpha\beta=\beta\alpha\rangle$
$\pi_1(V)=\langle\emptyset\mid\emptyset\rangle$
$\pi_1(U\cap V)=\langle\gamma\mid\emptyset\rangle$
Thus using the Seifert–van Kampen theorem:
$\pi_1(X)=\langle\alpha,\beta\mid\alpha\beta=\beta\alpha,\beta\rangle$
Note that I added the relation $\beta$ above because the generator $\gamma$ of $U\cap V\simeq S^1$ is sent to one of the generators of the torus $U$, and to the trivial loop in the simply connected sphere $V$.
Thus the fundamental group of this space is $\mathbb Z\times \{0\}$ which is $\mathbb Z$ itself.
Is my approach correct?
Thanks a lot!
|
Difference between revisions of "Signed measure"
Revision as of 17:14, 30 July 2012

generalized measure, real valued measure
$\newcommand{\abs}[1]{\left|#1\right|}$ A real-valued $\sigma$-additive function defined on a certain $\sigma$-algebra $\mathcal{B}$ of subsets of a set $X$. More generally one can consider vector-valued measures, i.e. $\sigma$-additive functions $\mu$ on $\mathcal{B}$ taking values on a Banach space $B$ (see Vector measure). The total variation measure of $\mu$ is defined on $B\in\mathcal{B}$ as: \[ \abs{\mu}(B) :=\sup\left\{ \sum_i \abs{\mu(B_i)}_B: \text{$\{B_i\}$ is a countable partition of $B$}\right\} \] where $\abs{\cdot}_B$ denotes the norm of $B$. In the real-valued case the above definition simplifies as \[ \abs{\mu}(B) = \sup_{A\in \mathcal{B}, A\subset B} \left(\abs{\mu (A)} + \abs{\mu (B\setminus A)}\right). \] $\abs{\mu}$ is a measure and $\mu$ is said to have finite total variation if $\abs{\mu} (X) <\infty$.
If $V$ is finite-dimensional the Radon-Nikodym theorem implies the existence of a measurable $f\in L^1 (\abs{\mu}, V)$ such that \[ \mu (B) = \int_B f \,d\abs{\mu}\qquad \mbox{for all $B\in\mathcal{B}$.} \] In the case of real-valued measures this implies that each such $\mu$ can be written as the difference of two nonnegative measures $\mu^+$ and $\mu^-$ which are mutually singular (i.e. such that there are sets $B^+, B^-\in\mathcal{B}$ with $\mu^+ (X\setminus B^+)= \mu^- (X\setminus B^-) =\mu^+ (B^-)=\mu^- (B^+)=0$). This last statement is sometimes referred to as Hahn decomposition theorem. The Hahn decomposition theorem can also be proved defining directly the measures $\mu^+$ and $\mu^-$ in the following way: \begin{align*} \mu^+ (B) = \sup \{ \mu (A): A\in \mathcal{B}, A\subset B\}\\ \mu^- (B) = \sup \{ -\mu (A): A\in \mathcal{B}, A\subset B\} \end{align*} $\mu^+$ and $\mu^-$ are sometimes called, respectively, positive and negative variations of $\mu$. Observe that $|\mu| = \mu^++\mu^-$.
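For a finite set \(X\), the suprema defining \(\mu^+\) and \(\mu^-\) can be computed by brute force over all subsets (a toy Python illustration of my own, not part of the article); note that \(\mu^+ + \mu^-\) recovers the total variation \(|\mu|\):

```python
from itertools import chain, combinations

# A signed measure on X = {0, 1, 2, 3} is determined by its singleton values
weights = {0: 2.0, 1: -3.0, 2: 0.5, 3: -1.5}
X = set(weights)

def mu(A):
    # the signed measure of a subset A of X
    return sum(weights[x] for x in A)

def subsets(B):
    # all subsets of B, including the empty set and B itself
    B = list(B)
    return chain.from_iterable(combinations(B, r) for r in range(len(B) + 1))

# Positive and negative variations, as defined above
mu_plus  = lambda B: max(mu(A) for A in subsets(B))
mu_minus = lambda B: max(-mu(A) for A in subsets(B))

print(mu_plus(X), mu_minus(X), mu_plus(X) + mu_minus(X))  # → 2.5 4.5 7.0
```

Here \(\mu^+(X)\) collects the positive weights, \(\mu^-(X)\) the (negated) negative ones, and their sum equals \(\sum_x |\text{weights}(x)| = |\mu|(X)\).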
By the Riesz representation theorem the space of signed measures with finite total variation on the $\sigma$-algebra of Borel subsets of a locally compact Hausdorff space is the dual of the space of continuous functions (cp. also with Convergence of measures).
|
I sympathize with the text to some extent and the extent could have approached 100% some 25 years ago. In particular, decoherence is a legitimate insight and quantum mechanics isn't weird when you look at it calmly. But there are lots of claims that Ball makes that I heavily disagree with, too.
OK, what is decoherence? And equally importantly, what decoherence isn't? Well, starting with the positive things, decoherence is a process that
- allows one to calculate at what "point" in the parameter space, classical physics (gradually) becomes a decent approximate theory for a given physical system
- puts severe constraints on the possible "basis of states" that may arise as "the states" after a measurement
- eliminates the physical visibility of the complex phases in the probability amplitudes, so that the probability amplitudes may effectively be replaced with their absolute values
Decoherence is an effective process – perhaps a gedanken process – which is irreversible and erases the information about the relative complex phases. You start with an observed physical system, like a cat. Decoherence will ultimately pick "dead" and "alive" as the preferred basis. The key observation is that the cat interacts with some "environmental" degrees of freedom, which I will take to be air in order to be would-be witty, so after some time, the basis vectors of the cat, "dead" and "alive", along with the state of the air evolve in specific ways.
What is the evolution of the dead-or-alive cat's basis vectors?\[
U: \,\, \ket{\rm alive} \otimes \ket{\rm odorless} \to \ket{\rm alive} \otimes \ket{\rm odorless}\\
U: \,\, \ket{\rm dead} \otimes \ket{\rm odorless} \to \ket{\rm dead} \otimes \ket{\rm stinky}
\] You see that air surrounding the cat (in Schrödinger's lethal box) was odorless to start with but it becomes stinky if the cat has died. You should appreciate that the evolution rules above don't have to be "postulated". In a microscopic theory including the quantum mechanics of biological systems, you can derive these rules. When a cat is dead, the circulation stops, the blood no longer takes some chemicals away from the skin, and bacteria find it easy to devour the dead cat.
The number of stinky molecules in the air remains "low" when the cat is alive and it becomes "high" when the cat is dead. The states of air, "low" and "high", are orthogonal to each other. Imagine we won't ever "smell" the environment again, so all predictions for observations of the cat itself may also be obtained from the density matrix that only describes the cat's own degrees of freedom, not the air.
It means that you may compute the reduced density matrix for the cat. And because the "odorless" and "stinky" states of the air are orthogonal to each other, this reduced density matrix will be diagonal. If the pure vector describing the initial state of the cat was\[
\ket{\psi}_{\rm initial} = a \ket{\rm alive} + b\ket{\rm dead},
\] the final state's density matrix – after you traced over the environment – will be\[
\rho_{\rm final,cat} = {\rm diag} ( |a|^2 , |b|^2 ).
\] Note that at least for \(|a|^2\neq |b|^2\), the density matrix is only diagonal in this particular basis, not in others. So the process of the interaction with the air – the environment – has picked a preferred basis. The diagonal entries of the density matrix may be interpreted in the very same way as the classical probabilities \(P_{\rm alive},P_{\rm dead}\) – probabilities that we use when we throw dice or when we calculate classical statistical physics of atoms. The phases of the complex numbers \(a,b\in\CC\) have become unobservable, even the relative phase.
This was a trivial sketch. When you actually calculate decoherence, you must consider how quickly the environment evolves to its characteristic final states (which know something about the state of the observed object), and you must realize that they're not exactly orthogonal to each other. In practice, the observed system – a cat or its generalization – imprints itself on roughly \(\exp(t/t_0)\) "degrees of freedom" in the environment, because its influence spreads like an exponential avalanche. But the environment's states aren't exactly orthogonal. Instead, the (squared absolute value of the) inner product is some \(\exp(-B)\) per degree of freedom. Raise this probability to the power equal to the number of degrees of freedom, \(\exp(t/t_0)\), and you will determine that the off-diagonal elements of the reduced density matrix go down like \[
\rho_{\rm alive,dead}(t)\sim \exp[-B \exp(t/t_0)].
\] For the sake of clarity, I have kept the notation from the dead-or-alive cat's example. I am sure that you can generalize the labels. OK: It's an exponential of an exponential so it decreases much faster than an exponential. In practice, the dimensionless number \(B\) won't be too different from "a number of order one" so the realistic calculation of the "speed of decoherence" will mostly depend on the timescale \(t_0\). The more "air" or environment surrounds the cat, or the more strongly this environment interacts with the cat, the shorter the timescale \(t_0\) will be. At times \(t\gg t_0\), you may assume that the off-diagonal elements of the density matrix are basically zero.
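To get a feeling for how brutally fast this double exponential kills the off-diagonal element, one can simply evaluate the formula for a few times, with the illustrative choice \(B = t_0 = 1\) (any numbers of order one tell the same story):

```python
import math

B, t0 = 1.0, 1.0   # illustrative O(1) values, as discussed in the text

for t in [0, 1, 2, 3, 5]:
    off_diag = math.exp(-B * math.exp(t / t0))   # |rho_alive,dead(t)|
    plain = math.exp(-t / t0)                    # ordinary exponential decay
    print(f"t = {t}: double-exp {off_diag:.3e}   vs   single-exp {plain:.3e}")
```

Already at \(t = 5\,t_0\) the double exponential is smaller than \(10^{-60}\), while an ordinary exponential has only dropped to about \(10^{-2}\).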
I sincerely hope that some people have just learned what decoherence is for the first time in their life by reading the text above!
OK, now the negative comments. Why doesn't decoherence actually erase all the things that some people consider mysterious, especially the dependence on the observers?
Decoherence is just an emergent, approximate "process" that depends on the separation of the degrees of freedom into the observed ones and the environment. Well, only an observer may separate things into those that are observed and those that are not – the environment! In this sense, the very assumptions of the decoherence calculation require an observer – who knows what is and what isn't observed (what isn't observable by him in the future may be counted as the environment). The selection of the preferred basis is "mostly unique" or "basically unique" but it isn't quite unique and it isn't guaranteed to be unique, and one may design examples in which it demonstrably isn't unique. Most importantly, the final result of the decoherence calculation was a diagonal density matrix with nonzero entries \(|a|^2,|b|^2\) on the diagonal. The final result was not \({\rm diag}(1,0)\) or \({\rm diag}(0,1)\). So decoherence hasn't actually picked any "actual outcome" from the basis, i.e. from the list of possible outcomes of the measurements.
Also, the observer is still needed for the measurement to end. Decoherence doesn't replace the measurement: the pick of one of the options (either "dead" or "alive" in my example). And to a limited extent, the observer is still needed in between because even the basis in which the density matrix is diagonal for a given choice of the environment may refuse to be unique. In such cases or to this extent, an observer is still needed to determine what observable he wants to be measured – what the basis of options should be.
If I have to pick the most incorrect representative phrase from Ball's article, it would be this sentence written in a large font:
We don’t need a conscious mind to measure or look. With or without us, the Universe is always looking.

Sorry but if you read the description above, you should be able to understand that the Universe cannot be looking by itself. Nothing is collapsing without the observer. The collapse of the cat to "dead" or "alive" i.e. the setting of the reduced density matrix to \({\rm diag}(1,0)\) or \({\rm diag}(0,1)\) is the "philosophically irritating stage" that must be done after decoherence. And even the previous erasure of the off-diagonal matrix entries only happens "approximately" because in the strictly exact description, the information about the relative phases is never completely lost.
When you look rationally at decoherence, you will realize that it changes nothing about the observer dependence of quantum mechanics. It's really just a framework to figure out when the classical description becomes OK enough. But the quantum description is always exactly accurate, on both sides of this cut.
There is also one amusing historical fact that destroys the idea that "decoherence erases the need for minds in quantum mechanics". The first paper that really introduced the mathematical operations behind "decoherence" was written by (no, it wasn't Wojciech Zurek!) H. Dieter Zeh in 1970. Here is the actual damn paper. As you can see on the Wikipedia page about Zeh, he not only refused to eliminate minds from quantum mechanics. He has multiplied them because he's been a proponent of the many-minds interpretation of quantum mechanics. Funny. Maybe decoherence isn't such a staunch "killer of the minds", after all.
Incidentally, while I prefer when people avoid the "interpretation talk" altogether, "many-minds interpretation" is surely the kind of ideology that is being attacked by all the Marxist inkspillers in the media etc. When they see a "mind", they have a hissy fit. But this "interpretation" is nothing else than the "minimum fix" needed to be applied to Everett's "many worlds interpretation" so that the modified "interpretation" becomes at least morally correct. Needless to say, the "fix" is basically equivalent to going back to the Copenhagen interpretation.
I think that if we fairly look at the actual beliefs of the folks such as Niels Bohr, they knew the "qualitative message" of this blog post – i.e. of decoherence – even if they never coined words and formalisms for it. They knew that when things become complicated, the relative phases become impossible to predict or trace, and then the quantum mechanical predictions become qualitatively indistinguishable from predictions in classical statistical physics: they may predict probabilities for the elements of a preferred list of possible outcomes while the measurement of non-commuting observables effectively disappears, along with all the relevance of the quantum phases. Bohr surely thought about these procedures but didn't go far enough to explicitly discuss the environment. But he did offer the correct final answers. In particular, the "Bohr correspondence principle" revealed that when quantum numbers such as \(n\) become high, the atom etc. starts to behave as in classical physics.
From this broader viewpoint, and especially if you're not really interested in the calculation of the "gradual disappearance of coherence", decoherence may be said to be much ado about nothing. It changes nothing about the postulates of quantum mechanics. Instead, it is just an approximate way to organize certain calculations and to focus on certain questions, and to use a trick that enables a pseudo-classical procedure to calculate the relevant predictions. But there's always the same quantum mechanics – observer-dependent quantum mechanics – underlying all these calculations.
I recently wrote an answer to a question on MSE about estimating the number of squarefree integers up to $X$. Although the result is known and not too hard, I very much like the proof and my approach. So I write it down here.
First, let’s see if we can understand why this “should” be true from an analytic perspective.
We know that
$$ \sum_{n \geq 1} \frac{\mu(n)^2}{n^s} = \frac{\zeta(s)}{\zeta(2s)},$$ and a general way of extracting information from Dirichlet series is to perform a cutoff integral transform (a type of Mellin transform). In this case, we get that $$ \sum_{n \leq X} \mu(n)^2 = \frac{1}{2\pi i} \int_{(2)} \frac{\zeta(s)}{\zeta(2s)} X^s \frac{ds}{s},$$ where the contour is the vertical line $\text{Re }s = 2$. By Cauchy’s theorem, we shift the line of integration to the left, and poles contribute terms of large order. The pole of $\zeta(s)$ at $s = 1$ has residue $$ \frac{X}{\zeta(2)},$$ so we expect this to be the leading order term. Naively, since we know that there are no zeroes of $\zeta(2s)$ on the line $\text{Re } s = \frac{1}{2}$, we might expect to push our line to exactly there, leading to an error of $O(\sqrt X)$. But in fact, we know more. We know the zero-free region, which allows us to extend the line of integration ever so slightly further left, leading to a $o(\sqrt X)$ result (or more specifically, something along the lines of $O(\sqrt X e^{-c (\log X)^\alpha})$, where $\alpha$ and $c$ come from the strength of our known zero-free region).
In this heuristic analysis, I have omitted bounding the top, bottom, and left boundaries of the rectangles of integration. But one can bound them by proceeding in a similar way as in the proof of the analytic prime number theorem. So we expect the answer to look like
$$ \frac{X}{\zeta(2)} + O(\sqrt X e^{-c (\log X)^\alpha})$$ using no more than the zero-free region that goes into the prime number theorem.
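Before turning to the proof, the expected leading term is easy to test numerically. This quick sketch (my own illustration, not part of the original argument) sieves out the non-squarefree integers and compares the count against $X/\zeta(2) = 6X/\pi^2$:

```python
from math import pi, isqrt

def squarefree_count(X):
    """Count squarefree n <= X by sieving out multiples of d^2."""
    sf = [True] * (X + 1)
    for d in range(2, isqrt(X) + 1):
        for m in range(d * d, X + 1, d * d):
            sf[m] = False
    return sum(sf[1:])

X = 10**6
main_term = 6 * X / pi**2            # X / zeta(2), since zeta(2) = pi^2 / 6
error = squarefree_count(X) - main_term
print(squarefree_count(X), round(main_term, 1), round(error, 1))
# the observed error is far smaller than sqrt(X) = 1000
```

For $X = 10^6$ the error is of size roughly $1$, comfortably below $\sqrt X$, consistent with the heuristic.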
We will now prove this result, but in an entirely elementary way (except that I will refer to a result from the prime number theorem). This is below the fold.
We do this in a series of steps.
Lemma
$$\sum_{d^2 \mid n} \mu(d) = \begin{cases} 1 & \text{if } n \text{ is squarefree} \\ 0 & \text{else} \end{cases}$$ Proof. This comes almost immediately upon noticing that the left-hand side is a multiplicative function of $n$, and it’s trivial to verify the claim for prime powers. $\spadesuit$
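The lemma can also be verified by brute force. Here is a short illustrative check (my own code, with a trial-division Möbius function), confirming that $\sum_{d^2 \mid n} \mu(d)$ is the indicator of squarefree $n$:

```python
def mobius(n):
    """Moebius function mu(n) by trial factorization."""
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0              # p^2 divides n
            mu = -mu
        p += 1
    return -mu if n > 1 else mu       # last remaining prime factor, if any

def is_squarefree(n):
    return all(n % (d * d) for d in range(2, int(n**0.5) + 1))

# Check the lemma: sum over d with d^2 | n of mu(d) is the squarefree indicator
for n in range(1, 500):
    s = sum(mobius(d) for d in range(1, int(n**0.5) + 1) if n % (d * d) == 0)
    assert s == (1 if is_squarefree(n) else 0)
print("lemma verified for n < 500")
```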
So to sum up the squarefree numbers up to $X$, we look at
$$ \sum_{n \leq X} \sum_{d^2 \mid n} \mu(d) = \sum_{d^2e \leq X} \mu(d)= \sum_{d^2 \leq X} \mu(d) \left\lfloor \frac{X}{d^2} \right\rfloor.$$
This last expression is written in one of the links in Marty’s answer, and they prove it with inclusion-exclusion. I happen to find this derivation more intuitive, but it’s our launching point forwards.
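One can sanity-check this identity directly. The sketch below (illustrative code, not from the original answer) sieves the Möbius function, evaluates $\sum_{d^2 \leq X} \mu(d) \lfloor X/d^2 \rfloor$, and compares it with a brute-force count of the squarefree integers up to $X$:

```python
from math import isqrt

def mobius_sieve(N):
    """mu(1..N): flip the sign at each prime, zero out multiples of p^2."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] = -mu[m]
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

X = 10_000
mu = mobius_sieve(isqrt(X))
identity = sum(mu[d] * (X // (d * d)) for d in range(1, isqrt(X) + 1))

# brute-force count of squarefree n <= X for comparison
brute = sum(1 for n in range(1, X + 1)
            if all(n % (d * d) for d in range(2, isqrt(n) + 1)))
print(identity, brute)   # the two counts agree
```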
We approximate the floored bit. Notice that
$$ \left \lfloor \frac{X}{d^2} \right \rfloor = \frac{X}{d^2} + E(X,d)$$ with $\lvert E(X,d) \rvert \leq 1$ (we think of it as the Error of the approximation). So the number of squarefree numbers up to $X$ is $$ \sum_{d^2 \leq X} \mu(d) \frac{X}{d^2} + \sum_{d^2 \leq X}\mu(d) E(X,d).$$ We look at the two terms separately.

The first term
The first term can be approximated by the infinite series plus an error term.
$$ X\sum_{d^2 \leq X} \frac{\mu(d)}{d^2} = X\sum_{d \geq 1} \frac{\mu(d)}{d^2} - X\sum_{d > \sqrt X} \frac{\mu(d)}{d^2} = \frac{X}{\zeta(2)} - X\sum_{d > \sqrt X} \frac{\mu(d)}{d^2}.$$
We must now be a bit careful. If we perform the naive bound $\lvert \mu(n) \rvert \leq 1$, then this last sum is of size $O(\sqrt X)$. That’s too big!
So instead, we integrate by parts (Riemann-Stieltjes integration) or equivalently we perform summation by parts to see that
$$ \sum_{d > \sqrt X} \frac{\mu(d)}{d^2} = O\left( \int_{\sqrt X}^\infty \frac{\lvert M(t) \rvert}{t^3} dt \right)$$ where $$ M(t) = \sum_{n \leq t} \mu(n).$$ By the prime number theorem (and as mentioned in the original question), we know that $M(t) = o(t)$. (In fact, the analytic prime number theorem in one of its easy forms gives $M(X) = O(X e^{-c (\log X)^{1/9}})$, which we might use here.) This means that this last term is bounded by $$ o(\sqrt X)$$ if we just use that $M(t) = o(t)$, or $$ O(\sqrt X e^{-c (\log X)^{1/9}})$$ if we use more. This completes the first term. $\spadesuit$

Second term
This is easier now. We again use integration by parts. Notice that
$$ \begin{align} \sum_{d \leq \sqrt X} \mu(d) E(X,d) &= \sum_{d \leq \sqrt X} (M(d) – M(d – 1))E(X,d) \\ &= M(\lfloor \sqrt X \rfloor) E(X, \lfloor \sqrt X \rfloor) + \sum_{d \leq \sqrt X – 1} M(d) (E(X, d) – E(X, d+1)). \end{align}$$
By using that $\lvert E(X,d) \rvert \leq 1$ and $M(X) = o(X)$ (or the stronger version), we match the results from the first part. $\spadesuit$
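The summation-by-parts identity used above is exact and can be checked numerically. In the sketch below (illustrative; $X = 10^4$ is an arbitrary choice), `E(d)` is the error term $\lfloor X/d^2 \rfloor - X/d^2$ and `M[d]` is the Mertens function:

```python
from math import isqrt

def mobius(n):
    """Moebius function by trial factorization."""
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

X = 10_000
D = isqrt(X)
E = lambda d: X // (d * d) - X / (d * d)     # the error term, |E| <= 1

M = [0] * (D + 1)                            # Mertens partial sums M(0..D)
for d in range(1, D + 1):
    M[d] = M[d - 1] + mobius(d)

lhs = sum(mobius(d) * E(d) for d in range(1, D + 1))
rhs = M[D] * E(D) + sum(M[d] * (E(d) - E(d + 1)) for d in range(1, D))
print(abs(lhs - rhs) < 1e-9)                 # the two sides agree
```

The identity is just Abel summation with $\mu(d) = M(d) - M(d-1)$, so the agreement is exact up to floating-point error.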
Putting these two results together, we have proven that the number of squarefree integers up to $X$ is
$$\frac{X}{\zeta(2)} + o(\sqrt X)$$ using only that $M(X) = o(X)$, and alternately $$ \frac{X}{\zeta(2)} + O(\sqrt X e^{-c (\log X)^{1/9}})$$ using a bit of the zero-free region. This completes the proof. $\diamondsuit$