Can someone please help me with a trivial question?
Write $-3i$ in polar coordinates.
So $z=a+bi=r\operatorname{cis}\theta$ with $r=\sqrt{a^{2}+b^{2}}$ and $\theta = \arctan\frac{b}{a}$. However, what if $a=0$, as is the case for $-3i$? I am confused!
By convention we always have $$-\pi<\theta\le \pi.$$ The formula $\theta=\arctan\frac{b}{a}$ breaks down when $a=0$, but $-3i$ lies on the negative imaginary axis, so $$\theta=-\dfrac{\pi}{2},\qquad r=3,$$ and therefore $$-3i=3e^{-i\pi/2}.$$
Try to draw a picture. I think you will be able to see the angle:
Think of the complex number $z=x+iy$ as a vector from the origin to the point $(x,y)$. Then it can be characterized by the length $r=|z|$ of this vector and the angle between the positive real axis and the vector (measured counterclockwise). Thus $z=re^{i\theta}$, where $r=\sqrt{x^2+y^2}=3$ and the angle is $-\frac{\pi}{2}$; i.e., $z=3e^{-i\frac{\pi}{2}}$.
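For a quick numerical check, here is a sketch using Python's standard cmath module, whose polar() uses the same $(-\pi,\pi]$ convention as above:

import cmath

r, theta = cmath.polar(-3j)
print(r)      # 3.0
print(theta)  # -1.5707963267948966, i.e. -pi/2
print(cmath.rect(r, theta))  # approximately -3j, up to floating-point error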
|
I am working with sparse matrices (not particularly huge, <100Mb) and I want to compute the largest independent set on the bipartite graph $(N,E)$ defined as follows: suppose the matrix is named $A$ and is of size $m \times n$.
there are $m+n$ nodes ($1,\dots,m$ for the rows, $m+1,\dots,m+n$ for the columns), and there is an edge between row node $i$ and column node $m+j$ if and only if $a_{ij} \neq 0$.
An independent set is then a subset of rows and columns such that they "do not intersect".
The graph is clearly bipartite (edges are from the set $\{1,\dots,m\}$ to the set $\{m+1,\dots,m+n\}$), and therefore the computation of the largest independent set is definitely simpler than the general case (as suggested also in this other question).
This computation is not entirely critical for my work (my thesis, actually), so I would like to use as much code from existing libraries as possible. I tried igraph, but it uses a general algorithm for the largest independent set (which is terribly slow even for a 5 MB matrix), and I have yet to try whether explicitly doing all the steps "maximum matching → minimum vertex cover → maximum independent set" (via Kőnig's theorem) helps (it depends on their implementation).
The fact that I have a bipartite graph definitely helps from the theoretical point of view, so bigger instances are solvable, but I have yet to find a nice library that works well with big, sparse, bipartite graphs.
Do you know any other existing libraries to do this kind of job?
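For reference, the chain I mean would look something like this in networkx (a sketch with a random stand-in matrix; I haven't benchmarked it on large instances):

import networkx as nx
import scipy.sparse as sp

A = sp.random(100, 120, density=0.01, format="coo")  # stand-in for the real matrix
m, n = A.shape

# Bipartite graph: rows are nodes 0..m-1, columns are nodes m..m+n-1
G = nx.Graph()
G.add_nodes_from(range(m), bipartite=0)
G.add_nodes_from(range(m, m + n), bipartite=1)
G.add_edges_from((i, m + j) for i, j in zip(A.row, A.col))

rows = set(range(m))
matching = nx.bipartite.maximum_matching(G, top_nodes=rows)       # Hopcroft-Karp
cover = nx.bipartite.to_vertex_cover(G, matching, top_nodes=rows)
independent = set(G) - cover  # König: complement of a minimum vertex cover
print(len(independent))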
|
The Art and Science of Electrochemical Plating
Electroplating is commonly used for surface finishing due to its effectiveness across the automotive, electronics, corrosion protection, aerospace, and defense industries. Since WWII, the number of patents claiming to achieve “perfect plating” has increased exponentially. The focus of the narrative surrounding electrochemical plating has also shifted from complex chemical reactions toward perfecting operating conditions. In this blog post, we show how to achieve smoother metal surfaces during reverse pulse plating (RPP) using the COMSOL Multiphysics® software and add-on Electrodeposition Module.
What Is Reverse Pulse Electroplating?
Electroplating involves immersing metal electrodes in an electrolyte bath and then applying an external current over the electrodes. At the cathode, we obtain a reduction of the ions in the bath, forming a metal coating. The anode can either be a dimensionally stable anode, where oxygen evolution or chlorine evolution occurs, or an electrode that dissolves anodically (also known as stripping), where the electrode is oxidized so that the metal goes into the solution as ions.
A model of decorative electroplating on a furniture fitting.
In electroplating, typically either a direct current (DC) or current pulses can be applied. Pulsed-current techniques involve applying a forward current for a certain time interval, with either a short, high-current reverse pulse or zero-current periods interposed. These current pulses are also known as duty cycles. For the RPP process, current pulses of equal or varying amplitude, duration, and polarity are used to combine electroplating and stripping.
RPP is composed of a forward duty cycle (t_{fwd}), during which a cathodic current is applied and the deposition of metal takes place (plating), and a reverse duty cycle (t_{rev}), during which the plating current is reversed so that dissolution of metal ions takes place (stripping). In each direction (forward and reverse), the duty cycle is defined as the ratio of the plating/dissolution time to the total time of the applied current. The average current density with respect to the duty cycle is given by:

i_{avg} = i_{fwd} t_{fwd} - i_{rev} t_{rev} (Eq. 1)

where t_{fwd} and t_{rev} are the forward and reverse duty cycles and satisfy t_{rev}+t_{fwd}=1.
By optimizing both the electroplating and the stripping processes through controlling the duty cycles, RPP allows the preparation of smooth coatings. For a constant average current density (i_{avg}) and dissolution current density (i_{rev}), the plating current density i_{fwd} can be defined as:

i_{fwd} = (i_{avg} + i_{rev} t_{rev})/t_{fwd} (Eq. 2)
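As a quick numeric illustration of Eq. 1 and Eq. 2 (a sketch; the current densities are made-up values, not parameters from the model):

# i_avg = i_fwd*t_fwd - i_rev*t_rev, with t_fwd + t_rev = 1,
# rearranged for the instantaneous plating current density i_fwd
def forward_current(i_avg, i_rev, t_fwd):
    t_rev = 1.0 - t_fwd
    return (i_avg + i_rev * t_rev) / t_fwd

# Same average current: a smaller forward duty cycle means a larger i_fwd
for t_fwd in (1.0, 0.85, 0.7):
    print(t_fwd, forward_current(i_avg=100.0, i_rev=50.0, t_fwd=t_fwd))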
Quantifying the Current Distribution
According to IUPAC definitions, secondary current distribution is valid when the influence of the activation overpotential cannot be neglected but concentration overpotentials are negligible. When the activation overpotential is included, a high local current density introduces a high local activation overpotential at the electrode surface, which causes the current to naturally become more uniform. (For more information, you can read these blog posts on the theory of current distribution and on how to choose between the current distribution interfaces.)
The secondary current distribution is often analyzed in terms of the Wagner number (Wa), a dimensionless quantity given by:

Wa = (\kappa/\ell)(d\eta/di)
where \kappa is the conductivity of the electrolyte bath; d\eta/di is the slope of the overpotential-current curve under the above conditions; and \ell is a characteristic length of the system (for instance, the length of the electrode). The Wagner number can thus also be seen as the ratio between primary current distribution effects (via \kappa/\ell, which is influenced by the geometry and electrolyte properties) and secondary current distribution effects (via d\eta/di, the kinetic polarization).
At the Tafel limit, i.e., at high (anodic or cathodic) overpotentials, Wa becomes inversely proportional to the current density for the process:

Wa = \kappa\beta/(\ell i)

where \beta is the Tafel slope.
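For a quick numeric illustration of that inverse proportionality (made-up values, not parameters from the model):

def wagner_tafel(kappa, beta, ell, i):
    # Wa = kappa*beta/(ell*i): doubling the current density halves Wa
    return kappa * beta / (ell * i)

print(wagner_tafel(kappa=5.0, beta=0.12, ell=0.01, i=100.0))  # 0.6
print(wagner_tafel(kappa=5.0, beta=0.12, ell=0.01, i=200.0))  # 0.3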
A higher Wagner number essentially means that the primary current distribution effects are superseded by the secondary current effects, resulting in a more uniform current distribution. Alternatively, for geometries with peaks and recesses, a leveling effect could be achieved by using the primary current density distribution around the working electrode. In the following case, we will see how to use RPP to achieve a better surface finish at a geometry with a given protrusion.
Modeling RPP with COMSOL Multiphysics®
The Pulse Reverse Plating model in the Application Gallery uses the Secondary Current Distribution interface to account for activation overpotentials (reaction kinetics) together with primary current distribution effects (geometric effects and electrolyte conductivity).
We set up a simple 2D geometry with a small protrusion that acts as a site of interest for the shape evolution when subjected to different applied current forms (see the image below). The 2D model simulates a copper substrate with a protrusion exposed to an electrolytic bath of finite conductivity. The electrochemical cell is assumed to consist of a well-stirred electrolyte with constant conductivity (no concentration gradients) and an anode and cathode in which ohmic losses are neglected.
2D geometry used to set up the electroplating model.
To act as a reference for the RPP process, we first set up and solve a model for DC plating.
The electrolyte bath has the conductivity \sigma_l and is set to a domain where Ohm's law dictates the current density in accordance with:

i_l = -\sigma_l \nabla\phi_l

where \phi_l is the electrolyte potential.
At the electrode-electrolyte interface, the local current density (i_{loc}) is given by the Butler-Volmer equation:

i_{loc} = i_0 [exp(\alpha_a F\eta/(RT)) - exp(-\alpha_c F\eta/(RT))]
where i_0 is the exchange current density; \alpha_a and \alpha_c are the anodic and cathodic transfer coefficients; \eta is the overpotential; F is Faraday's constant; R is the gas constant; and T is the temperature.
The above equation defines the charge transfer kinetics at the interface. When these two processes are accounted for, namely Ohm’s law and the charge transfer kinetics, the analysis is referred to as secondary current distribution.
Current is applied to the boundary surface for the counter electrode. At the plating surface, a redox reaction with known kinetic parameters (reaction rate constants) takes place. The local growth velocity on the electrode surface is then based on the time-averaged sum of the local current densities during the forward and reverse pulses. At any instant during the plating process, each point of the electrode surface advances proportionally to the local current density and in the direction normal to the electrode surface:

v = (M_{\textrm{Cu}}/\rho_{\textrm{Cu}}) R_{\textrm{Cu}}, with R_{\textrm{Cu}} = \nu_{\textrm{Cu}} i_{loc}/(nF) (Eq. 8)

where M_{\textrm{Cu}} is the molar mass, R_{\textrm{Cu}} is the rate of mass deposition, \rho_{\textrm{Cu}} is the density, and \nu_{\textrm{Cu}} is the stoichiometric coefficient of the deposited copper.
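For an order-of-magnitude check of Eq. 8 (a sketch; the local current density is an assumed illustrative value, not taken from the model):

M_CU = 63.55e-3   # molar mass of copper, kg/mol
RHO_CU = 8960.0   # density of copper, kg/m^3
F = 96485.0       # Faraday's constant, C/mol
n = 2             # electrons transferred per Cu(II) ion
i_loc = 100.0     # local current density, A/m^2 (assumed)

v = (M_CU / RHO_CU) * i_loc / (n * F)          # growth velocity, m/s
print(v * 3600 * 1e6, "micrometers per hour")  # roughly 13 um/h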
Surface profile evolution versus arc length when using a DC pulse for electroplating.
A time-dependent study is set for an hour to observe the electrode shape evolution for the prescribed current distribution conditions in the DC plating scenario. The figure above shows how the protrusion becomes deeper and wider during the DC plating process.
How Does a Reverse Pulse Benefit the Electrochemical Plating Process?
In RPP, operational parameters such as the current density and the pulse width are crucial to achieving desirable results. The average current density at the working electrode is related to the ratio of duty cycles via Eq. 1. Thus, the plating process can be fine-tuned through the choice of operating cycle parameters. We now proceed to compare the results for the DC and the RPP mode.
We set up the model for RPP by modifying the Electrolyte Current node in the existing study to i_{fwd} (to define the forward pulse for plating) and by introducing an additional Secondary Current Distribution node to define the superimposed reverse pulse for dissolution (stripping). In the second Secondary Current Distribution node, we set the electrolyte current to i_{rev} for the reverse pulse while keeping the kinetic parameters identical to those of the previous study.
The average electrolyte current is kept constant while only varying the pulse width using t_{fwd}; i.e., the forward duty cycle. The plating current densities at the cathode surface are then predicted via Eq. 2 and influence the shape evolution via Eq. 8. We compute the time-dependent study for an hour to get the surface profile for a pulse-reversed current.
Surface profile evolution versus arc length using RPP for a duty cycle of 0.85.
As shown above, RPP yields a smoother metal surface than DC plating at the same average electrolyte current. The study illustrates the application of pulsed currents to achieve the optimal current distribution around the electrode surface. Using a pulsed current density at the electrode surface gives a smoother surface profile without using any electrolytic bath additives, thus reducing the cost and toxicity associated with additional chemical additives.
The Wagner number (Wa) for a plating process is always smaller when using a pulse-reversed process than with DC, as we get higher instantaneous plating current densities (see i_{fwd} in Eq. 2). This indicates sensitivity to the geometrical features; hence, the localization of the current distribution is enhanced while using pulse-reversed current. We now try to exploit the local current distribution to yield a leveled surface by tuning the ratio of forward and reverse pulses.
Surface profile evolution versus arc length using RPP for different duty cycles.
In the plot above, we see the continued smoothing effect of decreasing the ratio of the forward to the reverse duty cycle for pulse-reversed current. For RPP, increasing the reverse duty cycle (t_{rev}) increases the dissolution time, and the overall current distribution becomes more uneven, leading to different plating rates on the evolving surface profile. During this increased dissolution period, the peak is recessed due to a prolonged dissolution reaction at a higher current density, as ohmic effects dominate the dissolution process. As we decrease t_{fwd}, the ohmic losses become larger relative to the activation losses during both the plating and reverse cycles, leading to a geometry-sensitive, instantaneous current density distribution. This current density distribution, obtained by varying the duty cycles, makes the surface smoother and yields geometric leveling of the protrusion relative to the thickness of the plating.
Concluding Remarks
We have discussed how current pulses, rather than additional chemical additives, can be used to find operating conditions for the plating current that make the plated surface as smooth as we can afford. For a system with known kinetic parameters, the Wagner number is inversely proportional to the current density and can be adjusted accordingly to achieve the desired plating on a surface. If we want to fill in crevices or recess peaks, we should aim for a system with a lower Wagner number, as in this case. When a uniform coating thickness is required on the surface, we should aim for high Wagner numbers in the plating process. Current pulses with the smaller forward duty cycle used for RPP result in a lower Wagner number, hence buffing out small protrusions and defects. This is the art in the science of electroplating: tuning the operating conditions without really affecting the chemical composition of the electrolytic bath, eliminating the need for chemical additives.
The model demonstrates the following features:
Smoother surface profiles are obtained for RPP than for the DC plating process.
The choice of pulse width in RPP can significantly alter the current distribution at the working electrode.
Smoother finishes can be obtained by tuning the ratio of forward and reverse duty cycles.
By tuning the operational parameters, simulation can help us understand the advantage of using an RPP technique with respect to DC plating for identical electrolytic baths. Simulation thus provides a tool to minimize the use of chemical additives, toxicity, cost, and maintenance. RPP offers better metal distribution and leveling effects than DC plating for chemically equivalent electrolytic baths.
Next Steps
Learn more about how to simulate the RPP process in COMSOL Multiphysics. Click the button below, which will take you to the Application Gallery with the MPH-file (note that you must have a COMSOL Access account and valid software license).
You can also check out the Electrochemical Polishing model, which is defined using the Electric Currents interface for a 2D geometry.
|
This is an excellent question! Just to elaborate upon what @porphyrn and @M. Farooq have said, imagine that the universe is entirely empty except for one sodium atom and one photon (particle of light) that has a wavelength of $\pu{242 nm}$. If the photon bumps into the sodium atom, it might just bounce off: this is called scattering. But another big possibility is that the photon will be absorbed (i.e., ‘taken in’, as it were) by sodium’s 3s valence electron. The energy of a $\pu{242 nm}$ photon is $E_{\pu{242 nm}} = h\nu = \frac{hc}{\lambda}$, where $h = \pu{6.626 x 10^{-34} J s}$ and $c = \pu{2.9979E8 m/s} = \pu{2.9979E17 nm/s}$. This energy is $\pu{8.21E-19 J}$.
Then the sodium’s 3s electron would have just enough energy to escape, i.e., ‘break free’, from the sodium atom. The result would be a sodium ion, $\ce{Na+}$, and an electron, $\ce{e-}$, that were free from one another: the sodium atom has been ionized, i.e., has become an ion, so the energy of the $\pu{242 nm}$ photon is the ionization energy of the 3s electron in sodium. Using the $h$ and $c$ values, the required ionization energy for one sodium atom is $\pu{8.21 x 10^{-19} J}$. For a mole of sodium atoms, multiply by Avogadro’s number, resulting in $\pu{494 kJ/mol}$.
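The arithmetic is easy to verify (a quick sketch reproducing the numbers above):

h = 6.626e-34        # Planck's constant, J s
c = 2.9979e8         # speed of light, m/s
wavelength = 242e-9  # m
N_A = 6.022e23       # Avogadro's number, 1/mol

E_photon = h * c / wavelength
print(E_photon)               # ~8.21e-19 J per atom
print(E_photon * N_A / 1e3)   # ~494 kJ/mol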
So where does the infinity enter the picture? Well, a positively charged sodium ion and the electron that escaped from it attract: opposite charges attract. The Coulombic force of attraction is an inverse square law, similar to gravity, so total absolute freedom would mean that the electron was infinitely far away from the sodium ion and had zero kinetic energy, i.e., zero energy of motion. But, as a practical matter, the separation can be relatively small, not infinite! Indeed, it is known experimentally that an isolated sodium atom, doing its thing in a vacuum, can be ionized by shining $\pu{242 nm}$ light on it.
In your example, suppose a sodium ion, in vacuum, happened to recombine with a free electron. Assume the sodium ion and electron are moving very slowly relative to one another, so there is negligible excess kinetic energy: we don't want the electron to 'harpoon' the poor sodium ion! The electron could drop down into any empty orbital of the sodium ion. Potentially, that is an infinite number of possibilities, hence M. Farooq's answer (and sweet Grotrian diagram). The result would be release of a photon with wavelength longer (numerically 'larger') than $\pu{242 nm}$. This excited sodium atom (it has more energy than the 'ground state') must eventually drop to the 'ground state' (the only state where it is indefinitely stable) by emitting one or more additional photons, or perhaps by getting 'mugged' of its excess energy via collision with something else. This is called collisional deactivation.
But what if the electron dropped all the way down, in one fell swoop, to the 3s orbital? Then the resulting sodium atom would be in its 'ground state', i.e., its lowest-energy (and only indefinitely stable) state, and the emitted photon would be at $\pu{242 nm}$.
|
This question is an attempt to make progress on domotorp's interesting challenge. This question was originally asked in two parts; the former of which was answered by Ilya Bogdanov, and the latter of which is still stumping me. I'll keep both parts of the question for the record, but the interesting part is after the second dividing line.
Let $G$ be a directed graph on $n$ vertices. For any vertices $u$ and $v$, let $\delta(u,v)$ be the length of the shortest directed path from $u$ to $v$, and let $d(u,v)$ be the length of the shortest path from $u$ to $v$ ignoring edge orientations. We will assume that $\delta(u,v)$ (and hence $d(u,v)$) is finite; the term for this is that the graph is strongly connected. I'll write $d=\max_{(u,v)} d(u,v)$ and $\delta = \max_{(u,v)} \delta(u,v)$.
Is there a bound for $\delta$ in terms of $d$, independent of $n$?
This question was answered in the negative by Ilya Bogdanov, below. The question I can't answer:
Define a directed graph to have the pairwise domination property if, for any two distinct vertices $u$ and $v$ of $G$, there is a vertex $x$ with $u \rightarrow x \leftarrow v$. (In particular, this implies $d \leq 2$.) What I really need is:
Is there an integer $k$ such that every graph on $\geq 2$ vertices with the pairwise domination property contains an oriented cycle of length $\leq k$?
Note that it is enough to study strongly connected graphs here: If $G$ has the pairwise domination property, and $H$ is a strongly connected component of $G$ with no edges coming out of it, then $H$ also has the pairwise domination property.
In fact, I can't even prove or disprove the following (hence the bounty):
Does every graph on $\geq 2$ vertices with the pairwise domination property contain an oriented triangle?
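For small cases, here is the brute-force sketch I use to experiment (it exhausts the loop-free digraphs on 4 vertices; of course it says nothing about the general question):

import numpy as np
from itertools import combinations, product

def has_pairwise_domination(A):
    # every pair u != v must have some x with u -> x and v -> x
    return all(np.any(A[u] & A[v]) for u, v in combinations(range(len(A)), 2))

def has_directed_triangle(A):
    # with no self-loops, trace(A^3) > 0 iff a directed 3-cycle exists
    return np.trace(np.linalg.matrix_power(A, 3)) > 0

n = 4
off_diag = [(i, j) for i in range(n) for j in range(n) if i != j]
for bits in product((0, 1), repeat=len(off_diag)):
    A = np.zeros((n, n), dtype=int)
    for (i, j), b in zip(off_diag, bits):
        A[i, j] = b
    if has_pairwise_domination(A) and not has_directed_triangle(A):
        print(A)  # prints any triangle-free example with the property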
|
I want to create formulae where the \rightarrow is enclosed in double quotation marks and appears like any other word in a formula. What I'm getting instead are formulae where these quotation marks are drawn to adjacent words, leaving a gap between the quotation marks and the \rightarrow.
Example: I'm getting

Some'' -> ''Word

but I want

Some ''->'' Word

The sequence ''->'' should be treated the same as Some or Word. The -> (\rightarrow) should be rendered as if it were used in an arbitrary math environment.
I have looked at this answer, but I could not turn that into something useful, as my MWE will show. Also, I do not wish to break up my math environment, since the quotation marks are part of the formulae.
\documentclass{standalone}
\usepackage{varwidth}
\usepackage{amsmath}
\DeclareMathSymbol{\mlq}{\mathord}{operators}{``}
\DeclareMathSymbol{\mrq}{\mathord}{operators}{`'}
\begin{document}
\begin{varwidth}{\linewidth}
$Op \mapsto Op ``\rightarrow'' Op$ % large gap on left; ' treated as prime
$Op \mapsto Op {``\rightarrow''} Op$ % large gap on left; ' treated as prime
$Op \mapsto Op \mlq\rightarrow\mrq Op$ % large gaps on left & right
$Op \mapsto Op {\mlq\rightarrow\mrq} Op$ % large gaps on left & right
Quotation marks close\\
to arrow: ``$\rightarrow$''
\end{varwidth}
\end{document}
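One workaround I'm considering (an untested sketch, not a verified answer): treat the quoted arrow as a single relation symbol whose body is set in text mode, so '' is never parsed as a double prime and the relation spacing stays uniform:

\documentclass{standalone}
\usepackage{amsmath}
\begin{document}
% the quotes are text-mode characters; the arrow stays a math symbol
$Op \mapsto Op \mathrel{\text{``$\rightarrow$''}} Op$
\end{document}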
|
Help:Editing
This Editing Overview has a lot of examples. You may want to keep this page open in a separate browser window for reference while you edit. If you want the super-quickie help, see the Quickstart Guide.
Each of the topics covered here is covered somewhere else in more detail; see the links under "See also" at the end of this page.
Editing basics

Start editing
To start editing a MediaWiki page, click the Edit this page (or just edit) link at one of its edges. This brings you to the edit page: a page with a text box containing the wikitext: the editable source code from which the server produces the webpage. If you just want to experiment, please do so in the sandbox, not here.

Type your changes
You can just type your text. However, also using basic wiki markup (described in the next section) to make links and do simple formatting adds to the value of your contribution.

Summarize your changes
Write a short edit summary in the small field below the edit box. You may use shorthand to describe your changes, as described in the legend.

Preview before saving
When you have finished, click Show preview to see how your changes will look -- before you make them permanent. Repeat the edit/preview process until you are satisfied, then click Save page and your changes will be immediately applied to the article.

Basic text formatting
What it looks like What you type
You can italicize text by putting 2 apostrophes on each side. 3 apostrophes will bold the text. 5 apostrophes will bold and italicize the text. (4 apostrophes don't do anything special -- there's just 'one left over'.)
You can ''italicize text'' by putting 2 apostrophes on each side. 3 apostrophes will bold '''the text'''. 5 apostrophes will bold and italicize '''''the text'''''. (4 apostrophes don't do anything special -- there's just ''''one left over''''.)
A single newline has no effect on the layout. But an empty line starts a new paragraph.
A single newline has no effect on the layout. But an empty line starts a new paragraph.
You can break lines without a new paragraph. Please use this sparingly.
You can break lines<br /> without a new paragraph.<br /> Please use this sparingly.
You should "sign" your comments on talk pages:
You should "sign" your comments on talk pages: <br /> - Three tildes gives your user name: ~~~ <br /> - Four tildes give your user name plus date/time: ~~~~ <br /> - Five tildes gives the date/time alone: ~~~~~ <br />
You can use HTML tags, too, if you want. Some useful ways to use HTML:
Put text in a typewriter font. The same font is generally used for computer code.
Strike out or underline text, or write it in small caps.
Superscripts and subscripts: X², H₂O.
Invisible comments to editors (<!-- -->) only appear while editing the page.
If you wish to make comments to the public, you should usually go on the talk page, though.
You can use <b>HTML tags</b>, too, if you want. Some useful ways to use HTML: Put text in a <tt>typewriter font</tt>. The same font is generally used for <code> computer code</code>. <strike>Strike out</strike> or <u>underline</u> text, or write it <span style= "font-variant:small-caps"> in small caps</span>. Superscripts and subscripts: X<sup>2</sup>, H<sub>2</sub>O Invisible comments to editors (<!-- -->) only appear while editing the page. <!-- Note to editors: blah blah blah. --> If you wish to make comments to the public, you should usually go on the [[talk page]], though.
For a list of HTML tags that are allowed, see HTML in wikitext. However, you should avoid HTML in favor of Wiki markup whenever possible.
Organizing your writing
What it looks like What you type
Section headings
Headings organize your writing into sections. The Wiki software can automatically generate a table of contents from them.
Subsection
Using more equals signs creates a subsection.
A smaller subsection
Don't skip levels, like from two to four equals signs.
Start with 2 equals signs not 1 because 1 creates H1 tags which should be reserved for page title.
== Section headings == ''Headings'' organize your writing into sections. The Wiki software can automatically generate a table of contents from them. === Subsection === Using more equals signs creates a subsection. ==== A smaller subsection ==== Don't skip levels, like from two to four equals signs. Start with 2 equals signs not 1 because 1 creates H1 tags which should be reserved for page title.
Unordered lists are easy to do: start every line with a star; more stars indicate a deeper level. A new line in a list marks the end of the list. Of course you can start again.
* ''Unordered lists'' are easy to do: ** Start every line with a star. *** More stars indicate a deeper level. *: Previous item continues. ** A new line * in a list marks the end of the list. * Of course you can start again.
Numbered lists are very organized and easy to follow. A new line marks the end of the list, and new numbering starts with 1.
# ''Numbered lists'' are: ## Very organized ## Easy to follow A new line marks the end of the list. # New numbering starts with 1.
Another kind of list is a definition list: a word or phrase followed by its definition. You can even do mixed lists, nest them inside each other, or break lines in lists.
Another kind of list is a ''definition list'': ; Word : Definition of the word ; Here is a longer phrase that needs a definition : Phrase defined ; A word : Which has a definition : Also a second one : And even a third * You can even do mixed lists *# and nest them *# inside each other *#* or break lines<br />in lists. *#; definition lists *#: can be *#;; nested too
A colon (:) indents a line or paragraph, and a new line after that starts a new paragraph. This is often used for discussion on talk pages.
: A colon (:) indents a line or paragraph. A new line after that starts a new paragraph. <br /> This is often used for discussion on talk pages. : We use 1 colon to indent once. :: We use 2 colons to indent twice. ::: We use 3 colons to indent 3 times, and so on.
You can make horizontal dividing lines (----) to separate text.
But you should usually use sections instead, so that they go in the table of contents.
You can make horizontal dividing lines (----) to separate text. ---- But you should usually use sections instead, so that they go in the table of contents.
You can add footnotes to sentences using the ref tag -- this is especially good for citing a source.
You can add footnotes to sentences using the ''ref'' tag -- this is especially good for citing a source. :There are over six billion people in the world.<ref>CIA World Factbook, 2006.</ref> <br /> References: <references/> For details, see [[Wikipedia:Footnotes]] and [[Help:Footnotes]].
You will often want to make clickable links to other pages.
What it looks like What you type
Here's a link to a page named [[Arts and Letters Magazine]]. You can even say [[arts and letters magazine]]s and the link will show up correctly.
You can put formatting around a link. Example: Wikipedia.
You can put formatting around a link. Example: ''[[Wikipedia]]''. The ''first letter'' of articles is automatically capitalized, so [[wikipedia]] goes to the same place as [[Wikipedia]]. Capitalization matters after the first letter.
The weather in Riga is a page that doesn't exist yet. You could create it by clicking on the link.
[[The weather in Riga]] is a page that doesn't exist yet. You could create it by clicking on the link.
You can link to a page section by its title:
If multiple sections have the same title, add a number. #Example section 3 goes to the third section named "Example section".
You can link to a page section by its title: *[[List of cities by country#Morocco]]. If multiple sections have the same title, add a number. [[#Example section 3]] goes to the third section named "Example section".
You can make a link point to a different place with a piped link. Put the link target first, then the pipe character "|", then the link text.
Or you can use the "pipe trick" so that text in brackets does not appear.
You can make a link point to a different place with a [[Help:Piped link|piped link]]. Put the link target first, then the pipe character "|", then the link text. *[[Help:Link|About Links]] *[[List of cities by country#Morocco| Cities in Morocco]] Or you can use the "pipe trick" so that text in brackets does not appear. *[[Spinning (textiles)|Spinning]]
You can make an external link just by typing a URL: http://www.nupedia.com
You can give it a title: Nupedia
Or leave the title blank: [1]
You can make an external link just by typing a URL: http://www.nupedia.com You can give it a title: [http://www.nupedia.com Nupedia] Or leave the title blank: [http://www.nupedia.com] Linking to an e-mail address works the same way: mailto:someone@domain.com or [mailto:someone@domain.com someone]
You can redirect the user to another page.
#REDIRECT [[Official position]]
Category links do not show up in line but instead at page bottom, and cause the page to be listed in the category.
Add an extra colon to link to a category in line without causing the page to be listed in the category.
[[Help:Category|Category links]] do not show up in line but instead at page bottom ''and cause the page to be listed in the category.'' [[Category:English documentation]] Add an extra colon to ''link'' to a category in line without causing the page to be listed in the category: [[:Category:English documentation]]
The Wiki reformats linked dates to match the reader's date preferences. These three dates will show up the same if you choose a format in your Preferences:
The Wiki reformats linked dates to match the reader's date preferences. These three dates will show up the same if you choose a format in your [[Special:Preferences|]]: * [[July 20]], [[1969]] * [[20 July]] [[1969]] * [[1969]]-[[07-20]]

Just show what I typed
A few different kinds of formatting will tell the Wiki to display things as you typed them.
What it looks like What you type
The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing new lines and multiple spaces. It still interprets special characters: →
<nowiki> The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing new lines and multiple spaces. It still interprets special characters: → </nowiki> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → <pre> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → </pre>
Leading spaces are another way to preserve formatting.
Putting a space at the beginning of each line stops the text from being reformatted. It still interprets Wiki markup and special characters: →

 Leading spaces are another way to preserve formatting. Putting a space at the beginning of each line stops the text from being reformatted. It still interprets [[Wiki]] ''markup'' and special characters: →

Images, tables, video, and sounds
After uploading, just enter the filename, highlight it, and press the "embedded image" button of the edit toolbar. This will produce the syntax for embedding the file:
[[Image:filename.png]]
This is a very quick introduction. For more information, see:
Help:Images and other uploaded files for how to upload files Help:Extended image syntax for how to arrange images on the page Help:Tables for how to create a table
What it looks like What you type
A picture, including alternate text:
You can put the image in a frame with a caption:
A picture, including alternate text: [[Image:Wiki.png|The logo for this Wiki]] You can put the image in a frame with a caption: [[Image:Wiki.png|frame|The logo for this Wiki]]
A link to Wikipedia's page for the image: Image:Wiki.png
Or a link directly to the image itself: Media:Wiki.png
A link to Wikipedia's page for the image: [[:Image:Wiki.png]] Or a link directly to the image itself: [[Media:Wiki.png]]
Use media: links to link directly to sounds or videos.
Use '''media:''' links to link directly to sounds or videos: [[media:Sg_mrob.ogg|A sound file]]
{| border="1" cellspacing="0" cellpadding="5" align="center" ! This ! is |- | a | table |- |}

Mathematical formulas
You can format mathematical formulas with TeX markup. See Help:Formula.
What it looks like What you type
<math>\sum_{n=0}^\infty \frac{x^n}{n!}</math>
<math>\sum_{n=0}^\infty \frac{x^n}{n!}</math>

Templates

Templates are segments of Wiki markup that are meant to be copied automatically ("transcluded") into a page. You add them by putting the template's name in {{double braces}}.
Some templates take parameters, as well, which you separate with the pipe character.
What it looks like What you type {{Transclusion demo}}
This template takes two parameters, and creates underlined text with a hover box for many modern browsers supporting CSS:
Go to this page to see the H:title template itself: {{H:title}}
This template takes two parameters, and creates underlined text with a hover box for many modern browsers supporting CSS: {{H:title|This is the hover text| Hover your mouse over this text}} Go to this page to see the H:title template itself: {{tl|H:title}}

Minor edits
A logged-in user can mark an edit as "minor". Minor edits are generally spelling corrections, formatting, and minor rearrangement of text. Users may choose to hide minor edits when viewing Recent Changes.
Marking a significant change as a minor edit is considered bad Wikiquette. If you have accidentally marked an edit as minor, make a dummy edit, verify that the "[ ] This is a minor edit" check-box is unchecked, and explain in the edit summary that the previous edit was not minor.

See also
Help:Editing FAQ
Help:Automatic conversion of wikitext
Help:Calculation
Help:Editing toolbar
Help:HTML in wikitext
Protecting pages
Wikipedia:Cheatsheet -- a listing of the basic editing commands.
Help:Starting a new page
Help:Variable
HTML elements
Wikipedia:Manual of Style
|
Edit: an earlier version of this answer muddled a distinction between lax limit and 2-limit. I've decided to undelete it in case someone sees how to complete the argument at the end.
If $C$ is locally presentable and $S$ is a semi-monad whose underlying functor is accessible, then there exists a unitalization of $S$. Here is a proof modeled after an idea discussed at the nLab at the page free monad.
Define an algebra of a semi-monad $S: C \to C$ in the expected way, as an object $X$ of $C$ equipped with a morphism (an "action") $SX \to X$ satisfying the usual associativity law for an action. Morphisms between algebras are also defined in the expected way, so that there is a full embedding $S$-$\mathrm{Alg}_\mathrm{smd} \hookrightarrow S \downarrow C$ into the comma category. (I use the subscripts "smd" and "mnd" to indicate algebras qua semi-monads and monads.)
The main thing to check is that the forgetful functor $S$-$\mathrm{Alg}_\mathrm{smd} \to C$ is monadic in the "evil" sense, so that there is an isomorphism $F$-$\mathrm{Alg}_\mathrm{mnd} \simeq S$-$\mathrm{Alg}_\mathrm{smd}$ in $Cat/C$ for some monad $F$. The claim is that then $F$ is the free monad on the semi-monad $S$. For in that case, given a monad $M$ on $C$ we have natural bijections between:
Semi-monad morphisms $S \to M$,
$S$-algebra structures $S U_M \to U_M$ where $U_M:$ $M$-$\mathrm{Alg}_\mathrm{mnd} \to C$ is the forgetful functor,
Morphisms $M$-$\mathrm{Alg}_\mathrm{mnd} \to S$-$\mathrm{Alg}_\mathrm{smd}$ in $Cat/C$,
Morphisms $M$-$\mathrm{Alg}_\mathrm{mnd} \to F$-$\mathrm{Alg}_\mathrm{mnd}$ in $Cat/C$,
$F$-algebra structures (qua algebras over a monad) $F U_M \to U_M$,
Monad morphisms $F \to M$
so that $F$ is evidently the free monad on the semi-monad $S$.
So now we check monadicity, using the precise monadicity theorem. It is straightforward that the forgetful functor $U: S$-$\mathrm{Alg}_{\mathrm{smd}} \to C$ creates (not just reflects!) $U$-split coequalizers, so we just have to check that $U$ has a left adjoint. However, since the 2-category of locally presentable categories and accessible functors inherits 2-limits from $Cat$, and since $S$-$\mathrm{Alg}_\mathrm{smd}$ is a 2-limit (edit: lax limit) in $Cat$ (for essentially the same reason that Eilenberg-Moore categories for monads are 2-limits (edit: lax limits)), we see that $U: S$-$\mathrm{Alg}_\mathrm{smd} \to C$ is an accessible functor between locally presentable categories. In this situation, existence of a left adjoint to $U$ is equivalent to preservation of limits by $U$. But limit-preservation is clear. So the conditions of the precise monadicity theorem are satisfied.
Edit: The last paragraph would need to be fixed to make the argument complete, either by somehow showing $U$ lives in the 2-category of accessible categories and accessible functors (note that $S$-$\mathrm{Alg}_{\mathrm{smd}}$ is complete, and so would then be locally presentable), or e.g. by showing that the full inclusion $S$-$\mathrm{Alg}_{\mathrm{smd}} \hookrightarrow S \downarrow C$ is reflective, or by some other means.
|
Renormalization and $\alpha$-limit set for expanding Lorenz maps
1. Wuhan Institute of Physics and Mathematics, The Chinese Academy of Sciences, Wuhan 430071, China
Mathematics Subject Classification: Primary: 37E05, 37F25; Secondary: 54H2. Citation: Yiming Ding. Renormalization and $\alpha$-limit set for expanding Lorenz maps. Discrete & Continuous Dynamical Systems - A, 2011, 29 (3): 979-999. doi: 10.3934/dcds.2011.29.979
|
Colossally abundant numbers are positive integers $n$ for which there exists a positive exponent $\epsilon$ such that
$$\frac{\sigma(n)}{n^{1+\epsilon}}>\frac{\sigma(m)}{m^{1+\epsilon}}$$
for all integers $m>1,m\ne n$. Here, $\sigma(n)$ denotes the sum-of-divisors function $\sum_{d|n}d$.
The first few colossally abundant numbers are $2,6,12,60,120,360...$ (from Wolfram Mathworld here).
My question is, how does one go about discovering such numbers, or proving that a number is colossally abundant? One can't test individually for all combinations of $n,m,\epsilon$, so there must be an algebraic method. What is it? (Google is no help.)
UPDATE
In light of @Mindlack's and @John Omielan's helpful comments below, and in order not to end up with an extended comment section, I thought it might be good to elaborate on my original question here.
@John: Yes, I take your point, but it still sounds a lot like searching for a needle in a haystack. Maybe that's what you're trying to say?

@Mindlack: OK, so setting $n=2$ gives $\frac{\sigma(n)}{n^{1+\epsilon}}=\frac{\sigma(2)}{2^{1+\epsilon}}=\frac{3}{2^{1+\epsilon}}$; with you so far. But where does $\frac{\sigma(n)}{n}=\sum_{d|n}\frac{n/d}{n}$ come from? It seems to me that we have $\frac{\sigma(n)}{n}=\frac{\sum_{d|n}d}{n}$. So you are suggesting that $\frac{\sum_{d|n}d}{n}=\sum_{d|n}\frac{n/d}{n}$... How so? Surely we should have $\frac{\sigma(n)}{n}=\frac{\sum_{d|n}d}{n}=\sum_{d|n}\frac{d}{n}$. And... well, there I lose you. I can't follow the rest because I can't really get past this one issue.
I'm 100% sure it's me being stupid - I'm teaching myself all this stuff for the first time, and on my own. I realise that no one on MathStackExchange signs up to hold the hands of newbies, but if you have the time (or anyone else does) I'd really appreciate some clarification.
BTW: aren't we all, as a community and as a species, incredibly lucky that such sites exist? Wow.
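In the meantime, here's the brute-force sanity check I've been playing with (not the algebraic method I'm asking about): fix $\epsilon$ and maximize $\frac{\sigma(n)}{n^{1+\epsilon}}$ over a finite range with sympy. For suitable $\epsilon$ it reproduces small colossally abundant numbers such as 2, 12, and 120, though a genuine proof would of course have to cover all $n$:

from sympy import divisor_sigma

def ca_candidate(eps, limit=10000):
    # n in 2..limit maximizing sigma(n)/n^(1+eps)
    return max(range(2, limit + 1),
               key=lambda n: divisor_sigma(n) / n ** (1 + eps))

for eps in (1.0, 0.5, 0.2, 0.1, 0.05):
    print(eps, ca_candidate(eps))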
|
1. Homework Statement: Show that f is 2\pi-periodic and analytic on the strip \vert Im(z) \vert < \eta iff it has a Fourier expansion f(z) = \sum_{n = -\infty}^{\infty} a_{n}z^{n}, and that a_n = \frac{1}{2 \pi i} \int_{0}^{2\pi} e^{-inx}f(x) dx. Also, there's something about the lim sup of...
1. Homework Statement: Show that G is isomorphic to the Galois group of an irreducible polynomial of degree d iff it has a subgroup H of index d such that \bigcap_{\sigma \in G} \sigma H \sigma^{-1} = \{1\}.
2. Homework Equations
3. The Attempt at a Solution: I know that if G acts...
1. Homework Statement: I'm trying to come up with an example of a quartic polynomial over a field F which has a root in F, but whose splitting field isn't the same as that of its resolvent cubic.
2. Homework Equations
3. The Attempt at a Solution: Well, I know the splitting field of the...
1. Homework Statement: Let p(z) be a polynomial of degree n \geq 1. Show that \left\{z \in \mathbb{C} : \left|p(z)\right| > 1 \right\} is connected with connectivity at most n+1.
2. Homework Equations
A region (connected, open set) considered as a set in the complex plane has...
1. First, the integers aren't even a group under multiplication. So you should be using addition as your operation. Under addition the integers are an (infinite) cyclic group. What does that tell you?2. I'm not sure what you mean by mapping to an infinite number of items. But for the first...
I think I understand... The splitting field is the minimal field that contains all the roots of the minimal polynomial, and anything that's Galois over the rationals contains all the conjugates of \alpha (i.e., the roots of the minimal polynomial). So it contains the splitting field. Thanks...
I believe the splitting field is just the original extension adjoin i. So that's handy. But it seems like that really shouldn't be the Galois closure. Why would any Galois extension of the rationals that contains \mathbb{Q}(\alpha) have to contain i? It seems like it would only have to be a...
1. Homework Statement: Find the Galois closure of the field \mathbb{Q}(\alpha) over \mathbb{Q}, where \alpha = \sqrt{1 + \sqrt{2}}.
2. Homework Equations
Um... the Galois closure of E over F, where E is a finite separable extension, is a Galois extension of F containing E which is minimal...
Well, I believe that to use the Weierstrass M-test you have to find a bound that works for all x on whatever set you're testing convergence in, not just a given x in the set. That is, your bound a is allowed to be a function of n, but it cannot be a function of x.
1. Homework Statement: Okay, I'm trying to explicitly determine the Galois group of x^p - 2, for p a prime.
2. Homework Equations
3. The Attempt at a Solution: Okay, so what I've come up with is that I'm going to have extensions \textbf{Q} \subset \textbf{Q}(\zeta) \subset...
Well, if you don't mind a cheat-ish answer, you could just look at the logarithm, then use the facts that logarithmic and exponential functions are continuous and that products of convergent sequences converge to the product of the limits.
In general, a subring of R is a subset of R which is a ring with structure comparable to R. So you don't actually have to show all the axioms because multiplication being associative and distributive is inherited just by being a subset of R. Similarly, some of the additive group structure is...
You're completely right. It doesn't imply g^8 = e. You're going to have to do a bit more work than just finding an element of the preimage of g'. Unfortunately, I'm having trouble coming up with the exact proof off the top of my head, so I can't give you much of a kick in the right direction...
Wow, I'm really slow today. Okay, g(z) is not entire because, as you said, it's not analytic at 0. Why does your argument fail? Because the integral \displaystyle\oint_{c} g(w) dw is not defined if c passes through 0, so it's not 0 for every closed curve in the complex plane. However, since...
|
LaTeX typesetting is made by using special tags or commands that provide a handful of ways to format your document. Sometimes standard commands are not enough to fulfil some specific needs, in such cases new commands can be defined and this article explains how.
Most of the LaTeX commands are simple words preceded by a special character.
In a document there are different types of \textbf{commands} that define the way the elements are displayed. These commands may insert special elements: $\alpha \beta \Gamma$
In the previous example there are different types of commands. For instance,
\textbf will make boldface the text passed as parameter to the command. In mathematical mode there are special commands to display Greek characters.
Commands are special words that determine LaTeX's behaviour. Usually these words are preceded by a backslash and may take some parameters.
The command \begin{itemize} starts an environment; see the article about environments for a better description. Below the environment declaration is the command \item, which tells LaTeX that this is an item in a list and thus has to be formatted accordingly, in this case by adding a special mark (a small black dot called a bullet) and indenting it.
Some commands need one or more parameters to work. The example in the introduction includes a command to which a parameter has to be passed, \textbf; this parameter is written inside braces and is necessary for the command to do something.
There are also optional parameters that can be passed to a command to change its behaviour; these optional parameters have to be put inside brackets. In the example above, the command \item[\S] does the same as \item, except that inside the brackets is \S, which changes the black dot before the line to a special character.
LaTeX is shipped with a huge number of commands for a large number of tasks; nevertheless, it is sometimes necessary to define some special commands to simplify repetitive and/or complex formatting.
New commands are defined by the \newcommand statement; let's see an example of the simplest usage.

\newcommand{\R}{\mathbb{R}}
The set of real numbers is usually represented by a blackboard bold capital R: \( \R \).
The statement \newcommand{\R}{\mathbb{R}} has two parameters that define the new command:
\R is the name of the new command.
\mathbb{R} is what the command prints; \mathbb typesets its argument in blackboard bold (it requires the package amsfonts or amssymb).
After the command definition you can see how the command is used in the text. Even though in this example the new command is defined right before the paragraph where it's used, good practice is to put all your user-defined commands in the preamble of your document.
It is also possible to create new commands that accept some parameters.
\newcommand{\bb}[1]{\mathbb{#1}} Other numerical systems have similar notations. The complex numbers \( \bb{C} \), the rational numbers \( \bb{Q} \) and the integer numbers \( \bb{Z} \).

The line \newcommand{\bb}[1]{\mathbb{#1}} defines a new command that takes one parameter:
\bb is the name of the new command.
[1] is the number of parameters the command takes.
\mathbb{#1} is what the command does; #1 is replaced by the first parameter passed to the command.
User-defined commands are even more flexible than the examples shown above. You can define commands that take optional parameters:

\newcommand{\plusbinomial}[3][2]{(#2 + #3)^#1}
One way to save time when writing many expressions with exponents is to define a new command that makes them simpler: \[ \plusbinomial{x}{y} \] And even the exponent can be changed: \[ \plusbinomial[4]{y}{y} \]
Let's analyse the syntax of the line \newcommand{\plusbinomial}[3][2]{(#2 + #3)^#1}:
\plusbinomial is the name of the new command.
[3] is the number of parameters the command takes.
[2] is the default value of the first, optional parameter; if no optional parameter is given, this value is used.
(#2 + #3)^#1 is what the command does; here #1 is the optional exponent (defaulting to 2), while #2 and #3 are the two required parameters.
If you define a command that has the same name as an already existing LaTeX command, you will see an error message when compiling your document and the command you defined will not work. If you really want to override an existing command, this can be accomplished with \renewcommand:
\renewcommand{\S}{\mathbb{S}}
The Riemann sphere (the complex numbers plus $\infty$) is sometimes represented by \( \S \)
In this example the command \S (see the example in the commands section) is overwritten to print a blackboard bold S.
\renewcommand uses the same syntax as \newcommand.
|
Basic definitions
We now formalize the concepts introduced in the previous lecture. It turns out that it's easiest to deal with events as subsets of a large set called the probability space, instead of as abstract logical statements. The logical operations of negation, conjunction and disjunction are replaced by the set-theoretic operations of taking the complement, intersection or union, but the intuitive meaning attached to those operations is the same.
Definition: Algebra
If \(\Omega\) is a set, an algebra of subsets of \(\Omega\) is a collection \({\cal F}\) of subsets of \(\Omega\) satisfying the axioms:
\[ \emptyset \in {\cal F}, \tag{A1} \]
\[ A \in {\cal F} \implies \Omega\setminus A \in {\cal F}, \tag{A2} \]
\[ A,B \in {\cal F} \implies A\cup B \in {\cal F}. \tag{A3}\]
A word synonymous with algebra in this context is field.
Definition: Sigma Algebra
A \(\sigma\)-algebra (also called a \(\sigma\)-field) is an algebra \({\cal F}\) that satisfies the additional axiom
\[ A_1, A_2, A_3, \ldots \in {\cal F} \implies \cup_{n=1}^\infty A_n \in {\cal F}. \tag{A4}\]
Example If \(\Omega\) is any set, then \(\{ \emptyset, \Omega \}\) is a \(\sigma\)-algebra -- in fact it is the smallest possible \(\sigma\)-algebra of subsets of \(\Omega\). Similarly, the power set \({\cal P}(\Omega)\) of all subsets of \(\Omega\) is a \(\sigma\)-algebra, and is (obviously) the largest \(\sigma\)-algebra of subsets of \(\Omega\).
Definition: Measurable Space A measurable space is a pair \((\Omega,{\cal F})\) where \(\Omega\) is a set and \(\cal F\) is a \(\sigma\)-algebra of subsets of \(\Omega\).
Definition: Probability Measure
Given a measurable space \((\Omega, {\cal F})\), a probability measure on \((\Omega, {\cal F})\) is a function \(\prob : {\cal F} \to [0,1]\) satisfying:
\[\prob(\emptyset) = 0, \quad \prob(\Omega) = 1, \tag{P1}\]
\[\prob\Big(\bigcup_{n=1}^\infty A_n\Big) = \sum_{n=1}^\infty \prob(A_n) \quad \text{whenever } A_1, A_2, \ldots \in {\cal F} \text{ are pairwise disjoint}. \tag{P2}\]
Definition: Probability Space A probability space is a triple \((\Omega, {\cal F}, \prob)\), where \((\Omega, {\cal F})\) is a measurable space, and \(\prob\) is a probability measure on \((\Omega, {\cal F})\).
Intuitively, we think of \(\Omega\) as representing the set of possible outcomes of a probabilistic experiment, and refer to it as the sample space. The \(\sigma\)-algebra \({\cal F}\) is the \(\sigma\)-algebra of events, namely those subsets of \(\Omega\) which have a well-defined probability (as we shall see later, it is not always possible to assign well-defined probabilities to all sets of outcomes). And \(\prob\) is the ``notion'' or ``measure'' of probability on our sample space.
Probability theory can be described loosely as the study of probability spaces (this is of course a gross oversimplification...). A more general mathematical theory called measure theory studies measure spaces, which are like probability spaces except that the measures can take values in \([0,\infty]\) instead of \([0,1]\) and the total measure of the space is not necessarily equal to \(1\) (such measures are referred to as \(\sigma\)-additive nonnegative measures). Measure theory is an important and non-trivial theory, and studying it requires a separate concentrated effort. We shall content ourselves with citing and using some of its most basic results. For proofs and more details, refer to Chapter 1 and the measure theory appendix in [Dur2010] or to a measure theory textbook.

Properties and examples
Lemma: Probability Space Properties
If \((\Omega, {\cal F}, \prob)\) is a probability space, then we have:
Exercise Prove the lemma above.
Example: Discrete Probability Spaces
Let \(\Omega\) be a countable set and let \(p:\Omega\to[0,1]\) be a function such that
\[ \sum_{\omega \in \Omega} p(\omega) = 1. \]
This corresponds to the intuitive notion of a probabilistic experiment with a finite or countably infinite number of outcomes, where each individual outcome \(\omega\) has a probability \(p(\omega)\) of occurring. We can put such an ``elementary'' or ``discrete'' experiment in our more general framework by defining the \(\sigma\)-algebra of events \({\cal F}\) to be the set of subsets of \(\Omega\) and defining the probability measure \(\prob\) by
\[ \prob(A) = \sum_{\omega \in A} p(\omega), \qquad A\in {\cal F}. \]
If \(\Omega\) is a finite set, a natural probability measure to consider is the uniform measure, defined by
\[ \prob(A) = \frac{|A|}{|\Omega|}. \]
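For example, a single roll of a fair die corresponds to \(\Omega = \{1,\ldots,6\}\) with the uniform measure, and the event \(A = \{2,4,6\}\) (``the roll is even'') has \(\prob(A) = 3/6 = 1/2\).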
Example: Choosing a random number uniformly in \((0,1)\)
The archetypical example of a ``non-elementary'' probability space (i.e., one which does not fall within the scope of the previous example) is the experiment of choosing a random number uniformly in the interval \((0,1)\). How do we know that it makes sense to speak of such an experiment? We don't, a priori; but we can try to construct a probability space that models it. At a minimum, we require that every subinterval be an event,
\[ (a,b) \in {\cal F}, \qquad (0\le a<b\le 1),\]
and that its probability be its length: \(\prob\big((a,b)\big) = b-a\).
How do we generate a \(\sigma\)-algebra of subsets of \((0,1)\) that contains all the intervals? The answer is to take the \(\sigma\)-algebra generated by the intervals, in the sense of the exercise below; it is known as the Borel \(\sigma\)-algebra.
What about the probability measure \(\prob\)? Here we will simply cite a result from measure theory that says that the measure we are looking for exists, and is unique. This is not too difficult to prove, but doing so would take us a bit too far off course.
Exercise: The \(\sigma\)-algebra generated by a set of subsets of \(\Omega\)
Let \({\cal A}\) be a collection of subsets of \(\Omega\). (i) Show that the intersection of an arbitrary family of \(\sigma\)-algebras of subsets of \(\Omega\) is itself a \(\sigma\)-algebra. (ii) Show that there is a unique minimal \(\sigma\)-algebra \(\sigma({\cal A})\) satisfying the two properties: it contains \({\cal A}\), and it is contained in every \(\sigma\)-algebra that contains \({\cal A}\).
Hint for (ii): Let \(({\cal F}_i)_{i\in I}\) be the collection of all \(\sigma\)-algebras of subsets of \(\Omega\) that contain \({\cal A}\). This is a non-empty collection, since it contains for example \({\cal P}(\Omega)\), the set of all subsets of \(\Omega\). Any \(\sigma\)-algebra \(\sigma({\cal A})\) that satisfies the two properties above is necessarily a subset of any of the \({\cal F}_i\)'s, hence it is also contained in the intersection of all the \({\cal F}_i\)'s, which is a \(\sigma\)-algebra by part (i) of the exercise.
Definition: Generated \(\sigma\) algebra
If \({\cal A}\) is a collection of subsets of a set \(\Omega\), the \(\sigma\) algebra \(\sigma({\cal A})\) discussed above is called the \(\sigma\) algebra generated by \({\cal A}\).
Example: The space of infinite coin toss sequences
Another archetypical experiment in probability theory is that of a sequence of independent fair coin tosses, so let's try to model this experiment with a suitable probability space. If for convenience we represent the result of each coin toss as a binary value of \(0\) or \(1\), then the sample space \(\Omega\) is simply the set of infinite sequences of \(0\)'s and \(1\)'s, namely
\[ \Omega = \big\{ (x_1, x_2, x_3, \ldots) : x_i \in \{0,1\}, i=1,2,\ldots \big\} = \{ 0,1 \}^{\mathbb{N}}. \]
What about the \(\sigma\) algebra \({\cal F}\)? We will take the same approach as we did in the previous example, which is to require certain natural sets to be events, and to take as our \(\sigma\) algebra the \(\sigma\) algebra generated by these ``elementary'' events. In this case, surely, for each \(n\ge 1\) we would like the set
\begin{equation}
A_n(1) = \{ \mathbf{x}=(x_1,x_2,\ldots) \in \Omega : x_n = 1 \}
\label{eq:specialcase}
\end{equation}
to be an event (in words, this represents the event ``the coin toss \(x_n\) came out Heads''). Therefore we take \({\cal F}\) to be the \(\sigma\) algebra generated by the collection of sets of this form.
Finally, the probability measure \(\prob\) should conform to our notion of a sequence of independent fair coin tosses. Generalizing the notation in \eqref{eq:specialcase}, for \(a\in\{0,1\}\) define
\[A_n(a) = \{ \mathbf{x}=(x_1,x_2,\ldots) \in \Omega : x_n = a \}.\]
Then \(\prob\) should satisfy
\[ \prob(A_n(a)) = \frac12, \]
representing the fact that the \(n\)-th coin toss is unbiased. But more generally, for any \(n\ge 1\) and \((a_1,a_2,\ldots,a_n) \in \{0,1\}^n\), since the first \(n\) coin tosses are independent, \(\prob\) should satisfy
\begin{equation}
\prob\big(A_1(a_1) \cap A_2(a_2) \cap \ldots \cap A_n(a_n)\big) = \frac{1}{2^n}, \qquad (n\ge 1, \ (a_1,\ldots,a_n)\in\{0,1\}^n).
\label{eq:indcointosses}
\end{equation}
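(Added computational aside, not part of the original notes: the condition above is easy to check empirically with a short simulation; Python with its standard library is assumed, and the function name `estimate` is my own.)

# Monte Carlo sanity check: the empirical frequency of a fixed pattern
# (a_1, ..., a_n) among the first n simulated fair coin tosses should be
# close to 1 / 2**n.
import random

def estimate(pattern, trials=200_000):
    hits = sum(
        all(random.randint(0, 1) == a for a in pattern)
        for _ in range(trials)
    )
    return hits / trials

pattern = (1, 0, 1)  # a_1 = 1, a_2 = 0, a_3 = 1
print(estimate(pattern), 1 / 2 ** len(pattern))  # both close to 0.125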
As in the example of the uniform measure on \((0,1)\), we appeal to a general result from measure theory that guarantees the existence and uniqueness of such a measure:
Theorem: Products of Probability Spaces
Let
\[(\Omega_n, {\cal F}_n, \prob_n)_{n=1}^\infty\]
be a sequence of probability spaces. Denote
\[\Omega = \prod_{n=1}^\infty \Omega_n\]
(the Cartesian product of the outcome sets), and let \({\cal F}\) be the \(\sigma\) algebra of subsets of \(\Omega\) generated by sets which are of the form
\[ \{ (x_1,x_2,\ldots) \in \Omega : x_n \in A \} \]
for some \(n\ge1\) and set \(A \in {\cal F}_n\). Then there exists a unique probability measure \(\prob\) on \((\Omega, {\cal F})\) such that for any \(n\ge 1\) and any finite sequence
\[(A_1, A_2, \ldots, A_n) \in {\cal F}_1 \times {\cal F}_2 \times \ldots \times {\cal F}_n\]
the equation
\[ \prob\Big(\big\{ (x_1,x_2,\ldots ) \in \Omega : x_1\in A_1, x_2 \in A_2, \ldots, x_n \in A_n \big\} \Big) = \prod_{k=1}^n \prob_k(A_k) \]
holds.
Exercise Explain why the ``infinite sequence of coin tosses'' experiment is a special case of a product of probability spaces, and why the existence and uniqueness of a probability measure satisfying \eqref{eq:indcointosses} follows from Theorem \ref{thm-productspaces}.

Contributors
Dan Romik (UC Davis)
|
Restriktor allows for easy-to-use testing of linear equality and inequality restrictions about parameters and effects in linear models.
Many researchers have specific expectations about the relations between the parameters (e.g., means, regression coefficients). These expectations can often be expressed in terms of order/inequality constraints between the parameters. This is known as informative hypothesis testing. The one-sided t-test is the simplest example (e.g., $\mu_1 \geq$ 0 and $\mu_1 \leq \mu_2$). This readily extends to the multi-parameter setting, where more than one inequality constraint can be imposed on the parameters (e.g., $\mu_1 \leq \mu_2 \leq \mu_3$). Incorporating such prior knowledge in the hypothesis test has some benefits.
The long-term goal of restriktor is to become the software tool for inequality and/or equality constraint estimation and testing, in a large variety of statistical models.
To get a first impression of how restriktor works in practice, consider the following example of order-constrained means. The dataset (Zelazo, Zelazo and Kolb, 1972) consists of ages (in months) at which an infant starts to walk alone, from four different treatment groups (Active, Passive, Control, No). The assumption is that the walking exercises do not increase the mean age at which a child starts to walk. The figure below shows the model of our order-constrained hypothesis. The four groups are ordered by imposing inequality constraints (<, 'smaller than') between the group means.
## to run this example, copy and paste the syntax below into R

## load the restriktor library
library(restriktor)

## construct the constraint syntax. This can simply be done
## by using the factor level names (here Active, Passive,
## Control, No) preceded by the name of the factor variable (here Group).
myConstraints <- ' GroupActive  < GroupPassive
                   GroupPassive < GroupControl
                   GroupControl < GroupNo '

## fit the unrestricted linear model. We removed the intercept (-1)
## such that the parameter estimates reflect the group means.
fit.ANOVA <- lm(Age ~ -1 + Group, data = ZelazoKolb1972)

## evaluate the informative hypothesis
iht(fit.ANOVA, constraints = myConstraints)
Restriktor: restricted hypothesis tests (19 residual degrees of freedom):

Multiple R-squared remains 0.986

Constraint matrix:
   GroupActive GroupControl GroupNo GroupPassive op rhs active
1:          -1            0       0            1 >=   0     no
2:           0            1       0           -1 >=   0     no
3:           0           -1       1            0 >=   0     no

Overview of all available hypothesis tests:

Global test: H0: all parameters are restricted to be equal (==)
         vs. HA: at least one inequality restriction is strictly true (>)
       Test statistic: 6.4267, p-value: 0.03085

Type A test: H0: all restrictions are equalities (==)
         vs. HA: at least one inequality restriction is strictly true (>)
       Test statistic: 6.4267, p-value: 0.03085

Type B test: H0: all restrictions hold in the population
         vs. HA: at least one restriction is violated
       Test statistic: 0.0000, p-value: 1

Type C test: H0: at least one restriction is false or active (==)
         vs. HA: all restrictions are strictly true (>)
       Test statistic: 0.3807, p-value: 0.3538

Note: Type C test is based on a t-distribution (one-sided),
      all other tests are based on a mixture of F-distributions.
|
Note that by denoting $f(x) = \tan x \sin x -x^2$, you found that $$f'''(x)=-\sin x (1-6\sec^4x+\sec^2x) = \sin x (1+3\sec^2 x)(2\sec^2 x -1 ) \geq 0 \quad \text{for } 0 \le x < \tfrac{\pi}{2}.$$ Hence $f''(x)$ is increasing; with $f''(0)=0$, we conclude that $f''(x) \geq 0$.
Hence $f'(x)$ is increasing; with $f'(0)=0$, we conclude that $f'(x) \geq 0$.
Hence $f(x)$ is increasing; with $f(0)=0$, we conclude that $f(x) \geq 0$.
This is what we wish to prove.
From a more advanced perspective, the inequality follows from the fact that the Taylor expansion $$\tan x \sin x = x^2+\frac{x^4}{6}+\cdots$$ at $x=0$ has all coefficients positive; the radius of convergence of this series is $\pi/2$.
To see why all coefficients are positive, write$$\tan x \sin x = \frac{1}{\cos x} - \cos x$$
The Taylor expansion of $\sec x$ at $x=0$ is $$\sec x = \sum_{n=0}^{\infty} \frac{(-1)^n E_{2n}}{(2n)!} x^{2n}$$where the $E_{2n}$ are Euler numbers. The fact that $(-1)^n E_{2n}$ is positive follows from the series evaluation:$$\beta(2n+1) = \frac{(-1)^n E_{2n} \pi^{2n+1}}{4^{2n+1} (2n)!}$$with $\beta(n)$ the Dirichlet beta function.
Also note that we have $|E_{2n}| > 1 $ when $n>1$, hence the power series of $\frac{1}{\cos x}-\cos x$ has all coefficients positive.
From this, you might want to prove the stronger inequalities:
When $0<x<\frac{\pi}{2}$,
$$\tan x \sin x > x^2 + \frac{x^4}{6} $$
$$\tan x \sin x > x^2 + \frac{x^4}{6} + \frac{31x^6}{360} $$
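(Added verification sketch, not part of the original answer: the displayed coefficients can be checked symbolically, assuming sympy is available.)

# Expand tan(x)*sin(x) at x = 0; the leading coefficients match the terms
# above and are all positive.
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.tan(x) * sp.sin(x), x, 0, 10))
# -> x**2 + x**4/6 + 31*x**6/360 + 173*x**8/5040 + O(x**10)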
|
(Disclaimer: I'm a high school student, and my knowledge of mathematics extends only to some elementary high school calculus. I don't know if what I'm about to do is valid mathematics.)
I noticed something really neat the other day.
Suppose we define $L$ as a "left-shift operator" that takes a function $f(x)$ and returns $f(x+1)$. Clearly, $(LLL\ldots LLLf)(x)=f(x+(\text{number of $L$s}))$, so it would seem a natural extension to denote $(L^hf)(x)=f(x+h)$.
Now, by the definition of the Taylor series, $f(x+h)=\sum\limits_{k=0}^\infty \frac{1}{k!}\frac{d^kf}{dx^k}\bigg|_{x}h^k$. Let's rewrite this as $\sum\limits_{k=0}^\infty \left(\frac{\left(h\frac{d}{dx}\right)^k}{k!}f\right)(x)$. Now, we can make an interesting observation: $\sum\limits_{k=0}^\infty \frac{\left(h\frac{d}{dx}\right)^k}{k!}$ is simply the Taylor series for $e^u$ with $u=h\frac{d}{dx}$. Let's rewrite the previous sum as $\left(e^{h\frac{d}{dx}}f\right)(x)$. This would seem to imply that $(L^hf)(x)=\left(e^{h\frac{d}{dx}}f\right)(x)$, or equivalently, $L=e^\frac{d}{dx}$. We might even say that $\frac{d}{dx}=\ln L$.
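(Added note, not part of the original question: the observation is easy to test symbolically; the sketch below assumes sympy and truncates the operator series at an arbitrary order N.)

# Apply the truncated operator series sum_{k<N} (h d/dx)^k / k! to a test
# function and compare with the shifted function f(x + h); they agree to O(h^N).
import sympy as sp

x, h = sp.symbols('x h')
f = sp.sin(x)          # any smooth test function works here
N = 12                 # truncation order of the operator series

approx = sum(h**k / sp.factorial(k) * sp.diff(f, x, k) for k in range(N))
exact = f.subs(x, x + h)
print(sp.series(exact - approx, h, 0, 8))  # prints O(h**8): no discrepancy below that order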
My question is, does what I just did have any mathematical meaning? Is it valid? I mean, I've done a bit of creative number-shuffling, but how does one make sense of exponentiating or taking the logarithm of an operator? What, if any, significance does a statement like $\frac{d}{dx}=\ln L$ have?
|
The Problem
Find the absolute minimum and maximum values of the following function on the given interval: $\ f(\theta) = \cos \theta; -\pi \leq \theta \leq {\pi \over 6} $
What I've Done So Far
Find the derivative of $\ f(\theta) $. This equals $\ -\sin \theta $. Determine critical points. It won't ever be undefined, but I know there are points within the interior of the domain where the function is equal to zero.
So where I'm stuck is finding at what points $\ f(\theta) $ equals zero. Is it as simple as taking the inverse $\ \sin $ of $\ \theta $?
Thanks! Any help will be much appreciated!
Garren
|
I'm trying to find a formula for the average distance between a point $\ P$ and a curve segment $\ C$.
When I tried to use the formula for the mean of a function with a distance function $$\frac{1}{a+b}\int_{a}^{b}(\|f(t),P\|)dt$$ where I defined $\ C$ in terms of $\ f:{\rm I\!R} \to {\rm I\!R}^2, C= \{f(i)|a \leq i \leq b\}$
and $\ P = (0,0), a=0$ and $\ b=1$, the average distance between $\ P$ and $f(t) = (t,t^2)$ should be the same as the distance between $\ P$ and $f(t) = (t,\sqrt{t})$, as the 2 curve segments are just mirror images. This is not the case when I use my formula.
Does anyone know why my formula is wrong, and which formula is the correct one?
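(Added numerical illustration, not part of the original post: the discrepancy is easy to reproduce, and weighting by arc length \(\|f'(t)\|\) removes it, which suggests the intended formula is \(\frac{1}{L}\int_a^b \|f(t)-P\|\,\|f'(t)\|\,dt\) with \(L\) the length of the segment. The sketch assumes numpy; the helper names are mine.)

# Compare a plain parameter average of the distance to P = (0,0) with an
# arc-length weighted average, for the mirror-image curves (t, t^2) and (t, sqrt(t)).
import numpy as np

t = np.linspace(1e-6, 1.0, 200_001)

def trapz(y):
    # simple trapezoid rule on the grid t
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def averages(fx, fy, dfx, dfy):
    dist = np.hypot(fx, fy)     # distance from f(t) to the origin
    speed = np.hypot(dfx, dfy)  # |f'(t)|, the arc-length weight
    plain = trapz(dist) / (1.0 - 1e-6)
    weighted = trapz(dist * speed) / trapz(speed)
    return plain, weighted

print(averages(t, t**2, np.ones_like(t), 2 * t))
print(averages(t, np.sqrt(t), np.ones_like(t), 0.5 / np.sqrt(t)))
# the plain averages differ; the arc-length weighted averages agree (to a few decimals here)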
|
How many solutions does the equation $\cos^2x-\cos x-x=0$ have in the interval $\displaystyle \left[0,\frac\pi2\right]$?
Clearly, $x=0$ is a solution. Are there any other? I couldn't proceed after differentiating it to find the extremes.
You don't need to take the derivative for this.
We know that $x=0$ is an obvious solution and $\frac{\pi}{2}$ is not a solution. We also know that on the interval of $(0,\frac{\pi}{2})$, we have $0<\cos x<1$. Thus, we have the inequality $\cos^2 x<\cos x<\cos x+x$. Therefore, $\cos^2 x - \cos x-x<0$. This means that the only solution is $x=0$. Therefore, there is only $1$ solution.
The derivative is $\sin x - \sin 2x - 1$, which is $\lt 0$ when $x \in [0,\frac{\pi}{2})$ (at $x=0$ it equals $-1$, and for $0<x<\frac{\pi}{2}$ we have $\sin 2x \gt 0$ and $\sin x \lt 1$).
You can fill in the rest.
|
In this section, we use some basic integration formulas studied previously to solve some key applied problems. It is important to note that these formulas are presented in terms of indefinite integrals. Although definite and indefinite integrals are closely related, there are some key differences to keep in mind. A definite integral is either a number (when the limits of integration are constants) or a single function (when one or both of the limits of integration are variables). An indefinite integral represents a family of functions, all of which differ by a constant. As you become more familiar with integration, you will get a feel for when to use definite integrals and when to use indefinite integrals. You will naturally select the correct approach for a given problem without thinking too much about it. However, until these concepts are cemented in your mind, think carefully about whether you need a definite integral or an indefinite integral and make sure you are using the proper notation based on your choice.
Basic Integration Formulas
Recall the basic integration formulas given earlier and the rule on properties of definite integrals. Let's look at a few examples of how to apply these rules.
Example \(\PageIndex{1}\): Integrating a Function Using the Power Rule
Use the power rule to integrate the function \( ∫^4_1\sqrt{t}(1+t)dt\).
Solution
The first step is to rewrite the function and simplify it so we can apply the power rule:
\[ ∫^4_1\sqrt{t}(1+t)dt=∫^4_1t^{1/2}(1+t)dt=∫^4_1(t^{1/2}+t^{3/2})dt.\]
Now apply the power rule:
\[ ∫^4_1(t^{1/2}+t^{3/2})dt=(\frac{2}{3}t^{3/2}+\frac{2}{5}t^{5/2})∣^4_1\]
\[ =[\frac{2}{3}(4)^{3/2}+\frac{2}{5}(4)^{5/2}]−[\frac{2}{3}(1)^{3/2}+\frac{2}{5}(1)^{5/2}]=\frac{256}{15}.\]
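(Added check, not part of the original text: the computation can be confirmed in one line, assuming sympy is available.)

import sympy as sp
t = sp.symbols('t')
print(sp.integrate(sp.sqrt(t) * (1 + t), (t, 1, 4)))  # -> 256/15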
Exercise \(\PageIndex{1}\)
Find the definite integral of \( f(x)=x^2−3x\) over the interval \([1,3].\)
Hint
Follow the process from Example \(\PageIndex{1}\) to solve the problem.
Answer
\[ −\frac{10}{3}\]
The Net Change Theorem
The
net change theorem considers the integral of a rate of change. It says that when a quantity changes, the new value equals the initial value plus the integral of the rate of change of that quantity. The formula can be expressed in two ways. The second is more familiar; it is simply the definite integral.
Net Change Theorem
The new value of a changing quantity equals the initial value plus the integral of the rate of change:
\[F(b)=F(a)+∫^b_aF'(x)dx\]
or
\[∫^b_aF'(x)dx=F(b)−F(a).\]
Subtracting \(F(a)\) from both sides of the first equation yields the second equation. Since they are equivalent formulas, which one we use depends on the application.
The significance of the net change theorem lies in the results. Net change can be applied to area, distance, and volume, to name only a few applications. Net change accounts for negative quantities automatically without having to write more than one integral. To illustrate, let’s apply the net change theorem to a
velocity function in which the result is displacement.
We looked at a simple example of this in The Definite Integral. Suppose a car is moving due north (the positive direction) at 40 mph between 2 p.m. and 4 p.m., then the car moves south at 30 mph between 4 p.m. and 5 p.m. We can graph this motion as shown in Figure \(\PageIndex{1}\).
Figure \(\PageIndex{1}\): The graph shows speed versus time for the given motion of a car.
Just as we did before, we can use definite integrals to calculate the net displacement as well as the total distance traveled. The net displacement is given by
\[ ∫^5_2v(t)dt=∫^4_240dt+∫^5_4−30dt=80−30=50.\]
Thus, at 5 p.m. the car is 50 mi north of its starting position. The total distance traveled is given by
\[ ∫^5_2|v(t)|dt=∫^4_240dt+∫^5_430dt=80+30=110.\]
Therefore, between 2 p.m. and 5 p.m., the car traveled a total of 110 mi.
To summarize, net displacement may include both positive and negative values. In other words, the velocity function accounts for both forward distance and backward distance. To find net displacement, integrate the velocity function over the interval. Total distance traveled, on the other hand, is always positive. To find the total distance traveled by an object, regardless of direction, we need to integrate the absolute value of the velocity function.
Example \(\PageIndex{2}\): Finding Net Displacement
Given a velocity function \(v(t)=3t−5\) (in meters per second) for a particle in motion from time \(t=0\) to time \(t=3,\) find the net displacement of the particle.
Solution
Applying the net change theorem, we have
\[ ∫^3_0(3t−5)dt=\frac{3t^2}{2}−5t∣^3_0=[\frac{3(3)^2}{2}−5(3)]−0=\frac{27}{2}−15=\frac{27}{2}−\frac{30}{2}=−\frac{3}{2}.\]
The net displacement is \( −\frac{3}{2}\) m (Figure \(\PageIndex{2}\)).
Figure \(\PageIndex{2}\): The graph shows velocity versus time for a particle moving with a linear velocity function.
Example \(\PageIndex{3}\): Finding the Total Distance Traveled
Use Example \(\PageIndex{2}\) to find the total distance traveled by a particle according to the velocity function \(v(t)=3t−5\) m/sec over a time interval \([0,3].\)
Solution
The total distance traveled includes both the positive and the negative values. Therefore, we must integrate the absolute value of the velocity function to find the total distance traveled.
To continue with the example, use two integrals to find the total distance. First, find the t-intercept of the function, since that is where the division of the interval occurs. Set the equation equal to zero and solve for t. Thus,
\(3t−5=0\)
\(3t=5\)
\( t=\frac{5}{3}.\)
The two subintervals are \( [0,\frac{5}{3}]\) and \( [\frac{5}{3},3]\). To find the total distance traveled, integrate the absolute value of the function. Since the function is negative over the interval \([0,\frac{5}{3}]\), we have \(|v(t)|=−v(t)\) over that interval. Over \([ \frac{5}{3},3]\), the function is positive, so \(|v(t)|=v(t)\). Thus, we have
\( ∫^3_0|v(t)|dt=∫^{5/3}_0−v(t)dt+∫^3_{5/3}v(t)dt\)
\( =∫^{5/3}_05−3tdt+∫^3_{5/3}3t−5dt\)
\( =(5t−\frac{3t^2}{2})∣^{5/3}_0+(\frac{3t^2}{2}−5t)∣^3_{5/3}\)
\( =[5(\frac{5}{3})−\frac{3(5/3)^2}{2}]−0+[\frac{27}{2}−15]−[\frac{3(5/3)^2}{2}−\frac{25}{3}]\)
\( =\frac{25}{3}−\frac{25}{6}+\frac{27}{2}−15−\frac{25}{6}+\frac{25}{3}=\frac{41}{6}\).
So, the total distance traveled is \( \frac{41}{6}\) m.
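(Added sketch, not part of the original text: the same computation can be reproduced symbolically, mirroring the split of the interval at \(t=\frac{5}{3}\); sympy is assumed.)

import sympy as sp

t = sp.symbols('t')
v = 3 * t - 5

net = sp.integrate(v, (t, 0, 3))                                # -> -3/2
total = (sp.integrate(5 - 3 * t, (t, 0, sp.Rational(5, 3)))
         + sp.integrate(3 * t - 5, (t, sp.Rational(5, 3), 3)))  # -> 41/6
print(net, total)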
Exercise \(\PageIndex{2}\)
Find the net displacement and total distance traveled in meters given the velocity function \(f(t)=\frac{1}{2}e^t−2\) over the interval \([0,2]\).
Hint
Follow the procedures from Examples \(\PageIndex{2}\) and \(\PageIndex{3}\). Note that \(f(t)≤0\) for \(t≤\ln 4\) and \(f(t)≥0\) for \(t≥\ln 4\).
Answer
Net displacement: \( \frac{e^2−9}{2}≈−0.8055\) m; total distance traveled: \( 4\ln 4−7.5+\frac{e^2}{2}≈1.740\) m
Applying the Net Change Theorem
The net change theorem can be applied to the flow and consumption of fluids, as shown in Example \(\PageIndex{4}\).
Example \(\PageIndex{4}\): How Many Gallons of Gasoline Are Consumed?
If the motor on a motorboat is started at \(t=0\) and the boat consumes gasoline at the rate of \(5−t^3\) gal/hr, how much gasoline is used in the first 2 hours?
Solution
Express the problem as a definite integral, integrate, and evaluate using the Fundamental Theorem of Calculus. The limits of integration are the endpoints of the interval [0,2]. We have
\[ ∫^2_0(5−t^3)dt=(5t−\frac{t^4}{4})∣^2_0=[5(2)−\frac{(2)^4}{4}]−0=10−\frac{16}{4}=6.\]
Thus, the motorboat uses 6 gal of gas in 2 hours.
Example \(\PageIndex{5}\): Chapter Opener: Iceboats
As we saw at the beginning of the chapter, top
iceboat racers can attain speeds of up to five times the wind speed. Andrew is an intermediate iceboater, though, so he attains speeds equal to only twice the wind speed.
Figure \(\PageIndex{3}\): (credit: modification of work by Carter Brown, Flickr)
Suppose Andrew takes his iceboat out one morning when a light 5-mph breeze has been blowing all morning. As Andrew gets his iceboat set up, though, the wind begins to pick up. During his first half hour of iceboating, the wind speed increases according to the function \(v(t)=20t+5.\) For the second half hour of Andrew’s outing, the wind remains steady at 15 mph. In other words, the wind speed is given by
\[ v(t)=\begin{cases}20t+5 & \text{for } 0≤t≤\frac{1}{2}\\15 & \text{for } \frac{1}{2}≤t≤1\end{cases}.\]
Recalling that Andrew’s iceboat travels at twice the wind speed, and assuming he moves in a straight line away from his starting point, how far is Andrew from his starting point after 1 hour?
Solution
To figure out how far Andrew has traveled, we need to integrate his velocity, which is twice the wind speed. Then
Distance =\( ∫^1_02v(t)dt.\)
Substituting the expressions we were given for \(v(t)\), we get
\( ∫^1_02v(t)dt=∫^{1/2}_02v(t)dt+∫^1_{1/2}2v(t)dt\)
\( =∫^{1/2}_02(20t+5)dt+∫^1_{1/2}2(15)dt\)
\( =∫^{1/2}_0(40t+10)dt+∫^1_{1/2}30dt\)
\( =[20t^2+10t]|^{1/2}_0+[30t]|^1_{1/2}\)
\( =(\frac{20}{4}+5)−0+(30−15)\)
\(=25.\)
Andrew is 25 mi from his starting point after 1 hour.
Exercise \(\PageIndex{3}\)
Suppose that, instead of remaining steady during the second half hour of Andrew’s outing, the wind starts to die down according to the function \(v(t)=−10t+15.\) In other words, the wind speed is given by
\( v(t)=\begin{cases}20t+5 & \text{for } 0≤t≤\frac{1}{2}\\−10t+15 & \text{for } \frac{1}{2}≤t≤1\end{cases}\).
Under these conditions, how far from his starting point is Andrew after 1 hour?
Hint
Don’t forget that Andrew’s iceboat moves twice as fast as the wind.
Answer
\(17.5\) mi
Integrating Even and Odd Functions
We saw in Functions and Graphs that an
even function is a function in which \(f(−x)=f(x)\) for all x in the domain—that is, the graph of the curve is unchanged when x is replaced with −x. The graphs of even functions are symmetric about the y-axis. An odd function is one in which \(f(−x)=−f(x)\) for all x in the domain, and the graph of the function is symmetric about the origin.
Integrals of even functions, when the limits of integration are from −a to a, involve two equal areas, because they are symmetric about the y-axis. Integrals of odd functions, when the limits of integration are similarly \([−a,a],\) evaluate to zero because the areas above and below the x-axis are equal.
Rule: Integrals of Even and Odd Functions
For continuous even functions such that \(f(−x)=f(x),\)
\[∫^a_{−a}f(x)dx=2∫^a_0f(x)dx.\]
For continuous odd functions such that \(f(−x)=−f(x),\)
\[∫^a_{−a}f(x)dx=0.\]
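(Added sketch, not part of the original text: both rules are easy to confirm on the examples that follow, assuming sympy is available.)

import sympy as sp

x = sp.symbols('x')
even = sp.integrate(3 * x**8 - 2, (x, -2, 2))           # -> 1000/3
half = 2 * sp.integrate(3 * x**8 - 2, (x, 0, 2))        # -> 1000/3 as well
odd = sp.integrate(-5 * sp.sin(x), (x, -sp.pi, sp.pi))  # -> 0
print(even, half, odd)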
Example \(\PageIndex{6}\): Integrating an Even Function
Integrate the even function \( ∫^2_{−2}(3x^8−2)dx\) and verify that the integration formula for even functions holds.
Solution
The symmetry appears in the graphs in Figure \(\PageIndex{4}\). Graph (a) shows the region below the curve and above the x-axis. We have to zoom in to this graph by a huge amount to see the region. Graph (b) shows the region above the curve and below the x-axis. The signed area of this region is negative. Both views illustrate the symmetry about the y-axis of an even function. We have
\( ∫^2_{−2}(3x^8−2)dx=(\frac{x^9}{3}−2x)∣^2_{−2}\)
\( =[\frac{(2)^9}{3}−2(2)]−[\frac{(−2)^9}{3}−2(−2)]\)
\( =(\frac{512}{3}−4)−(−\frac{512}{3}+4)\)
\( =\frac{1000}{3}\).
To verify the integration formula for even functions, we can calculate the integral from 0 to 2 and double it, then check to make sure we get the same answer.
\[ ∫^2_0(3x^8−2)dx=(\frac{x^9}{3}−2x)∣^2_0=\frac{512}{3}−4=\frac{500}{3}\]
Since \( 2⋅\frac{500}{3}=\frac{1000}{3},\) we have verified the formula for even functions in this particular example.
Figure \(\PageIndex{4}\): Graph (a) shows the positive area between the curve and the x-axis, whereas graph (b) shows the negative area between the curve and the x-axis. Both views show the symmetry about the y-axis.
Example \(\PageIndex{7}\): Integrating an Odd Function
Evaluate the definite integral of the odd function \(−5\sin x\) over the interval \([−π,π].\)
Solution
The graph is shown in Figure \(\PageIndex{5}\). We can see the symmetry about the origin by the positive area above the x-axis over \([−π,0]\), and the negative area below the x-axis over \([0,π].\) We have
\[ ∫^π_{−π}−5\sin x\,dx=−5(−\cos x)|^π_{−π}=5\cos x|^π_{−π}=[5\cos π]−[5\cos(−π)]=−5−(−5)=0.\]
Figure \(\PageIndex{5}\):The graph shows areas between a curve and the x-axis for an odd function.
Exercise \(\PageIndex{4}\)
Integrate the function \( ∫^2_{−2}x^4dx.\)
Hint
Integrate an even function.
Answer
\(\dfrac{64}{5}\)
Key Concepts
The net change theorem states that when a quantity changes, the final value equals the initial value plus the integral of the rate of change. Net change can be a positive number, a negative number, or zero.
The area under an even function over a symmetric interval can be calculated by doubling the area over the positive x-axis. For an odd function, the integral over a symmetric interval equals zero, because half the area is negative.
Key Equations
Net Change Theorem
\( F(b)=F(a)+∫^b_aF'(x)dx\) or \(∫^b_aF'(x)dx=F(b)−F(a)\)
Glossary
net change theorem: if we know the rate of change of a quantity, the net change theorem says the future quantity is equal to the initial quantity plus the integral of the rate of change of the quantity
Contributors
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
|
Convergence rates of nonlinear Stokes problems in homogenization
Boundary Value Problems, volume 2019, Article number: 96 (2019)
Abstract
In this paper, we study the convergence rates of solutions in homogenization of nonlinear Stokes Dirichlet problems. The main difficulty of this work is twofold. On the one hand, the nonlinear Stokes problems do not fit the standard framework of second-order elliptic equations in divergence form. On the other hand, the nonlinearity causes new difficulties in the estimation of the relevant quantities as well as of the first-order approximation term. As a consequence, we establish the sharp rates of convergence in \(H^{1}\) and \(L^{2}\). This work may be regarded as an extension of the approach for the linear Stokes problems to the nonlinear case.
Introduction
The main purpose of this paper is to establish the sharp rates of convergence in \(H^{1}\) and \(L^{2}\) for nonlinear Stokes problems with the Dirichlet boundary condition. More precisely, let
Ω be a bounded \(C^{1,1}\) domain in \(\mathbb{R}^{n}\), \(n \geq 3\). Let \(u_{\varepsilon }\in H^{1}(\varOmega;\mathbb{R}^{n})\) be a weak solution to the following problem, which arises in fluid dynamics in porous media:
with the compatibility condition
where
n is the outward unit normal to ∂Ω.
Throughout this paper, the summation convention is used. The nonlinear operator \(L_{\varepsilon }\) is defined by
We will assume that the function
A satisfies the periodicity condition
coerciveness and growth conditions
for all \(y\in \mathbb{R}^{n}\) and \(\xi,\xi '\in \mathbb{R}^{n}\), where \(\lambda _{1}>0\). We impose the smoothness condition
where \(\lambda _{2}>0\) and \(0<\alpha \leq 1\). Without loss of generality, we also assume that
Associated with (1.1) is the homogenized problem
The homogenized operator is defined by
where the function
Q is given, for each \(\xi \in \mathbb{R}^{n}\), by
The periodic functions \((N, \chi )\in H^{1}(\mathbb{R} ^{n})\times L^{2}(\mathbb{R}^{n})\) are the so-called correctors, satisfying the following cell problem:
It is well known that, by the homogenization theory of Stokes problems, the solution \(u_{\varepsilon }\rightharpoonup u_{0}\) weakly in \(H^{1}(\varOmega;\mathbb{R}^{n})\), \(p_{\varepsilon }\rightharpoonup p _{0}\) weakly in \(L^{2}(\varOmega )\), and \(A(x/\varepsilon, \triangledown u_{\varepsilon })\rightharpoonup Q(\triangledown u_{0})\) weakly in \(L^{2}(\varOmega,\mathbb{R}^{n\times n})\), as \(\varepsilon \rightarrow 0\). The existence and convergence results of the weak solution to problem (1.1) may be found in [4, 12].
The convergence rate estimate is one of the fundamental issues in quantitative homogenization. There are many classic works about convergence results for solutions in homogenization of second-order elliptic equations in various settings. In 2011, Gérard-Varet and Masmoudi [6] got the \(L^{2}\) convergence for the Neumann boundary layer problems. In 2012, Kenig, Lin, and Shen [13] established \(L^{2}\) as well as \(H^{\frac{1}{2}}\) convergence in Lipschitz domains for Dirichlet and Neumann problems. In 2013, Aleksanyan, Shahgholian, and Sjölin [1, 2] proved pointwise and \(L^{p}\) convergence estimates for fixed operators and oscillating Dirichlet boundary data. In 2014, Kenig, Lin, and Shen [14] also obtained \(W^{k,p}\) convergence rates of Dirichlet or Neumann problems for the second-order equations with rapidly oscillating periodic coefficients by using the asymptotic estimates of the Green or Neumann functions. In 2015, the second author [24] proved the pointwise as well as \(W^{1,p}\) convergence estimates for fixed operators and oscillating Neumann boundary data by utilizing oscillatory integral estimates in Fourier analysis. In 2016, Shen [17] proved \(L^{q}\) convergence rates for Dirichlet or Neumann problems with no smoothness assumption on the coefficients. In 2018, Shen and Zhuge [19] got the \(L^{2}\) convergence rate for the Neumann problems with first-order oscillating Neumann boundary data.
For the case of Stokes problems, some outstanding results about regularity and convergence of solutions in homogenization were established by Gu and Shen in a series of papers. The uniform interior estimates and boundary Hölder estimates for the Dirichlet problem have been established in [8]. Then, the authors in [9] obtained the sharp boundary regularity estimates in homogenization of the Dirichlet problem. In 2015, Gu [7] also proved convergence rates in \(L^{2}\) and \(H^{1}\) of Dirichlet problems for linear Stokes systems. In 2017, Gu and Zhuge [10] got the asymptotic behaviors of the Green functions as well as the convergence rates in \(L^{p}\) and \(L^{\infty }\) for solutions. Recently, other authors have also been interested in the regularity estimates for the Stokes problems, see [3, 5, 11, 23] and their references for more results.
The main difficulty of this work is twofold. On the one hand, the nonlinear Stokes problems do not fit the standard framework of second-order elliptic equations in divergence form, which is caused by the pressure term. On the other hand, the nonlinearity causes new difficulties in the estimation of the relevant quantities as well as of the first-order approximation term.
The motivation for this paper comes from the technique used to deal with linear Stokes problems by Gu in [7]. The novelty of this paper lies in that it may be regarded as an extension of the approach for the linear Stokes problems to the nonlinear case. As far as the author knows, very few convergence rate results are known in the field of nonlinear Stokes problems.
The following are the main results of this paper.
Theorem 1
Let Ω be a bounded \(C^{1,1}\) domain in \(\mathbb{R}^{n}\). Let \(u_{\varepsilon }\in H^{1}(\varOmega;\mathbb{R} ^{n})\) and \(u_{0}\in H^{2}(\varOmega;\mathbb{R}^{n})\) be the weak solutions of the mixed boundary value problems (1.1) and (1.9), respectively. Then, under assumptions (1.2)–(1.8), there exists a constant C such that
where \(T_{\varepsilon }\) is the smoothing operator, \(\widetilde{u}_{0}\) is an extension of \(u_{0}\), and \(\omega _{\varepsilon }\) is an approximate function.

Theorem 2
Under the same assumptions as Theorem 1, there exists a constant C such that

Theorem 3
Under the same conditions as Theorem 1, there exists a constant C such that
The rest of the paper is organized as follows. Section 2 contains some basic definitions and useful propositions which will play important roles in obtaining convergence rates. In Sect. 3, we show that the solution \(u_{\varepsilon }\) of the nonlinear Stokes problem converges to the solution \(u_{0}\) of the corresponding homogenized problem; this is based on the use of a smoothing operator as well as homogenization tools.
Preliminaries
We begin by specifying some of our notations.
Let \(B_{r}(x)\) denote an open ball with center
x and radius r. \(\varOmega _{\varepsilon }=\{x\in \mathbb{R}^{n}: \operatorname{dist}(x,\partial \varOmega )\leq \varepsilon \}\). Since Ω is Lipschitz, then there exists a bounded extension operator \(E:H^{2}(\varOmega )\rightarrow H^{2}( \mathbb{R}^{n})\) such that \(\widetilde{u}_{0}\) is an extension of \(u_{0}\) satisfying \(\Vert \widetilde{u}_{0} \Vert _{H^{2}(\mathbb{R}^{n})}\leq C \Vert u_{0} \Vert _{H^{2}(\varOmega )}\). We also set \(\varphi \in C_{0}^{\infty }( \varOmega;\mathbb{R}^{n})\) is a smooth function and \(\Vert \varphi \Vert _{H^{1}(\mathbb{R}^{n})}\leq C \Vert \varphi \Vert _{H^{1}(\varOmega )}\). We choose a cut-off function \(\eta _{\varepsilon }\in C_{0}^{\infty }(\mathbb{R}^{n})\), which satisfies the conditions: \(\operatorname{supp}{(}\eta _{\varepsilon }{)} \subset \varOmega _{\varepsilon }\), \(\eta _{\varepsilon }|_{\partial \varOmega }\equiv 1\) and \(\vert \triangledown \eta _{\varepsilon } \vert \leq C/\varepsilon \). In this paper, C always denotes a positive constant which may vary in different formulas.
Associated with operator \(L_{\varepsilon }\) in (1.1), the homogenized operator is
Proposition 2.1
The function Q defined in (1.10) satisfies properties analogous to those of the function A:
and
for all \(\xi,\xi '\in \mathbb{R}^{n}\).

Proof
The proof can be found in [16], and is similar to the linear operator case. Obviously, this shows that the homogenized operator \(L_{0}\) still satisfies the same coerciveness and growth conditions. □
Proposition 2.2
The function \(N(\cdot, \xi )\in H^{1}(Y)\) is a weak solution to (1.11). Then we have
and
for all \(y,\xi \in \mathbb{R}^{n}\).

Proof
The next proposition is the special relation between
Q and A in homogenization. We also call them the flux correctors.

Proposition 2.3
Let
where \(y\in Y\) and \(\xi \in \mathbb{R}^{n}\). Together with (1.10) and (1.11), it is easy to check that \(F(\cdot,\xi )\) satisfies the conditions \(\int_{Y}F(y,\xi )\,dy=0\) and \(\operatorname{div}_{y}F(y, \xi )=\triangledown \chi \). Then there exists \(\varPhi _{ij}(\cdot,\xi )\in H^{1}(\mathbb{R}^{n})\) such that
Moreover,

Proof
The linear operator case is well known (see, for example, [13], Lemma 3.1). This proposition is quite similar to the linear case. Let \(f_{j}\in H^{2}(Y)\) be the solution to the cell problem \(\triangle f_{j}+\chi _{j}=F_{j}\) in
Y. Then, we could define \(\varPhi _{ij}(y,\xi )=\frac{\partial }{\partial y_{i}}[f_{j}(y,\xi )]-\frac{ \partial }{\partial y_{j}}[f_{i}(y,\xi )]\). From the energy estimate and (1.11), we may get the desired properties. We refer the reader to [7] for more details. □
Recently, the smoothing operators were introduced by Suslina in [20, 21], and were used to establish the convergence estimate in \(L^{2}\) for a broad class of elliptic or parabolic operators. The present work extends the use of smoothing operators to the case of nonlinear Stokes problems.
Fix \(\psi \in C^{\infty }_{0}(B_{1}(0))\) such that \(\psi \geq 0\) and \(\int _{\mathbb{R}^{n}}\psi \,dx=1\). Define operator \(T_{\varepsilon }\) on \(L^{2}\) as
where \(\psi _{\varepsilon }(x)=\varepsilon ^{-n}\psi (x/\varepsilon )\).
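(Added illustration, mine rather than the paper's: a one-dimensional numerical sketch of such a mollification, assuming numpy; the bump profile is a standard choice, not the paper's specific ψ.)

# Convolve a rough function with the rescaled bump psi_eps (unit mass),
# the one-dimensional analogue of the smoothing operator T_eps above.
import numpy as np

x = np.linspace(-3.0, 3.0, 4001)
dx = x[1] - x[0]
eps = 0.2

y = x / eps
psi = np.where(np.abs(y) < 1.0, np.exp(-1.0 / np.maximum(1.0 - y**2, 1e-12)), 0.0)
psi_eps = psi / (np.sum(psi) * dx)  # normalised so that its integral is 1

u = np.sign(x)                           # a discontinuous test function
Tu = np.convolve(u, psi_eps, mode="same") * dx
print(Tu[2000 - 3 : 2000 + 4])           # smooth transition through x = 0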
Proposition 2.4
If \(u_{0}\in H^{2}(\mathbb{R}^{n})\), then
and

Proof
By Parseval’s theorem and Hölder’s inequality, we can obtain the desired result. The proof can be found in [17]. □
Proposition 2.5
If \(u_{0}\in H^{2}(\mathbb{R}^{n})\), then

Proof

Proofs of theorems
The goal of this section is to establish \(H^{1}\) and \(L^{2}\) convergence rates of solutions.
Proof of Theorem 1
Let \(\omega _{\varepsilon }\in H^{1}(\varOmega )\) be a weak solution of
We will use \(\omega _{\varepsilon }\) to approximate the difference of pressure term.
Introduce the first-order approximation of \(u_{\varepsilon }\):
Note that, for any \(\varphi \in C_{0}^{\infty }(\varOmega;\mathbb{R} ^{n})\),
A simple calculation then gives that
Next, we shall estimate \(I_{2}\). Let
Note that \(F(x/\varepsilon, T_{\varepsilon }(\triangledown \widetilde{u}_{0}))\) is a periodic function with respect to the first variable, and it satisfies the conditions of Proposition 2.3. Then there exists \(\varPhi _{ij}(\cdot,\xi )\in H^{1}(\mathbb{R}^{n})\) satisfying
Thus, it gives that
For the first term,
where the first term vanishes in the last equality, which follows from the antisymmetry of \(\varPhi _{ij}\).
where we have used Proposition 2.2.
Then let \(\varphi =u_{\varepsilon }-v_{\varepsilon }=u_{\varepsilon }-u _{0}-\varepsilon N(x/\varepsilon, T_{\varepsilon }(\triangledown \widetilde{u}_{0}))+\omega _{\varepsilon }\). By the coercive condition and the equation satisfied by \(\omega _{\varepsilon }\), we can get the desired result, which completes the proof. □
Proof of Theorem 2
According to Theorem 1, we obtain the estimate
Hence, it suffices to show that
In fact, by equation (3.1) and energy estimate, we obtain
This completes the proof of Theorem 2. □
Proof of Theorem 3
which completes the proof of Theorem 3. □
References

1. Aleksanyan, H., Shahgholian, H., Sjölin, P.: Applications of Fourier analysis in homogenization of Dirichlet problem I. Pointwise estimates. J. Differ. Equ. 254, 2626–2637 (2013)
2. Aleksanyan, H., Shahgholian, H., Sjölin, P.: Applications of Fourier analysis in homogenization of the Dirichlet problem: \(L^{p}\) estimates. Arch. Ration. Mech. Anal. 215, 65–87 (2015)
3. Alghamdi, A.M., Gala, S., Ragusa, M.A.: On the blow-up criterion for incompressible Stokes-MHD equations. Results Math. 73, 110 (2018)
4. Bensoussan, A., Lions, J.L., Papanicolaou, G.: Asymptotic Analysis for Periodic Structures. North-Holland, Amsterdam (1978)
5. Gala, S., Ragusa, M.A.: A new regularity criterion for the Navier Stokes equations in terms of the two components of the velocity. Electron. J. Qual. Theory Differ. Equ. 2016, 26 (2016)
6. Gérard-Varet, D., Masmoudi, N.: Homogenization and boundary layers. Acta Math. 209, 133–178 (2012)
7. Gu, S.: Convergence rates in homogenization of Stokes systems. J. Differ. Equ. 260, 5796–5815 (2016)
8. Gu, S., Shen, Z.: Homogenization of Stokes systems and uniform regularity estimates. SIAM J. Math. Anal. 47, 4025–4057 (2015)
9. Gu, S., Xu, Q.: Optimal boundary estimates for Stokes systems in homogenization theory. SIAM J. Math. Anal. 49, 3831–3853 (2016)
10. Gu, S., Zhuge, J.: Periodic homogenization of Green’s functions for Stokes systems (2017). arXiv:1710.05383v2
11. Huang, L., Lian, R.: Regularity to the spherically symmetric compressible Navier–Stokes equations with density-dependent viscosity. Bound. Value Probl. 2018, 85 (2018)
12. Jikov, V., Kozlov, S., Oleinik, O.: Homogenization of Differential Operators and Integral Functionals. Springer, Berlin (1994)
13. Kenig, C.E., Lin, F.H., Shen, Z.W.: Convergence rates in \(L^{2}\) for elliptic homogenization problems. Arch. Ration. Mech. Anal. 203, 1009–1036 (2012)
14. Kenig, C.E., Lin, F.H., Shen, Z.W.: Periodic homogenization of Green and Neumann functions. Commun. Pure Appl. Math. 67, 1219–1262 (2012)
15. Pakhnin, M.A., Suslina, T.A.: Operator error estimates for the homogenization of the elliptic Dirichlet problem in a bounded domain. St. Petersburg Math. J. 24, 949–976 (2013)
16. Pastukhova, S.E.: Operator estimates in nonlinear problems of reiterated homogenization. Proc. Steklov Inst. Math. 261, 214–228 (2008)
17. Shen, Z.W.: Boundary estimates in elliptic homogenization. Anal. PDE 10, 653–694 (2017)
18. Shen, Z.W., Zhuge, J.: Convergence rates in periodic homogenization of systems of elasticity. Proc. Am. Math. Soc. 145, 1187–1202 (2016)
19. Shen, Z.W., Zhuge, J.: Boundary layers in periodic homogenization of Neumann problems. Commun. Pure Appl. Math. 71, 2163–2219 (2018)
20. Suslina, T.: Homogenization of the Dirichlet problem for elliptic systems: \(L^{2}\)-operator error estimates. Mathematika 59, 463–476 (2013)
21. Suslina, T.: Homogenization of the Neumann problem for elliptic systems with periodic coefficients. SIAM J. Math. Anal. 45, 3453–3493 (2013)
22. Wang, L., Xu, Q., Zhao, P.: Convergence rates on periodic homogenization of p-Laplace type equations (2018). arXiv:1812.04837
23. Zhang, Z., Zhong, D., Cao, S., Qiu, S.: Fundamental Serrin type regularity criteria for 3D MHD fluid passing through the porous medium. Filomat 31, 1287–1293 (2017)
24. Zhao, J.: Homogenization of the boundary value for the Neumann problem. J. Math. Phys. 56, 021508 (2015)

Acknowledgements
The authors would like to thank the reviewers for their valuable comments and helpful suggestions to improve the quality of this paper. Part of this work was done while the author was visiting the School of Mathematics and Applied Statistics, University of Wollongong, Australia.
Availability of data and materials
Not applicable.
Funding
This work has been supported by the Natural Science Foundation of China (No. 11626239), China Scholarship Council (No. 201708410483), as well as Foundation of Education Department of Henan Province (No. 18A110037).
Ethics declarations

Competing interests
The authors declare that they have no competing interests.
Additional information

Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
|
In this second chapter on Taylor Series, we will be studying the case where the \(n\)th derivative of an infinitely differentiable function does not go to zero. In such cases we therefore have to restrict our values of \(x\) such that for these values the series does converge and not diverge. We will also take a look at the series for the exponential and sine functions.
Let us begin with a look once again at the series for the infinitely differentiable function, the square root of \(x\). The theories that we are to develop will hold true for any function raised to a positive or negative fractional exponent.
\[ \sqrt{x}\approx 1+\dfrac{(x-1)}{2}-\dfrac{(x-1)^2}{8}+\dfrac{(x-1)^3}{16}-\dfrac{5(x-1)^4}{128}+\dfrac{7(x-1)^5}{256}-\dfrac{21(x-1)^6}{1024}+\dfrac{231(x-1)^7}{14336} \]
Remember we let \(a=1\) to find the square root of any number \(x\). Take a close look at the fractions preceding each term:
\[\dfrac{1}{2},\dfrac{1}{8},\dfrac{1}{16},\dots,\dfrac{21}{1024},\dfrac{231}{14336} . \]
It seems that they are slowly going to zero, but now look at the terms of the Taylor Series:
\[(x-1)^1,(x-1)^2,(x-1)^3,\dots,(x-1)^n . \]
Since each term represents a power of some number \((x-1)\), we can conclude that if this number is greater than 1, then each term in the series will get larger and larger, and the infinite term will approach infinity.
However if the absolute value of \((x-1)\) were less than 1, then as the number of terms in the series increased, the value of each term will decrease. Therefore we can say that the following series is only valid for:
\[\begin{align} &|x-1|<1 \\ \iff & -1<x-1<1 \\ \iff & 0<x<2 .\end{align}\]
This tells us that for \(x\) in this range of values, the infinite term will go to zero and the series will converge. Remember, the more times you multiply a fraction by itself, the
smaller it becomes.
Now that we have defined the series for a certain range of numbers, we can extend this range to include all the real numbers. For example, we can find the square root of any fraction \(\frac{1}{x}\) where \(1<x<\infty\). This means we can find the square root of any number greater than 2 by taking the reciprocal of the square root of its reciprocal. If we had to find \(\sqrt{47}\), we just rewrite it as \(\left(\sqrt{\tfrac{1}{47}}\right)^{-1}\). Since \(0<\frac{1}{47}<2\), we find the square root of \(\frac{1}{47}\) from the series by letting \(x=\frac{1}{47}\), then take the reciprocal of this value.
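(Added illustrative sketch, not part of the text: the reciprocal trick in code, using the binomial-series coefficients \(\binom{1/2}{k}\) built recursively; plain Python only.)

from math import sqrt

def sqrt_series(x, terms=400):
    # partial sum of the binomial series for sqrt(x) about a = 1, valid for
    # 0 < x < 2; convergence is slow near the endpoints, hence many terms
    coeff, total, power = 1.0, 0.0, 1.0
    for k in range(terms):
        total += coeff * power
        power *= (x - 1.0)
        coeff *= (0.5 - k) / (k + 1.0)  # turns C(1/2, k) into C(1/2, k+1)
    return total

print(1.0 / sqrt_series(1.0 / 47.0), sqrt(47))  # both approximately 6.8557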
Now let us move on to finding Taylor Series for the exponential and sine functions. The Taylor series for \(y=e^x\) can be easily found since its \(n\) derivatives are all the same, \(e^x\). The series is then:
\[f(x)=e^x=e^a+e^a(x-a)+e^a\dfrac{(x-a)^2}{2!}+e^a\dfrac{(x-a)^3}{3!}+e^a\dfrac{(x-a)^4}{4!}+\dots . \]
The easiest value to choose for \(a\) is 0, since \(e^0=1\):
\[f(x)=e^x=1+x+\dfrac{x^2}{2!}+\dfrac{x^3}{3!}+\dfrac{x^4}{4!}+\dfrac{x^5}{5!}+\dfrac{x^6}{6!}+\dfrac{x^7}{7!}+\dots \]
Since the limit as \(n\) goes to infinity of \(\frac{x^n}{n!}\) is zero, regardless of what value \(x\) is, the series is valid for any value of \(x\).
Letting \(x=1\) and using only the first eight terms gives us the value for \(e\):
\[f(x)=e=1+1+\dfrac{1}{2!}+\dfrac{1}{3!}+\dfrac{1}{4!}+\dfrac{1}{5!}+\dfrac{1}{6!}+\dfrac{1}{7!}+\dots \]
\[\implies e\approx 2.718253968 .\]
The calculator value for \(e\) is 2.718281828 which corresponds to an error of less than .001 % using only eight terms. The more terms used the more accurate your answer will be.
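(Added sketch, not part of the text: the eight-term estimate is reproduced in two lines of Python.)

from math import factorial, e
print(sum(1 / factorial(k) for k in range(8)), e)  # 2.7182539... vs 2.7182818...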
Now that we have found the series for \(y=e^x\), we can find the Taylor series for \(y=\ln (x)\), which is also an infinitely differentiable function. First let us find its first few derivatives:
\[\begin{align} &f(x)=\ln(x), &f'(x)=\dfrac{1}{x}, &&f''(x)=-\dfrac{1}{x^2}, &&f'''(x)=\dfrac{2!}{x^3} \\ &f^{(4)}(x)=-\dfrac{3!}{x^4}, &f^{(5)}=\dfrac{4!}{x^5}, &&f^{(6)}=-\dfrac{5!}{x^6}, &&f^{(7)}=\dfrac{6!}{x^7} . \end{align}\]
Using the first seven derivatives we write the following Taylor series:
\[\begin{align} f(x)=\ln(x) &=\ln(a)+\dfrac{1}{a}(x-a)-\dfrac{1}{a^2}\dfrac{(x-a)^2}{2!}+\dfrac{2!}{a^3}\dfrac{(x-a)^3}{3!}-\dfrac{3!}{a^4}\dfrac{(x-a)^4}{4!} \\ &+\dfrac{4!}{a^5}\dfrac{(x-a)^5}{5!}-\dfrac{5!}{a^6}\dfrac{(x-a)^6}{6!}+\dfrac{6!}{a^7}\dfrac{(x-a)^7}{7!}. \end{align}\]
Letting \(a\) equal 1 and simplifying factorials:
\[f(x)=\ln(x)=(x-1)-\dfrac{(x-1)^2}{2}+\dfrac{(x-1)^3}{3}-\dfrac{(x-1)^4}{4}+\dfrac{(x-1)^5}{5}-\dfrac{(x-1)^6}{6}+\dfrac{(x-1)^7}{7}-\dots .\]
There is one small problem here. Though the natural log of \(x\) is defined for all values of \(x\) greater than zero, the Taylor series, on the other hand, is only valid for \(0<x<2\). If \(x\) were three, for example, the series on the right would diverge, as each term \(\frac{2^n}{n}\) will get larger and larger.
In an infinitely differentiable function as in \(y=\sqrt{x}\), we assumed that the last term in:
\[f(x)=\sum_{k=0}^{n-1} f^{(k)}(a)\dfrac{(x-a)^k}{k!}+f^{(n)}(c)\dfrac{(x-a)^n}{n!} \]
\(f^{(n)}(c)\dfrac{(x-a)^n}{n!}\) went to zero as \(n\) went to infinity. This was based on the fact that
\[\lim_{n\rightarrow \infty}c\cdot \dfrac{x^n}{n!}=c\lim_{n\rightarrow \infty} \dfrac{x^n}{n!}=c\cdot 0 = 0 . \]
However in the case of the series for the natural log of \(x\), the last few terms become:
\[\dfrac{(n-1)!}{a^n}\cdot \dfrac{(x-a)^n}{n!}=\dfrac{1}{a^n}\cdot \dfrac{(x-a)^n}{n} .\]
Taking the limit as \(n\) goes to infinity:
\[\lim_{n\rightarrow \infty} \dfrac{1}{a^n}\cdot \dfrac{(x-a)^n}{n}= \lim_{n\rightarrow\infty} \dfrac{1}{n}\left( \dfrac{x-a}{a} \right)^n .\]
The limit of this expression clearly goes to infinity as long as \(\left|\dfrac{x-a}{a} \right|\) is greater than 1. Conversely, if \(\left| \dfrac{x-a}{a} \right|\) is less than 1, or \(-a<x-a<a\), or \(0<x<2a\), then the last term will go to zero and the Taylor Series will hold true. With this restriction, the Taylor series for \(\ln(x)\) converges for \(0<x<2a\). Since \(a =1\), the series converges for:
\[\begin{align}\left|\dfrac{x-a}{a} \right|&<1 \\ |x-1|&<1 \\ 0<x&<2 . \end{align} \]
Hence the series:
\[f(x)=\ln(x)=(x-1)-\dfrac{(x-1)^2}{2}+\dfrac{(x-1)^3}{3}-\dfrac{(x-1)^4}{4}+\dfrac{(x-1)^5}{5}-\dfrac{(x-1)^6}{6}+\dfrac{(x-1)^7}{7}-\dots .\]
is valid for \(x\) between 0 and 2. Despite the restricted values of \(x\), we are still able to calculate \(\ln(x)\) for any \(x\) by taking reciprocal values. If we wanted to find \(\ln(40)\), where \(40>2\), we would instead calculate \(\ln\left(\frac{1}{40}\right)\), which equals
\[\begin{align} \ln \left(\dfrac{1}{40}\right)&=-\ln(40) \\ \therefore \ln(x)&= -\ln \left( \dfrac{1}{x} \right) \\ \text{for } &x>1 .\end{align}\]
The Taylor Series for \(\sin(x)\) and \(\cos(x)\) are also quite easy to find. Since we know the derivative of \(\sin(x)\) is \(\cos(x)\) and \(\cos(x)\) is \(-\sin(x)\) and we can evaluate these functions at \(a=0\), as \(\sin(0)=0\) and \(\cos(0)=1\), the Taylor Series are as follows. First find the first few derivatives.
\[\begin{align} &f(x)=\sin(x), &f'(x)=\cos(x), &&f''(x)=-\sin(x), &&f'''(x)=-\cos(x) \\ &f^{(4)}(x)=\sin(x), &f^{(5)}(x)=\cos(x), &&f^{(6)}(x)=-\sin(x), &&f^{(7)}(x)=-\cos(x) .\end{align}\]
Using these, the Taylor series through the first eight terms is:
\[\begin{align} &f(x)=\sin(x)= \sin(a)+\cos(a)(x-a)-\sin(a)\dfrac{(x-a)^2}{2!} -\cos(a)\dfrac{(x-a)^3}{3!} +\sin(a)\dfrac{(x-a)^4}{4!} \\ &+\cos(a)\dfrac{(x-a)^5}{5!} -\sin(a)\dfrac{(x-a)^6}{6!}-\cos(a)\dfrac{(x-a)^7}{7!}. \end{align}\]
Letting \(a=0\),
\[\begin{align} f(x)&=\sin(x)=0+x-0-1\cdot \dfrac{x^3}{3!}+0+1\cdot \dfrac{x^5}{5!}-0 -1\cdot \dfrac{x^7}{7!}+0 +\dots \\ &\sin(x)=x-\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\dfrac{x^7}{7!}+\dfrac{x^9}{9!}-\dfrac{x^{11}}{11!}+\dots .\end{align}\]
Since the infinite term in this series goes to zero as \(n\) goes to infinity, the series is convergent for all values of \(x\).
Differentiating the series gives us the Taylor series for \(\cos(x)\):
\[\cos(x)=1-\dfrac{x^2}{2!}+\dfrac{x^4}{4!}-\dfrac{x^6}{6!}+\dfrac{x^8}{8!}-\dfrac{x^{10}}{10!}+\dots \]
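(Added sketch, not part of the text: both series can be checked numerically against library values; plain Python only.)

from math import sin, cos, factorial

def sin_series(x, terms=20):
    return sum((-1)**k * x**(2*k + 1) / factorial(2*k + 1) for k in range(terms))

def cos_series(x, terms=20):
    return sum((-1)**k * x**(2*k) / factorial(2*k) for k in range(terms))

x = 2.3
print(sin_series(x), sin(x))  # the pairs agree to machine precision
print(cos_series(x), cos(x))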
This solution is remarkable. It allows us to define the sine and cosine functions mathematically in terms of an infinite series. You might be wondering how the series is defined for any arc length, \(\pi,\; 2\pi,\; 30\pi\), etc.
From the graph of the circle it is clear that its arc length is continuous and passes throughout the same point infinite times as it completes its rounds. For this reason our integral for the inverse sine function could only be solved using imaginary numbers. Our study of the Sine function began with simplicity, rose to reason, climaxed in the abstract, fell into the imaginary, and now ends in perfection.
Our study of Taylor Series showed us how by integrating a function \(f^{(n)}(x)\), \(n\) times, we were able to express \(f(x)\) in terms of its \(n\) derivatives evaluated at a point \(a\) with the last terms being evaluated at some point \(c\), between \(a\) and \(x\). The series took the form
\[f(x)=f(a)+f'(a)(x-a)+f''(a)\dfrac{(x-a)^2}{2!}+f'''(a)\dfrac{(x-a)^3}{3!}+\dots+f^{(n)}(c)\dfrac{(x-a)^n}{n!} \]
Taylor's Theorem thus states:
\[f(x)=\sum_{k=0}^{n-1} f^{(k)}(a)\dfrac{(x-a)^k}{k!}+f^{(n)}(c)\dfrac{(x-a)^n}{n!} \]
The important point to realize here is that \(n\) stands for an integer, such that a finite differentiable function can be expressed as a series of its \(n\) derivatives evaluated at some point \(a\) with the last term being evaluated at some point \(c\), between \(a\) and \(x\). The last term, \(f^{(n)}(c)\dfrac{(x-a)^n}{n!}\), can be found regardless of the value of \(c\), since the last derivative of a finite differentiable function is always a constant.
As we saw, expressing finite differentiable functions in terms of a series of powers of \(x\) turned out to be impractical. It is here where we saw the value of extending Taylor's theorem to infinitely differentiable functions. This would mean we would have an infinite series with infinite derivatives. The only way to prove that such a series could be written and would hold true for any value of \(x\) was by taking the limit of:
\[\lim_{n\rightarrow \infty}f^{(n)}(a)\dfrac{(x-a)^n}{n!}. \]
If the limit was zero then we know that \(f^{(n)}(a)\dfrac{(x-a)^n}{n!}\), or the last term of the series, also goes to zero and therefore the series converges for all values of \(x\). If this limit did not go to zero then we had to modify or restrict the values of \(x\) such that the series would converge for those values of \(x\). We could then extend this interval to all real numbers.
Integrated by Justin Marshall.
|
I have a question regarding the uniqueness of the potential flow past a cylinder.
Consider a two dimensional uniform potential flow in $x_1$-direction past the cylinder $B_R = \{ x = (x_1, x_2) \in \mathbb{R}^2 : \vert x \vert < R \}$ for some radius $R > 0$. The velocity potential $\varphi(x)$ is then subject to the boundary value problem
\begin{equation} \begin{aligned} \Delta_x \, \varphi(x) &= 0 \qquad &&\text{in} \quad \mathbb{R}^2 \setminus \overline{B_R} \; , \\ \frac{x}{\vert x \vert} \cdot \nabla_x \, \varphi(x) &= 0 \qquad &&\text{on} \quad \partial B_R \; , \\ \end{aligned} \end{equation}
together with the limiting behaviour $\nabla_x \, \varphi(x) \to (v_\infty, 0)$ as $\vert x \vert \to \infty$ for some $v_\infty > 0$. In most hydrodynamic lectures the general solution to this problem is given as the superposition of a uniform flow and doublet in $x_1$-direction,
\begin{equation} \varphi(x) = v_\infty \left( x_1 + \frac{R^2 x_1}{x_1^2 + x_2^2} \right) \; . \end{equation}
While I see that the given solution solves the stated problem, I do not see how to prove uniqueness or whether this is even possible.
As far as I know the exterior Neumann problem of the Laplace equation for some bounded (and for simplicity connected) domain $\Omega \subset \mathbb{R}^2$ with outer normal $\nu$,
\begin{equation} \begin{aligned} \Delta u&= 0 \qquad &&\text{in} \quad \mathbb{R}^2 \setminus \overline{\Omega} \; , \\ \frac{\partial u}{\partial \nu} &= 0 \qquad &&\text{on} \quad \partial \Omega \; , \\ \end{aligned} \end{equation}
together with the limiting behaviour $\vert u(x) \vert / \ln(\vert x \vert) \to 0$ as $\vert x \vert \to \infty$ has the unique harmonic solution $u = const.$ on $\mathbb{R}^2 \setminus \Omega$ up to an additive constant.
Since for the above solution $\vert \varphi(x) \vert / \ln(\vert x \vert)$ diverges as $\vert x \vert \to \infty$, I wonder whether it is the unique harmonic solution and how to prove/disprove it.
|
I'm working on a smooth $(d-1)$-dimensional surface $M\subset \mathbb{R}^d$. Let $(\phi_k)_{k\in\mathbb{N}}$ be an orthonormal basis of $L^2(M)$ consisting of the eigenfunctions of the Laplace-Beltrami operator $-\triangle_{M}$, with corresponding eigenvalues $(\lambda_k)_{k\in\mathbb{N}}$.
Claim: the $(\phi_k)$ are $H^1$ orthogonal, where $H^s=W^{s,2}$ is an $L^2$ Sobolev space, and $\lVert\phi_k\rVert_{H^1}^2 = 1+\lambda_k$.
Proof: By a Green's formula on $M$ (a generalisation of the divergence theorem; note I'm assuming $\partial M=\emptyset$) $$\int_M \nabla \phi_j \cdot \nabla \phi_k dx= -\int_M \phi_j \triangle \phi_k dx = \lambda_k \int_M \phi_j \phi_k dx= \lambda_k\delta_{jk}.$$
This generalises to $H^n$ for any positive integer $n$. I'm confident the $\phi_k$ are also $H^s$ orthogonal, with norms $\lVert \phi_k \rVert_{H^s}^2 \sim 1+\lambda_k^s$ (with the exact constants depending on how you define the norm) for any $s\in\mathbb{R}$, defining $H^s$ in some appropriate sense. Does someone have a reference for this? I've not done much differential geometry so ideally one which is fairly gentle in that regard!
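(Added remark, my paraphrase of the standard spectral definition rather than a quoted reference.) On a closed manifold one common choice is
\[ \lVert f\rVert_{H^s(M)}^2 = \sum_{k\in\mathbb{N}} (1+\lambda_k)^s \,\big|\langle f,\phi_k\rangle_{L^2(M)}\big|^2 , \]
under which the \(\phi_k\) are \(H^s\)-orthogonal by construction and \(\lVert \phi_k \rVert_{H^s}^2 = (1+\lambda_k)^s\), which for \(s>0\) is comparable to \(1+\lambda_k^s\) up to constants depending only on \(s\).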
|
I don't know how to solve the following:
Let $\alpha$ be a real root of $f(x)=x^4+3x-3\in Q[x]$. Is $\alpha$ a constructible number?
Any help is welcome.
I tried to get some info about the Galois group $G$ of $f$ (over $\mathbb Q$); note that $f$ is irreducible over $\mathbb Q$ by Eisenstein's criterion at $3$. It turns out that $f$ has a root $-1$ in $\mathbb F_5$ and $f(x)/(x+1)$ is irreducible in $\mathbb F_5[x]$. The group $G\subset S_4$ thus contains a $3$-cycle (Frobenius at $p=5$), in particular the order of $G$ is not a power of $2$, so the roots of $f$ are not constructible.
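(Added illustration, not part of the original answer: the factorization mod 5 can be confirmed with sympy, which is assumed available.)

# Factor f modulo 5: a linear factor times a cubic with no roots mod 5,
# hence an irreducible cubic, giving the 3-cycle via Frobenius/Dedekind.
import sympy as sp

x = sp.symbols('x')
f = x**4 + 3*x - 3
print(sp.factor(f, modulus=5))
# expected (up to the representation of coefficients mod 5):
# (x + 1)*(x**3 - x**2 + x + 2)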
|
If you're not quite in the market for a full proof:$$a^n=a\times a\times a\times a...\times a$$$$n!=1\times 2\times 3\times 4...\times n$$Now what happens as $n$ gets much bigger than $a$? In this case, when $n$ is huge, $a$ will have been near some number pretty early in the factorial sequence. The exponential sequence is still being multiplied by that ...
Of course $C e^x$ has the same property for any $C$ (including $C = 0$). But these are the only ones.Proposition: Let $f : \mathbb{R} \to \mathbb{R}$ be a differentiable function such that $f(0) = 1$ and $f'(x) = f(x)$. Then it must be the case that $f = e^x$.Proof. Let $g(x) = f(x) e^{-x}$. Then$$g'(x) = -f(x) e^{-x} + f'(x) e^{-x} = (f...
There are two things I should point out. One is that the arithmetic mean doesn't properly measure annual growth rate. The second is why.The correct calculation for average annual growth is geometric mean.Let $r_1,r_2,r_3,\ldots,r_n$ be the yearly growth of a particular investment/portfolio/whatever. Then if you invest $P$ into this investment, after $n$ ...
"Why is this hard?" I think a different question would be "Why would it be easy?"But there are some things that are known. It is known that $\pi$ and $e$ are transcendental. Thus $(x-\pi)(x-e) = x^2 - (e + \pi)x + e\pi$ cannot have rational coefficients. So at least one of $e + \pi$ and $e\pi$ is irrational. It's also known that at least one of $e \pi$ ...
Yes. For example\begin{align*}\sinh x &= -i \sin(ix) \\\cosh x &= \cos(ix) \\\tanh x &= -i \tan(ix) \\\end{align*}These identities come from the definitions,$$ \sin x = \frac{e^{xi}-e^{-xi}}{2i} \text{ and } \sinh x = \frac{e^x - e^{-x}}{2} $$and similar for cosine and tangent.
What answer you find most elegant may depend on what definition of $e$ you're starting with, as Dylan suggests, but I find this argument quite short and sweet:$$\begin{align}&\quad 1 + 1 &= 2\\&< 1 + 1 + \frac12 + \frac1{2\cdot3} + \frac1{2\cdot3\cdot4} + \cdots &= e \\&< 1 + 1 + \frac12 + \frac1{2\cdot2} + \frac1{2\cdot2\cdot2} ...
Analogously, here are several ways to define me:I am the citizen of the US with social security number [XYZ]. This is of primary interest to the government.I am the oldest son of [my mother's name]. This is of primary interest to my family.I am the instructor of [particular course meeting at particular days/times] at [university]. This is of primary ...
$x^{n+1}=x\cdot x^n$ right?so$x^1=x \cdot x^0$ but $x=x^1$ so for that to hold true, $x^0$ must be $1$.Similarily,$\large x^{-n} = \frac{1}{x^n}$.So $\large x^n \cdot x^{-n} = x^n \frac{1}{x^n} = 1$.But $\large x^n \cdot x^{-n} = x^{n+(-n)} = x^0$, so once more, $x^0=1$.There are really many reasons for that to hold, and all of them are just ...
An intuitive way to see this is to consider that you're trying to show$$a^n < n!$$for sufficiently large $n$. Take the log of both sides, you get$$n\log(a) = \log(a^n) < \log(n!) = \sum_{i = 1}^n\log(i).$$Now as you increase $n$ you only add $\log(a)$ to the left side, but the $\log(n + 1)$ that you add to the right can be arbitrarily large as $n$ ...
You may write, for $N \geq 2$,$$\begin{align}e^{3/2}\prod_{n=2}^{N}e\left(1-\dfrac{1}{n^2}\right)^{n^2}&=e^{3/2}\times\prod_{n=2}^{N}e\times\prod_{n=2}^{N}\left(1-\dfrac{1}{n^2}\right)^{n^2}\\\\&=e^{3/2}\times e^{N-1}\times\prod_{n=2}^{N}\dfrac{(n-1)^{n^2}}{n^{n^2}}\dfrac{(n+1)^{n^2}}{n^{n^2}}\\\\&=e^{3/2}\times e^{N-1}\times\prod_{n=2}^{N}\...
The shortest proof I could think of:$$1 + x \leq 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = e^x.$$However, it is not completely obvious for negative $x$.Using derivatives:Take $f(x) = e^x - 1 - x$. Then $f'(x) = e^x - 1$ with $f'(x) = 0$ if and only if $x = 0$. But this is a minimum (global in this case) since $f''(0) = 1 > 0$ (the second ...
Yes. They are the same thing.When exponents get really really complicated, mathematicians tend to start using $\exp(\mathrm{stuff})$ instead of $e^{\mathrm{stuff}}$.For example: $e^{x^5+2^x-7}$ is kind of hard to read. So instead one might write: $\exp(x^5+2^x-7)$.Note: For those who use Maple or other computer algebra systems, e^x is not usually the ...
Consider the function $x^{\frac{1}{x}}$. Differentiating gives $x^{\frac{1}{x}}(\frac{1}{x^2})(1-\ln x)$, so the function attains its global maximum at $x=e$.Thus $e^{\frac{1}{e}} \geq \pi^{\frac{1}{\pi}}$, and it is clear that the inequality is strict, so $e^{\pi}>\pi^{e}$.
The point is that $1-\frac{1}{n}$ is less than $1$, so raising it to a large power will make it even less-er than $1$. On the other hand, $1+\frac{1}{n}$ is bigger than $1$, so raising it to a large power will make it even bigger than $1$.There's been some brouhaha in the comments about this answer. I should probably add that $(1-\epsilon(n))^n$ could go ...
If someone asks me, "what is $e$?" I sketch the graph of $y=1/x$, draw a line segment from $(1,1)$ on the curve down to $(1,0)$ on the $x$-axis, and ask, how far to the right do I have to draw another vertical segment to rope off an area of 1? Anyone who is familiar with the idea of graphing a function can appreciate that definition, and it's not surprising ...
Besides the connections between hyperbolic and circular functions which arise from substitutions involving imaginary arguments the functions can also be related using only real arguments via the Gudermannian function defined as $$\text{gd}(x)=\int_0^x\text{sech}\,t\,dt$$This leads to identities such as $\sinh x = \tan (\text{gd}\,x)$ and $\sin x = \tanh( \...
Let $f(x)$ be a differentiable function such that $f'(x)=f(x)$. This implies that the $k$-th derivative, $f^{(k)}(x)$, is also equal to $f(x)$. In particular, $f(x)$ is $C^\infty$ and we can write a Taylor expansion for $f$:$$T_f(x) = \sum_{k=0}^\infty c_k x^k.$$Notice that the fact that $f(x)=f^{(k)}(x)$, for all $k\geq 0$, implies that the Taylor ...
If you know Taylor expansion: then$$e^x=1+x+\frac{x^2}{2!}+...$$We can get (Or you may take derivative to prove it)$$e^{x}>1+x, \forall x>0$$Then set$$x=\frac{\pi}{e}-1>0$$We get$$e^{\frac{\pi}{e}-1}>1+\frac{\pi}{e}-1\Leftrightarrow \frac{e^{\frac{\pi}{e}}}{e}>\frac{\pi}{e}\Leftrightarrow e^{\frac{\pi}{e}}>\pi\Leftrightarrow e^{\pi}...
Hints.We first show that $2<\mathrm{e}<3$ (see below), and hence $\mathrm{e}$ is not an integer.Next, following up OP's thought, assuming $\mathrm{e}=a/b$, we multiply by $b!$ and we obtain$$\sum_{k=0}^\infty \frac{b!}{k!}=a\cdot (b-1)! \tag{1}$$The right hand side of $(1)$ is an integer.The left hand side of $(1)$ is of the form$$\sum_{k=...
After a 50% loss you need a 100% gain to break even. In that scenario the arithmetic average return is 25% and the geometric average return is 0%.It is more important to maximize geometric rather than arithmetic average return -- and this is intimately connected with the concept of risk-adjusted return and mean-variance optimization.Given a set of ...
We can confine attention to $b \ge 1$. This is because, if $0<b<1$, then $n^b \le n$. If we can prove that $n/a^n$ approaches $0$, it will follow that $n^b/a^n$ approaches $0$ for any positive $b\le 1$. So from now on we take $b \ge 1$.Now look at $n^b/a^n$, and take the $b$-th root. We get$$\frac{n}{(a^{1/b})^n}$$or equivalently$$\frac{n}{c^n}$...
The number $e$ has many different characterizations.The word "characterization" has a precise meaning in mathematics. An exercise in a textbook may say:(a) Prove that $X$ is enormously purple but not largely purple.(b) Prove that the property of being enormously purple but not largely purple characterizes $X$.The student is expected to understand ...
|
A sequence of events $A_n, n \in \mathbb{N}$ is said to have a high probability, if $\mathrm{P} (A_n^c) \leq \frac{c}{n^d}$ for some $c, d >0$.
Chernoff bounds are often used to prove some (upper or lower) tail events of a binomial distributed random variable $X$ have a high probability. In such proofs, we are given a function $f$, and often need to find some $a, b>0$, such that $e^{f(n)} \leq \frac{a}{n^b}$ for $n \in \mathbb{N}$.
For example,
$$ e^{-\frac{10 \ln n}{2} \ln(10 \ln n)} \leq \frac{1}{n^8}, \text{ for every } n \geq 2 $$ and $$ e^{-\frac{9 \ln n \ln 9}{2} } \leq \frac{1}{n^8}. $$ How are the exponents $8$ of $\frac{1}{n^8}$ determined in the above two examples? Is it possible to determine the constants $c, a, b$ for each of the following two examples, $$ e^{- \frac{(c \ln n -\frac{\ln n}{n^2} + 1) * \ln (cn^2 + \frac{n^2}{\ln n}-1)}{2}} \leq \frac{a}{n^b}, $$ and $$ e^{- (c \ln n + 1) * \ln (c\ln n+1) } \leq \frac{a}{n^b}? $$ Does the LHS of each above example belong to the sub-exponential time complexity?
Thanks!
|
I’m currently getting to grips with the
AdaptiveMonteCarlo method in the
NIntegrate function. I’ve been using the sub-method
MonteCarloRule, however I’m unsure from reading the Mathematica documentation exactly how the option
Points (for this sub-method) actually works. Suppose for example I have the following integral $$\int_{0}^{\pi}\sin(x)dx $$ Then using
NIntegrate I have
NIntegrate[Sin[x], {x,0,Pi}, Method->{“AdaptiveMonteCarlo”, Method->{“MonteCarloRule”, “Points”->5}, “MaxRecursion”->200}, AccuracyGoal->5, PrecisionGoal->5]
Now, I realise that I could simply use
Integrate on this integral and get an exact result, but I wanted to choose a relatively simple analytic integral as practise.
Using
Points->5 gives a pretty accurate result, but does specifying
Points->5 mean that
NIntegrate only uses 5 points in its Monte Carlo routine? I’m assuming there must be more to it than that otherwise I wouldn’t expect the result to be so close to the true result.
Any help would be much appreciated.
|
Existence of positive ground state solutions for Choquard equation with variable exponent growth
School of Mathematics and Statistics, Southwest University, Chongqing 400715, China
$ \begin{equation*} -\Delta u = (I_\alpha*|u|^{f(x)})|u|^{f(x)-2}u \ \ \ \ {\rm in} \ \mathbb{R}^N, \end{equation*} $
$ N\geq 3 $
$ \alpha\in (0,N) $
$ I_\alpha $
$ \begin{equation*} f(x) = \begin{cases} p, &x\in\Omega,\\ ( N+\alpha)/(N-2), &x\in\mathbb{R}^N \backslash\Omega, \end{cases} \end{equation*} $
$ 1< p <\frac{N+\alpha}{N-2} $
$ \Omega\subset\mathbb{R}^N $ Keywords:Choquard equation, zero mass, Nehari manifold, positive ground state solutions, variable exponent growth. Mathematics Subject Classification:Primary: 35A15, 35B50; Secondary: 35J61. Citation:Gui-Dong Li, Chun-Lei Tang. Existence of positive ground state solutions for Choquard equation with variable exponent growth. Discrete & Continuous Dynamical Systems - S, 2019, 12 (7) : 2035-2050. doi: 10.3934/dcdss.2019131
References:
[1] [2] [3]
C. O. Alves, D. Cassani, C. Tarsi and M. Yang,
Existence and concentration of ground state solutions for a critical nonlocal Schrodinger equation in $\mathbb{R}^2$,
[4] [5]
C. O. Alves, A. Nobrega and M. Yang, Multi-bump solutions for Choquard equation with deepening potential well,
[6]
C. O. Alves and M. Yang,
Investigating the multiplicity and concentration behaviour of solutions for a quasilinear Choquard equation via the penalization method,
[7]
S. Antontsev and S. Shmarev,
Elliptic equations and systems with nonstandard growth conditions: existence, uniqueness and localization properties of solutions,
[8] [9] [10]
H. Brézis and L. Nirenberg,
Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents,
[11] [12] [13]
F. Gao and M. Yang, A strongly indefinite Choquard equation with critical exponent due to the Hardy-Littlewood-Sobolev inequality,
[14] [15] [16]
D. Gilbarg and N. S. Trudinger,
[17]
G.-D. Li and C.-L. Tang, Existence of ground state solutions for Choquard equation involving the general upper critical,
[18]
G.-D. Li and C.-L. Tang, Existence of a ground state solution for Choquard equation with the upper critical exponent,
[19] [20] [21] [22] [23] [24] [25] [26]
R. A. Mashiyev, B. Cekic, M. Avci and Z. Yucedag,
Existence and multiplicity of weak solutions for nonuniformly elliptic equations with nonstandard growth condition,
[27] [28]
I. M. Moroz, R. Penrose and P. Tod,
Spherically-symmetric solutions of the Schrödinger-Newton equations,
[29] [30]
V. Moroz and J. Van Schaftingen,
Existence of groundstates for a class of nonlinear Choquard equations,
[31]
V. Moroz and J. Van Schaftingen,
Groundstates of nonlinear Choquard equations: Existence, qualitative properties and decay asymptotics,
[32]
V. Moroz and J. Van Schaftingen, Groundstates of nonlinear Choquard equations: Hardy-Littlewood-Sobolev critical exponent,
[33] [34]
S. Pekar,
[35]
P. Pucci and Q. Zhang,
Existence of entire solutions for a class of variable exponent elliptic equations,
[36] [37] [38]
D. Ruiz and J. Van Schaftingen,
Odd symmetry of least energy nodal solutions for the Choquard equation,
[39] [40]
J. Van Schaftingen and J. Xia, Choquard equations under confining external potentials,
[41] [42] [43] [44] [45]
V. V. Zhikov,
Averaging of functionals of the calculus of variations and elasticity theory,
show all references
References:
[1] [2] [3]
C. O. Alves, D. Cassani, C. Tarsi and M. Yang,
Existence and concentration of ground state solutions for a critical nonlocal Schrodinger equation in $\mathbb{R}^2$,
[4] [5]
C. O. Alves, A. Nobrega and M. Yang, Multi-bump solutions for Choquard equation with deepening potential well,
[6]
C. O. Alves and M. Yang,
Investigating the multiplicity and concentration behaviour of solutions for a quasilinear Choquard equation via the penalization method,
[7]
S. Antontsev and S. Shmarev,
Elliptic equations and systems with nonstandard growth conditions: existence, uniqueness and localization properties of solutions,
[8] [9] [10]
H. Brézis and L. Nirenberg,
Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents,
[11] [12] [13]
F. Gao and M. Yang, A strongly indefinite Choquard equation with critical exponent due to the Hardy-Littlewood-Sobolev inequality,
[14] [15] [16]
D. Gilbarg and N. S. Trudinger,
[17]
G.-D. Li and C.-L. Tang, Existence of ground state solutions for Choquard equation involving the general upper critical,
[18]
G.-D. Li and C.-L. Tang, Existence of a ground state solution for Choquard equation with the upper critical exponent,
[19] [20] [21] [22] [23] [24] [25] [26]
R. A. Mashiyev, B. Cekic, M. Avci and Z. Yucedag,
Existence and multiplicity of weak solutions for nonuniformly elliptic equations with nonstandard growth condition,
[27] [28]
I. M. Moroz, R. Penrose and P. Tod,
Spherically-symmetric solutions of the Schrödinger-Newton equations,
[29] [30]
V. Moroz and J. Van Schaftingen,
Existence of groundstates for a class of nonlinear Choquard equations,
[31]
V. Moroz and J. Van Schaftingen,
Groundstates of nonlinear Choquard equations: Existence, qualitative properties and decay asymptotics,
[32]
V. Moroz and J. Van Schaftingen, Groundstates of nonlinear Choquard equations: Hardy-Littlewood-Sobolev critical exponent,
[33] [34]
S. Pekar,
[35]
P. Pucci and Q. Zhang,
Existence of entire solutions for a class of variable exponent elliptic equations,
[36] [37] [38]
D. Ruiz and J. Van Schaftingen,
Odd symmetry of least energy nodal solutions for the Choquard equation,
[39] [40]
J. Van Schaftingen and J. Xia, Choquard equations under confining external potentials,
[41] [42] [43] [44] [45]
V. V. Zhikov,
Averaging of functionals of the calculus of variations and elasticity theory,
[1]
Yinbin Deng, Wentao Huang.
Positive ground state solutions for a quasilinear elliptic equation with critical exponent.
[2]
Maoding Zhen, Jinchun He, Haoyuan Xu, Meihua Yang.
Positive ground state solutions for fractional Laplacian system with one critical exponent and one subcritical exponent.
[3]
Gui-Dong Li, Chun-Lei Tang.
Existence of ground state solutions for Choquard equation involving the general upper critical Hardy-Littlewood-Sobolev nonlinear term.
[4]
Dengfeng Lü.
Existence and concentration behavior of ground state solutions for magnetic nonlinear Choquard equations.
[5]
Min Liu, Zhongwei Tang.
Multiplicity and concentration of solutions for Choquard equation via Nehari method and pseudo-index theory.
[6]
Kaimin Teng, Xiumei He.
Ground state solutions for fractional Schrödinger equations with critical Sobolev exponent.
[7]
Xianhua Tang, Sitong Chen.
Ground state solutions of Nehari-Pohozaev type for Schrödinger-Poisson problems with general potentials.
[8]
Sitong Chen, Junping Shi, Xianhua Tang.
Ground state solutions of Nehari-Pohozaev type for the planar Schrödinger-Poisson system with general nonlinearity.
[9]
Hongyu Ye.
Positive high energy solution for Kirchhoff equation in $\mathbb{R}^{3}$ with superlinear nonlinearities via Nehari-Pohožaev manifold.
[10]
Marco A. S. Souto, Sérgio H. M. Soares.
Ground state solutions for quasilinear stationary Schrödinger equations with critical growth.
[11]
Claudianor Oliveira Alves, M. A.S. Souto.
On existence and concentration behavior of ground state solutions for a class of problems with critical growth.
[12]
Zhanping Liang, Yuanmin Song, Fuyi Li.
Positive ground state solutions of a quadratically coupled schrödinger system.
[13]
Qingfang Wang.
The Nehari manifold for a fractional Laplacian equation involving critical nonlinearities.
[14]
C. Cortázar, Marta García-Huidobro.
On the uniqueness of ground state solutions of a semilinear equation containing a weighted Laplacian.
[15]
C. Cortázar, Marta García-Huidobro.
On the uniqueness of ground state solutions of a semilinear equation containing a weighted Laplacian.
[16]
Li Yin, Jinghua Yao, Qihu Zhang, Chunshan Zhao.
Multiple solutions with constant sign of a Dirichlet problem for a class of elliptic systems with variable exponent growth.
[17]
Sergey Zelik.
Asymptotic regularity of solutions of a nonautonomous damped wave equation with a critical growth exponent.
[18] [19]
Yanfang Xue, Chunlei Tang.
Ground state solutions for asymptotically periodic quasilinear Schrödinger equations with critical growth.
[20]
Giovany M. Figueiredo, Tarcyana S. Figueiredo-Sousa, Cristian Morales-Rodrigo, Antonio Suárez.
Existence of positive solutions of an elliptic equation with local and nonlocal variable diffusion coefficient.
2018 Impact Factor: 0.545
Tools Metrics Other articles
by authors
[Back to Top]
|
Show that the Post Correspondence Problem (PCP) is decidable over the unary alphabet ? = {0}.
I will use the following formalization: A PCP-instance is a set $X = \{(a_1, b_1), ..., (a_n, b_n)\}$ where $a_i, b_i \in \Sigma^\ast$ where $\Sigma = \{0\}$ is our alphabet (note that thus every $a_i, b_i$ is of the form $0^k$). Such an instance is a yes-instance if there exists a finite sequence $I \in \{1, ..., n\}^m$ such that $a_{I_1}...a_{I_m} = b_{I_1}...b_{I_m}$.
Now, obviously, if there exists a pair such that $a_i = b_i$, then we have a yes-instance and if $|a_i| < |b_i|$ (or, by symmetry, $|b_i| < |a_i|$) for all $i$, then the instance must be a no-instance. Otherwise, there exist $i \neq j$ such that $|a_i| < |b_i|$ and $|a_j| > |b_j|$ and note that the problem now boils down to finding $k_i, k_j \in \mathbb{N}$ such that $k_i \Delta_i = k_j \Delta_j$ where $\Delta_i = |b_i| - |a_i|$ and $\Delta_j = |a_j| - |b_j|$. But this equation clearly has a solution for all $(\Delta_i, \Delta_j) \in \mathbb{N}^2$ by simply choosing $k_i = \Delta_j$ and $k_j = \Delta_i$, giving us that $$ a_i^{\Delta_j} a_j^{\Delta_i} = b_i^{\Delta_j} b_j^{\Delta_i} $$ as $|b_i^{\Delta_j}| - |a_i^{\Delta_j}| = \Delta_j (|b_i| - |a_i|) = \Delta_i \Delta_j$ and $|a_j^{\Delta_i}| - |b_j^{\Delta_i}| = \Delta_i (|a_j| - |b_j|) = \Delta_i \Delta_j$ and thus, $$ \begin{align} |a_i^{\Delta_j} a_j^{\Delta_i}| - |b_i^{\Delta_j} b_j^{\Delta_i}| &= |a_i^{\Delta_j}| + |a_j^{\Delta_i}| - |b_i^{\Delta_j}| - |b_j^{\Delta_i}| \\ &= (|a_j^{\Delta_i}| - |b_j^{\Delta_i}|) - (|b_i^{\Delta_j}| -|a_i^{\Delta_j}|) \\ &= \Delta_i \Delta_j - \Delta_i \Delta_j\\ &= 0 \end{align} $$.
Hence, it suffices to check whether there exist $i, j$ such that $|a_i| \leq |b_i|$ and $|b_j| \leq |a_j|$ (output yes if this is the case and no otherwise) and it follows that the PCP over unary alphabets is decidable.
|
How do I evaluate $$\lim_{x \to 0}\left(\frac{e^{2\sin x}-1}{x}\right)$$
I know it's the indeterminate form since the numerator and denominator both approach 0, but I can't use l'Hopital's rule so I'm not sure how to go about finding the limit.
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It only takes a minute to sign up.Sign up to join this community
How do I evaluate $$\lim_{x \to 0}\left(\frac{e^{2\sin x}-1}{x}\right)$$
I know it's the indeterminate form since the numerator and denominator both approach 0, but I can't use l'Hopital's rule so I'm not sure how to go about finding the limit.
Hint: Consider the function $f(x)=e^{2\sin x}$. What is the derivative of said function at $x=0$?
HINT: $\lim_{y\to0}\frac{e^y-1}{y}=1$. Hence $$ \lim_{x\to0}\frac{e^{2\sin x}-1}{x}= \lim_{x\to0}\frac{e^{2\sin x}-1}{2\sin x}\cdot\frac{2\sin x}{x} $$ and the rest should be easy.
$$ \underbrace{f'(0) = \lim_{h\to0} \frac{f(0+h)-f(0)} h}_{\text{definition of ``derivative"}} = \lim_{h\to0}\frac{e^{2\sin h} - e^{2\sin 0}} h = \lim_{h\to0}\frac{e^{2\sin h} - 1} h. $$ So just find $f'(0)$ by the methods you would normally use to compute a derivative.
A definition of the derivative is at $x=a$ is $$f'(a)=\lim\limits_{x\to a}\frac{f(x)-f(a)}{x-a}$$ Now plug in $a=0,f(x)=e^{2\sin x}$ and you'll have your answer.
When given a task that includes the text "You are not allowed to use so-and-so Rule", you can still use the
proof of that rule in your answer.
So, look up the proof of L'H in your text book or on Wikipedia, put in your function where apropriate. Ignore everything that isn't relevant to your case.
In the end you will have your answer as well as a better understanding of the rule.
With $x$ in the neigborhood of $0$: $$e^{2sin(x)}=1+2x+o(x^2)$$ so $$\frac{e^{2sin(x)}-1}{x}=2+o(x)$$.
I guess the answer should be $2$. Multiply up and down by $2 \sin x$. Since we know $\lim_{x \to0} \frac{e^x -1}{x} =1$. Proceeding that way leaves us only with $\lim_{x \to0} \frac{2 \sin x}{x}$ (and we know $\lim_{x \to 0} \frac{\sin x}{x}=1$) So answer should be $2$.
Well, there is an easy solution if you know the series expansion of $e^x$.
Near $x=0$, $\sin(2x) \approx 2x$ therefor the given equation reduces to
$$\lim_{x\to0} \frac{e^{2x}-1}{x}$$
Now apply series expansion for $e^{2x}=1+x+...$ , neglecting higher powers as $x\to0$ There for it reduces to
$$\lim_{x\to0}\frac{1+x-1}{x}=1$$
|
By Dr Adam Falkowski (Résonaances; Orsay, France)
The title of this post is purposely over-optimistic in order to increase the traffic. A more accurate statement is that a recent analysis
of X-ray spectrum of galactic clusters claims the presence of a monochromatic \(3.5\keV\) photon line which can be interpreted as a signal of a\[ Detection of An Unidentified Emission Line in the Stacked X-ray spectrum of Galaxy Clustersby Esra Bulbul and 5 co-authors (NASA/Harvard-Smithsonian)
\large{m_{\nu({\rm ster})} = 7\keV }
\]sterile neutrino dark matter candidate decaying into a photon and an ordinary neutrino. It's a long way before this claim may become a well-established signal. Nevertheless, in my opinion, it's not the least believable hint of dark matter coming from astrophysics in recent years.
First, let me explain why one would anyone dirty their hands to study X-ray spectra. In the most popular scenario the dark matter particle is a WIMP — a particle in the \(\GeV\)-\(\TeV\) mass ballpark that has weak-strength interactions with the ordinary matter. This scenario may predict signals in gamma rays, high-energy anti-protons, electrons etc, and these are being searched high and low by several Earth-based and satellite experiments.
But in principle the mass of the dark matter particle could be anywhere between \(10^{-30}\) and \(10^{50}\GeV\), and there are many other models of dark matter on the market. One serious alternative to WIMPs is a \(\keV\)-mass sterile neutrino. In general, neutrinos
aredark matter: they are stable, electrically neutral, and are produced in the early universe. However we know that the 3 neutrinos from the Standard Model constitute only a small fraction of dark matter, as otherwise they would affect the large-scale structure of the universe in a way that is inconsistent with observations. The story is different if the 3 "active" neutrinos have partners from beyond the Standard Model that do not interact with W- and Z-bosons — the so-called "sterile" neutrinos. In fact, the simplest UV-complete models that generate masses for the active neutrinos require introducing at least 2 sterile neutrinos, so there are good reasons to believe that these guys exist. A sterile neutrino is a good dark matter candidate if its mass is larger than \(1\keV\) (because of the constraints from the large-scale structure) and if its lifetime is longer than the age of the universe.
How can we see if this is the right model? Dark matter that has no interactions with the visible matter seems hopeless. Fortunately, sterile neutrino dark matter is expected to decay and produce a smoking-gun signal in the form of a monochromatic photon line. This is because, in order to be produced in the early universe, the sterile neutrino should mix slightly with the active ones. In that case, oscillations of the active neutrinos into sterile ones in the primordial plasma can populate the number density of sterile neutrinos, and by this mechanism it is possible to explain the observed relic density of dark matter. But the same mixing will make the sterile neutrino decay, as shown in the diagrams here. If the sterile neutrino is light enough and/or the mixing is small enough then its lifetime can be much longer than the age of the universe, and then it remains a viable dark matter candidate.
The tree-level decay into 3 ordinary neutrinos is undetectable, but the 2-body loop decay into a photon and and a neutrino results in production of photons with the energy\[
\large{E=\frac{m_{\rm DM}}{2}.}
\] Such a monochromatic photon line can potentially be observed. In fact, in the simplest models sterile neutrino dark matter heavier than \(\approx 50\keV\) would produce a too large photon flux and is excluded. Thus the favored mass range for dark matter is between \(1\) and \(50\keV\). Then the photon line is predicted to fall into the X-ray domain that can be studied using X-ray satellites like XMM-Newton, Chandra, or Suzaku.
Until last week these searches were only providing lower limits on the lifetime of sterile neutrino dark matter. This paper claims they may have hit the jackpot. The paper use the XMM-Newton data to analyze the stacked X-ray spectra of many galaxy clusters where dark matter is lurking. After subtracting the background they see is this:
Although the natural reaction here is a loud "are you kidding me", the claim is that the excess near \(3.56\keV\) (red data points) over the background model is very significant, at 4-5 astrophysical sigma. It is difficult to assign this excess to any know emission lines from usual atomic transitions. If interpreted as the signal of sterile neutrino dark matter, the measured energy and the flux corresponds to the red star in the plot, with the mass \(7.1\keV\) and the mixing angle of order \(5\times 10^{-5}\). This is allowed by other constraints and, by twiddling with the lepton asymmetry in the neutrino sector, consistent with the observed dark matter relic density.
Clearly, a lot could go wrong with this analysis. For one thing, the suspected dark matter line doesn't stand alone in the spectrum. The background mentioned above consists not only of continuous X-ray emission but also of monochromatic lines from known atomic transitions. Indeed, the \(2\)-\(10\keV\) range where the search was performed is pooped with emission lines: the authors fit 28 separate lines to the observed spectrum before finding the unexpected residue at \(3.56\keV\). The results depend on whether these other emission lines are modeled properly. Moreover, the known argon XVII dielectronic recombination line happens to be nearby at \(3.62\keV\). The significance of the signal decreases when the flux from that line is allowed to be larger than predicted by models. So this analysis needs to be confirmed by other groups and by more data before we really get excited.
Decay diagrams borrowed from this review. For more up-to-date limits on sterile neutrino DM see this paper, or this plot. Update: another independent analysis of XMM-Newton data observes the anomalous 3.5 keV line in the Andromeda and the Perseus cluster. The text was reposted from Adam's blog with his permission...
|
The hyperbolic functions appear with some frequency in applications, and are quite similar in many respects to the trigonometric functions. This is a bit surprising given our initial definitions.
Definition 4.11.1: Hyperbolic Cosines and Sines
The
hyperbolic cosine is the function
\[\cosh x ={e^x +e^{-x }\over2},\]
and the
hyperbolic sine is the function
\[\sinh x ={e^x -e^{-x}\over 2}.\]
Notice that \(\cosh\) is even (that is, \(\cosh(-x)=\cosh(x)\)) while \(\sinh\) is odd (\(\sinh(-x)=-\sinh(x)\)), and \( \cosh x + \sinh x = e^x\). Also, for all \(x\), \(\cosh x >0\), while \(\sinh x=0\) if and only if \( e^x -e^{-x }=0\), which is true precisely when \(x=0\).
Lemma 4.11.2
The range of \(\cosh x\) is \([1,\infty)\).
Proof
Let \(y= \cosh x\). We solve for \(x\):
\[\eqalign{y&={e^x +e^{-x }\over 2}\cr 2y &= e^x + e^{-x }\cr 2ye^x &= e^{2x} + 1\cr 0 &= e^{2x}-2ye^x +1\cr e^{x} &= {2y \pm \sqrt{4y^2 -4}\over 2}\cr e^{x} &= y\pm \sqrt{y^2 -1}\cr} \]
From the last equation, we see \( y^2 \geq 1\), and since \(y\geq 0\), it follows that \(y\geq 1\).
Now suppose \(y\geq 1\), so \( y\pm \sqrt{y^2 -1}>0\). Then \( x = \ln(y\pm \sqrt{y^2 -1})\) is a real number, and \(y =\cosh x\), so \(y\) is in the range of \(\cosh(x)\).
\(\square\)
Definition 4.11.3: Hyperbolic Tangent and Cotangent
The other hyperbolic functions are
\[\eqalign{\tanh x &= {\sinh x\over\cosh x}\cr \coth x &= {\cosh x\over\sinh x}\cr \text{sech} x &= {1\over\cosh x}\cr \text{csch} x &= {1\over\sinh x}\cr} \]
The domain of \(\coth\) and \(\text{csch}\) is \(x\neq 0\) while the domain of the other hyperbolic functions is all real numbers. Graphs are shown in Figure \(\PageIndex{1}\)
Certainly the hyperbolic functions do not closely resemble the trigonometric functions graphically. But they do have analogous properties, beginning with the following identity.
Theorem 4.11.4
For all \(x\) in \(\mathbb{R}\), \( \cosh ^2 x -\sinh ^2 x = 1\).
Proof
The proof is a straightforward computation:
\[\cosh ^2 x -\sinh ^2 x = {(e^x +e^{-x} )^2\over 4} -{(e^x -e^{-x} )^2\over 4}= {e^{2x} + 2 + e^{-2x } - e^{2x } + 2 - e^{-2x}\over 4}= {4\over 4} = 1. \]
\(\square\)
This immediately gives two additional identities:
\[1-\tanh^2 x =\text{sech}^2 x\qquad\hbox{and}\qquad \coth^2 x - 1 =\text{csch}^2 x.\]
The identity of the theorem also helps to provide a geometric motivation. Recall that the graph of \( x^2 -y^2 =1\) is a hyperbola with asymptotes \(x=\pm y\) whose \(x\)-intercepts are \(\pm 1\). If \((x,y)\) is a point on the right half of the hyperbola, and if we let \(x=\cosh t\), then \( y=\pm\sqrt{x^2-1}=\pm\sqrt{\cosh^2x-1}=\pm\sinh t\). So for some suitable \(t\), \(\cosh t\) and \(\sinh t\) are the coordinates of a typical point on the hyperbola. In fact, it turns out that \(t\) is twice the area shown in the first graph of Figure \(\PageIndex{2}\). Even this is analogous to trigonometry; \(\cos t\) and \(\sin t\) are the coordinates of a typical point on the unit circle, and \(t\) is twice the area shown in the second graph of Figure \(\PageIndex{2}\).
Given the definitions of the hyperbolic functions, finding their derivatives is straightforward. Here again we see similarities to the trigonometric functions.
Theorem 4.11.5
\( {d\over dx}\cosh x=\sinh x\) and \thmrdef{thm:hyperbolic derivatives} \( {d\over dx}\sinh x = \cosh x\).
Proof
\[ {d\over dx}\cosh x= {d\over dx}{e^x +e^{-x}\over 2} = {e^x- e^{-x}\over 2} =\sinh x,\]
and
\[ {d\over dx}\sinh x = {d\over dx}{e^x -e^{-x}\over 2} = {e^x +e^{-x }\over 2} =\cosh x.\]
Since \(\cosh x > 0\), \(\sinh x\) is increasing and hence injective, so \(\sinh x\) has an inverse, \(\text{arcsinh} x\). Also, \(\sinh x > 0\) when \(x>0\), so \(\cosh x\) is injective on \([0,\infty)\) and has a (partial) inverse, \(\text{arccosh} x\). The other hyperbolic functions have inverses as well, though \(\text{arcsech} x\) is only a partial inverse. We may compute the derivatives of these functions as we have other inverse functions.
Theorem 4.11.6
\( {d\over dx}\text{arcsinh} x = {1\over\sqrt{1+x^2}}\).
Proof
Let \(y=\text{arcsinh} x\), so \(\sinh y=x\). Then
\[ {d\over dx}\sinh y = \cosh(y)\cdot y' = 1,\]
and so
\[ y' ={1\over\cosh y} ={1\over\sqrt{1 +\sinh^2 y}} = {1\over\sqrt{1+x^2}}.\]
The other derivatives are left to the exercises.
|
Practice Paper 1 Question 10
A circle of radius \(r\) is tangent at two points on the parabola \(y=x^2\) such that the angle between the two radii at the tangent points is \(2\theta\), where \(0<2\theta<\pi\). Find \(r\) as a function of \(\theta\).
Related topics Warm-up Questions What is the slope of the tangent to the curve \(y = 2x^2+3x-2\) at \(x=4\)? Find the value of \(a\) such that the line \(y_1=3x+a\) is tangent to the curve \(y_2=2x^2+3x+1\) You are given \(3\) lines: \(y=2x\), \(y=2x-2\), \(y=-\frac{1}{2}x+3\). For each pair of lines, state whether they are parallel or perpendicular. Hints Hint 1Let \(P\) be one of the points where the parabola and the circle touch. What is the slope of the tangent at \(P\), given you know the equation of the curve? Hint 2What is the angle made by the tangent at \(P\) with the \(x\)-axis in terms of \(\theta\)? Hint 3What is another expression of the slope of the tangent at \(P\) in terms of this angle. Hint 4Can you extract the tangent of this angle from the right triangle with sides \(r,\) \(x\) and \(r\cos(\theta)?\) Solution
Let \(P=(x,x^2)\) be one of the two points where the parabola and circle are tangent to each other. The radius \(r\) sits on the line normal at \(P\). The slope of the tangent line at \(P\) is equal to the derivative of \(x^2\), i.e. \(2x\), but also equal to the tangent of the angle the line makes with the \(x\)-axis, which in this case is \(\theta\) (owing to the fact we have a right angle at \(P\) between the tangent and the normal), i.e. \(\tan\theta=\frac{x}{r\cos\theta}\). Equating we get \(2x=\frac{x}{r\cos\theta}\) and so \(r=\frac{1}{2\cos\theta}\).
Alternatively, we could also have used the fact that the product between the slopes of the tangent and the normal is \(-1\). The slope of the normal in our case is minus the tangent of the angle opposite \(\theta\) (can you explain why?), i.e. \(-\frac{r\cos\theta}{x}\). Hence \(r\) is extracted from \(-\frac{r\cos\theta}{x}\cdot2x=-1\).
If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
|
In this entry I'll explain and give code for the easiest method I know to reconstruct a surface represented by a scalar field \(f: \mathbb R^3 \rightarrow \mathbb R\) by interpolating a set of point with their normals. The method is called Hermite Radial Basis Function (HRBF) interpolation. The audience aimed is anyone with some basic knowledge about implicit surfaces (which people often know as metaballs and render with the marching cube algorithm)
Quick links: [ HRBF core sources ] [ HRBF toy app sources ] [ HRBF toy app binaries ] [ maths details ] [ maths summary ] [ tex sources (FR) (EN) ] Method An example
The goal is to build a surface \( f \) going through \( N \) points \( \mathbf p_i \), moreover the normals of \( f \) must match the user defined normals at each point \( \mathbf p_i \):
In these figures we can see two configurations. Here, points coordinates are the same, only a normal changes its direction (they are both represented in green). As we can see it plays an important role in the final shape of the surface. Using normals in addition to points has several advantages: the underlying algorithm will be more robust and easier to implement unlike other methods who only uses points (like standard RBF reconstruction). Also normals gives more control to the user like tangents used for 2D curves.
Other examples: look at the introductory figures, we see that when a point is too far away from the other points, a piece of surface gets separated from the main object producing a second piece of surface shaped like a bubble.
The maths
I'll split this section into two parts. First part (
Conventions) I'll describe the conventions and basic notions I'm using, it's the easy part. The second part ( Building the scalar field \( f \)) is more difficult but not mandatory to use the C++ code available below, however it will be useful to anyone who want to re-implement the method. Conventions
So, given a set of \( N \) points and normals \( (\mathbf p_i, \mathbf n_i) \) we seek a scalar field \(f: \mathbb R^3 \rightarrow \mathbb R\). By convention we want \( f \) to be equal to zero everywhere the surface lies. Since the surface must go through every points \( \mathbf p_i \) then \( f(\mathbf p_i) = 0\). Inside the shape \( f \) is negative and outside \( f \) is positive. The function \( f \) returns values ranging from \( [-\infty; +\infty ] \) and has increasing values from the inside to the outside. We say that \( f \) has a global support or is globally supported. It means the function varies everywhere in the ambient space \( \mathbf R^3 \).
It's worth noticing Metaballs are not defined with globally supported functions but compactly supported functions. Indeed the scalar field of a metaball decreases from the center until it vanishes to zero at the metaball's radius. So the function of a metaball stops varying outside its radius which means we can bound the variation inside a bounding box hence the name compactly supported. If you want to blend objects by summing them as you usually do with metaballs they have to be compactly supported. In this other entry I explain how to convert a globally supported scalar field to a compact support.
Here is a picture to help you visualize our function \( f \):
We see the surface for \(f(\mathbf p) \) = 0. Curves inside are red with \( f(\mathbf p) < 0\) and outside blue with \( f(\mathbf p) > 0\). What I did is cut the 3D space with a 2D plane and draw at regular interval curves. Every points that lies on the same curve has the same scalar value of \( f \). For instance points at the frontier between blue and red are equal to zero. The next curve farther from the surface represent points for which \( f(\mathbf p) = 1 \) and so on. These curves are analog to contour lines in hiking maps but instead of representing the terrain elevation it tells the distance between a point and the implicit surface. (N.B: blue curves should continue indefinitely as the distance increases but I did not render all of them)
There is one thing I haven't defined yet which is how to compute the normals to the implicit surface. We have to compute what we call the gradient of the implicit surface which is written \( \nabla f \). The gradient return a vector which contains in each component the partial derivatives of \( f \):
$$ \nabla f = \begin{pmatrix} \frac {\partial f(x, y, z)}{\partial x}, && \frac {\partial f(x, y, z)}{\partial y}, && \frac {\partial f(x, y, z)}{\partial z}
\end{pmatrix} $$
The gradient vector gives at any point of \( f \) the direction of steepest ascent i.e. the direction towards which \(f\) increases most. If you look back to the iso-curves you'll understand that the gradient is always orthogonal to the curves and hence to the implicit surface. Indeed \( f \) doesn't vary if we follow a curve but increases the most when we go right through them.
Now we know how to compute the normal with \( \nabla f \) at any point of the implicit surface. In order to the implicit surface normals to match normals \( n_i \), we have to set \( \nabla f(\mathbf p_i) = \mathbf n_i \). This is what we have defined so far:
At each point \( f(\mathbf p_i) = 0\) and \( \nabla f(\mathbf p_i) = \mathbf n_i \) Inside the object \( f(\mathbf p) < 0\) Outside the object \( f(\mathbf p) > 0\) Building the scalar field \( f \)
Note: reading this section is not mandatory to use the C++ code.
Now comes the hard part: finding \( f \) that interpolates the set of points and normals \( (\mathbf p_i, \mathbf n_i) \). First we have to define what \( f \) looks like. How to find the form of \( f \) is outside the topic of this tutorial I'll just give the formula directly:
$$ f(\mathbf{x}) = \sum_i^N \mathbf{\alpha}_i \phi(\| \mathbf{x} - \mathbf{p}_i\|) +
\boldsymbol{\beta}_i . \nabla \left [\phi(\| \mathbf{x} - \mathbf{p}_i \|) \right ]$$
So with have \( \mathbf x \in \mathbb R^3 \) the evaluated position and \( \mathbf{p}_i \) the samples/points to be interpolated. The function \( \phi : \mathbb R \rightarrow \mathbb R \) is a called a radial basis function which I advice to define as \( \phi(x) = x^3 \). there are other possibilities but in my experience the method works best with \( x^3\). In order to be able to compute the value of \( f \) we will have to find the \( N \) values of \( \alpha_i \in \mathbb R \) and \( \boldsymbol{\beta}_i \in \mathbb R^3 \). Notice that \( \boldsymbol{\beta}_i \) and \( \nabla \phi \) being vectors \( \boldsymbol{\beta}_i . \nabla \phi \) is the dot product. In order to find the unknown \( \alpha_i \) and \( \boldsymbol{\beta}_i \) we have to solve this linear system of equations:
$$ \begin{pmatrix}
f( \mathbf{p}_i ) \\ \nabla f( \mathbf{p}_i ) \end{pmatrix}= \begin{pmatrix} 0 \\ \mathbf{n}_i \end{pmatrix} $$
As we said before points on the surface are associated to the value zero and normals and gradient must match. For this system I'm using a rather compact notation because there is actually \( 4N \) lines with \( 4N \) unknown. As it would take too much space I have put all the [ developments details in a pdf ]. If you're not interested in the developments I've made a [ summary with only the detailed matrix ] to solve the system.
Interpreting the formula of \( f \)
It may be a bit abstract to understand how summing the expression \( \mathbf{\alpha}_i \phi(\| \mathbf{x} - \mathbf{p}_i\|) +
\boldsymbol{\beta}_i . \nabla \left [\phi(\| \mathbf{x} - \mathbf{p}_i \|) \right ] \) with appropriate weights \( \mathbf{\alpha}_i \) and \( \boldsymbol{\beta}_i \) gives a correct interpolation of the implicit surface. Lets take an example and draw the implicit surface:
$$ g(\mathbf x) = 1 \phi(\| \mathbf{x} \|) + (1,0,0) . \nabla \left [\phi(\| \mathbf{x} \|) \right ] $$
It's a single HRBF centered at \( (0, 0, 0) \) with \( \mathbf{\alpha}_i = 1 \) and \( \boldsymbol{\beta}_i = (1, 0, 0) \) which looks like this:
The point position in red is \( (0,0,0) \) and the normal direction is \( (1,0,0) \). So we can observe that each point correspond to a blob like function. However the blob is not centered at the point position but its surface goes through the point. Moreover the normal controls the orientation of this blob. So the intuition is that by blending all the blobs with a sum we generate a final object whose shape goes through all the points.
The essential
The C++ code for HRBF interpolation uses the library Eigen to solve the linear system. Fortunately it's a header library. It means you won't have to compile/install it because it is only composed of headers. They are readily usable within any projects using the #include directive. The [ HRBF C++ code ] should not be hard to use. The class takes in input a vector of points and normals to compute the surface as shown here:
#include "hrbf_phi_funcs.h" #include "hrbf_core.h" typedef HRBF_fit<float, 3, Rbf_pow3<float> > HRBF; { HRBF fit; // Define samples (position, normal); HRBF::Vector pts[] = { HRBF::Vector( 0.f, 0.f, 0.f), HRBF::Vector( 1.f, 0.f, 0.f), HRBF::Vector( 0.f, 0.f, 2.f) }; HRBF::Vector nrs[] = { HRBF::Vector(-1.f, 0.f, 0.f), HRBF::Vector( 1.f, 0.f, 0.f), HRBF::Vector( 0.f, 0.f, 1.f) }; const int size = sizeof(pts) / sizeof(HRBF::Vector); std::vector p(pts, pts + size ); std::vector n(nrs, nrs + size ); // Solve linear system; fit.hermite_fit(p, n); }
Then the implicit surface potential can be easily evaluated at any point of the ambiant space:
HRBF::Vector x( 0.f, 0.f, 0.f); float potential = fit.eval( x ); HRBF::Vector gf = fit.grad( x );
Note also that the code works for any dimensions, for instance in 2D you can easily generate the function of an implicit curve \( f: \mathbb R^2 \rightarrow \mathbb R \)
typedef HRBF_fit<float, 2, Rbf_pow3<float> > HRBF_curve; Performances
Be aware compiling in Debug mode is a lot slower than Release mode to solve linear systems.
The toy application
In order to do some testing rapidly I provide a toy application. It enables the edition of HRBF points and the visualization with a marching cube algorithm. There is also an automatic bounding box fitting around the HRBF. Feel free to play and modify the program:
Reference:
five comments
|
If $f:[-1,1]\to \mathbb{R} $ with $f(x)= \begin{cases}\sin(1/x), & x\neq 0\\ a,& x=0\end{cases} $ , how can I prove that $f$ has intermediate value property if and only if $a$ is between $[-1,1]$?
Let $a\in[-1,1]$. Then:
for $-1\leq b<c<0\,$ (analogously for $0< b<c\leq 1\,$) $f$ on $[b,c]$ is continous, so it has intermediate value property for $-1\leq b \leq 0 < c\leq 1$ (analogously for $-1\leq b < 0 \leq c\leq 1$ ) : if $f(b)<f(c)$ (analogously for $f(b)>f(c)$), lets take $w=\frac{1}{(2k-\frac{1}{2})\pi}$, where $k$ is positive integer such that $k>\frac{1}{2\pi c}+\frac{1}{4}$. Then we have $0<w<c$ and $f(w)=-1 \leq f(b) < f(c)$. On $[w,c]$ function $f$ is continous, so it has intermediate value property and for every $p\in [-1,f(c)]$, especially for every $p\in [f(b),f(c)]$ there exists $z\in (w,c) \subseteq (b,c)$ that $f(z)=p$. So on interval $[b,c]$ function $f$ has intermediate value property. if $f(b)=f(c)$, then take $w=\frac{c}{2\pi c+1}$. We have then $0<w<c\,$ and $f(w)=f(c)=f(b)$, so on $[b,c]$ function $f$ has intermediate value property.
We see then, that if $a\in [-1,1]$, then function $f$ has intermediate value property.
On the other hand, if $a\not \in [-1,1]$, let's take interval $[b,0]$ ($b\in [-1,0)$)
if $a>1$ (analogously for $a<-1$), let's take $p=\frac{a+1}{2}$. Of course $f(b)\leq 1 < p < f(0)$. Because $1<p$, there is no value $z \in (b,0)$ that $\sin \frac{1}{z}=p$, so function $f$ doesn't have intermediate value property on interval $[b,0]\subseteq [-1,1]$.
We see then that if $a\not \in [-1,1]$, then $f$ doesn't have intermediate value property.
|
Practice Paper 1 Question 11
An organism is born on day \(k=1\) with \(1\) cells. During day \(k=2,3,\ldots\) the organism produces \(\frac{k^2}{k-1}\) times more new cells than it produced on day \(k-1\). Give a simplified expression for the total of all its cells after \(n\) days.
Hint: This is different to the number of new cells produced during day \(n.\) Related topics Warm-up Questions Every day, a builder lays 2 more bricks than the total amount of bricks laid in the last 2 days. Express the number of bricks laid a day as a recursive formula. Simplify \(\sum_{k=2}^n (\frac{1}{k} - \frac{1}{k-1})\). Hints Hint 1Try to formulate the number of new cells each day as a recurrence. Hint 2Can you write your recurrence relationship as a non-recursive expression? Hint 3The total number of cells on any day is the sum of the number of cells produced by the organism up to that day. Hint 4Rearrange \(k\cdot k!\) into a difference between 2 factorials. Solution
Denote with \(N_k\) the number of cells grown at step \(k\). We have the recurrence \(N_k=\frac{k^2}{k-1}\,N_{k-1},\) which we can expand as \[ =\frac{k^2}{k-1}\cdot\frac{(k-1)^2}{k-2}\,N_{k-2} \\ =\frac{k^2}{k-1}\cdot\frac{(k-1)^2}{k-2}\cdot\frac{(k-2)^2}{k-3}\,N_{k-3},\] where we notice successive terms simplify, hence in the end we get: \[N_k = k^2 \cdot (k-1) \cdots 1=k \cdot k!.\]
The total number of cells is thus \[\sum_{k=1}^n k\cdot k!= \sum_{k=1}^n (k+1-1)k! = \sum_{k=1}^n (k+1)! - \sum_{k=1}^n k!,\] where all the terms except the largest and smallest cancel each other leaving \((n+1)!-1\).
If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
|
Practice Paper 1 Question 12
Let \(n < 10\) be a non-negative integer. How many integers from \(0\) to \(999\) inclusive have the sum of their digits equal to \(n\)? Give your answer in terms of \(n\).
Hint: Try first for integers from \(0\) to \(99\). Related topics Warm-up Questions
How many \(3\) digit numbers, whose digits consist solely of even numbers, exist?
A ternary number consists of only \(2\)s, \(1\)s and \(0\)s. How many values can be represented by a \(7\) digit ternary number?
Hints Hint 1You can use inspection (there are only 10 cases) for the case of max 2 digits to get an expression in terms of \(n\). Hint 2For the case of max 3 digits, what if you fix one digit to some value \(p\le n\)? Hint 3What must be the sum of the remaining max 2 digits in terms of \(p\) and \(n\)? Hint 4How many max 2 digits numbers have their digit sum equal to \(n-p\)? Hint 5How should we incorporate that for all values of \(p\)? Solution
For max \(99\) (max 2 digits) case one can observe, by inspection, that there are \(n+1\) numbers whose digit sum is \(n\).
When there are max 3 digits, let's fix one of the digits to \(p\le n\). The sum of the remaining max 2 digits must thus equal \(n-p,\) and we know there are \(n-p+1\) numbers with that property. Taking all possible values of \(p\) we get \[\begin{align}\sum_{p=0}^n (n-p+1)&=(n+1)\sum_{p=0}^n 1-\sum_{p=0}^n p\\&=\frac{(n+1)(n+2)}{2}.\end{align}\]
If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
|
This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alex Walker, and is a sequel to our previous paper.
We have just uploaded a paper to the arXiv on estimating the average size of sums of Fourier coefficients of cusp forms over short intervals. (And by “just” I mean before the holidays). This is the second in a trio of papers that we will be uploading and submitting in the near future.
Suppose $latex {f(z)}$ is a weight $latex {k}$ holomorphic cusp form on $\text{GL}_2$ with Fourier expansion
$$f(z) = \sum_{n \geq 1} a(n) e(nz).$$
Denote the sum of the first $n$ coefficients of $f$ by $$S_f(n) := \sum_{m \leq n} a(m). \tag{1}$$
We consider upper bounds for the second moment of $latex {S_f(n)}$ over short intervals.
In our earlier work, we mentioned the conjectured bound $$ S_f(X) \ll X^{\frac{k-1}{2} + \frac{1}{4} + \epsilon}, \tag{2}$$
which we call the “Classical Conjecture.” There has been some minor progress towards the classical conjecture in recent years, but ignoring subpolynomial bounds the best known result is of shape $$ S_f(X) \ll X^{\frac{k-1}{2} + \frac{1}{3}}. \tag{3}$$
One can also consider how $latex {S_f(n)}$ behaves on average. Chandrasekharan and Narasimhan [CN] proved that the Classical Conjecture is true on average by showing that $$ \sum_{n \leq X} \lvert S_f(n) \rvert^2 = CX^{k- 1 + \frac{3}{2}} + B(X), \tag{4}$$
where $latex {B(x)}$ is an error term. Later, Jutila [Ju] improved this result to show that the Classical Conjecture is true on average over short intervals of length $latex {X^{\frac{3}{4} + \epsilon}}$ around $latex {X}$ by showing $$ X^{-(\frac{3}{4} + \epsilon)}\sum_{\lvert n – X \rvert < X^{3/4} + \epsilon} \lvert S_f(n) \rvert^2 \ll X^{\frac{k-1}{2} + \frac{1}{4}}. \tag{5}$$ In fact, Jutila proved a much more complicated set of bounds, but this bound can be read off from his work.
In our previous paper, we introduced the Dirichlet series $$ D(s, S_f \times S_f) := \sum_{n \geq 1} \frac{S_f(n) \overline{S_f(n)}}{n^{s + k – 1}} \tag{6}$$
and provided its meromorphic continuation In this paper, we use the analytic properties of $latex {D(s, S_f \times S_f)}$ to prove a short-intervals result that improves upon the results of Jutila and Chandrasekharan and Narasimhan. In short, we show the Classical Conjecture holds on average over short intervals of width $latex {X^{\frac{2}{3}} (\log X)^{\frac{2}{3}}}$. More formally, we prove the following. Theorem 1 Suppose either that $latex {f}$ is a Hecke eigenform or that $latex {f}$ has real coefficients. Then \begin{equation*} \frac{1}{X^{\frac{2}{3}} (\log X)^{\frac{2}{3}}} \sum_{\lvert n – X \rvert < X^{\frac{2}{3}} (\log X)^{\frac{2}{3}}} \lvert S_f(n) \rvert^2 \ll X^{\frac{k-1}{2} + \frac{1}{4}}. \end{equation*}
We actually prove an ever so slightly stronger statement. Suppose $latex {y}$ is the solution to $latex {y (\log y)^2 = X}$. Then we prove that the Classical Conjecture holds on average over intervals of width $latex {X/y}$ around $latex {X}$.
We also demonstrate improved bounds for short-interval estimates of width as low as $latex {X^\frac{1}{2}}$.
There are two major obstructions to improving our result. Firstly, we morally use the convexity result in the $latex {t}$-aspect for the size of $latex {L(\frac{1}{2} + it, f\times f)}$. If we insert the bound from the Lindel\”{o}f Hypothesis into our methodology, the corresponding bounds are consistent with the Classical Conjecture.
Secondly, we struggle with bounds for the spectral component $$ \sum_j \rho_j(1) \langle \lvert f \rvert^2 y^k, \mu_j \rangle \frac{\Gamma(s – \frac{3}{2} – it_j) \Gamma(s – \frac{3}{2} + it_j)}{\Gamma(s-1) \Gamma(s + k – 1)} L(s – \frac{3}{2}, \mu_j) V(X, s) \tag{7}$$
where $latex {\mu_j}$ are a basis of Maass forms and $latex {V(X,s)}$ is a term of rapid decay. For our analysis, we end up bounding by absolute values and are unable to understand cancellation from spin. An argument successfully capturing some sort of stationary phase could significantly improve our bound.
Supposing these two obstructions were handled, the limit of our methodology would be to show the Classical Conjecture in short-intervals of width $latex {X^{\frac{1}{2}}}$ around $latex {X}$. This would lead to better bounds on individual $latex {S_f(X)}$ as well, but requires significant improvement.
For more details and specific references, see the paper on the arXiv.
|
Practice Paper 1 Question 13
On a grid of \(m\times n\) squares, how many ways exist to get from the top-left corner to the bottom-right corner if you can only move right or down on an edge?
Related topics Warm-up Questions
16 tennis players participate in a
doublestournament and are paired at random. How many ways are there to select a pair?
How many different ways can a football team's manager choose 5 penalty takers from 11 players?
Hints Hint 1How many steps in total must be taken? Hint 2From a total of \(m+n\) steps, how many are rightward steps and how many are downward steps? Hint 3If we have \(m\) rightward steps, how many combinations of them are there? Hint 4For each combination of rightward steps, how many combinations of downward steps are there? Solution
Regardless of the path taken, you must travel \(m\) steps right and \(n\) steps down, i.e. you must always travel \(m+n\) steps. Of these, we consider the number of different ways we may choose \(m\) to be rightward steps. This is \(\binom{m+n}{m}=\frac{(m+n)!}{n!m!}\). For each of these combinations, the remaining \(n\) steps down are uniquely determined i.e. there is only 1 combination since \(\binom{m+n-m}{n} = \binom{n}{n} = 1\), so \(\binom{m+n}{m}\) is our final answer.
Similarly, we may first find the combinations of \(n\) downward steps and fix rightward steps to obtain \(\binom{m+n}{n}\) as our answer.
Another way to look at this is to consider the permutations of \(m+n\) steps, where \(m\) rightward steps and \(n\) downward steps are both indistinguishable types of steps. We find the number of permutations for \(m+n\) things, and remove (by dividing by) the number of permutations specifically for just the rightwards and downward steps. This gives us the answer \(\frac{(m+n)!}{m! \cdot n!}\).
If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
|
Practice Paper 3 Question 4
A hiker starts on a path at time \(t=0\) and reaches destination after 1 hour. During the hike, his velocity in km/h varies according to the function \(v(t) = \cos{\big(\frac{t\pi}{2}\big)}.\) Find the time in hours at which the hiker reaches the halfway distance between start and destination.
Hint: \(\sin{\big(\frac{\pi}{6}\big)} = \frac{1}{2}.\) Related topics Hints Hint 1How does one compute the distance travelled between two time moments given an arbitrary equation for the velocity? Hint 2... and if the time ranged between 0 and the generic time \(\tau?\) Hint 3You should now have a generic equation that you can solve for time, knowing the previous total distance has now halved. Solution
The distance travelled in time \(\tau\) can be calculated as follows: \[ \begin{align} \int_{0}^{\tau}{\cos{\Big(\frac{t\pi}{2}\Big)}} \,dt &= \left[\frac{2}{\pi}\sin{\Big(\frac{t\pi}{2}\Big)}\right]_0^\tau \\&= \frac{2}{\pi}\sin{\Big(\frac{\tau \pi}{2}\Big)} \end{align} \] Therefore, the distance travelled in an hour is \(\frac{2}{\pi}\sin{\big(\frac{\pi}{2}\big)} = \frac{2}{\pi}.\) Denoting the time in hours to travel half the distance as \(x,\) we get \(\frac{2}{\pi}\sin{\big(\frac{x\pi}{2}\big)} = \frac{1}{2}\cdot\frac{2}{\pi}\) which gives us \(\sin{\big(\frac{x\pi}{2}\big)} = \frac{1}{2}.\) Hence, using the hint, we can deduce \(x = \frac{1}{3}.\)
If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
|
Category:Examples of Euclidean Algorithm
Jump to navigation Jump to search
This category contains examples of Euclidean Algorithm.
This category contains examples of Euclidean Algorithm.
Let $a, b \in \Z$ and $a \ne 0 \lor b \ne 0$.
The steps are:
$(1): \quad$ Start with $\tuple {a, b}$ such that $\size a \ge \size b$. If $b = 0$ then the task is complete and the GCD is $a$. $(2): \quad$ If $b \ne 0$ then you take the remainder $r$ of $\dfrac a b$. $(3): \quad$ Set $a \gets b, b \gets r$ (and thus $\size a \ge \size b$ again). $(4): \quad$ Repeat these steps until $b = 0$. Pages in category "Examples of Euclidean Algorithm"
The following 17 pages are in this category, out of 17 total.
E Euclidean Algorithm/Examples Euclidean Algorithm/Examples/108 and 243 Euclidean Algorithm/Examples/12321 and 8658 Euclidean Algorithm/Examples/129 and 301 Euclidean Algorithm/Examples/132 and 473 Euclidean Algorithm/Examples/156 and 1740 Euclidean Algorithm/Examples/2145 and 1274 Euclidean Algorithm/Examples/2190 and 465 Euclidean Algorithm/Examples/299 and 481 Euclidean Algorithm/Examples/31x = 1 mod 56 Euclidean Algorithm/Examples/341 and 527 Euclidean Algorithm/Examples/341 and 527/Integer Combination Euclidean Algorithm/Examples/361 and 1178 Euclidean Algorithm/Examples/527 and 765 Euclidean Algorithm/Examples/9n+8 and 6n+5 Euclidean Domain/Euclidean Algorithm/Examples Euclidean Domain/Euclidean Algorithm/Examples/5 i and 3 + i in Gaussian Integers
|
In order to enable an iCal export link, your account needs to have an API key created. This key enables other applications to access data from within Indico even when you are neither using nor logged into the Indico system yourself with the link provided. Once created, you can manage your key at any time by going to 'My Profile' and looking under the tab entitled 'HTTP API'. Further information about HTTP API keys can be found in the Indico documentation.
Additionally to having an API key associated with your account, exporting private event information requires the usage of a persistent signature. This enables API URLs which do not expire after a few minutes so while the setting is active, anyone in possession of the link provided can access the information. Due to this, it is extremely important that you keep these links private and for your use only. If you think someone else may have acquired access to a link using this key in the future, you must immediately create a new key pair on the 'My Profile' page under the 'HTTP API' and update the iCalendar links afterwards.
Permanent link for public information only:
Permanent link for all public and protected information:
byProf.Maarten Golterman(San Francisco State University)
→Europe/Rome
Aula Seminari (LNF)
Aula Seminari
LNF
Via Enrico Fermi, 4000044 Frascati (Roma)
Description
We review our recent determination of the strong coupling $\alpha_s$ from the OPAL data for non-strange hadronic tau decays. We find that $\alpha_s(m^2_\tau) =0.325\pm 0.018$ using fixed-order perturbation theory, and $\alpha_s(m^2_\tau)=0.347\pm 0.025$ using contour-improved perturbation theory. These errors are larger than those claimed in previous determinations of $\alpha_s$ from tau decays. The reasons for this are several: (1) The spectral-data moments used in the standard analysis are likely to have an unreliable perturbative expansion, and OPE corrections have not been treated systematically; (2) Violations of quark-hadron duality have been ignored; (3) The nominally more precise ALEPH data are flawed because the correlation matrix is incomplete. We thus consider our determination to supersede all previous values.
|
Practice Paper 1 Question 16
Dividing \(x\) by a small annual \(r\)-percent cumulative interest rate approximates the number of years needed to double your investment with a bank. Find \(x\).
Hint: The word "small" may be important. Related topics Warm-up Questions For which real values of \(a\) does \((1+a)^x=e\) yield real values of \(x\)? If the number of bacteria in a sample doubles every hour, how many are there after 10 hours, if the initial population was 1? Using Taylor expansion about 0 (Maclaurin), find \(\sin(0.1)\) correct to 2 decimal places. Hints Hint 1For \(r\)-percent cumulative interest every year, how much money will you have after \(n\) years? Hint 2If the amount of money doubles after \(n\) years, find an expression relating \(n\) to \(r\). Hint 3Can you use the fact that \(r\) is small and the Taylor series for \(\ln(1+x)\) to simplify your expression? Solution [Trivia: Luca Pacioli in 1494 said "72" (for slightly larger \(r\)), and this result was apparently already well known at that time.]
The question asks us to find \(x\), which when divided by \(r\) approximates the number of years needed to double the investment. This translates to \(\frac{x}{r} = n.\) Let the initial investment amount be \(a_0\), and the amount in the bank after \(n\) years be \(a_n\). Each year, the amount increases by \(r\) percent, which gives us the recurrence \(a_{n+1} = a_n\big(1+\frac{r}{100}\big).\) Unwinding this recursion, one finds that \(a_n = a_0 \big(1+\frac{r}{100}\big)^n.\)
We wish to find \(n\) such that \(a_n = 2a_0.\) Substitute above to obtain \(2 = \big(1+\frac{r}{100}\big)^n,\) and apply log to get \(n = \frac{\ln 2}{\ln(1+\frac{r}{100})}.\) Since \(r\) is very small, we can approximate the denominator by using only the first term of the Taylor expansion of \(\ln(1+\frac{r}{100}) \approx \frac{r}{100},\) which gives us \(n = \frac{100 \ln 2} {r}.\) We now substitute into the previous equation to get \(x = 100 \ln 2 \approx 69.3.\)
The Taylor expansion of \(\ln(1+x)\) around \(x=0\) is \(\ln(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\cdots\).
If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
|
Is there a closed-form expression for this nested sum?
$$s(n)=\sum_{i=1}^n\;\; \sum_{j=i+1}^n \sum_{k=i+j-1}^n1$$
If yes, what is it and how can it be derived?
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It only takes a minute to sign up.Sign up to join this community
Is there a closed-form expression for this nested sum?
$$s(n)=\sum_{i=1}^n\;\; \sum_{j=i+1}^n \sum_{k=i+j-1}^n1$$
If yes, what is it and how can it be derived?
The usual convention with the sum sign $\sum$ is that $\sum_{k=p}^q a_k:=0$ if $q<p$. This has the following effect: Given an $i$ with $1\leq i\leq n$ the next index variable $j$ has to satisfy $$i+1\leq j\leq\min\{n, n+1-i\}=n+1-i\ .$$ This then implies that the variable $i$ in fact has to satisfy $2i\leq n$, or $$1\leq i\leq\left\lfloor{n\over2}\right\rfloor\ .$$ Given $i$ and $j$ with these constraints the innermost sum becomes $$\sum_{k=i+j-1}^n 1=n+2-i-j\ .$$ The next sum (over $j$) then becomes $$A_i:=\sum_{j=i+1}^{n+1-i}(n+2-i-j)$$ and has $n+1-2i$ terms. It follows that $$A_i=(n+1-2i)\>{1\over2}\>(n+2-2i)\qquad\bigl(1\leq i\leq\lfloor n/2\rfloor\bigr)\ .$$ Here we have used that the sum of a finite arithmetic series is the number of terms times the arithmetic mean of its outermost terms. From now on we have to distinguish the cases of even and odd $n$.
If $n=2m$ then Mathematica produces $$s(n)=\sum_{i=1}^m A_i={4m^3+3m^2-m\over6}={n(2n^2+3n-2)\over 24}\ .$$ If $n=2m+1$ we similarly obtain $$s(n)=\sum_{i=1}^m A_i={4m^3+9m^2+5m\over6}={2n^3+3n^2-2n-3\over 24}\ .$$
By choosing experimentally a small value of $n$ and writing out by hand in a simple $i-j$ grid the values of the innermost summation, it becomes clear that:
$$\begin{align}S(n)=S(2m)&=\sum_{s=1}^m \sum_{r=1}^{2s-1}r=\sum_{s=1}^m\binom {2s}2\\ &=\frac 16m(m+1)(4m-1)\end{align}$$
$$\begin{align}S(n)=S(2m+1)&=\sum_{s=1}^m \sum_{r=1}^{2s}r=\sum_{s=1}^m\binom {2s+1}2\\ &=\frac 16m(m+1)(4m+5)\end{align}$$
The above can also be derived as follows:
By considering each $i,j$ combination in turn and the corresponding limits on $k$ (or as pointed out by Christian Blatter in his solution), it is clear that the applicable limits of $i,j$ are narrower $\color{red}{\text{(shown below in red})}$ than in the original question, as the innermost summation cannot be negative, as specified by the condition in the Iverson brackets $\color{lightblue}{\text{(shown below in light blue)}}$.
The case for $n=2m+1$ can be shown using a similar method.
Hint: the general solution strategy is to solve first the innermost sum, expand that, and repeat the procedure.
For solving the sums use
$\sum_{k=1}^n k = (n(n+1))/2$,
$\sum_{k=1}^n k^2 = (n(n+1)(2n+1))/6$,
$\sum_{k=1}^n k^3 = (n^2(n+1)^2)/4$.
|
I recently stumbled across a paper entitled, “Universal Re-Encryption for mix nets,”by P. Golle et al. [1] that I felt is worth exploring. Universal re-encryption (URE), or, perhaps better called
univeral re-randomization, is a primitive that allows some ciphertext to be re-encryptedwithout knowledge of the corresponding public key. The applicability of this techniqueto Chaum-like mix nets [2] is intriguing: untrusted mix nodes can, forexample, re-encrypt messages from senders to receivers to furtherhide the relation between input and output messages. The figure below illustrates this process. $R$ is a receiver of some message $Y$ sent by $S$. $R$ initiates this transfer by issuing a query $X$. The mix node ensures that the forwarded query does not match the input query and that forwarded result does not match the input result. In other words, it does some operation so that $X’ \not= X$ and $Y’ \not= Y$.
Generally, a mix node receives messages from many nodes and then shuffles and re-encrypts them as needed before transmitting them to their intended recipient (in this case, $S$). The shuffle procedure step serves to hide the mapping from the mix node’s outputs to its inputs. This helps keep receivers anonymous.
In the original mix net proposal, many mix nodes existed to transfer messages between senders and receivers. Each mix node in a mix net had a pubic key for message encryption. Senders wrapped messages in layers of concentric encryption to form an onion. Each layer included at least two pieces of information: (1) some encrypted payload or plaintext message and (2) information about where to forward the message after removing a layer of encryption. The sender creates the onion based on the path of mix nodes through which the message traverses. Upon receipt of a message, a mix node removes a layer of encryption (via decryption with its private key) and forwards the inner payload to the destination address at some future time. A mix node may batch messages to prevent timing correlation attacks.
After a recipient processes a message it typically generates a response. The recipient encrypts this response with a one-time ephemeral key provided by the sender. The sender must then transfer the encrypted response to the sender. To remain anonymous, it is important for the recipient to not know the address of the sender. Thus, to route the message, senders provide anonymous return addresses for recipients. These anonymous addresses are also onions containing a path of addresses and one-time keys. When a mix node receives a response it removes one layer of decryption to uncover the next-hop address and a key, re-encrypts the response payload with this key, and then forwards the result to the next hop. This process continues until the sender receives the response, at which point it can remove each layer of encryption on the message to uncover the response.
The mechanics for creating messages and responses sent in a mix net are not important. The takeaway is that it relies on layers of concentric encryption and well-defined paths through which messages propagate between senders and receivers. URE is interesting because it enables a completely different type of mix net protocol that is not based on these properties. It works as follows. Senders publish messages encrypted for recipients to a public bulletin board. Messages are encrypted under the same “group information”, e.g., the same ElGamal group. Periodically, a mix node will retrieve all messages from the bulletin board, re-encrypt each of them, and then place them back on the board in random order. To the passive eavesdropper, this has the effect of both shuffling the messages and their contents. Recipients check for messages by attempting to decrypt each message and only keeping the ones that succeed. Decryption will fail on messages that are not encrypted for them, i.e., that were not encrypted with their public key.
The interesting aspect of this protocol is that mix nodes only need information about the encryption group and not the public key of each recipient. This allows the nodes to blindly re-encrypt each message.
In this post I will describe the cryptographic details of the URE scheme presented in [1]. I will also present a simple Python implementation that you can run to see it in action. I will also review the basics of ElGamal encryption as a primer for the main material.
Functionally, ElGamal encryption works by masking some plaintext element with a random value, or ephemeral key, and then encrypting the value so that the recipient can decrypt it and unmask the ciphertext. To understand the details, we need some notation. Let $G$ be some group of order $q$ with generator $g$. The public and private key pair for a user is $h = g^x$ and $x$, respectively, where $x \in {1,\dots,q}$. To encrypt a message $m$, which is assumed to be an element of $G$ (or there exists a mapping to one element), a sender generates a random element $y \in {1,\dots,q}$, computes $s = h^y$ and then $c_2 = ms$, and outputs $(c_1, c_2) = (g^y, mh^y)$. Decryption then works by computing $m = c_2 / c_1^x = mh^y / g^{xy}$. This equality holds since $h^y = g^{xy}$. Simple enough.
A universal re-randomizable encryption (URE) scheme is a tuple of algorithms $(\mathsf{UKG}, \mathsf{UE}, \mathsf{UD}, \mathsf{UR_e})$ which are loosely described below.
The authors of [1] used the term “universal” to mean that the re-randomization does notneed knowledge of the public key under which the some ciphertext was encryption (onlythe system parameters, e.g., the ElGamal group generator and prime). This enables there-randomization process to be
transparent and thus something that any intermediate entity can do any number of times before the intended recipient (i.e., the one holding the private key) decrypts the result.
The URE scheme based on ElGamal from [1] is presented below.
Key Generation $\mathsf{UKG}$: Output $(PK, SK) = (y = g^x, x)$ for $x \in_U Z_q$ Encryption $\mathsf{UE}$: On input message $m$ and public key $y$, compute and output the ciphertext
$CT = [(\alpha_0, \beta_0), (\alpha_1, \beta_1)] = [(my^{k_0}, g^{k_0}), (y^{k_1}, g^{k_1})]$,
where $r = (k_0, k_1) \in Z_q^2$.
$m_0 = \alpha_0/\beta_0^x$ and $m_1 = \alpha_1/\beta_1^x$,
and output $m_0$ if $m_1 = 1$.
$CT’ = [(\alpha_0’, \beta_0’), (\alpha_1’, \beta_1’)] = [(\alpha_0\alpha_1^{k_0’}, \beta_0\beta_1^{k_0’}),(\alpha_1^{k_1’}, \beta_1^{k_1’})]$,
where $r’ = (k_0’, k_1’) \in Z_q^r$
Altogether, this scheme is quite straightforward. The only input is a generator $g$ and prime $q$ forming a group over $Z_q$. Encryption and decryption are similar to the original ElGamal scheme presented in [3]. The only difference is that the ciphertext carries both an encrypted message and a type of signature. When the ciphertext is re-randomized in the $\mathsf{UR_e}$ scheme, the encrypted message is re-blinded and the signature is updated accordingly. Decryption only returns the original message if the signature is valid. To show this, let’s compute $m_0$.
Now, to verify that $m_1 = 1$, let’s compute that too.
If a message or the signature is tampered with then the verification, and decryption will fail. The security of this scheme (as presented) boils down to the Decisional Diffie-Hellman problem [4] over the group $Z_q$, much in the same way that the semantic security of the original ElGamal encryption scheme does. (I hope to talk about some of these hardness problems in a future post.)
An implementation of this procedure is shown below.
Observe that the input to the re-encryption algorithm is
only the ciphertext CT andthe group parameter $q$. Different ElGamal ciphertexts are free to re-use this parameter sinceit only determines the size of the ElGamal elements and does not have any affect on thechoice of elements within (all samples are done uniformly at random from $Z_q$.
|
When modeling scenarios with a linear function and solving problems involving quantities changing linearly, we typically follow the same problem solving strategies that we would use for any type of function:
problem solving strategy
Identify changing quantities, and then carefully and clearly define descriptive variables to represent those quantities. When appropriate, sketch a picture or define a coordinate system.
Carefully read the problem to identify important information. Look for information giving values for the variables, or values for parts of the functional model, like slope and initial value.
Carefully read the problem to identify what we are trying to find, identify, solve, or interpret.
Identify a solution pathway from the provided information to what we are trying to find. Often this will involve checking and tracking units, building a table or even finding a formula for the function being used to model the problem.
When needed, find a formula for the function.
Solve or evaluate using the formula you found for the desired quantities.
Reflect on whether your answer is reasonable for the given situation and whether it makes sense mathematically.
Clearly convey your result using appropriate units, and answer in full sentences when appropriate.
Example 1
Emily saved up $3500 for her summer visit to Seattle. She anticipates spending $400 each week on rent, food, and fun. Find and interpret the horizontal intercept and determine a reasonable domain and range for this function.
Solution
In the problem, there are two changing quantities: time and money. The amount of money she has remaining while on vacation depends on how long she stays. We can define our variables, including units.
Output: \(M\), money remaining, in dollars
Input: \(t\), time, in weeks
Reading the problem, we identify two important values. The first, $3500, is the initial value for \(M\). The other value appears to be a rate of change – the units of dollars per week match the units of our output variable divided by our input variable. She is spending money each week, so you should recognize that the amount of money remaining is decreasing each week and the slope is negative.
To answer the first question, looking for the horizontal intercept, it would be helpful to have an equation modeling this scenario. Using the intercept and slope provided in the problem, we can write the equation: \(M(t)=3500-400t\).
To find the horizontal intercept, we set the output to zero, and solve for the input:
\(\begin{array} {rcl} {0} &= & {3500 - 400t} \\ {t} &= & { \dfrac{3500}{400} = 8.75} \end{array}\)
The horizontal intercept is 8.75 weeks. Since this represents the input value where the output will be zero, interpreting this, we could say: Emily will have no money left after 8.75 weeks.
When modeling any real life scenario with functions, there is typically a limited domain over which that model will be valid – almost no trend continues indefinitely. In this case, it certainly doesn’t make sense to talk about input values less than zero. It is also likely that this model is not valid after the horizontal intercept (unless Emily’s going to start using a credit card and go into debt).
The domain represents the set of input values and so the reasonable domain for this function is \(0 \le t \le 8.75\).
However, in a real world scenario, the rental might be weekly or nightly. She may not be able to stay a partial week and so all options should be considered. Emily could stay in Seattle for 0 to 8 full weeks (and a couple of days), but would have to go into debt to stay 9 full weeks, so restricted to whole weeks, a reasonable domain without going in to debt would be \(0 \le t \le 8\), or \(0 \le t \le 9\) if she went into debt to finish out the last week.
The range represents the set of output values and she starts with $3500 and ends with $0 after 8.75 weeks so the corresponding range is \(0 \le M(t) \le 3500\). If we limit the rental to whole weeks, however, the range would change. If she left after 8 weeks because she didn’t have enough to stay for a full 9 weeks, she would have \(M(8) = 3500 - 400 (8) = $300\) dollars left after 8 weeks, giving a range of \(300 \le M(t) \le 3500\). If she wanted to stay the full 9 weeks she would be $100 in debt giving a range of \(-100 \le M(t) \le 3500\).
Most importantly remember that domain and range are tied together, and what ever you decide is most appropriate for the domain (the independent variable) will dictate the requirements for the range (the dependent variable).
Exercise
A database manager is loading a large table from backups. Getting impatient, she notices 1.2 million rows had been loaded. Ten minutes later, 2.5 million rows had been loaded. How much longer will she have to wait for all 80 million rows to load?
Answer
Letting \(t\) be the number of minutes since she got impatient, and N be the number rows loaded, in millions, we have two points: (0, 1.2) and (10, 2.5).
The slope is \(m=\dfrac{2.5-1.2}{10-0} =\dfrac{1.3}{10} =0.13\) million rows per minute.
We know the \(N\) intercept, so we can write the equation:
\(N=0.13t+1.2\)
To determine how long she will have to wait, we need to solve for when \(N = 80\).
\(N = 0.13t + 1.2 = 80\)
\(0.13t = 78.8\)
\(t = \dfrac{78.8}{0.13} = 606\). She’ll have to wait another 606 minutes, about 10 hours.
Example 2
Jamal is choosing between two moving companies. The first, U-Haul, charges an up-front fee of $20, then 59 cents a mile. The second, Budget, charges an up-front fee of $16, then 63 cents a mile(Rates retrieved Aug 2, 2010 from http://www.budgettruck.com and http://www.uhaul.com/). When will U-Haul be the better choice for Jamal?
Solution
The two important quantities in this problem are the cost, and the number of miles that are driven. Since we have two companies to consider, we will define two functions:
Input: \(m\), miles driven
Outputs:
\(Y(m)\): cost, in dollars, for renting from U-Haul
\(B(m)\): cost, in dollars, for renting from Budget
Reading the problem carefully, it appears that we were given an initial cost and a rate of change for each company. Since our outputs are measured in dollars but the costs per mile given in the problem are in cents, we will need to convert these quantities to match our desired units: $0.59 a mile for U-Haul, and $0.63 a mile for Budget.
Looking to what we’re trying to find, we want to know when U-Haul will be the better choice. Since all we have to make that decision from is the costs, we are looking for when U-Haul will cost less, or when \(Y(m) < B(m)\). The solution pathway will lead us to find the equations for the two functions, find the intersection, then look to see where the \(Y(m)\) function is smaller. Using the rates of change and initial charges, we can write the equations:
\(Y(m) = 20 + 0.59m\)
\(B(m) = 16 + 0.63m\)
These graphs are sketched to the right, with Y(m) drawn dashed.
To find the intersection, we set the equations equal and solve:
\(\begin{array} {rcl} {Y(m)} &= & {B(m)} \\ {20 + 0.59m} &= & {16 + 0.63m} \\ {4} &= & {0.04m} \\ {m} &= & {100} \end{array}\)
This tells us that the cost from the two companies will be the same if 100 miles are driven. Either by looking at the graph, or noting that \(Y(m)\) is growing at a slower rate, we can conclude that U-Haul will be the cheaper price when more than 100 miles are driven.
Example 3
A town’s population has been growing linearly. In 2004 the population was 6,200. By 2009 the population had grown to 8,100. If this trend continues,
a. Predict the population in 2013
b. When will the population reach 15000? Solution
The two changing quantities are the population and time. While we could use the actual year value as the input quantity, doing so tends to lead to very ugly equations, since the vertical intercept would correspond to the year 0, more than 2000 years ago!
To make things a little nicer, and to make our lives easier too, we will define our input as years since 2004:
Input: \(t\), years since 2004
Output: \(P(t)\), the town’s population
The problem gives us two input-output pairs. Converting them to match our defined variables, the year 2004 would correspond to \(t = 0\), giving the point (0, 6200). Notice that through our clever choice of variable definition, we have “given” ourselves the vertical intercept of the function. The year 2009 would correspond to \(t = 5\), giving the point (5, 8100).
To predict the population in 2013 (\(t = 9\)), we would need an equation for the population. Likewise, to find when the population would reach 15000, we would need to solve for the input that would provide an output of 15000. Either way, we need an equation. To find it, we start by calculating the rate of change:
\(m=\dfrac{8100-6200}{5-0} =\dfrac{1900}{5} =380\) people per year
Since we already know the vertical intercept of the line, we can immediately write the equation:
\(P(t)=6200+380t\)
To predict the population in 2013, we evaluate our function at \(t = 9\)
\(P(9)=6200+380(9)=9620\)
If the trend continues, our model predicts a population of 9,620 in 2013.
To find when the population will reach 15,000, we can set \(P(t) = 15000\) and solve for \(t\).
\(\begin{array} {rcl} {15000} &= & { 6200 + 380t} \\ {8800} &= & {380t} \\ {t} &\approx & {23.158} \end{array}\)
Our model predicts the population will reach 15,000 in a little more than 23 years after 2004, or somewhere around the year 2027.
Example 4
Anna and Emanuel start at the same intersection. Anna walks east at 4 miles per hour while Emanuel walks south at 3 miles per hour. They are communicating with a two-way radio with a range of 2 miles. How long after they start walking will they fall out of radio contact?
Solution
In essence, we can partially answer this question by saying they will fall out of radio contact when they are 2 miles apart, which leads us to ask a new question: how long will it take them to be 2 miles apart?
In this problem, our changing quantities are time and the two peoples’ positions, but ultimately we need to know how long will it take for them to be 2 miles apart. We can see that time will be our input variable, so we’ll define
Input: \(t\), time in hours.
Since it is not obvious how to define our output variables, we’ll start by drawing a picture.
Because of the complexity of this question, it may be helpful to introduce some intermediary variables. These are quantities that we aren’t directly interested in, but seem important to the problem. For this problem, Anna’s and Emanuel’s distances from the starting point seem important. To notate these, we are going to define a coordinate system, putting the “starting point” at the intersection where they both started, then we’re going to introduce a variable, \(A\), to represent Anna’s position, and define it to be a measurement from the starting point, in the eastward direction. Likewise, we’ll introduce a variable, \(E\), to represent Emanuel’s position, measured from the starting point in the southward direction. Note that in defining the coordinate system we specified both the origin, or starting point, of the measurement, as well as the direction of measure.
While we’re at it, we’ll define a third variable, \(D\), to be the measurement of the distance between Anna and Emanuel. Showing the variables on the picture is often helpful:
Looking at the variables on the picture, we remember we need to know how long it takes for \(D\), the distance between them, to equal 2 miles.
Seeing this picture we remember that in order to find the distance between the two, we can use the Pythagorean Theorem, a property of right triangles.
From here, we can now look back at the problem for relevant information. Anna is walking 4 miles per hour, and Emanuel is walking 3 miles per hour, which are rates of change. Using those, we can write formulas for the distance each has walked.
They both start at the same intersection and so when \(t = 0\), the distance travelled by each person should also be 0, so given the rate for each, and the initial value for each, we get:
\(A(t) = 4t\)
\(E(t) = 3t\)
Using the Pythagorean theorem we get:
\(D(t)^{2} =A(t)^{2} +E(t)^{2}\) substitute in the function formulas
\(D(t)^{2} =(4t)^{2} +(3t)^{2} =16t^{2} +9t^{2} =25t^{2}\) solve for \(D(t)\) using the square root
\(D(t)=\pm \sqrt{25t^{2} } =\pm 5|t|\)
Since in this scenario we are only considering positive values of t and our distance \(D(t)\) will always be positive, we can simplify this answer to \(D(t)=5t\)
Interestingly, the distance between them is also a linear function. Using it, we can now answer the question of when the distance between them will reach 2 miles:
\(\begin{array} {rcl} {D(t)} &= & {2} \\ {5t} &= & {2} \\ {t} &= & {\dfrac{2}{5} = 0.4} \end{array}\)
They will fall out of radio contact in 0.4 hours, or 24 minutes.
Example 5
There is currently a straight road leading from the town of Westborough to a town 30 miles east and 10 miles north. Partway down this road, it junctions with a second road, perpendicular to the first, leading to the town of Eastborough. If the town of Eastborough is located 20 miles directly east of the town of Westborough, how far is the road junction from Westborough?
Solution
It might help here to draw a picture of the situation. It would then be helpful to introduce a coordinate system. While we could place the origin anywhere, placing it at Westborough seems convenient. This puts the other town at coordinates (30, 10), and Eastborough at (20, 0).
Using this point along with the origin, we can find the slope of the line from Westborough to the other town: \(m=\dfrac{10-0}{30-0} =\dfrac{1}{3}\) . This gives the equation of the road from Westborough to the other town to be \(W(x)=\dfrac{1}{3} x\).
From this, we can determine the perpendicular road to Eastborough will have slope \(m=-3\). Since the town of Eastborough is at the point (20, 0), we can find the equation:
\(E(x) = -3x + b\) plug in the point (20, 0)
\(0 = -3(20) + b\) \(b = 60\) \(E(x) = -3x + 60\)
We can now find the coordinates of the junction of the roads by finding the intersection of these lines. Setting them equal,
\(\dfrac{1}{3} x = -3x + 60\)
\(\dfrac{10}{3} x= 60\) \(10x = 180\) \(x = 18\) Substituting this back into \(W(x)\) \(y = W(18) = \dfrac{1}{3} (18) = 6\)
The roads intersect at the point (18, 6). Using the distance formula, we can now find the distance from Westborough to the junction:
\(dist=\sqrt{(18-0)^{2} +(6-0)^{2} } \approx 18.934\) miles.
important topics of this section
The problem solving process
Identify changing quantities, and then carefully and clearly define descriptive variables to represent those quantities. When appropriate, sketch a picture or define a coordinate system.
Carefully read the problem to identify important information. Look for information giving values for the variables, or values for parts of the functional model, like slope and initial value.
Carefully read the problem to identify what we are trying to find, identify, solve, or interpret.
Identify a solution pathway from the provided information to what we are trying to find. Often this will involve checking and tracking units, building a table or even finding a formula for the function being used to model the problem.
When needed, find a formula for the function.
Solve or evaluate using the formula you found for the desired quantities.
Reflect on whether your answer is reasonable for the given situation and whether it makes sense mathematically.
Clearly convey your result using appropriate units, and answer in full sentences when appropriate.
|
More specifically, I was wondering if there are well-known conditions to put on $X$ in order to make $K_0(X)\simeq K^0(X)$. Wikipedia says they are the same if $X$ is smooth. It seems to me that you get a nice map from the coherent sheaves side to the vector bundle side (the hard direction in my opinion) if you impose some condition like "projective over a Noetherian ring". Is this enough? In other words, is the idea to impose enough conditions to be able to resolve a coherent sheaf, $M$, by two locally free ones $0\to \mathcal{F}\to\mathcal{G}\to M\to 0$?
Imposing that you can resolve by a length $2$ sequence of vector bundles is too strong. What you want is that there is some $N$ so that you can resolve by a length $N$ sequence of vector bundles. By Hilbert's syzygy theorem, this follows from requiring that the scheme be regular. (Specifically, if the scheme is regular of dimension $d$, then every coherent sheaf has a resolution by projectives of length $d+1$.)
Here is a simple example of what goes wrong on singular schemes. Let $X = \mathrm{Spec} \ A$ where $A$ is the ring $k[x,y,z]/(xz-y^2)$. Let $k$ be the $A$-module on which $x$, $y$ and $z$ act by $0$. I claim that $k$ has no finite free resolution. I will actually only show that $A$ has no
graded finite free resolution. Proof: The hilbert series of $A$ is $(1-t^2)/(1-t)^3 = (1+t)/(1-t)^2$. So every graded free $A$-module has hilbert series of the form $p(t) (1+t)/(1-t)^2$ for some polynomial $p$; and the hilbert series of anything which has a finite resolution by such modules also has hilbert series of the form $p(t) (1+t)/(1-t)^2$. In particular, it must vanish at $t=-1$. But $k$ has hilbert series $1$, which does not.
There is, of course, a resolution of $k$ which is not finite. If I am not mistaken, it looks like
$$\cdots \to a[-4]^4 \to A[-3]^4 \to A[-2]^4 \to A[-1]^3 \to A \to k$$
You want coherent sheaves to have finite global resolutions by locally free sheaves. So definitely you need the regularity of $X$ to ensure that a locally free resolution stops at a finite stage. You also need a global condition such as quasiprojectivity over an affine base to guarantee that you can start the process. (The last condition is not optimal.)
Edit: In reading the follow up comments, I realize my answer was a bit cryptic. The inverse map $K_0(X)\to K^0(X)$ would send the class of a coherent sheaf to the alternating sum of the classes in a resolution. In general, these groups behave quite differently. $K^0(X)$ is contravariant like cohomology and $K_0(X)$ is covariant for proper maps like (Borel-Moore) homology. That they coincide for regular schemes is reminiscent of Poincaré duality.
Asking $K^0(X)$ to be isomorphic to $K_0(X)$ is not always "good enough". Of course, it will allow you to carry over constructions for $K_0(X)$ to $K^0(X)$, but not canonically. And it can happen that $K^0(X)\cong K_0(X)$ without $X$ being regular. For example, take $X= \textrm{Spec} A$, $A=k[x]/(x^n)$ with $n\geq 2$. Then you have an infinite resolution as given in David's answer for $k$. Computing $Tor^A_i(k,k)$ shows that $k$ has no finite resolution. (In fact, $Tor_i^A(k,k) = k^2$ for all $i>0$.) Now, although the above "existence of finite resolution" fails, it is not hard to see that $K^0(X)\cong \mathbf{Z}\cong K_0(X)$ in this case. (Use that $A$ is a local ring and the length map on $A$.) Of course, the natural map $K^0(X) \longrightarrow K_0(X)$ is not an isomorphism. (It is given by $1\mapsto n$.)
[Edit: I added another example]
[Edit 2: There was something wrong with the example below as noted by Michael. I fixed the problem]
Let me also add to my answer the following "snake in the grass". If you work with general schemes, even if regular, one requires the extra assumption of "finite-dimensionality". For example, take the scheme $X=\textrm{Spec} (k \times k[t_1]\times k[t_1,t_2] \times \ldots)$. Now, even though $A = k\times k[t_1]\times\ldots$ is regular, there is an infinite resolution for $k$ of the form $$\ldots \longrightarrow A\longrightarrow A\longrightarrow A \longrightarrow k \longrightarrow 0$$ which corresponds geometrically to taking a point, then adding a line, then adding a plane, etc. Again, take the Tor's to see that $k$ has no finite resolution. Do note that $X$ is not noetherian.
[Edit 3: I added the following for completeness]
Let $X$ be a regular finite-dimensional scheme. Assume that $X$ has enough locally frees. (This notion also arose in Are schemes that "have enough locally frees" necessarily separated ). Then the canonical morphism $K^0(X) \longrightarrow K_0(X)$ is an isomorphism. In the second example, $X=\textrm{Spec} \ A$ is regular, but not finite-dimensional. Does $X$ have enough locally frees?
|
Search
Now showing items 21-30 of 116
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances ((1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p¯¯¯ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at s√ = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
|
I was asked to prove that $\lim\limits_{x\to\infty}\frac{x^n}{a^x}=0$ when $n$ is some natural number and $a>1$. However, taking second and third derivatives according to L'Hôpital's rule didn't bring any fresh insights nor did it clarify anything. How can this be proven?
Here's a hint: after doing successive applications of L'Hospital's rule, what you get in the numerator is $n(n-1)\cdots(n-m+1)x^{n-m}$. What you get in the denominator is $(\ln a)^m a^m$. If you differentiate a polynomial of degree $n$ $n$ times, what do you get?
Hint: Use $a^x = e^{x\ln a} > \dfrac{x^{n+1}\left(\ln a\right)^{n+1}}{(n+1)!}$. Thus:
$0 < \dfrac{x^n}{a^x} < \dfrac{(n+1)!}{x\left(\ln a\right)^{n+1}}$. Now by Squeeze theorem we get the limit $0$.
|
I am reading the following proof of a proposition from Royden+Fitzpatrick, 4th edition, and need help in understanding the last half of the proof.
(My comments in italics.) Proposition: Let $A$ be a countable subset of the open interval $(a,b).$ Then there is an increasing function on $(a,b)$ that is continuous only at points in $(a,b)$ ~ $A.$ Proof: If $A$ is finite the proof is clear. Assume $A$ is countably infinite. Let $\{q_n\}_{n=1}^{\infty}$ be an enumeration of $A.$ Define the function $f$ on $(a,b)$ by setting $$f(x) = \sum\limits_{\{n|q_n \leq x\}} \frac{1}{2^n} \ \mathrm{for \ all} \ a<x<b,$$ where the sum over the empty set is zero.
Since a geometric series with a ratio less than $1$ converges, $f$ is properly defined. Moreover,
\begin{equation} \mathrm{if} \ a<u<v<b, \ \mathrm{then} \ f(v)-f(u) = \sum\limits_{\{n|u<q_n \leq v\}} \frac{1}{2^n}. \ \ \ \ \ \ \ \ \ \ \ (1) \end{equation}
Thus $f$ is increasing.
I follow so far.
Let $x_0 = q_k$ belong to $A.$ Then by (1), $$f(x_0)-f(x) \geq \frac{1}{2^k} \ \mathrm{for \ all} \ x<x_0.$$ Therefore $f$ fails to be continuous at $x_0.$ Now let $x_0$ belong to $(a,b)$ ~ $A.$ Let $n \in \mathbb{N}.$ There is an open interval $J$ containing $x_0$ for which $q_n$ does not belong to $J$ for $1 \leq k \leq n.$ We infer from (1) that $|f(x)-f(x_0)|<1/2^n$ for all $x \in J.$ Therefore $f$ is continuous at $x_0.$
Why are the claims of $f$ continuous/discontinuous true? I need some clarification.
|
[exer:7.1.1] For each power series use Theorem [thmtype:7.1.3} to find the radius of convergence \(R\). If \(R>0\), find the open interval of convergence.
\({\displaystyle \sum_{n=0}^\infty {(-1)^n\over2^nn}(x-1)^n}\) \({\displaystyle \sum_{n=0}^\infty 2^nn(x-2)^n}\) \({\displaystyle \sum_{n=0}^\infty {n!\over9^n}x^n}\) \({\displaystyle \sum_{n=0}^\infty{n(n+1)\over16^n}(x-2)^n}\) \({\displaystyle \sum_{n=0}^\infty (-1)^n{7^n\over n!}x^n}\) \({\displaystyle \sum_{n=0}^\infty {3^n\over4^{n+1}(n+1)^2}(x+7)^n}\)
[exer:7.1.2] Suppose there’s an integer \(M\) such that \(b_m\ne0\) for \(m\ge M\), and \[\lim_{m\to\infty}\left|b_{m+1}\over b_m\right|=L,\nonumber \] where \(0\le L\le\infty\). Show that the radius of convergence of \[\displaystyle \sum_{m=0}^\infty b_m(x-x_0)^{2m}\nonumber \] is \(R=1/\sqrt L\), which is interpreted to mean that \(R=0\) if \(L=\infty\) or \(R=\infty\) if \(L=0\).
[exer:7.1.3] For each power series, use the result of Exercise [exer:7.1.2} to find the radius of convergence \(R\). If \(R>0\), find the open interval of convergence.
\({\displaystyle \sum_{m=0}^\infty (-1)^m(3m+1)(x-1)^{2m+1}}\) \({\displaystyle \sum_{m=0}^\infty (-1)^m{m(2m+1)\over2^m}(x+2)^{2m}}\) \({\displaystyle \sum_{m=0}^\infty {m!\over(2m)!}(x-1)^{2m}}\) \({\displaystyle \sum_{m=0}^\infty (-1)^m{m!\over9^m}(x+8)^{2m}}\) \({\displaystyle \sum_{m=0}^\infty(-1)^m{(2m-1)\over3^m}x^{2m+1}}\) \({\displaystyle \sum_{m=0}^\infty(x-1)^{2m}}\)
[exer:7.1.4] Suppose there’s an integer \(M\) such that \(b_m\ne0\) for \(m\ge M\), and \[\lim_{m\to\infty}\left|b_{m+1}\over b_m\right|=L,\nonumber \] where \(0\le L\le\infty\). Let \(k\) be a positive integer. Show that the radius of convergence of \[\displaystyle \sum_{m=0}^\infty b_m(x-x_0)^{km}\nonumber \] is \(R=1/\sqrt[k]L\), which is interpreted to mean that \(R=0\) if \(L=\infty\) or \(R=\infty\) if \(L=0\).
[exer:7.1.5] For each power series use the result of Exercise [exer:7.1.4} to find the radius of convergence \(R\). If \(R>0\), find the open interval of convergence.
\({\displaystyle \sum_{m=0}^\infty{(-1)^m\over(27)^m}(x-3)^{3m+2}}\) \({\displaystyle \sum_{m=0}^\infty{x^{7m+6}\over m}}\) \({\displaystyle \sum_{m=0}^\infty{9^m(m+1)\over(m+2)}(x-3)^{4m+2}}\) \({\displaystyle \sum_{m=0}^\infty(-1)^m{2^m\over m!}x^{4m+3}}\) \({\displaystyle \sum_{m=0}^\infty{m!\over(26)^m}(x+1)^{4m+3}}\) \({\displaystyle \sum_{m=0}^\infty{(-1)^m\over8^mm(m+1)}(x-1)^{3m+1}}\)
[exer:7.1.6] Graph \(y=\sin x\) and the Taylor polynomial \[T_{2M+1}(x)=\displaystyle \sum_{n=0}^M{(-1)^nx^{2n+1}\over(2n+1)!}\nonumber \] on the interval \((-2\pi,2\pi)\) for \(M=1\), \(2\), \(3\), …, until you find a value of \(M\) for which there’s no perceptible difference between the two graphs.
[exer:7.1.7] Graph \(y=\cos x\) and the Taylor polynomial \[T_{2M}(x)=\displaystyle \sum_{n=0}^M{(-1)^nx^{2n}\over(2n)!}\nonumber \] on the interval \((-2\pi,2\pi)\) for \(M=1\), \(2\), \(3\), …, until you find a value of \(M\) for which there’s no perceptible difference between the two graphs.
[exer:7.1.8] Graph \(y=1/(1-x)\) and the Taylor polynomial \[T_N(x)=\displaystyle \sum_{n=0}^Nx^n\nonumber \] on the interval \([0,.95]\) for \(N=1\), \(2\), \(3\), …, until you find a value of \(N\) for which there’s no perceptible difference between the two graphs. Choose the scale on the \(y\)-axis so that \(0\le y\le20\).
[exer:7.1.9] Graph \(y=\cosh x\) and the Taylor polynomial \[T_{2M}(x)=\displaystyle \sum_{n=0}^M{x^{2n}\over(2n)!}\nonumber \] on the interval \((-5,5)\) for \(M=1\), \(2\), \(3\), …, until you find a value of \(M\) for which there’s no perceptible difference between the two graphs. Choose the scale on the \(y\)-axis so that \(0\le y\le75\).
[exer:7.1.10] Graph \(y=\sinh x\) and the Taylor polynomial \[T_{2M+1}(x)=\displaystyle \sum_{n=0}^M{x^{2n+1}\over(2n+1)!}\nonumber \] on the interval \((-5,5)\) for \(M=0\), \(1\), \(2\), …, until you find a value of \(M\) for which there’s no perceptible difference between the two graphs. Choose the scale on the \(y\)-axis so that \(-75~\le~y\le~75\).
[exer:7.1.11] \(\vspace*{-5pt}(2+x)y''+xy'+3y\)
[exer:7.1.12] \((1+3x^2)y''+3x^2y'-2y\)
[exer:7.1.13] \((1+2x^2)y''+(2-3x)y'+4y\)
[exer:7.1.14] \((1+x^2)y''+(2-x)y'+3y\)
[exer:7.1.15] \((1+3x^2)y''-2xy'+4y\)
[exer:7.1.16] Suppose \(y(x)=\displaystyle \sum_{n=0}^\infty a_n(x+1)^n\) on an open interval that contains \(x_0~=~-1\). Find a power series in \(x+1\) for \[xy''+(4+2x)y'+(2+x)y.\nonumber \]
[exer:7.1.17] Suppose \(y(x)=\displaystyle \sum_{n=0}^\infty a_n(x-2)^n\) on an open interval that contains \(x_0~=~2\). Find a power series in \(x-2\) for \[x^2y''+2xy'-3xy.\nonumber \]
[exer:7.1.18] Do the following experiment for various choices of real numbers \(a_0\) and \(a_1\).
Use differential equations software to solve the initial value problem
\[(2-x)y''+2y=0,\quad y(0)=a_0,\quad y'(0)=a_1,\nonumber \]
numerically on \((-1.95,1.95)\). Choose the most accurate method your software package provides. (See Section 10.1 for a brief discussion of one such method.)
For \(N=2\), \(3\), \(4\), …, compute \(a_2\), …, \(a_N\) from Eqn.Equation \ref{eq:7.1.18} and graph \[T_N(x)=\displaystyle \sum_{n=0}^N a_nx^n\nonumber \] and the solution obtained in (a) on the same axes. Continue increasing \(N\) until it is obvious that there’s no point in continuing. (This sounds vague, but you’ll know when to stop.)
[exer:7.1.19] Follow the directions of Exercise [exer:7.1.18} for the initial value problem \[(1+x)y''+2(x-1)^2y'+3y=0,\quad y(1)=a_0,\quad y'(1)=a_1,\nonumber \] on the interval \((0,2)\). Use Eqns. Equation \ref{eq:7.1.24} and Equation \ref{eq:7.1.25} to compute \(\{a_n\}\).
[exer:7.1.20] Suppose the series \(\displaystyle \sum_{n=0}^\infty a_nx^n\) converges on an open interval \((-R,R)\), let \(r\) be an arbitrary real number, and define \[y(x)=x^r\displaystyle \sum_{n=0}^\infty a_nx^n=\displaystyle \sum_{n=0}^\infty a_nx^{n+r}\nonumber \] on \((0,R)\). Use Theorem [thmtype:7.1.4} and the rule for differentiating the product of two functions to show that \[\begin{aligned} y'(x)&=&{\displaystyle \sum_{n=0}^\infty (n+r)a_nx^{n+r-1}},\\[10pt] y''(x)&=&{\displaystyle \sum_{n=0}^\infty(n+r)(n+r-1)a_nx^{n+r-2}},\\ &\vdots&\\ y^{(k)}(x)&=&{\displaystyle \sum_{n=0}^\infty(n+r)(n+r-1)\cdots(n+r-k)a_nx^{n+r-k}}\end{aligned}\nonumber \] on \((0,R)\)
[exer:7.1.21] \(x^2(1-x)y''+x(4+x)y'+(2-x)y\)
[exer:7.1.22] \(x^2(1+x)y''+x(1+2x)y'-(4+6x)y\)
[exer:7.1.23] \(x^2(1+x)y''-x(1-6x-x^2)y'+(1+6x+x^2)y\)
[exer:7.1.24] \(x^2(1+3x)y''+x(2+12x+x^2)y'+2x(3+x)y\)
[exer:7.1.25] \(x^2(1+2x^2)y''+x(4+2x^2)y'+2(1-x^2)y\)
[exer:7.1.26] \(x^2(2+x^2)y''+2x(5+x^2)y'+2(3-x^2)y\)
|
Answer
False. $\sqrt {f(x)}\ne\frac{1}{2}f(x)$
Work Step by Step
Use the Power Property: $\frac{1}{2}f(x)=\frac{1}{2}\ln x=\ln x^{\frac{1}{2}}=\ln\sqrt x=f(\sqrt x)\ne\sqrt {f(x)}=\sqrt {\ln x}$
You can help us out by revising, improving and updating this answer.Update this answer
After you claim an answer you’ll have
24 hours to send in a draft. An editorwill review the submission and either publish your submission or provide feedback.
|
Tutte matrix
In graph theory, the
Tutte matrix $A$ of a graph $G = (V,E)$ is a matrix used to determine the existence of a perfect matching: that is, a set of edges which is incident with each vertex exactly once.
If the set of vertices $V$ has $2n$ elements then the Tutte matrix is a $2n \times 2n$ matrix $A$ with entries $$ A_{ij} = \begin{cases} x_{ij}\;\;\mbox{if}\;(i,j) \in E \mbox{ and } i<j\\ -x_{ji}\;\;\mbox{if}\;(i,j) \in E \mbox{ and } i>j\\ 0\;\;\;\;\mbox{otherwise} \end{cases} $$ where the $x_{ij}$ are indeterminates. The determinant of this skew-symmetric matrix is then a polynomial (in the variables $x_{ij}$, $i<j$): this coincides with the square of the pfaffian of the matrix $A$ and is non-zero (as a polynomial) if and only if a perfect matching exists. (It should be noted that this is not the Tutte polynomial of $G$.)
References Rajeev Motwani, Prabhakar Raghavan, "Randomized Algorithms", Cambridge University Press (1995) ISBN 978-0-521-47465-8 Zbl 0849.68039 Allen B. Tucker, "Computer Science Handbook", 2nd ed. CRC Press (2004) ISBN 158488360X W.T. Tutte, "The factorization of linear graphs", J. London Math. Soc. 22(1947) 107-111 DOI 10.1112/jlms/s1-22.2.107 How to Cite This Entry:
Tutte matrix.
Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Tutte_matrix&oldid=39570
|
Transients
A transient is used to refer to any signal or wave that is short lived. Transient detection has applications in power line analysis, speech and image processing, turbulent flow applications, to name just a few. For instance, in a power system signal, which is highly complicated nowadays because of its constantly changing loads and its dynamic nature, transients can be caused by lightnings, equipment faults, switching operations and so on. Transients are then observed as short lived high-frequency oscillations superimposed on the voltages or currents of fundamental frequency, which is 50/60 Hz, as well as exponential components.
A signal model incorporating these discontinuities is given by
\[ f(x) = \sum_{i=1}^t \alpha_i \exp(-\beta_i (x-a_i)) \cos(2\pi\sigma_i x + \tau_i) \mathbf{1}_{[a_i,z_i[}, \]
where \( \alpha_i \) is the amplitude, \( \sigma_i \) is the frequency (an integer multiple of the fundamental frequency), \( \tau_i \) is the phase angle, \( \beta_i \) is the damping factor and \( a_i \) and \( z_i \) are the starting and ending times of the component.
It is known to be difficult to take the discontinuities from \( \mathbf{1}_{[a_i,z_i[} \) into account, meaning the damped signals to start at different instants. Our sparsity based method can detect at each instant how many and which components are present in the signal.
As an illustration we model the synthetic signal with \(t=3\), \( [a_1, z_1[=[0, 0.0308[ \), \( [a_2, z_2[= [0.0308, 0.0625[ \), \( [a_3,z_3[= [0.0625, 0.1058[ \), or expressed in multiples of the sampling rate \( M = 1200 \) Hz, \( [a_1, z_1[ = [0/M,37/M[ \), \( [a_2, z_2[ = [37/M,75/M[ \), \( [a_3, z_3[ =[75/M,127/M[ \). So at every moment only one term is present in the signal, but the characteristics of that term may change. We furthermore have all \( \alpha_i=1 \), all \( \beta_i=0 \), all \( \sigma_i=60 \) and \( \tau_1=–\pi/2 \), \( \tau_2=–\pi/2 \), \( \tau_3=3\pi/4 \). In addition, uniformly distributed noise in \([−0.05,0.05]\) is added to the samples of the synthetic signal.
In the figure we merely show the result of the discontinuity detection (the actual values of the parameters \( \alpha_i \), \( \beta_i \), \( \sigma_i \), \( \tau_i \) can of course be determined simultaneously). Our test, based on a singular value decomposition, clearly reveals two nonzero and two (mostly) numerically zero singular values. The nonzero values represent the single constant 60 Hz background signal. This behaviour is disrupted at each phase change because then samples from two different signals are present in the monitored matrix. This happens at \(z_i = a_{i+1} \) or \( x = 37/M\) , \(75/M\), \(127/M\). The disruption is visible from \(x = (37-2t)/M \), \( (75-2t)/M \), \( (127-2t)/M \) on. The disruption disappears as soon as the samples are all in line again with the same underlying component signal.
|
555851 results
Contributors: Russell, Niamh, Murphy, Thomas Brendan, Raftery, Adrian E
Date: 2015-06-30
... We propose Bayesian model averaging (BMA) as a method for postprocessing the results of model-based clustering. Given a number of competing models, appropriate model summaries are averaged, using the posterior model probabilities, instead of being taken from a single "best" model. We demonstrate the use of BMA in model-based clustering for a number of datasets. We show that BMA provides a useful summary of the clustering of observations while taking model uncertainty into account. Further, we show that BMA in conjunction with model-based clustering gives a competitive method for density estimation in a multivariate setting. Applying BMA in the model-based context is fast and can give enhanced modeling performance.
Files:
Contributors: Padovani, P., Petropoulou, M., Giommi, P., Resconi, E.
Date: 2015-06-30
... Blazars have been suggested as possible neutrino sources long before the recent IceCube discovery of high-energy neutrinos. We re-examine this possibility within a new framework built upon the blazar simplified view and a self-consistent modelling of neutrino emission from individual sources. The former is a recently proposed paradigm that explains the diverse statistical properties of blazars adopting minimal assumptions on blazars' physical and geometrical properties. This view, tested through detailed Monte Carlo simulations, reproduces the main features of radio, X-ray, and gamma-ray blazar surveys and also the extragalactic gamma-ray background at energies > 10 GeV. Here we add a hadronic component for neutrino production and estimate the neutrino emission from BL Lacs as a class, "calibrated" by fitting the spectral energy distributions of a preselected sample of BL Lac objects and their (putative) neutrino spectra. Unlike all previous papers on this topic, the neutrino background is then derived by summing up at a given energy the fluxes of each BL Lac in the simulation, all characterised by their own redshift, synchrotron peak energy, gamma-ray flux, etc. Our main result is that BL Lacs as a class can explain the neutrino background seen by IceCube above ~ 0.5 PeV while they only contribute ~ 10% at lower energies, leaving room to some other population(s)/physical mechanism. However, one cannot also exclude the possibility that individual BL Lacs still make a contribution at the ~ 20% level to the IceCube low-energy events. Our scenario makes specific predictions testable in the next few years.
Files:
Contributors: Evers, Martin, Müller, Cord A., Nowak, Ulrich
Date: 2015-06-30
... The effect of disorder on magnonic transport in low-dimensional magnetic materials is studied in the framework of a classical spin model. Numerical investigations give insight into scattering properties of the systems and show the existence of Anderson localization in 1D and weak localization in 2D, potentially affecting the functionality of magnonic devices.
Contributors: Bannister, Michael J., Devanny, William E., Dujmović, Vida, Eppstein, David, Wood, David R.
Date: 2015-06-30
... We investigate two types of graph layouts, track layouts and layered path decompositions, and the relations between their associated parameters track-number and layered pathwidth. We use these two types of layouts to characterize leveled planar graphs, the graphs with planar layered drawings with no dummy vertices. It follows from the known NP-completeness of leveled planarity that track-number and layered pathwidth are also NP-complete, even for the smallest constant parameter values that make these parameters nontrivial. We prove that the graphs with bounded layered pathwidth include outerplanar graphs, Halin graphs, and squaregraphs, but that (despite having bounded track-number) series-parallel graphs do not have bounded layered pathwidth. Finally, we investigate the parameterized complexity of these layouts, showing that past methods used for book layouts don't work to parameterize the problem by treewidth or almost-tree number but that the problem is (non-uniformly) fixed-parameter tractable for tree-depth.
Contributors: Hoq, Q. E., Kevrekidis, P. G., Bishop, A. R.
Date: 2015-06-30
... In the present work, we consider the self-focusing discrete nonlinear Schrodinger equation on hexagonal and honeycomb lattice geometries. Our emphasis is on the study of the effects of anisotropy, motivated by the tunability afforded in recent optical and atomic physics experiments. We find that important classes of solutions, such as the so-called discrete vortices, undergo destabilizing bifurcations as the relevant anisotropy control parameter is varied. We quantify these bifurcations by means of explicit analytical calculations of the solutions, as well as of their spectral linearization eigenvalues. Finally, we corroborate the relevant stability picture through direct numerical computations. In the latter, we observe the prototypical manifestation of these instabilities to be the spontaneous rearrangement of the solution, for larger values of the coupling, into localized waveforms typically centered over fewer sites than the original unstable structure. For weak coupling, the instability appears to result in a robust breathing of the relevant waveforms.
Contributors: Guazzini, Andrea, Vilone, Daniele, Donati, Camillo, Nardi, Annalisa, Levnajic, Zoran
Date: 2015-06-30
... Crowdsourcing is a process of accumulating the ideas, thoughts or information from many independent participants, with aim to find the best solution for a given challenge. Modern information technologies allow for massive number of subjects to be involved in a more or less spontaneous way. Still, the full potentials of crowdsourcing are yet to be reached. We introduce a modeling framework through which we study the effectiveness of crowdsourcing in relation to the level of collectivism in facing the problem. Our findings reveal an intricate relationship between the number of participants and the difficulty of the problem, indicating the optimal size of the crowdsourced group. We discuss our results in the context of modern utilization of crowdsourcing.
Contributors: Chithrabhanu, P., Reddy, Salla Gangi, Anwar, Ali, Aadhi, A., Prabhakar, Shashi, Singh, R. P.
Date: 2015-06-30
... We experimentally show that the non-separability of polarization and orbital angular momentum present in a light beam remains preserved under scattering through a random medium like rotating ground glass. We verify this by measuring the degree of polarization and observing the intensity distribution of the beam when projected to different polarization states, before as well as after the scattering. We extend our study to the non-maximally non-separable states also.
New observations and models of circumstellar CO line emission of AGB stars in the Herschel SUCCESS programme
Contributors: Danilovich, Taissa, Teyssier, D., Justtanont, K., Olofsson, H., Cerrigone, L., Bujarrabal, V., Alcolea, J., Cernicharo, J., Castro-Carrizo, A., Garcia-Lario, P.
Date: 2015-06-30
... CONTEXT: Asymptotic giant branch (AGB) stars are in one of the latest evolutionary stages of low to intermediate-mass stars. Their vigorous mass loss has a significant effect on the stellar evolution, and is a significant source of heavy elements and dust grains for the interstellar medium. The mass-loss rate can be well traced by carbon monoxide (CO) line emission. AIMS: We present new Herschel HIFI and IRAM 30m telescope CO line data for a sample of 53 galactic AGB stars. The lines cover a fairly large range of excitation energy from the $J=1\to0$ line to the $J=9\to8$ line, and even the $J=14\to13$ line in a few cases. We perform radiative transfer modelling for 38 of these sources to estimate their mass-loss rates. METHODS: We used a radiative transfer code based on the Monte Carlo method to model the CO line emission. We assume spherically symmetric circumstellar envelopes that are formed by a constant mass-loss rate through a smoothly accelerating wind. RESULTS: We find models that are consistent across a broad range of CO lines for most of the stars in our sample, i.e., a large number of the circumstellar envelopes can be described with a constant mass-loss rate. We also find that an accelerating wind is required to fit, in particular, the higher-J lines and that a velocity law will have a significant effect on the model line intensities. The results cover a wide range of mass-loss rates ($\sim 10^{-8}$ to $2\times 10^{-5}~\mathrm{M}_\odot~\mathrm{ yr}^{-1}$) and gas expansion velocities (2 to $21.5$ km s$^{-1}$), and include M-, S-, and C-type AGB stars. Our results generally agree with those of earlier studies, although we tend to find slightly lower mass-loss rates by about 40%, on average. We also present "bonus" lines detected during our CO observations.
Manipulation of quasiparticle trapping and electron cooling in Meissner and vortex states of mesoscopic superconductors
Contributors: Taupin, M., Khaymovich, I. M., Meschke, M., Mel'nikov, A. S., Pekola, J. P.
Date: 2015-06-30
... Nowadays superconductors serve in numerous applications, from high-field magnets to ultra-sensitive detectors of radiation. Mesoscopic superconducting devices, i.e. those with nanoscale dimensions, are in a special position as they are easily driven out of equilibrium under typical operating conditions. The out-of-equilibrium superconductors are characterized by non-equilibrium quasiparticles. These extra excitations can compromise the performance of mesoscopic devices by introducing, e.g., leakage currents or decreased coherence times in quantum devices. By applying an external magnetic field, one can conveniently suppress or redistribute the population of excess quasiparticles. In this article we present an experimental demonstration and a theoretical analysis of such effective control of quasiparticles, resulting in electron cooling both in the Meissner and vortex states of a mesoscopic superconductor. We introduce a theoretical model of quasiparticle dynamics which is in quantitative agreement with the experimental data.
Resonant Absorption of Transverse Oscillations and Associated Heating in a Solar Prominence. I- Observational aspects
Contributors: Okamoto, Takenori J., Antolin, Patrick, De Pontieu, Bart, Uitenbroek, Han, Van Doorsselaere, Tom, Yokoyama, Takaaki
Date: 2015-06-30
... Transverse magnetohydrodynamic (MHD) waves have been shown to be ubiquitous in the solar atmosphere and can in principle carry sufficient energy to generate and maintain the Sun's million-degree outer atmosphere or corona. However, direct evidence of the dissipation process of these waves and subsequent heating has not yet been directly observed. Here we report on high spatial, temporal, and spectral resolution observations of a solar prominence that show a compelling signature of so-called resonant absorption, a long hypothesized mechanism to efficiently convert and dissipate transverse wave energy into heat. Aside from coherence in the transverse direction, our observations show telltale phase differences around 180 degrees between transverse motions in the plane-of-sky and line-of-sight velocities of the oscillating fine structures or threads, and also suggest significant heating from chromospheric to higher temperatures. Comparison with advanced numerical simulations support a scenario in which transverse oscillations trigger a Kelvin-Helmholtz instability (KHI) at the boundaries of oscillating threads via resonant absorption. This instability leads to numerous thin current sheets in which wave energy is dissipated and plasma is heated. Our results provide direct evidence for wave-related heating in action, one of the candidate coronal heating mechanisms.
|
Allow me to briefly state the problem here.
In the graph, assume every edge has weight 1, so that distances can be computed accordingly. We also define the distance between edges and vertices: for $e = (u,v) \in E$, $d(e,w) = 0.5 + \min(d(u,w), d(v,w))$, and the distance between edges follows immediately. It is clear that this distance measurement is a
metric. (Why?)
For a graph $G = (V,E)$ with two disjoint sets of vertices $V_s, V_c$, define the
score to be the number of edges closer to $V_c$, i.e. those with $d(e, V_c) \leq d(e, V_s)$; an edge at equal distance from both sets counts as 0.5.
Let's have a function to describe the score: $f(G, V_s, V_c) = \text{score}$.
Instance: $G = (V,E)$ and two disjoint sets of vertices $V_s, V_c \subseteq V$, with some further constraint specified upon the challenge.
Output: Another set $V_{c'}\subseteq V$ that is mutually disjoint with $V_s, V_c$.
Goal: Maximize $f(G,V_s, V_c \cup V_{c'})$.
Maybe you have already heard it before, that the
greedy algorithm works really well for this problem. Is it the best solution? It is clear that the greedy algorithm is a polynomial-time algorithm. Then is this problem in P? Is it NP-hard? This will be an exciting review of the old problem.
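For concreteness, here is a sketch of the greedy heuristic mentioned above, assuming the unspecified "further constraint" is a budget $k$ on $|V_{c'}|$ (the function and variable names are mine, not from any reference). Note that the $0.5$ offset in $d(e, w)$ is the same on both sides of the comparison, so comparing the nearest-endpoint distances suffices.

```python
from collections import deque

def multi_bfs(adj, sources):
    """BFS distance from every vertex to the nearest vertex in `sources`."""
    dist = {v: float("inf") for v in adj}
    dq = deque()
    for s in sources:
        dist[s] = 0
        dq.append(s)
    while dq:
        u = dq.popleft()
        for w in adj[u]:
            if dist[w] == float("inf"):
                dist[w] = dist[u] + 1
                dq.append(w)
    return dist

def score(adj, edges, Vs, Vc):
    """Edges strictly closer to Vc count 1; ties count 0.5."""
    ds, dc = multi_bfs(adj, Vs), multi_bfs(adj, Vc)
    total = 0.0
    for u, v in edges:
        de_s, de_c = min(ds[u], ds[v]), min(dc[u], dc[v])
        total += 1.0 if de_c < de_s else (0.5 if de_c == de_s else 0.0)
    return total

def greedy(adj, edges, Vs, Vc, k):
    """Add k vertices one at a time, each giving the largest score gain."""
    chosen = set()
    for _ in range(k):
        cands = set(adj) - set(Vs) - set(Vc) - chosen
        best = max(cands, key=lambda v: score(adj, edges, Vs, set(Vc) | chosen | {v}))
        chosen.add(best)
    return chosen  # this is V_c'
```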
|
Unit of Ring of Mappings iff Image is Subset of Ring Units/Image is Subset of Ring Units implies Unit of Ring of Mappings
Theorem
Let $\struct {R, +, \circ}$ be a ring with unity whose unity is $1_R$, and let $U_R$ denote its group of units.
Let $S$ be a set.
Let $\struct {R^S, +', \circ'}$ be the ring of mappings from $S$ to $R$.
Let $f \in R^S$.
Let $\Img f \subseteq U_R$ where $\Img f$ is the image of $f$.
Then:
$f$ is a unit of $R^S$, with inverse $f^{-1}$ given by:
$\forall x \in S : \map {f^{-1}} x = \map f x^{-1}$
Proof
From Structure Induced by Ring with Unity Operations is Ring with Unity, $\struct {R^S, +', \circ'}$ has a unity $f_{1_R}$ defined by:
$\forall x \in S: \map {f_{1_R}} x = 1_R$ By assumption: $\forall x \in S: \exists \map f x^{-1} : \map f x \circ \map f x^{-1} = \map f x^{-1} \circ \map f x = 1_R$
Let $f^{-1} : S \to U_R$ be defined by:
$\forall x \in S : \map {f^{-1}} x = \map f x^{-1}$
Consider the mapping $f \circ' f^{-1}$.
For all $x \in S$:
\(\displaystyle \map {\paren {f \circ' f^{-1} } } x = \map f x \circ \map {f^{-1} } x = \map f x \circ \map f x^{-1} = 1_R\)
Hence $f \circ' f^{-1} = f_{1_R}$.
Similarly, $f^{-1} \circ' f = f_{1_R}$.
$\blacksquare$
|
Tsukuba Journal of Mathematics, Tsukuba J. Math. Volume 31, Number 1 (2007), 205-215.
On double coverings of a pointed non-singular curve with any Weierstrass semigroup
Abstract
Let $H$ be a Weierstrass semigroup, i.e., the set $H(P)$ of integers which are pole orders at $P$ of regular functions on $C \setminus \{P\}$ for some pointed non-singular curve $(C, P)$. In this paper, for any Weierstrass semigroup $H$ we construct a double covering $\pi:\tilde{C} \to C$ with a ramification point $\tilde{P}$ such that $H(\pi(\tilde{P})) = H$. We also determine the semigroup $H(\tilde{P})$. Moreover, in the case where $H$ starts with 3 we investigate the relation between the semigroup $H(\tilde{P})$ and the Weierstrass semigroup of a total ramification point on a cyclic covering of the projective line with degree 6.
Article information
Source: Tsukuba J. Math., Volume 31, Number 1 (2007), 205-215.
Dates: First available in Project Euclid: 30 May 2017
Permanent link to this document: https://projecteuclid.org/euclid.tkbjm/1496165122
Digital Object Identifier: doi:10.21099/tkbjm/1496165122
Mathematical Reviews number (MathSciNet): MR2337127
Zentralblatt MATH identifier: 1154.14023
Citation
Komeda, Jiryo; Ohbuchi, Akira. On double coverings of a pointed non-singular curve with any Weierstrass semigroup. Tsukuba J. Math. 31 (2007), no. 1, 205--215. doi:10.21099/tkbjm/1496165122. https://projecteuclid.org/euclid.tkbjm/1496165122
|
Roughly speaking there are two ways for a series to converge: As in the case of $\sum 1/n^2$, the individual terms get small very quickly, so that the sum of all of them stays finite, or, as in the case of $\ds \sum (-1)^{n-1}/n$, the terms don't get small fast enough ($\sum 1/n$ diverges), but a mixture of positive and negative terms provides enough cancellation to keep the sum finite. You might guess from what we've seen that if the terms get small fast enough to do the job, then whether or not some terms are negative and some positive the series converges.
Theorem 11.6.1 If $\ds\sum_{n=0}^\infty |a_n|$ converges, then $\ds\sum_{n=0}^\infty a_n$ converges.
Proof. Note that $\ds 0\le a_n+|a_n|\le 2|a_n|$ so by the comparison test $\ds\sum_{n=0}^\infty (a_n+|a_n|)$ converges. Now $$ \sum_{n=0}^\infty (a_n+|a_n|) -\sum_{n=0}^\infty |a_n| = \sum_{n=0}^\infty (a_n+|a_n|-|a_n|) = \sum_{n=0}^\infty a_n $$ converges by theorem 11.2.2.
So given a series $\sum a_n$ with both positive and negative terms, you should first ask whether $\sum |a_n|$ converges. This may be an easier question to answer, because we have tests that apply specifically to series with non-negative terms. If $\sum |a_n|$ converges then you know that $\sum a_n$ converges as well. If $\sum |a_n|$ diverges then it still may be true that $\sum a_n$ converges—you will have to do more work to decide the question. Another way to think of this result is: it is (potentially) easier for $\sum a_n$ to converge than for $\sum |a_n|$ to converge, because the latter series cannot take advantage of cancellation.
If $\sum |a_n|$ converges we say that $\sum a_n$ converges
absolutely; to say that $\sum a_n$ converges absolutely is to say that any cancellation that happens to come along is not really needed, as the terms already get small so fast that convergence is guaranteed by that alone. If $\sum a_n$ converges but $\sum |a_n|$ does not, we say that $\sum a_n$ converges conditionally. For example $\ds\sum_{n=1}^\infty (-1)^{n-1} {1\over n^2}$ converges absolutely, while $\ds\sum_{n=1}^\infty (-1)^{n-1} {1\over n}$ converges conditionally.
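A quick numerical illustration of the distinction (a sketch, not part of the text's exercises): the alternating harmonic series settles near $\ln 2$, while its series of absolute values keeps growing.

```python
import math

s_alt = s_abs = 0.0
for n in range(1, 100001):
    term = (-1) ** (n - 1) / n
    s_alt += term        # partial sum of the conditionally convergent series
    s_abs += abs(term)   # partial sum of the harmonic series
print(s_alt, math.log(2))  # agree to several decimal places
print(s_abs)               # roughly ln(100000) + 0.5772..., still increasing
```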
Example 11.6.2 Does $\ds\sum_{n=2}^\infty {\sin n\over n^2}$ converge?
In example 11.5.2 we saw that $\ds\sum_{n=2}^\infty {|\sin n|\over n^2}$ converges, so the given series converges absolutely.
Example 11.6.3 Does $\ds\sum_{n=0}^\infty (-1)^{n}{3n+4\over 2n^2+3n+5}$ converge?
Taking the absolute value, $\ds\sum_{n=0}^\infty {3n+4\over 2n^2+3n+5}$ diverges by comparison to $\ds\sum_{n=1}^\infty {3\over 10n}$, so if the series converges it does so conditionally. It is true that $\ds\lim_{n\to\infty}(3n+4)/(2n^2+3n+5)=0$, so to apply the alternating series test we need to know whether the terms are decreasing. If we let $\ds f(x)=(3x+4)/(2x^2+3x+5)$ then $\ds f'(x)=-(6x^2+16x-3)/(2x^2+3x+5)^2$, and it is not hard to see that this is negative for $x\ge1$, so the terms are decreasing and by the alternating series test the series converges.
Exercises 11.6
Determine whether each series converges absolutely, converges conditionally, or diverges.
Ex 11.6.1 $\ds\sum_{n=1}^\infty (-1)^{n-1}{1\over 2n^2+3n+5}$
Ex 11.6.2 $\ds\sum_{n=1}^\infty (-1)^{n-1}{3n^2+4\over 2n^2+3n+5}$
Ex 11.6.3 $\ds\sum_{n=1}^\infty (-1)^{n-1}{\ln n\over n}$
Ex 11.6.4 $\ds\sum_{n=1}^\infty (-1)^{n-1} {\ln n\over n^3}$
Ex 11.6.5 $\ds\sum_{n=2}^\infty (-1)^n{1\over \ln n}$
Ex 11.6.6 $\ds\sum_{n=0}^\infty (-1)^{n} {3^n\over 2^n+5^n}$
Ex 11.6.7 $\ds\sum_{n=0}^\infty (-1)^{n} {3^n\over 2^n+3^n}$
Ex 11.6.8 $\ds\sum_{n=1}^\infty (-1)^{n-1} {\arctan n\over n}$
|
Let $E$ be a smooth vector bundle equipped with an affine connection $\nabla$.
Suppose that $(E,\nabla)$ admits a non-zero parallel section. I think that $(\bigwedge^k E,\bigwedge^k \nabla)$ does not need to admit a non-zero parallel section
even locally. How to construct such an example?
This seems especially interesting when $k < \text{rank}(E)$.
More generally, are there any non-trivial relations between the dimension of the space of parallel sections of $(E,\nabla)$ and that of $(\bigwedge^k E, \bigwedge^k\nabla)$ (locally)?
If there are $k$ independent parallel sections of $E$, then $\bigwedge^k E$ has at least one parallel section; $\sigma_1,\dots,\sigma_k$ are parallel $\Rightarrow \sigma_1 \wedge \dots \wedge \sigma_k$ is parallel.
What happens if $E$ has $r<k$ parallel sections? Does $\bigwedge^k E$ still admit (locally) a non-zero parallel section?
Edit: We can probably use the relation between the curvatures: Let $X,Y \in \Gamma(TM)$. Then $R^{ \bigwedge^k\nabla}(X,Y)=d\psi_{\operatorname{Id}}(R^{\nabla}(X,Y)) $, where $\psi:\text{End}(E) \to \text{End}(\bigwedge^k E)$ is the exterior power map, $\psi(A)=\bigwedge^k A$.
Since the dimension of the space of local parallel sections around $p \in M$ equals the dimension of the common kernel of the curvature operators $R(\cdot,\cdot)$, it suffices to construct an example where $R^{\nabla}(\cdot,\cdot)$ is singular, but $R^{ \bigwedge^k\nabla}(\cdot,\cdot)$ is invertible. This is certainly possible on the algebraic level, see e.g. example 1, in this question, with $k=2, \text{rank}E=3$.
|
I searched and wasn't able to find a question similar enough to mine. Here's the problem:
Show that $\sigma=c_1*c_1$, where $c_1$ is the constant function $1$.
Here is my attempt. My argument makes sense to me, but it seems kind of short. I'm wondering if there is anything I need to add to my argument or if I'm sort of using circular reasoning.
The operation $*$ is the convolution product defined as: \begin{equation} \begin{split} (c_1 * c_1)(n) &= \sum_{d \mid n}c_1(d)c_1\left(\frac nd \right) \\ \end{split} \end{equation}
The formula above is clearly true if $n=1$. Assume that $n > 1$ and write $n=p_1^{e_1} \dots p_s^{e_s}$. In this sum, every term $c_1(d)\,c_1(\frac nd)$ equals $1$, one for each divisor $d \mid n$. So the sum adds $1$ once per divisor, giving exactly the number of divisors of $n$, which is $\sigma(n)$ here.
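Not needed for the proof, but here is a quick numerical check of the identity (with $\sigma$ the divisor-counting function, as in the question; the helper names below are mine):

```python
def c1(n):
    return 1

def convolve(f, g, n):
    """Dirichlet convolution (f*g)(n) = sum over divisors d of n of f(d) g(n/d)."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

assert all(convolve(c1, c1, n) == num_divisors(n) for n in range(1, 200))
```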
As always, thank you all for your help.
|
Recall that a function \(y=f(x)\) is said to be
one to one if it passes the horizontal line test; that is, for two different \(x\) values \(x_1\) and \(x_2\), we do not have \(f(x_1)=f(x_2)\). In some cases the domain of \(f\) must be restricted so that it is one to one. For instance, consider \(f(x)=x^2\). Clearly, \(f(-1)= f(1)\), so \(f\) is not one to one on its regular domain, but by restricting \(f\) to \((0,\infty)\), \(f\) is one to one.
Now recall that one to one functions have
inverses. That is, if \(f\) is one to one, it has an inverse function, denoted by \(f^{-1}\), such that if \(f(a)=b\), then \(f^{-1}(b) = a\). The domain of \(f^{-1}\) is the range of \(f\), and vice-versa. For ease of notation, we set \(g=f^{-1}\) and treat \(g\) as a function of \(x\).
Since \(f(a)=b\) implies \(g(b)=a\), when we compose \(f\) and \(g\) we get a nice result: \[f\big(g(b)\big) = f(a) = b.\] In general, \(f\big(g(x)\big) =x\) and \(g\big(f(x)\big) = x\). This gives us a convenient way to check if two functions are inverses of each other: compose them and if the result is \(x\), then they are inverses (on the appropriate domains.)
When the point \((a,b)\) lies on the graph of \(f\), the point \((b,a)\) lies on the graph of \(g\). This leads us to discover that the graph of \(g\) is the reflection of \(f\) across the line \(y=x\). In Figure 2.29 we see a function graphed along with its inverse. See how the point \((1,1.5)\) lies on one graph, whereas \((1.5,1)\) lies on the other. Because of this relationship, whatever we know about \(f\) can quickly be transferred into knowledge about \(g\).
Figure 2.29: A function \(f\) along with its inverse \(f^{-1}\) . (Note how it does not matter which function we refer to as \(f\) ; the other is \(f^{-1}\) .)
For example, consider Figure 2.30 where the tangent line to \(f\) at the point \((a,b)\) is drawn. That line has slope \(f^\prime(a)\). Through reflection across \(y=x\), we can see that the tangent line to \(g\) at the point \((b,a)\) should have slope \( \frac{1}{f^\prime(a)}\). This then tells us that \( g^\prime(b) = \frac{1}{f^\prime(a)}.\)
Figure 2.30: Corresponding tangent lines drawn to \(f\text{ and }f^{-1}\).
Consider:
We have discovered a relationship between \(f^\prime\) and \(g^\prime\) in a mostly graphical way. We can realize this relationship analytically as well. Let \(y = g(x)\), where again \(g = f^{-1}\). We want to find \( y^\prime\). Since \(y = g(x)\), we know that \(f(y) = x\). Using the Chain Rule and Implicit Differentiation, take the derivative of both sides of this last equality.
\[\begin{align*}\frac{d}{dx}\Big(f(y)\Big) &= \frac{d}{dx}\Big(x\Big) \\ f^\prime(y)\cdot y^\prime &= 1\\ y^\prime &= \frac{1}{f^\prime(y)}\\ y^\prime &= \frac{1}{f^\prime(g(x))} \end{align*}\]
This leads us to the following theorem.
Theorem 22: Derivatives of Inverse Functions
Let \(f\) be differentiable and one to one on an open interval \(I\), where \(f^\prime(x) \neq 0\) for all \(x\) in \(I\), let \(J\) be the range of \(f\) on \(I\), let \(g\) be the inverse function of \(f\), and let \(f(a) = b\) for some \(a\) in \(I\). Then \(g\) is a differentiable function on \(J\), and in particular,
1. \(\left(f^{-1}\right)^\prime (b)=g^\prime (b) = \frac{1}{f^\prime(a)}\) and 2. \(\left(f^{-1}\right)^\prime (x)=g^\prime (x) = \frac{1}{f^\prime(g(x))}\)
The results of Theorem 22 are not trivial; the notation may seem confusing at first. Careful consideration, along with examples, should earn understanding.
In the next example we apply Theorem 22 to the arcsine function.
Example 75: Finding the derivative of an inverse trigonometric function
Let \(y = \arcsin x = \sin^{-1} x\). Find \(y^\prime\) using Theorem 22.
Figure 2.32: A right triangle defined by \(y=\sin^{-1}(x/1)\) with the length of the third leg found using the Pythagorean Theorem.
Solution:
Adopting our previously defined notation, let \(g(x) = \arcsin x\) and \(f(x) = \sin x\). Thus \(f^\prime(x) = \cos x\). Applying the theorem, we have
\[\begin{align*} g^\prime (x) &= \frac{1}{f^\prime(g(x))} \\ &= \frac{1}{\cos(\arcsin x)}. \end{align*}\]
This last expression is not immediately illuminating. Drawing a figure will help, as shown in Figure 2.32. Recall that the sine function can be viewed as taking in an angle and returning a ratio of sides of a right triangle, specifically, the ratio "opposite over hypotenuse.'' This means that the arcsine function takes as input a ratio of sides and returns an angle. The equation \(y=\arcsin x\) can be rewritten as \(y=\arcsin (x/1)\); that is, consider a right triangle where the hypotenuse has length 1 and the side opposite of the angle with measure \(y\) has length \(x\). This means the final side has length \(\sqrt{1-x^2}\), using the Pythagorean Theorem.
\[\text{Therefore }\cos (\sin^{-1} x) = \cos y = \sqrt{1-x^2}/1 = \sqrt{1-x^2},\text{ resulting in }\]\[\frac{d}{dx}\big(\arcsin x\big) = g^\prime (x) = \frac{1}{\sqrt{1-x^2}}.\]
Remember that the input \(x\) of the arcsine function is a ratio of a side of a right triangle to its hypotenuse; the absolute value of this ratio will never be greater than 1. Therefore the inside of the square root will never be negative.
In order to make \(y=\sin x\) one to one, we restrict its domain to \([-\pi/2,\pi/2]\); on this domain, the range is \([-1,1]\). Therefore the domain of \(y=\arcsin x\) is \([-1,1]\) and the range is \([-\pi/2,\pi/2]\). When \(x=\pm 1\), note how the derivative of the arcsine function is undefined; this corresponds to the fact that as \(x\to \pm1\), the tangent lines to arcsine approach vertical lines with undefined slopes.
In Figure 2.33 we see \(f(x) = \sin x\) and \(f^{-1} = \sin^{-1} x\) graphed on their respective domains. The line tangent to \(\sin x\) at the point \((\pi/3, \sqrt{3}/2)\) has slope \(\cos \pi/3 = 1/2\). The slope of the corresponding point on \(\sin^{-1}x\), the point \((\sqrt{3}/2,\pi/3)\), is \[\frac{1}{\sqrt{1-(\sqrt{3}/2)^2}} = \frac{1}{\sqrt{1-3/4}} = \frac{1}{\sqrt{1/4}} = \frac{1}{1/2}=2,\]verifying yet again that at corresponding points, a function and its inverse have reciprocal slopes.
Figure 2.33: Graphs of \(\sin x \text{ and }\sin^{-1} x\) along with corresponding tangent lines.
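The reciprocal-slope relationship is easy to confirm numerically; a short sketch using central differences (the step size is an arbitrary choice):

```python
import numpy as np

a = np.pi / 3
b = np.sin(a)   # the corresponding point for the inverse
h = 1e-6
fprime = (np.sin(a + h) - np.sin(a - h)) / (2 * h)        # ~ cos(pi/3) = 0.5
gprime = (np.arcsin(b + h) - np.arcsin(b - h)) / (2 * h)  # ~ 2
print(fprime * gprime)  # ~ 1.0, matching g'(b) = 1/f'(a)
```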
Using similar techniques, we can find the derivatives of all the inverse trigonometric functions. In Figure 2.31 we show the restrictions of the domains of the standard trigonometric functions that allow them to be invertible.
Figure 2.31: Domains and ranges of the trigonometric and inverse trigonometric functions.
Theorem 23: Derivatives of Inverse Trigonometric Functions
The inverse trigonometric functions are differentiable on all open sets contained in their domains (as listed in Figure 2.31) and their derivatives are as follows:
\[\begin{align} &1. \frac{d}{dx}\big(\sin^{-1}(x)\big) = \frac{1}{\sqrt{1-x^2}} \qquad &&4.\frac{d}{dx}\big(\cos^{-1}(x)\big) = -\frac{1}{\sqrt{1-x^2}} \\ &2.\frac{d}{dx}\big(\sec^{-1}(x)\big) = \frac{1}{|x|\sqrt{x^2-1}} &&5.\frac{d}{dx}\big(\csc^{-1}(x)\big) = -\frac{1}{|x|\sqrt{x^2-1}} \\ &3.\frac{d}{dx}\big(\tan^{-1}(x)\big) = \frac{1}{1+x^2} &&6.\frac{d}{dx}\big(\cot^{-1}(x)\big) = -\frac{1}{1+x^2} \end{align}\]
Note how the last three derivatives are merely the opposites of the first three, respectively. Because of this, the first three are used almost exclusively throughout this text.
In Section 2.3, we stated without proof or explanation that \( \frac{d}{dx}\big(\ln x\big) = \frac{1}{x}\). We can justify that now using Theorem 22, as shown in the example.
Example 76: Finding the derivative of y = ln x
Use Theorem 22 to compute \( \frac{d}{dx}\big(\ln x\big)\).
Solution:
View \(y= \ln x\) as the inverse of \(y = e^x\). Therefore, using our standard notation, let \(f(x) = e^x\) and \(g(x) = \ln x\). We wish to find \(g^\prime (x)\). Theorem 22 gives:
\[\begin{align*} g^\prime (x) &= \frac{1}{f^\prime(g(x))} \\ &= \frac{1}{e^{\ln x}} \\ &= \frac{1}{x}. \end{align*}\]
In this chapter we have defined the derivative, given rules to facilitate its computation, and given the derivatives of a number of standard functions. We restate the most important of these in the following theorem, intended to be a reference for further work.
Theorem 24: Glossary of Derivatives of Elementary Functions
Let \(u\) and \(v\) be differentiable functions, and let \(a\), \(c\) and \(n\) be real numbers, \(a>0\), \(n\neq 0\).
|
I have discovered that the following two tags are too similar to each other:
riemann-zeta-function, with 299 questions tagged, has the description
The Riemann zeta function is the function of one complex variable $s$ defined by the series $\zeta(s) = \sum_{n \geq 1} \frac{1}{n^s}$ when $\operatorname{Re}(s) > 1$. It admits a meromorphic continuation to $\mathbb{C}$ with only a simple pole at $1$. This function satisfies a functional equation relating the values at $s$ and $1-s$. This is the most simple example of an $L$-function and a central object of number theory.
and no tag wiki;
zeta-functions, with 195 questions tagged, has the description
The Riemann zeta function is defined as the analytic continuation of the function defined for $\sigma > 1$ by the sum of the preceding series.
and tag wiki
The Riemann zeta function, $\zeta(s)$, is a function of a complex variable $s$ that analytically continues the sum of the infinite series
$$\zeta(s) =\sum_{n=1}^\infty\frac{1}{n^s}$$
which converges when the real part of $s$ is greater than $1$.
There are two problems here:
does it make sense to have one tag dedicated to just the Riemann $\zeta$, and a single other one for the rest of the $\zeta$ functions? A single tag for all these functions should suffice.
if we still want tags to distinguish between Riemann and "non-Riemann" $\zeta$ functions, then the latter class of functions should be correctly described in the tag description and tag wiki - a thing that does not currently happen with the latter tag.
My suggestion is to simply merge the former tag into the latter, and automatically retag all the questions.
|
This is a short note written for my students in Math 170, talking about partial fraction decomposition and some potentially confusing topics that have come up. We’ll remind ourselves what partial fraction decomposition is, and unlike the text, we’ll prove it. Finally, we’ll look at some pitfalls in particular. All this after the fold.
1. The Result Itself
We are interested in
rational functions and their integrals. Recall that a polynomial $latex {f(x)}$ is a function of the form $latex {f(x) = a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0}$, where the $latex {a_i}$ are constants and $latex {x}$ is our “indeterminate” — and which we commonly imagine standing for a number (but this is not necessary).
Then a rational function $latex {R(x)}$ is a ratio of two polynomials $latex {p(x)}$ and $latex {q(x)}$, $$ R(x) = \frac{p(x)}{q(x)}. $$
Then the big result concerning partial fractions is the following:
If $latex {R(x) = \dfrac{p(x)}{q(x)}}$ is a rational function and the degree of $latex {p(x)}$ is less than the degree of $latex {q(x)}$, and if $latex {q(x)}$ factors into $$q(x) = (x-r_1)^{k_1}(x-r_2)^{k_2} \dots (x-r_l)^{k_l} (x^2 + a_{1,1}x + a_{1,2})^{v_1} \ldots (x^2 + a_{m,1}x + a_{m,2})^{v_m}, $$
then $latex {R(x)}$ can be written as a sum of fractions of the form $latex {\dfrac{A}{(x-r)^k}}$ or $latex {\dfrac{Ax + B}{(x^2 + a_1x + a_2)^v}}$, where in particular:
If $latex {(x-r)}$ appears in the denominator of $latex {R(x)}$, then there is a term $latex {\dfrac{A}{x - r}}$.
If $latex {(x-r)^k}$ appears in the denominator of $latex {R(x)}$, then there is a collection of terms $$ \frac{A_1}{x-r} + \frac{A_2}{(x-r)^2} + \dots + \frac{A_k}{(x-r)^k} $$
If $latex {x^2 + ax + b}$ appears in the denominator of $latex {R(x)}$, then there is a term $latex {\dfrac{Ax + B}{x^2 + ax + b}}$.
If $latex {(x^2 + ax + b)^v}$ appears in the denominator of $latex {R(x)}$, then there is a collection of terms $$ \frac{A_1x + B_1}{x^2 + ax + b} + \frac{A_2 x + B_2}{(x^2 + ax + b)^2} + \dots + \frac{A_v x + B_v}{(x^2 + ax + b)^v} $$
where in each of these, the capital $latex {A}$ and $latex {B}$ represent some constants that can be solved for through basic algebra.
I state this result this way because it is the one that leads to integrals that we can evaluate. But in principle, this theorem can be restated in a couple different ways.
Let’s parse this theorem through an example – the classic example, after the fold.
Consider the rational function $latex {\frac{1}{x(x+1)^2}}$. The terms that appear in the denominator are $latex {x}$ and $latex {(x + 1)^2}$. The $latex {x}$ part contributes an $latex {\dfrac{A}{x}}$ term. The $latex {(x + 1)^2}$ part contributes a $latex {\dfrac{B}{x+1} + \dfrac{C}{(x+1)^2}}$ pair of terms. So we know that $$\frac{1}{x(x+1)^2} = \frac{A}{x} + \frac{B}{x+1} + \frac{C}{(x+1)^2},$$
and we want to find out what $latex {A, B, C}$ are. Clearing denominators yields $$ 1 = A(x+1)^2 + Bx(x+1) + Cx = (A + B)x^2 + (2A + B + C)x + A,$$ and comparing coefficients of the polynomial $latex {1}$ and $latex {(A + B)x^2 + (2A + B + C)x + A}$ gives immediately that $latex {A = 1, B = -1, \text{ and } C = -1}$. So $$ \frac{1}{x(x+1)^2} = \frac{1}{x} + \frac{-1}{x+1} + \frac{-1}{(x+1)^2}. $$ It is easy (and recommended!) to check these by adding up the terms on the right and making sure you get the term on the left.
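If you want to double-check a decomposition by machine, a computer algebra system will reproduce it; a minimal sketch with sympy (assuming it is installed):

```python
import sympy as sp

x = sp.symbols('x')
# apart() computes the partial fraction decomposition
print(sp.apart(1 / (x * (x + 1)**2)))
# prints terms equivalent to 1/x - 1/(x + 1) - 1/(x + 1)**2
```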
2. Common Pitfalls
Very often in math classes, students are “lied to” in one of two ways: either results are stated that are far weaker than normal, or things are said about the impossibility of doing something… when it’s actually possible. For example, middle school teachers might often say that taking the square root of negative numbers “isn’t allowed” or “doesn’t mean anything,” when really there is a several-hundred-year tradition of doing just that. (On the other hand, things are much more complicated in some ways once we allow $latex {\sqrt{-1}}$, so it makes sense to defer its treatment).
Perhaps because of this, students often try to generalize the statement of partial fractions, which applies to
rational functions, to other types of functions. But it is very important to remember that partial fractions works for rational functions, i.e. for ratios of polynomials. So if you have $latex {\dfrac{1}{x\sqrt{x-1}}}$, you cannot naively apply the partial fractions algorithm, as $latex {x\sqrt{x - 1}}$ is not a polynomial.
As an aside, we can be a bit clever. If you call $latex {y = \sqrt{x - 1}}$, so that $latex {y^2 + 1 = x}$, then we see that $latex {\dfrac{1}{x\sqrt{x - 1}} = \dfrac{1}{y(y^2+1)}}$, which you
can approach with partial fractions. You should check that $$ \dfrac{1}{y(y^2 + 1)} = \dfrac{1}{y} - \frac{y}{y^2 + 1}, $$ so that $$ \dfrac{1}{x\sqrt{x - 1}} = \dfrac{1}{\sqrt{x - 1}} - \dfrac{\sqrt{x-1}}{x}.$$ So while something is possible here, it’s not a naive application of partial fractions.
Similarly, if you have something like $latex {\dfrac{\sin \theta}{\cos^2 \theta + \cos \theta}}$, you cannot apply partial fractions because you are not looking at a rational function.
There’s another common danger, which has to do with what you assume is true. For example, if you assume that you
can use partial fractions on $latex {\dfrac{1}{x\sqrt{x-1}}}$ (which you cannot!), then you might do something like $$ \dfrac{1}{x\sqrt{x-1}} = \frac{A}{x} + \frac{B}{\sqrt{x-1}}, \tag{1}$$ so that clearing denominators gives $$ 1 = A\sqrt{x - 1} + B x $$ You might then think that setting $latex {x = 1}$ shows that $latex {B = 1}$, and setting $latex {x = 10}$ gives $latex {1 = 3A + 10B = 3A + 10}$, meaning that $latex {A = -3}$. But checking at, say, $latex {x = 5}$ gives $latex {-3 \cdot 2 + 5 = -1 \neq 1}$. So this is clearly nonsense. And the issue here is that the initial equation (1) is not true; starting with faulty assumptions gets you nowhere.
A key thing to remember is that you can always check your work by just adding together the final decomposition after finding a common denominator! And if you have a good feel for functions, you should be able to realize that no linear combination of $latex {\sqrt{x - 1}}$ and $latex {x}$ will ever be the constant $latex {1}$, so that final equality will never be possible.
3. A Proof (more or less)
Giving the proof for the repeated factor part is annoying, but very similar to the non-repeated root case. Suppose that we have a number $latex {r}$ and a polynomial $latex {q(x)}$ such that $latex {q(r) \neq 0}$. Under these assumptions, we will show that there is a polynomial $latex {p(x)}$ of degree less than $latex {q(x)}$ and a number $latex {A}$ such that $$ \frac{1}{q(x) (x-r)} = \frac{p(x)}{q(x)} + \frac{A}{x-r}. $$
This is clearly equivalent to finding a polynomial $latex {p(x)}$ and $latex {A}$ such that $$ 1 = p(x)(x-r) + Aq(x) $$ We want this as an equality of polynomials, meaning it holds for all $latex {x}$. So in particular, it should hold when $latex {x = r}$, leading us to the equality $$ 1 = Aq(r), $$ which can be rewritten as $$ A = \frac{1}{q(r)} $$ as $latex {q(r) \neq 0}$. So we have found $latex {A}$.
We are left with $latex {p(x)(x-r) = 1 - Aq(x)}$. By our choice of $latex {A}$, we see that the right hand side is $latex {0}$ when $latex {x = r}$, so that the right hand side has $latex {x - r}$ as a factor. So $latex {1 - Aq(x) = N(x)(x-r)}$ for some polynomial $latex {N}$ of degree smaller than the degree of $latex {q(x)}$. (We have used the Factor Theorem here, which says that if $latex {a}$ is a root of $latex {p(x)}$, then $latex {p(x) = p_1(x)(x-a)}$ for a smaller degree polynomial $latex {p_1(x)}$). Choosing $latex {p(x)}$ to be this $latex {N(x)}$ gives us this equality as well, so that we have found a satisfactory $latex {A}$ and $latex {p(x)}$.
This lets us peel off the (non-repeating) factors of the denominator one at a time, one after the other, to prove the theorem for cases without repeated roots. The case with repeated roots is essentially the exact same, and would be a reasonable thing to try to prove on your own. (Hint: there will be a point when you might want to divide everything by $latex {x – r}$).
4. Conclusion
So that’s that about partial fractions. If there are any questions, feel free to let me know. This post was typeset in the \LaTeX typesetting language, hosted on a WordPress site davidlowryduda.com, and displayed there with MathJax. This can also be found in pdf note form, and the conversion from note to WordPress is done using a customized version of latex2wp that I call mse2wp, located at github.com/davidlowryduda/mse2wp.
Thank you, and I’ll see you in class.
|
Type checking and the inference rules for type theory are
not an algorithm. They do not tell you how to type check. They tell you what is allowed when you perform type checking.
The rule for application should be read as:
If you can prove $\Gamma \vdash t_1 : \forall X . T_{12}$ (and we're not telling you how to do that, you'll have to figure it out) then $\Gamma \vdash t_1[T_2] : [X \to T_2]T_{12}$ for whatever $T_2$ you choose to use (and of course we do not know what it is that you're trying to do here; the choice of $T_2$ is entirely up to you).
So, the answer to your question is:
you have to figure out how to find the value of $X$ and you have to worry about the substitution. The rules are not meant to solve that problem. They are just rules, telling you what you may do.
The
correct question that you should have asked is: "How do I make an algorithm that will be able to perform type-checking automatically?" This is an important question, and people put a lot of thought into it. I can give you an initial answer: try out all possibilities until you find one that works. But that's a lousy algorithm and it's possible to do better.
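To make the distinction concrete, here is a small sketch (the names and representation are mine) of what a checker does with the application rule: it does not discover $T_2$; the caller supplies it, and the rule only dictates what the result type must be.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TVar:
    name: str

@dataclass(frozen=True)
class Arrow:
    dom: object
    cod: object

@dataclass(frozen=True)
class Forall:
    var: str
    body: object

def subst(ty, var, repl):
    """[var -> repl]ty; naive, assuming bound variables are uniquely named."""
    if isinstance(ty, TVar):
        return repl if ty.name == var else ty
    if isinstance(ty, Arrow):
        return Arrow(subst(ty.dom, var, repl), subst(ty.cod, var, repl))
    if isinstance(ty, Forall):
        return ty if ty.var == var else Forall(ty.var, subst(ty.body, var, repl))
    raise TypeError(ty)

def type_application(t1_type, T2):
    """Given a (separately established) typing t1 : forall X. T12 and a
    caller-chosen T2, return the type [X -> T2]T12 of t1[T2]."""
    if not isinstance(t1_type, Forall):
        raise TypeError("type application requires a universal type")
    return subst(t1_type.body, t1_type.var, T2)

# id : forall X. X -> X, instantiated at a type the *user* picked:
idT = Forall("X", Arrow(TVar("X"), TVar("X")))
print(type_application(idT, TVar("Bool")))
# Arrow(dom=TVar(name='Bool'), cod=TVar(name='Bool'))
```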
|
We know that 0-1 integer linear programming is NP-Hard. What about 0-1 integer linear programming with only equality constraints? If so, how to prove it
$$\min c^T x \text{ s.t. } Ax = b \quad x_i \in \{0,1\}\quad (i=1,2,...,n)$$
Consider the Maximum Independent Set problem ($\mathcal{NP}$-hard): given a graph $G=(V,E)$, find a maximum independent set in $G$, i.e., a largest subset of vertices $I \subseteq V$ such that no two vertices in $I$ are adjacent: $\forall v_1,v_2 \in I, (v_1,v_2) \notin E$.
This problem has a well known Integer programming formulation: let $x_i$ be a binary variable for every $i \in V$, equal to one iff $i \in I$. \begin{align} \max & \sum\limits_{i=1}^n x_i \\ \text{subject to} & \\ & x_i + x_j \leq 1 \text{ for all } (i,j) \in E \\ & x_i \in \{0,1\} \text{ for all } i \in V \end{align}
If $|V| = n$, this problem has $n$ variables and $O(n^2)$ constraints.
To transform it to equality form (a.k.a. standard form) one might introduce slack variables for every constraint. Generally, if you have $Ax \leq b$ you can write $Ax + x_s = b$ where $x_s \geq 0$; the $x_s$ are called slack variables.
It appears that for Independent Set the slack variables are actually binary! Thus, you have the formulation \begin{align} \max & \sum\limits_{i=1}^n x_i \\ \text{subject to} & \\ & x_i + x_j +y_{ij} = 1 \text{ for all } (i,j) \in E \\ & x_i \in \{0,1\} \text{ for all } i \in V \\ & y_{ij} \in \{0,1\} \text{ for all } (i,j) \in E \end{align}
This formulation exactly satisfies yours
0-1 integer linear programming with only equality constraints (ILPEQ)
Now, you have a basic reduction argument. Suppose you can solve ILPEQ fast; that means you can solve Independent Set fast, but it is known to be $\mathcal{NP}$-hard, so ILPEQ is also $\mathcal{NP}$-hard. The formal argument, as well as checking that this is in fact a polynomial reduction, I leave to the reader.
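To make the reduction tangible, here is a sketch that assembles $c$, $A$, $b$ for a given graph (stated as a maximization; negate $c$ to match the min form above; the helper name is mine):

```python
import numpy as np

def independent_set_ilpeq(n, edges):
    """Equality-form 0-1 ILP for Maximum Independent Set:
    max c^T x  s.t.  Ax = b, x binary, with one binary slack per edge."""
    m = len(edges)
    A = np.zeros((m, n + m), dtype=int)
    b = np.ones(m, dtype=int)
    c = np.concatenate([np.ones(n, dtype=int), np.zeros(m, dtype=int)])
    for row, (i, j) in enumerate(edges):
        A[row, i] = 1        # x_i
        A[row, j] = 1        # x_j
        A[row, n + row] = 1  # slack y_ij, forcing x_i + x_j + y_ij = 1
    return c, A, b

# Triangle: every row reads x_i + x_j + y_ij = 1, so at most one x is 1.
c, A, b = independent_set_ilpeq(3, [(0, 1), (1, 2), (0, 2)])
print(A)
```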
|
Is there a "nice" way to think about the quotient group $\mathbb{C} / \mathbb{Z}$?
Bonus points for $\mathbb{C}/2\mathbb{Z}$ (or even $\mathbb{C}/n\mathbb{Z}$ for $n$ an integer) and how it relates to $\mathbb{C} / \mathbb{Z}$.
By "nice" I mean something like:
$\mathbb{R}/\mathbb{Z}$ is isomorphic to the circle group via the exponential map $\theta \mapsto e^{i\theta}$, and $\mathbb{C}/\Lambda$ is a complex torus for $\Lambda$ an integer lattice (an integer lattice is a discrete subgroup of the form $\alpha\mathbb{Z} + \beta\mathbb{Z}$ where $\alpha,\beta$ are linearly independent over $\mathbb{R}$.)
Intuitively, it seems like it should be something like a circle or elliptic curve.
|
I've read so much about it but none of it makes a lot of sense. Also, what's so unsolvable about it?
The prime number theorem states that the number of primes less than or equal to $x$ is approximately equal to $\int_2^x \dfrac{dt}{\log t}.$ The Riemann hypothesis gives a precise answer to how good this approximation is; namely, it states that the difference between the exact number of primes below $x$ and the given integral is (essentially) at most $\sqrt{x} \log x$.
(Here "essentially" means that one should actually take the absolute value of the difference, and also that one might have to multiply $\sqrt{x} \log x$ by some positive constant. Also, I should note that the Riemann hypothesis is more usually stated in terms of the location of the zeroes of the Riemann zeta function; the previous paragraph is giving an equivalent form, which may be easier to understand, and also may help to explain the interest of the statement. See the wikipedia entry for the formulation in terms of counting primes, as well as various other formlations.)
The difficulty of the problem is (it seems to me) as follows: there is no approach currently known to understanding the distribution of prime numbers well enough to establish the desired approximation, other than by studying the Riemann zeta function and its zeroes. (The information about the primes comes from information about the zeta function via a kind of Fourier transform.) On the other hand, the zeta function is not easy to understand; there is no straightforward formula for it that allows one to study its zeroes, and because of this any such study ends up being somewhat indirect. So far, among the various possible such indirect approaches, no-one has found one that is powerful enough to control all the zeroes.
A very naive comment, that nevertheless might give some flavour of the problem, is that there are an infinite number of zeroes that one must contend with, so there is no obvious finite computation that one can make to solve the problem; ingenuity of some kind is necessarily required.
Finally, one can remark that the Riemann hypothesis, when phrased in terms of the location of the zeroes, is very simple (to state!) and very beautiful: it says that all the non-trivial zeros have real part $1/2$. This suggests that perhaps there is some secret symmetry underlying the Riemann zeta function that would "explain" the Riemann hypothesis. Mathematicians have had, and continue to have, various ideas about what this secret symmetry might be (in this they are inspired by an analogy with what is called "the function field case" and the deep and beautiful theory of the Weil conjectures), but so far they haven't managed to establish any underlying phenomenon which implies the Riemann hypothesis.
A direct translation of RH (Riemann Hypothesis) would be very baffling in layman's terms. But there are many problems that are equivalent to RH, and hence stating one of them is indirectly stating RH. Some of the equivalent forms of RH are much easier to understand than RH itself. I give what I think is the easiest equivalent form that I have encountered:
The Riemann hypothesis is equivalent to the statement that an integer has an equal probability of having an odd number or an even number of distinct prime factors. (Borwein page. 46)
You have to...have to....read this friendly introduction.
(Making it CW since it's just a link.)
In very layman's terms it states that there is some order in the distribution of the primes (which seem to occur totally chaotic at first sight). Or to say it like Shakespeare: "Though this be madness, yet there is method in 't."
If you want to know more there is a new trilogy about that topic where the first volume has just arrived: http://www.secretsofcreation.com/volume1.html
It is a marvelous and easy to understand book from a number theorist who knows his stuff!
Let $H_n$ be the $n$th harmonic number, i.e. $H_n = 1 + \frac12 + \frac13 + \dots + \frac1n.$ Then, the Riemann hypothesis is true if and only if, for every $n \geq 1$,
$$ \sum_{d | n}{d} \le H_n + \exp(H_n)\log(H_n)$$
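This form is also easy to probe numerically; here is a brute-force sketch (verifying the inequality for small $n$ proves nothing, of course, since the criterion concerns all $n$):

```python
from math import exp, log

def sigma(n):
    """Sum of the divisors of n, by trial division."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

H = 0.0
for n in range(1, 5001):
    H += 1.0 / n  # the n-th harmonic number H_n
    assert sigma(n) <= H + exp(H) * log(H), f"inequality fails at n={n}"
print("inequality verified for n <= 5000")
```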
Here is a very simple description of the Riemann Hypothesis that requires nothing more than a 3rd grade education to understand:
There is also a beautiful proof linking the Farey sequence of fractions to the Riemann hypothesis by Jerome Franel. It's only three pages long and should be able to be understood by any undergraduate mathematics major.
It is straightforward. We have the zeta function, 'analytically continued', which satisfies
$\zeta(s)[1-2/2^s] = 1 - 1/2^s + 1/3^s - 1/4^s + \cdots$ Here $s$ is a complex variable: $s = \sigma + i\omega$, where $\sigma = \Re(s)$ is the real part and $\omega = \Im(s)$ is the imaginary part. The above series converges in the region of our interest, which is $0 < \sigma < 1$.
To find the zeroes of $\zeta(s)$ we set $\zeta(s) = 0$ and solve for $s$, i.e., for the values of $\sigma$ and $\omega$.
That is:
Solve $0= 1 - 1/2^s + 1/3^s -1/4^s + \cdots$ for the values of $\sigma$ and $\omega$. Riemann hypothesized that the zeros in the critical strip all have $\sigma$ equal to $1/2$, while the $\omega$ are distinct. To this date, after 150 years, no one has any clue why $\sigma$ takes the single value $1/2$ in the critical strip $0 < \sigma < 1$. Apart from the consequences, I hope I explained it well. The Wikipedia article on the Riemann Hypothesis is a good source for further reading.
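One can inspect the first few nontrivial zeros numerically; a sketch using mpmath (assuming the library is available):

```python
from mpmath import mp, zetazero

mp.dps = 20
for k in range(1, 6):
    rho = zetazero(k)  # k-th nontrivial zero, computed on the critical line
    print(k, rho)      # real part 0.5; imaginary parts 14.13..., 21.02..., ...
```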
|
I'm having some beginner problems understanding / proving simple facts about higher inductive type paths.
If you take this higher inductive type for natural numbers modulo 1:
hit N1 := | 0 : N1 | S : N1 -> N1 | mod : 0 = 1
…I have the strong feeling that this corresponds exactly to (is equivalent to) the circle $S^1$ (with $b:S^1$ and $l:b=b$). Where the circle has $\pi_1(S^1, b) \cong (\mathbb{Z}, +, 0)$ by the isomorphism $n \mapsto l^n$, I'd think that N1 has $\pi_1(\textrm{N1}, 0) \cong (\mathbb{Z}, +, 0)$ too, by the isomorphism $n \mapsto \textrm{mod}^n$.
However, I have to take into account $\textrm{ap}_S$ as well. If it doesn't hold that $\textrm{ap}_S(\textrm{mod})= \textrm{mod}$, then things are obviously more complicated. I feel that this must hold, because of course $\textrm{ap}_S$ doesn't create new paths out of the blue, and thus the only paths that it could map $\textrm{mod}$ to are the $\textrm{mod}^n$, of which only $\textrm{mod}$ seems reasonable.
Does this hold, and if so, how would I prove it, and if not, what mistakes am I making?
(I know that I'm glossing over details, because, for one, $\textrm{mod}:0=1$, and therefore literal $\textrm{mod}\cdot\textrm{mod}$ doesn't even typecheck. But I don't believe this is relevant, because substituting all these $\textrm{mod}$'s by $(\textrm{mod}^{-1})_\star(\textrm{mod}) : 0=0$, for the family $P(n) := 0=n$, would solve these superficial issues.)
|
Symbols:Greek/Pi/Product Notation $\displaystyle \prod_{j \mathop = 1}^n a_j$
Let $\struct {S, \times}$ be an algebraic structure.
Let $\tuple {a_1, a_2, \ldots, a_n} \in S^n$ be an ordered $n$-tuple in $S$.
The composite is called the product of $\tuple {a_1, a_2, \ldots, a_n}$, and is written:
$\displaystyle \prod_{j \mathop = 1}^n a_j = \paren {a_1 \times a_2 \times \cdots \times a_n}$
The $\LaTeX$ code for \(\displaystyle \prod_{j \mathop = 1}^n a_j\) is
\displaystyle \prod_{j \mathop = 1}^n a_j .
The $\LaTeX$ code for \(\displaystyle \prod_{1 \mathop \le j \mathop \le n} a_j\) is
\displaystyle \prod_{1 \mathop \le j \mathop \le n} a_j .
The $\LaTeX$ code for \(\displaystyle \prod_{\map \Phi j} a_j\) is
\displaystyle \prod_{\map \Phi j} a_j .
|
No, on a general manifold $M$:
there is no preferred linear connection, there is always at least one connection $\nabla$, and the connections on $M$ are precisely the maps $$\bar{\nabla}_X Y := \nabla_X Y + A(X, Y),$$ where $A \in \Gamma(TM \otimes T^*M \otimes T^*M)$ is a $(2, 1)$-tensor on $M$.
As mentioned, given a metric $g$ on $M$, the torsion-freeness and metric compatibility conditions together pick out a unique connection. (This result is sometimes called the Fundamental Lemma of Riemannian Geometry.)
On the other hand, on a general smooth manifold one can pick out some preferred classes of connections:
The most important one is one already mentioned: The torsion tensor$$T(X, Y) := \nabla_X Y - \nabla_Y X - [X, Y]$$is an invariant for the connection, so we can (and often do) restrict attention to
torsion-free connections, that is, those for which the torsion tensor is $0$. In fact, any connection $\nabla$ determines a unique torsion-free connection $\widetilde{\nabla}$, namely$$\widetilde{\nabla}_X Y:= \nabla_X Y - \tfrac{1}{2} T(X, Y) = \tfrac{1}{2} \nabla_X Y + \tfrac{1}{2} \nabla_Y X + \tfrac{1}{2}[X, Y].$$In particular, any smooth manifold admits a torsion-free connection.
Substituting the coordinate vector fields of any coordinate chart in the above formula shows that the construction $\nabla \rightsquigarrow \widetilde{\nabla}$ is given by symmetrizing over the lower indices of the Christoffel symbols, that is, the Christoffel symbols of $\nabla$ and $\widetilde{\nabla}$ are related by$$\widetilde{\Gamma}\!{}^c_{ab} = \tfrac{1}{2}(\Gamma^c_{ab} + \Gamma^c_{ba}) .$$ Since the geodesic equation,$$\ddot{\gamma}^c + \Gamma^c_{ab} \dot{\gamma}^a \dot{\gamma}^b = 0 ,$$is symmetric in these indices, the geodesics of $\nabla$ and those of $\widetilde{\nabla}$ coincide, and so if we're only interested in the behavior of geodesics of a given connection $\nabla$, there's no harm in passing to the corresponding torsion-free connection $\widetilde{\nabla}$, which is sometimes easier to work with. (A short check that $\widetilde{\nabla}$ is indeed torsion-free appears after this list.)
One can further specialize to so-called
special torsion-free connections, that is, those that (locally) preserve some volume form on the manifold; more precisely, a connection $\nabla$ on a smooth manifold $M$ is special if for all $p \in M$ there is a neighborhood $U$ of $p$ and a volume form $\omega \in \Gamma(\Lambda^n T^* U)$ such that $(\nabla \vert_U) \omega = 0$. One can always pick a special connection locally and, if I recall correctly, under some mild conditions, globally.
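As promised above, the check that $\widetilde{\nabla}$ is torsion-free is a one-liner, using only the antisymmetry $T(Y,X) = -T(X,Y)$:

```latex
\begin{aligned}
\widetilde{T}(X,Y)
  &= \widetilde{\nabla}_X Y - \widetilde{\nabla}_Y X - [X,Y] \\
  &= \bigl(\nabla_X Y - \tfrac{1}{2}T(X,Y)\bigr)
   - \bigl(\nabla_Y X - \tfrac{1}{2}T(Y,X)\bigr) - [X,Y] \\
  &= T(X,Y) - \tfrac{1}{2}T(X,Y) - \tfrac{1}{2}T(X,Y) = 0 .
\end{aligned}
```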
|
First of all, the formulation doesn't make much sense; surely you want to ask for an extension $L/\mathbf{Q}$ such that the inclusion $K \subset L$ induces a
surjection
$$G = \mathrm{Gal}(L/\mathbf{Q}) \rightarrow \mathrm{Gal}(K/\mathbf{Q}) = H.$$
There is then a second ambiguity as to whether you want to fix the choice of this map (note that the last isomorphism is canonical so this makes a difference: if $K = \mathbf{Q}(\sqrt{-1},\sqrt{5})$, so that $\mathrm{Gal}(K/\mathbf{Q}) = (\mathbf{Z}/2 \mathbf{Z})^2$, and if $$G = (\mathbf{Z}/2) \oplus (\mathbf{Z}/4),$$ then two of the three surjective maps will not correspond to fields $L$ but the third does). Of course the question with the map unspecified is just the question ranging over the finitely many choices of map.
The typical way to study (central) extensions is via class field theory and the Brauer Group, but since everything is in the abelian context that is not really necessary. Instead, make some reductions:
Because abelian groups are canonically direct sums of their $p$-Sylow subgroups, you can immediately reduce to the case where $H$ and $G$ have $p$-power order.
If $G = G' \oplus \Gamma$ where the map $G \rightarrow H$ factors through $G'$, you can replace $G$ by $G'$. That is because if $L'/K$ is an extension with Galois group $G'$, you can always find a disjoint abelian extension with Galois group $\Gamma$ and then take the compositum. From basic facts about $p$-groups (or even more basic facts about abelian groups), this means that we can assume that $G$ and $H$ have the same number of generators, and that $G/p=H/p$.
Explicitly, one should think about the case
$$G = \prod \mathbf{Z}/p^{a_i + b_i} \mathbf{Z}, H = \prod \mathbf{Z}/p^{a_i} \mathbf{Z}.$$
In the version of the problem where the map is specified, we can even reduce to the case where $G$ and $H$ are cyclic. (This requires a small lemma: if $L_i/K_i$ are the extensions, then one wants to make sure that the $L_i$ are all disjoint, since otherwise they only give rise to a group $G'$ surjecting onto $H$ with $G' \subset G$. But any non-trivial intersection of the $L_i$ would contain a field of degree $p$, and the degree $p$ subfields of their compositum correspond to quotients of $G'/p = H/p$ (since $G/p = H/p$), and thus one is OK since the $K_i$ are disjoint.)
This paragraph now contains the key claim: If an $L$ exists (with $G/p = H/p$) then an $L$ exists which has the following property:
$L/\mathbf{Q}$ is only ramified at the (finite) primes where $K/\mathbf{Q}$ is ramified.
We now prove this. It suffices to consider the cyclic case $L = L_i$ and $K = K_i$, since if all the $L_i$ are unramified at $q$ then so is their compositum. So we have a surjection:
$$\mathbf{Z}/p^{a+b} \mathbf{Z} = \mathrm{Gal}(L/\mathbf{Q})\rightarrow \mathrm{Gal}(K/\mathbf{Q}) = \mathbf{Z}/p^b \mathbf{Z}.$$
Suppose that the image of inertia at the prime $q$ has order $p^r$. That means that, for a prime $v$ above $q$, the extension $K_v/\mathbf{Q}_q$ is ramified of degree $p^r$. But local class field theory implies that every unramified extension of $\mathbf{Q}_q$ is contained in an extension of $\mathbf{Q}_q$ containing all $q$-power roots of unity. Moreover, this extension can be "globalized," that is, there exists an extension $E/\mathbf{Q}$ of the same degree $p^r$ which is contained in the $q^{\infty}$ roots of unity and such that $\mathrm{Gal}(E/\mathbf{Q})$ is cyclic of order $p^r$. So now consider the compositum of $L$ and $E$. Note that $L$ and $E$ are disjoint, because any intersection would contain a degree $p$ extension, but (since $L$ is cyclic) it has a unique such extension, which is contained in $K$, and $K$ is unramified at $q$. So now
$$\mathrm{Gal}(L.E/\mathbf{Q}) = \Gamma = \mathbf{Z}/p^{a+b} \mathbf{Z} \oplus\mathbf{Z}/p^{r} \mathbf{Z},$$
and the inertia group $I$ at $q$ is still cyclic of order $p^r$, because $L$ and $E$ locally give the same type of ramified extension. Moreover, the map to $\mathrm{Gal}(K/\mathbf{Q})$ is projection onto the second factor. But this map factors through the quotient by $I$ because $K$ is unramified. And, by construction, the quotient by $I$ is cyclic of order $p^{a+b}$ (generated by $[(1,0)]$). Thus we can replace $L$ by an extension $L'$ which is ramified at exactly the same primes
except for $q$. Repeat with all primes where $L$ is ramified but $K$ is not. This is the crossing with a cyclic extension which gets used in many places, in particular in the proof of the Kronecker-Weber theorem.
Now using the Kronecker-Weber theorem, if $K$ is ramified only at primes dividing $N$, then $K \subset \mathbf{Q}(\zeta_N)$ if $(N,p) = 1$ or $\mathbf{Q}(\zeta_M,\zeta_{p^{\infty}})$ if $N=Mp$. Hence $H$ is canonically a quotient of$$\mathbf{Z}^{\times}_p \times \prod_{q|M} (\mathbf{Z}/q \mathbf{Z})^{\times}$$if $N = pM$ or$$\prod_{q|M} (\mathbf{Z}/q \mathbf{Z})^{\times}$$ otherwise.
So now for $G$ (with the same number of generators) to arise, it is sufficient and necessary that $G$ is also a quotient of this group. (Clearly you can specify the map to $H$ or not if you like.) Of course, you need to understand cyclotomic fields enough to understand what the specific map from the group above to $H$ is (slightly annoying in the case when $p = q = 2$).
Example 1: $H$ cyclic of order $p$, and $G$ cyclic of order $p^{2}$.
Then such an $L$ exists if and only if all of the following are satisfied.
If $K$ is ramified at a prime $q \ne p$, then $q \equiv 1 \mod p^2$. If $K$ is ramified at $p$, and $p = 2$, then $K \otimes \mathbf{Q}_2$ is a ramified quadratic extension of $\mathbf{Q}_2$. The complete list of quadratic extensions of $\mathbf{Q}_2$ is $\mathbf{Q}_2(\sqrt{d})$ with
$$d = -1, -5, 2, 10, -2, -10, 5.$$
The last one is unramified and so doesn't occur in this context; otherwise, the condition is that the localization of $K$ at the prime $2$ is not one of the cases $d = -1, -5, -2, -10$, i.e. it is $d = 2$ or $d = 10$.
Example 2: If $H$ is cyclic of order $p^a$ and $G$ is cyclic of order $p^{a+b}$ with $b > 0$ (pretty much the general case) then you win if and only if:
If $K$ is ramified at a prime $q \ne p$ with $e_q = p^{r}$, then $q \equiv 1 \mod p^{r+b}$. If $K$ is ramified at $p$, and $p = 2$, then the localization of $K$ at the prime $2$ does not contain the quadratic fields corresponding to $d = -1, -5, -2, -10$.
|
You have to prove $2$ things here.
Not path connected: To prove that $A\cup B$ is not path connected, you must find $2$ points for which a path between them does not exist. Since it is easy to see that $A$ and $B$ are path connected, it is best to take one point from $A$ and one from $B$. For example, take $$a=(0,0)\in A,\\b=(1, 0)\in B.$$ Now take any path $$\gamma:[0,1]\to A\cup B$$ with $\gamma(0)=a, \gamma(1)=b$ and try to show that $\gamma$ cannot be continuous. For this, take a look at the last point $t\in[0,1]$ for which $\gamma(t)$ is still in $A$, and show that there exists a $\delta>0$ such that for any $\epsilon>0$ there is some $0<\epsilon'<\epsilon$ for which $\gamma(t+\epsilon')$ is not $\delta$-close to $\gamma(t)$.
Connected: To prove that $A\cup B$ is connected, you must show that open and disjoint nontrivial sets $U,V$ for which $U\cup V=A\cup B$ do not exist, or, equivalently, that there does not exist a nontrivial set $W\subset A\cup B$ which is both open and closed. I suggest you use this second definition and take a set $W$ which is both closed and open in $A\cup B$.
Without loss of generality, you can assume that it contains at least one point in $A$ (otherwise, take the complement of $W$). Now, because $A$ is connected, you can show that $A\subseteq W$. Using the fact that $W$ is closed, meaning that $W$ contains the closure of $A$ in $A\cup B$, should tell you that $W$ contains $B$ as well, meaning $W=A\cup B$.
|
In the integral for surface area, $$\int_a^b\int_c^d |{\bf r}_u\times{\bf r}_v|\,du\,dv,$$ the integrand $|{\bf r}_u\times{\bf r}_v|\,du\,dv$ is the area of a tiny parallelogram, that is, a very small surface area, so it is reasonable to abbreviate it $dS$; then a shortened version of the integral is $$\dint{D} 1\cdot dS.$$ We have already seen that if $D$ is a region in the plane, the area of $D$ may be computed with $$\dint{D} 1\cdot dA,$$ so this is really quite familiar, but the $dS$ hides a little more detail than does $dA$.
Just as we can integrate functions $f(x,y)$ over regions in the plane, using $$\dint{D} f(x,y)\, dA,$$ so we can compute integrals over surfaces in space, using $$\dint{D} f(x,y,z)\, dS.$$ In practice this means that we have a vector function ${\bf r}(u,v)=\langle x(u,v),y(u,v),z(u,v)\rangle$ for the surface, and the integral we compute is $$\int_a^b\int_c^d f(x(u,v),y(u,v),z(u,v))|{\bf r}_u\times{\bf r}_v|\,du\,dv.$$ That is, we express everything in terms of $u$ and $v$, and then we can do an ordinary double integral.
Example 18.7.1 Suppose a thin object occupies the upper hemisphere of $x^2+y^2+z^2=1$ and has density $\sigma(x,y,z)=z$. Find the mass and center of mass of the object. (Note that the object is just a thin shell; it does not occupy the interior of the hemisphere.)
We write the hemisphere as ${\bf r}(\phi,\theta)= \langle \cos\theta\sin\phi, \sin\theta\sin\phi, \cos\phi\rangle$, $0\le\phi\le \pi/2$ and $0\le\theta\le 2\pi$. So ${\bf r}_\theta = \langle -\sin\theta\sin\phi, \cos\theta\sin\phi, 0\rangle$ and ${\bf r}_\phi =\langle \cos\theta\cos\phi, \sin\theta\cos\phi, -\sin\phi\rangle$. Then $${\bf r}_\theta\times{\bf r}_\phi = \langle -\cos\theta\sin^2\phi,-\sin\theta\sin^2\phi,-\cos\phi\sin\phi\rangle$$ and $$ |{\bf r}_\theta\times{\bf r}_\phi| = |\sin\phi| = \sin\phi,$$ since we are interested only in $0\le\phi\le \pi/2$. Finally, the density is $z=\cos\phi$ and the integral for mass is $$\int_0^{2\pi}\int_0^{\pi/2} \cos\phi\sin\phi\,d\phi\,d\theta=\pi.$$
By symmetry, the center of mass is clearly on the $z$-axis, so we only need to find the $z$-coordinate of the center of mass. The moment around the $x$-$y$ plane is $$\int_0^{2\pi}\int_0^{\pi/2} z\cos\phi\sin\phi\,d\phi\,d\theta =\int_0^{2\pi}\int_0^{\pi/2} \cos^2\phi\sin\phi\,d\phi\,d\theta ={2\pi\over 3},$$ so the center of mass is at $(0,0,2/3)$.
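For readers who like to double-check by machine, here is a minimal numerical sketch (Python with SciPy; my addition, not part of the text) evaluating the two iterated integrals:

import numpy as np
from scipy.integrate import dblquad

# dblquad(f, a, b, g, h) integrates f(y, x) with x in [a, b], y in [g, h];
# here x plays the role of theta and y the role of phi.
mass, _ = dblquad(lambda phi, theta: np.cos(phi) * np.sin(phi),
                  0, 2 * np.pi, 0, np.pi / 2)
moment, _ = dblquad(lambda phi, theta: np.cos(phi) ** 2 * np.sin(phi),
                    0, 2 * np.pi, 0, np.pi / 2)
print(mass)           # ~3.14159 = pi
print(moment / mass)  # ~0.66667 = 2/3, the z-coordinate of the center of mass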
Now suppose that ${\bf F}$ is a vector field; imagine that it represents the velocity of some fluid at each point in space. We would like to measure how much fluid is passing through a surface $D$, the
flux across $D$. As usual, we imagine computing the flux across a very small section of the surface, with area $dS$, and then adding up all such small fluxes over $D$ with an integral. Suppose that vector $\bf N$ is a unit normal to the surface at a point; ${\bf F}\cdot{\bf N}$ is the scalar projection of $\bf F$ onto the direction of $\bf N$, so it measures how fast the fluid is moving across the surface. In one unit of time the fluid moving across the surface will fill a volume of ${\bf F}\cdot{\bf N}\,dS$, which is therefore the rate at which the fluid is moving across a small patch of the surface. Thus, the total flux across $D$ is $$\dint{D} {\bf F}\cdot{\bf N}\,dS=\dint{D} {\bf F}\cdot\,d{\bf S},$$ defining $d{\bf S}={\bf N}\,dS$. As usual, certain conditions must be met for this to work out; chief among them is the nature of the surface. As we integrate over the surface, we must choose the normal vectors $\bf N$ in such a way that they point "the same way" through the surface. For example, if the surface is roughly horizontal in orientation, we might want to measure the flux in the "upwards" direction, or if the surface is closed, like a sphere, we might want to measure the flux "outwards" across the surface. In the first case we would choose $\bf N$ to have positive $z$ component, in the second we would make sure that $\bf N$ points away from the origin. Unfortunately, there are surfaces that are not orientable: they have only one side, so that it is not possible to choose the normal vectors to point in the "same way" through the surface. The most famous such surface is the Möbius strip shown in figure 18.7.1. It is quite easy to make such a strip with a piece of paper and some tape. If you have never done this, it is quite instructive; in particular, you should draw a line down the center of the strip until you return to your starting point. No matter how unit normal vectors are assigned to the points of the Möbius strip, there will be normal vectors very close to each other pointing in opposite directions.
Assuming that the quantities involved are well behaved, however, the flux of the vector field across the surface ${\bf r}(u,v)$ is $$\dint{D} {\bf F}\cdot{\bf N}\,dS =\dint{D}{\bf F}\cdot {{\bf r}_u\times{\bf r}_v\over|{\bf r}_u\times{\bf r}_v|} |{\bf r}_u\times{\bf r}_v|\,dA =\dint{D}{\bf F}\cdot ({\bf r}_u\times{\bf r}_v)\,dA.$$ In practice, we may have to use ${\bf r}_v\times{\bf r}_u$ or even something a bit more complicated to make sure that the normal vector points in the desired direction.
Example 18.7.2 Compute the flux of ${\bf F}=\langle x,y,z^4\rangle$ across the cone $z=\sqrt{x^2+y^2}$, $0\le z\le 1$, in the downward direction.
We write the cone as a vector function: ${\bf r}=\langle v\cos u, v\sin u, v\rangle$, $0\le u\le 2\pi$ and $0\le v\le 1$. Then ${\bf r}_u=\langle -v\sin u, v\cos u,0\rangle$ and ${\bf r}_v=\langle \cos u, \sin u, 1\rangle$ and ${\bf r}_u\times{\bf r}_v=\langle v\cos u,v\sin u,-v\rangle$. The third coordinate $-v$ is negative, which is exactly what we desire, that is, the normal vector points down through the surface. Then $$\eqalign{ \int_0^{2\pi}\int_0^1 \langle x,y,z^4\rangle\cdot\langle v\cos u,v\sin u,-v\rangle \,dv\,du &=\int_0^{2\pi}\int_0^1 xv\cos u+yv\sin u-z^4v\,dv\,du\cr &=\int_0^{2\pi}\int_0^1 v^2\cos^2 u+ v^2\sin^2 u-v^5\,dv\,du\cr &=\int_0^{2\pi}\int_0^1 v^2-v^5\,dv\,du={\pi\over3}.\cr }$$
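The same numerical check works here; a sketch under the same assumptions as before:

import numpy as np
from scipy.integrate import dblquad

# integrand F . (r_u x r_v) = v^2 - v^5, with u in [0, 2pi] and v in [0, 1]
flux, _ = dblquad(lambda v, u: v**2 - v**5, 0, 2 * np.pi, 0, 1)
print(flux, np.pi / 3)   # both ~1.04720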
Exercises 18.7
Ex 18.7.1 Find the center of mass of an object that occupies the upper hemisphere of $x^2+y^2+z^2=1$ and has density $x^2+y^2$.

Ex 18.7.2 Find the center of mass of an object that occupies the surface $z=xy$, $0\le x\le1$, $0\le y\le 1$ and has density $\sqrt{1+x^2+y^2}$.

Ex 18.7.3 Find the center of mass of an object that occupies the surface $\ds z=\sqrt{x^2+y^2}$, $1\le z\le4$ and has density $x^2z$.

Ex 18.7.4 Find the centroid of the surface of a right circular cone of height $h$ and base radius $r$, not including the base.

Ex 18.7.5 Evaluate $\ds \dint{D} \langle 2,-3,4\rangle\cdot {\bf N}\,dS$, where $D$ is given by $z=x^2+y^2$, $-1\le x\le 1$, $-1\le y\le 1$, oriented up.

Ex 18.7.6 Evaluate $\ds \dint{D} \langle x,y,3\rangle\cdot {\bf N}\,dS$, where $D$ is given by $z=3x-5y$, $1\le x\le 2$, $0\le y\le 2$, oriented up.

Ex 18.7.7 Evaluate $\ds \dint{D} \langle x,y,-2\rangle\cdot {\bf N}\,dS$, where $D$ is given by $z=1-x^2-y^2$, $x^2+y^2\le1$, oriented up.

Ex 18.7.8 Evaluate $\ds \dint{D} \langle xy, yz,zx\rangle\cdot {\bf N}\,dS$, where $D$ is given by $z=x+y^2+2$, $0\le x\le 1$, $x\le y\le 1$, oriented up.

Ex 18.7.9 Evaluate $\ds \dint{D} \langle e^x, e^y,z\rangle\cdot {\bf N}\,dS$, where $D$ is given by $z=xy$, $0\le x\le 1$, $-x\le y\le x$, oriented up.

Ex 18.7.10 Evaluate $\ds \dint{D} \langle xz,yz,z\rangle\cdot {\bf N}\,dS$, where $D$ is given by $z=a^2-x^2-y^2$, $x^2+y^2\le b^2$, oriented up.

Ex 18.7.11 A fluid has density 870 kg/m$^3$ and flows with velocity ${\bf v} = \langle z,y^2,x^2\rangle$, where distances are in meters and the components of ${\bf v}$ are in meters per second. Find the rate of flow outward through the portion of the cylinder $x^2+y^2 = 4$, $0\leq z\leq 1$ for which $y\ge 0$.

Ex 18.7.12 Gauss's Law says that the net charge, $Q$, enclosed by a closed surface, $S$, is $$Q=\epsilon_0 \dint{} {\bf E}\cdot {\bf N}\,dS$$ where ${\bf E}$ is an electric field and $\epsilon_0$ (the permittivity of free space) is a known constant; $\bf N$ is oriented outward. Use Gauss's Law to find the charge contained in the cube with vertices $(\pm 1, \pm 1, \pm 1)$ if the electric field is ${\bf E} = \langle x,y,z\rangle$.
|
In Exercises 1–11 a direction field is drawn for the given equation. Sketch some integral curves. In Exercises 1.3.12–1.3.22, construct a direction field and plot some integral curves in the indicated rectangular region.
Exercise 1.3.12: \(y'=y(y-1); \quad \{-1\le x\le 2,\ -2\le y\le2\}\)

Exercise 1.3.13: \(y'=2-3xy; \quad \{-1\le x\le 4,\ -4\le y\le4\}\)

Exercise 1.3.14: \(y'=xy(y-1); \quad \{-2\le x\le2,\ -4\le y\le 4\}\)

Exercise 1.3.15: \(y'=3x+y; \quad \{-2\le x\le2,\ 0\le y\le 4\}\)

Exercise 1.3.16: \(y'=y-x^3; \quad \{-2\le x\le2,\ -2\le y\le 2\}\)

Exercise 1.3.17: \(y'=1-x^2-y^2; \quad \{-2\le x\le2,\ -2\le y\le 2\}\)

Exercise 1.3.18: \(y'=x(y^2-1); \quad \{-3\le x\le3,\ -3\le y\le 2\}\)

Exercise 1.3.19: \(y'= {x\over y(y^2-1)}; \quad \{-2\le x\le2,\ -2\le y\le 2\}\)

Exercise 1.3.20: \(y'= {xy^2\over y-1}; \quad \{-2\le x\le2,\ -1\le y\le 4\}\)

Exercise 1.3.21: \(y'= {x(y^2-1)\over y}; \quad \{-1\le x\le1,\ -2\le y\le 2\}\)

Exercise 1.3.22: \(y'=- {x^2+y^2\over1-x^2-y^2}; \quad \{-2\le x\le2,\ -2\le y\le 2\}\)
Exercise 1.3.23: By suitably renaming the constants and dependent variables in the equations \[T' = -k(T-T_m) \eqno{\rm(A)}\]
and \[G'=-\lambda G+r \eqno{\rm(B)}\] discussed in Section 1.2 in connection with Newton’s law of cooling and absorption of glucose in the body, we can write both as \[y'=- ay+b, \eqno{\rm(C)}\] where \(a\) is a positive constant and \(b\) is an arbitrary constant. Thus, (A) is of the form (C) with \(y=T\), \(a=k\), and \(b=kT_m\), and (B) is of the form (C) with \(y=G\), \(a=\lambda\), and \(b=r\). We’ll encounter equations of the form (C) in many other applications in Chapter 2.
Choose a positive \(a\) and an arbitrary \(b\). Construct a direction field and plot some integral curves for (C) in a rectangular region of the form \[\{0\le t\le T,\ c\le y\le d\}\]
of the \(ty\)-plane. Vary \(T\), \(c\), and \(d\) until you discover a common property of all the solutions of (C). Repeat this experiment with various choices of \(a\) and \(b\) until you can state this property precisely in terms of \(a\) and \(b\).
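One possible computational sketch for this experiment (Python with NumPy/Matplotlib; the particular \(a\), \(b\), and window are arbitrary choices of mine, and the closed-form solution used for the curves is the standard one for a first order linear equation):

import numpy as np
import matplotlib.pyplot as plt

a, b = 1.0, 2.0                         # any a > 0 and arbitrary b will do
t, y = np.meshgrid(np.linspace(0, 5, 21), np.linspace(-1, 4, 21))
slope = -a * y + b                      # y' = -a*y + b at each grid point
norm = np.hypot(1.0, slope)             # normalize arrows for a readable field
plt.quiver(t, y, 1.0 / norm, slope / norm, angles='xy')

ts = np.linspace(0, 5, 200)
for y0 in (-1.0, 0.0, 2.0, 4.0):        # integral curves y = b/a + (y0 - b/a) e^{-a t}
    plt.plot(ts, b / a + (y0 - b / a) * np.exp(-a * ts))
plt.xlabel('t'); plt.ylabel('y'); plt.show()

Experiments like this suggest the common property the exercise is driving at: all solutions approach the same constant value (here \(b/a\)) as \(t\) increases.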
Exercise 1.3.24: By suitably renaming the constants and dependent variables in the equations \[P'=aP(1-\alpha P) \eqno{\rm(A)}\]
and \[I'=rI(S-I) \eqno{\rm(B)}\] discussed in Section 1.1 in connection with Verhulst's population model and the spread of an epidemic, we can write both in the form \[y'=ay-by^2, \eqno{\rm(C)}\] where \(a\) and \(b\) are positive constants. Thus, (A) is of the form (C) with \(y=P\), \(a=a\), and \(b=a\alpha\), and (B) is of the form (C) with \(y=I\), \(a=rS\), and \(b=r\). In Chapter 2 we'll encounter equations of the form (C) in other applications.
Choose positive numbers \(a\) and \(b\). Construct a direction field and plot some integral curves for (C) in a rectangular region of the form \[\{0\le t\le T,\ 0\le y\le d\}\]
of the \(ty\)-plane. Vary \(T\) and \(d\) until you discover a common property of all solutions of (C) with \(y(0)>0\). Repeat this experiment with various choices of \(a\) and \(b\) until you can state this property precisely in terms of \(a\) and \(b\).
Choose positive numbers \(a\) and \(b\). Construct a direction field and plot some integral curves for (C) in a rectangular region of the form \[\{0\le t\le T,\ c\le y\le 0\}\]
of the \(ty\)-plane. Vary \(a\), \(b\), \(T\) and \(c\) until you discover a common property of all solutions of (C) with \(y(0)<0\).
You can verify your results later by doing Exercise 2.2.27.
IN THIS CHAPTER we study first order equations for which there are general methods of solution.
SECTION 2.1 deals with linear equations, the simplest kind of first order equations. In this section we introduce the method of variation of parameters. The idea underlying this method will be a unifying theme for our approach to solving many different kinds of differential equations throughout the book.
SECTION 2.2 deals with separable equations, the simplest nonlinear equations. In this section we introduce the idea of implicit and constant solutions of differential equations, and we point out some differences between the properties of linear and nonlinear equations.
SECTION 2.3 discusses existence and uniqueness of solutions of nonlinear equations. Although it may seem logical to place this section before Section 2.2, we presented Section 2.2 first so we could have illustrative examples in Section 2.3.
SECTION 2.4 deals with nonlinear equations that are not separable, but can be transformed into separable equations by a procedure similar to variation of parameters.
SECTION 2.5 covers exact differential equations, which are given this name because the method for solving them uses the idea of an exact differential from calculus.
SECTION 2.6 deals with equations that are not exact, but can be made exact by multiplying them by a function known as an
integrating factor.
A first order differential equation is said to be
linear if it can be written as
\[\label{eq:2.1.1} y'+p(x)y=f(x).\]
A first order differential equation that can’t be written like this is
nonlinear. We say that Equation \ref{eq:2.1.1} is homogeneous if \(f\equiv0\); otherwise it is nonhomogeneous. Since \(y\equiv0\) is obviously a solution of the homogeneous equation \[y'+p(x)y=0,\]
we call it the
trivial solution. Any other solution is nontrivial.
Example 2.1.1: The first order equations \[\begin{aligned} x^2y'+3y&=&x^2,\\[2\jot] xy'-8x^2y&=&\sin x,\\ xy'+(\ln x)y&=&0,\\ y'&=&x^2y - 2,\end{aligned}\]
are not in the form Equation \ref{eq:2.1.1}, but they are linear, since they can be rewritten as \[\begin{aligned} y'+{3\over x^2}y&=&1,\\ y'-8xy&=&{\sin x\over x},\\ y'+{\ln x\over x}y&=&0,\\ y'-x^2y&=&-2.\end{aligned}\]
Example 2.1.2: Here are some nonlinear first order equations:
|
I'm trying to learn the basics of homotopy theory by studying from Strom's Modern Classical Homotopy Theory, a book in which the proof of every proposition presented is given as a guided exercise to the reader.
Right now I'm working with homotopy colimits, and I've been stuck with Project 6.26 at page 154 since this morning.
In this problem it is asked to study the set $\mathcal{S}(f)$ of homotopy classes of maps $\Sigma X\rightarrow\Sigma Y$ induced on homotopy colimits by the natural transformation $$ \require{AMScd} \begin{CD} \ast @<<< X @>>> \ast\\ @VVV @V{f}VV @VVV \\ \ast @<<< Y @>>>\ast \end{CD} $$ As suggested by the author, the two maps $\Sigma f$ and $-\Sigma f$ can be obviously found just by taking the cofibrant replacements $$ \require{AMScd} \begin{CD} C_{-}X @<<< X @>>> C_{+}X\\ @V{Cf}VV @V{f}VV @VV{Cf}V \\ C_{-}Y @<<< Y @>>> C_{+}Y \end{CD} $$ and $$ \require{AMScd} \begin{CD} C_{+}X @<<< X @>>> C_{-}X\\ @V{Cf}VV @V{f}VV @VV{Cf}V \\ C_{-}Y @<<< Y @>>> C_{+}Y \end{CD} $$ which may even be in the same homotopy class.
My question is: is there any other element in $\mathcal{S}(f)$? If not, how can I prove that?
|
In March, ATLAS has moderately excited us with a 3-sigma SUSY-like excess in final states with a lepton pair, jets, and MET. At the end of another blog post, I mentioned a paper explaining it in terms of a light sbottom.
There's a different explanation on the arXiv now, extending the proposal by Ellwanger from 3 weeks ago. It's been a lot of time after the Big Bang so Shang, Yang, Yang Zhang, and Cao (no kidding) have hung a new paper on the arXiv.
As I mentioned in the context of the sub-10-GeV, (after LUX) no longer convincing dark matter particle, the NMSSM is the "second simplest" realistic supersymmetric quantum field theory.
The acronym stands for the Next to Minimal Supersymmetric Standard Model – next to MSSM. Recall that the MSSM adds superpartners to all known particles. On top of that, it has to double the number of Higgs doublets – two doublets instead of one doublet – to cancel the new anomalies from the higgsinos and to allow masses for both types of quarks (in agreement with SUSY).
Consequently, there are (eight minus three) five Higgs scalars instead of the one (four minus three) Higgs scalar we know from the Standard Model. Also, there has to be a new interaction between the two Higgs doublets, namely the quadratic superpotential \[
W = \mu H_u H_d
\] which couples the up-type and down-type Higgs doublets. Superpotentials in \(d=4\) have the units of "cubic mass" so you see that each factor, including \(\mu\) itself, has to have the units of mass. For naturalness, this Higgsino \(\mu\) parameter should be comparable to the electroweak scale.
One may argue that such superrenormalizable interactions – multiplicative coefficients with the units of positive powers of mass – are unnatural if treated as perfect constants. For a similar example, note that \(m\bar\psi \psi\) are the mass terms for the leptons and quarks but we know that these mass terms actually arise from the cubic (Yukawa) interactions with the Higgs field, via \(m=y v=y\langle h\rangle\). The Higgs field "secretly" sits in the coefficient \(m\) for the mass.
You could say that in analogy with that, there should be a field hiding in the constant parameter \(\mu\), too. Well, that's exactly what the NMSSM does. It promotes \(\mu\) into a new electroweak "singlet" superfield called \(S\) which contains one complex scalar (bosonic) field (which may be split to one new real CP-even mode and one new real CP-odd mode) as well as a new fermionic field, the singlino. The boson in \(S\) gets a vev and it behaves like \(\mu\). But we also have the new particle species from the chiral multiplet with their new interactions.
The replacement of the constant \(\mu\) by the field \(S\) with a vev is also natural if you are attracted by the idea of some "scale/conformal invariance" of the right quantum field theory that is only broken dynamically. Note that similar explanations have been proposed for the observation that the Higgs boson mass is on the verge of the instability, too.
For some time, it's been pointed out by some people that the NMSSM may also make superpartners lighter – and therefore more natural – without contradicting the (almost completely) null results from the LHC so far. See e.g. this paper. SUSY is able to hide more easily mainly because the lightest superpartner, the LSP, is the singlino and there are longer decay chains which distribute the energy to various intermediate products in the chain and leave less energy for the singlino (for the missing transverse energy) at the end of the decay chain. Most search strategies for superpartners are based on assuming "lots of missing transverse energy" which is why they may be failing.
At any rate, their NMSSM explanation of the ATLAS excess seems simple. ATLAS is actually producing gluino pairs where the gluino mass is just \(650\GeV\) in the optimum case. And each gluino decays to\[
\tilde g \to q\bar q \tilde \chi^0_2
\] where the second lightest superpartner \(\tilde\chi^0_2\) is mostly bino and its mass is \(565\GeV\) in the optimum case. This bino usually decays to\[
\tilde\chi^0_2 \to Z \tilde \chi^0_1
\] the mostly singlino \(\tilde \chi^0_1\) lightest superpartner of (optimum) mass \(465\GeV\) and a Z-boson. That's why we get some additional final states with the Z-boson, and therefore the excess. One should better avoid a similar decay with a Higgs boson replacing Z above – because the correspondingly enhanced Higgs peak isn't seen. This unwanted decay of \(\tilde \chi^0_2\) to a Higgs boson may be avoided if the mass difference\[
m(\tilde \chi^0_2) - m(\tilde \chi^0_1)
\] is strictly between the Z-boson mass and the (larger) Higgs boson mass.
All these things are pretty cute and natural. The second simplest supersymmetric model – which may be the most natural one at low energies, for various reasons – may explain an excess and keep almost all the superpartners below \(1\TeV\), too.
Those readers who like numerology may be intrigued by the numbers (slightly more than) \(560\GeV\) and \(650\GeV\) above. If you read all 2014 TRF articles, you will see that CMS saw a hint of \(650\GeV\) leptoquarks and a \(560\GeV\) CP-odd Higgs boson.
Such numerical agreements look cool but I believe that the NMSSM neutralinos and gluinos which would explain the ATLAS excess can't leave the same traces as the leptoquarks and CP-odd bosons from the CMS excesses so the agreements are coincidences. Well, most likely, all these excesses are coincidences by themselves. But we may always be surprised.
If you fall in love with the NMSSM, you may also want to hear that CMS has seen traces of a second Higgs boson at \(136.5\GeV\).
|
pandas.DataFrame.ewm
DataFrame.ewm(com=None, span=None, halflife=None, alpha=None, min_periods=0, freq=None, adjust=True, ignore_na=False, axis=0)
Provides exponential weighted functions
New in version 0.18.0.
Parameters: com: float, optional
Specify decay in terms of center of mass, \(\alpha = 1 / (1 + com),\text{ for } com \geq 0\)
span: float, optional
Specify decay in terms of span, \(\alpha = 2 / (span + 1),\text{ for } span \geq 1\)
halflife: float, optional
Specify decay in terms of half-life, \(\alpha = 1 - exp(log(0.5) / halflife),\text{ for } halflife > 0\)
alpha: float, optional
Specify smoothing factor \(\alpha\) directly, \(0 < \alpha \leq 1\)
New in version 0.18.0.
min_periods: int, default 0
Minimum number of observations in window required to have a value (otherwise result is NA).
freq: None or string alias / date offset object, default=None (DEPRECATED)
Frequency to conform to before computing statistic
adjust: boolean, default True
Divide by decaying adjustment factor in beginning periods to account for imbalance in relative weightings (viewing EWMA as a moving average)
ignore_na: boolean, default False
Ignore missing values when calculating weights; specify True to reproduce pre-0.15.0 behavior
Returns:
a Window sub-classed for the particular operation
Notes
Exactly one of center of mass, span, half-life, and alpha must be provided. Allowed values and relationship between the parameters are specified in the parameter descriptions above; see the link at the end of this section for a detailed explanation.
The freq keyword is used to conform time series data to a specified frequency by resampling the data. This is done with the default parameters of
resample() (i.e. using the mean).
When adjust is True (default), weighted averages are calculated using weights (1-alpha)**(n-1), (1-alpha)**(n-2), ..., 1-alpha, 1.
When adjust is False, weighted averages are calculated recursively as: weighted_average[0] = arg[0]; weighted_average[i] = (1-alpha)*weighted_average[i-1] + alpha*arg[i].
When ignore_na is False (default), weights are based on absolute positions. For example, the weights of x and y used in calculating the final weighted average of [x, None, y] are (1-alpha)**2 and 1 (if adjust is True), and (1-alpha)**2 and alpha (if adjust is False).
When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based on relative positions. For example, the weights of x and y used in calculating the final weighted average of [x, None, y] are 1-alpha and 1 (if adjust is True), and 1-alpha and alpha (if adjust is False).
More details can be found at http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-windows
Examples
>>> df = DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
     B
0  0.0
1  1.0
2  2.0
3  NaN
4  4.0
>>> df.ewm(com=0.5).mean()
          B
0  0.000000
1  0.750000
2  1.615385
3  1.615385
4  3.670213
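As an additional illustration of the recursive form used when adjust=False (described in the Notes above), here is a short sketch (plain pandas/NumPy; the series values and com=0.5 are arbitrary choices of mine):

import numpy as np
import pandas as pd

s = pd.Series([0.0, 1.0, 2.0, 3.0, 4.0])
alpha = 1.0 / (1.0 + 0.5)            # com = 0.5  ->  alpha = 1/(1+com)

# y[0] = x[0];  y[i] = (1-alpha)*y[i-1] + alpha*x[i]
y = np.empty(len(s))
y[0] = s.iloc[0]
for i in range(1, len(s)):
    y[i] = (1 - alpha) * y[i - 1] + alpha * s.iloc[i]

print(np.allclose(y, s.ewm(com=0.5, adjust=False).mean()))   # True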
|
Here I describe the discreet Laplace-Beltrami operator for a triangle mesh and how to derive the Laplacian matrix for that mesh. Then I provide [ C++ code ] to compute harmonic weights over a triangular mesh by solving the Laplace equation.
Motivation
The goal here is to present the Laplace operator with a concrete application, that is, computing harmonic weights over a triangle mesh. In geometry processing the Laplace operator \( \Delta \) is an extremely powerful tool. For instance, it can be handy to compute or process weight maps.
A weight map is a function \( f(u, v) \) that associates to a point of a surface a real value, \( f(u, v): \mathbb R^2 \rightarrow \mathbb R\). Usually the weight map is discrete, meaning that if our surface is represented with triangles and vertices we can associate each vertex with a single scalar value (\(f_0=0.1\), \(f_1=-0.3\), etc.):

There are numerous occasions where you would want to associate some weight to your vertices, like setting a mass to each vertex for physics simulations, a height to create wrinkles as in bump maps, or a joint influence strength as in skin weights. We can make use of the Laplace operator \( \Delta \) to generate weight maps \( f \). We can also smooth or process them in many ways. Here I present how to generate bounded harmonic weights by solving the Laplace equation.
Discrete formulation (per vertex)
Let's remember the discrete formulation of the Laplace operator over a triangle mesh at a single vertex \( i \):
$$ \Delta f( \vec v_i) = \frac{1} {2A_i} \sum\limits_{{j \in \mathcal N(i)}} { (\cot \alpha_{ij} + \cot \beta_{ij}) (f_j - f_i) } $$
\( {j \in \mathcal N(i)} \) is the set of vertices directly adjacent to the vertex \( i \); \(\cot \alpha_{ij} + \cot \beta_{ij}\) are the infamous cotangent weights (real values); other types of weights exist but those are the most popular. \( f_i, f_j, \ldots\) are the scalar weights at the vertices. \( A_i \) is the cell area (a real value). Several solutions exist to measure this quantity:
Barycentric cell area: Probably the simplest way to compute \( A_i \): just sum one third of the areas of the triangles adjacent to \(i\):

$$A_i = \frac{1}{3} \sum\limits_{{T_j \in \mathcal N(i)}} {area(T_j)}$$
Mixed Voronoi cell area: a more accurate alternative, which falls back to simple fractions of the triangle area when the Voronoi region is inappropriate (obtuse triangles):

float cell_area = 0 // A_i
for( each triangle tri_j from the 1-ring neighborhood of vertex i )
    if( tri_j is non-obtuse) // Voronoi safe
        cell_area += voronoi region of i in tri_j
    else // Voronoi inappropriate
        if( the angle of 'tri_j' at 'i' is obtuse)
            cell_area += area(tri_j)/2.0
        else
            cell_area += area(tri_j)/4.0
The Voronoi region of a triangle with vertex \(\vec p\),\(\vec q\) and \(\vec r\):
$$\frac{1}{8} (\|\vec p - \vec r\|^2 \cot( \text{angle at } \vec q) + \|\vec p - \vec q\|^2 \cot( \text{angle at } \vec r) )$$
Matrix representation (development)
Here things get interesting: we will represent \( \Delta f( \vec v_i) \) at every vertex using matrices, so that \( \Delta f = \mathbf A \mathbf x\) with \( \mathbf x = \{ f_0, f_1 \ldots f_i \ldots \} \). We will be able to re-use this matrix in many FEM (Finite Element Method) applications and solve PDEs (Partial Differential Equations). For brevity let's define:

\( w_{ij} = \frac{1}{2} (\cot \alpha_{ij} + \cot \beta_{ij})\) and \( w_{sum_i} = \sum\limits_{{j \in \mathcal N(i)}} { w_{ij} } \)
We start from the per-vertex definition and refactor the expression:
$$
\begin{array}{lll} \Delta f( \vec v_i) & = & \frac{1} {A_i} \sum\limits_{{j \in \mathcal N(i)}} { w_{ij} (f_j - f_i) } \\ & = & \frac{1} {A_i} \sum\limits_{{j \in \mathcal N(i)}} { w_{ij} (f_j) } - \frac{1} {A_i} \left ( \sum\limits_{{j \in \mathcal N(i)}} { w_{ij} } \right ) (f_i) \\ & = & \frac{1} {A_i} \sum\limits_{{j \in \mathcal N(i)}} { w_{ij} (f_j) } - \frac{1} {A_i} w_{sum_i} (f_i) \\ \end{array} $$ Now that we properly isolated our \( f_i \) and \( f_j\) (really our \(\mathbf x\)) we can infer the general matrix representation for each vertices (each matrix has the same dimension \( \mathbb R^{n \times n} \).): $$ \mathbf A \mathbf x = \overbrace{ {\mathbf M}^{-1} }^{\frac{1} {A_i}} \quad \overbrace{ \mathbf L_{w_{ij}} \mathbf x }^{ \sum\limits_{{j \in \mathcal N(i)}} w_{ij} f_j } - {\mathbf M}^{-1} \overbrace{ \mathbf L_{w_{sum}} \mathbf x }^{ \sum\limits_{{j \in \mathcal N(i)}} w_{ij} f_i } $$ A example of expansion: $$ \begin{bmatrix} A_0 & 0 & 0 \\ 0 & A_1 & 0 \\ 0 & 0 & A_2\\ \end{bmatrix}^{-1} \begin{bmatrix} 0 & w_{01} & w_{02} \\ w_{10} & 0 & w_{12} \\ w_{20} & w_{21} & 0 \\ \end{bmatrix} \begin{bmatrix} f_0 \\ f_1 \\ f_2 \\ \end{bmatrix} - \begin{bmatrix} A_0 & 0 & 0 \\ 0 & A_1 & 0 \\ 0 & 0 & A_2\\ \end{bmatrix}^{-1} \begin{bmatrix} w_{sum_0} & 0 & 0 \\ 0 & w_{sum_1} & 0 \\ 0 & 0 &w_{sum_2}\\ \end{bmatrix} \begin{bmatrix} f_0 \\ f_1 \\ f_2 \\ \end{bmatrix} $$
The Mass matrix \( \mathbf M \) is a diagonal matrix that stores the cell area of each vertex; elements outside the diagonal are null:
$$
\mathbf M = \begin{bmatrix} A_0 & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & A_n\\ \end{bmatrix} $$ The matrix \( \mathbf L_{w_{sum}} \) is also a diagonal matrix that stores the sum of the cotangent weights at each vertex \( i \):
$$
\mathbf L_{w_{sum}} = \begin{bmatrix} w_{sum_0} & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 &w_{sum_n}\\ \end{bmatrix} $$ Lastly the Adjacency matrix \( \mathbf L_{w_{ij}} \) is symetric and each row \( i \) contains the list of adjacent vertex to weights \( w_{ij} \) to \( i \):
$$
\mathbf L_{w_{ij}} =
\begin{bmatrix}
0 & w_{01} & w_{02} & w_{03} & 0 & 0 \\
w_{10} & 0 & w_{12} & w_{13} & 0 & 0 \\
w_{20} & w_{21} & 0 & w_{23} & w_{24} & 0 \\
w_{30} & w_{31} & w_{32} & 0 & w_{34} & w_{35} \\
0 & 0 & w_{42} & w_{43} & 0 & w_{45} \\
0 & 0 & 0 & w_{53} & w_{54} & 0
\end{bmatrix}
$$
$$
\begin{array}{lll} \mathbf A \mathbf x & = & {\mathbf M}^{-1} \mathbf L_{w_{ij}} \mathbf x - {\mathbf M}^{-1} \mathbf L_{w_{sum}} \mathbf x \\ & = & ( {\mathbf M}^{-1} \mathbf L_{w_{ij}} - {\mathbf M}^{-1} \mathbf L_{w_{sum}}) \mathbf x \\ & = & {\mathbf M}^{-1} (\mathbf L_{w_{ij}} - \mathbf L_{w_{sum}}) \mathbf x \\ \end{array} $$ What is commonly called the Laplacian matrix in the literature of geometry processing is: $$\mathbf L = \mathbf L_{w_{ij}} - \mathbf L_{w_{sum}}$$
$$
\mathbf L_{ij} = \left \{ \begin{matrix} -w_{sum_i} & = & -\sum\limits_{{j \in \mathcal N(i)}} { w_{ij} } & \text{ when } i = j \\ w_{ij} & = & \frac{1}{2} (\cot \alpha_{ij} + \cot \beta_{ij}) & \text{ if } i \text{ adjacent to } j \\ 0 & & & \text{ otherwise } \\ \end{matrix} \right . $$
On the other hand the
Laplacian *operator* is defined as: $$\mathbf {\Delta f} = {\mathbf M}^{-1} \mathbf L$$
Finally be aware that the general Laplacian matrix as defined in the graph theory (as opposed to geometry processing) is usually defined as \( -\mathbf L \)
Harmonic weights
To find the values \( \mathbf x = \{ \ldots f_i \ldots \} \) that describe a harmonic function (as linear as possible in the x, y, z coordinates), we must solve the Laplace equation:
$$
\begin{array}{lll} \mathbf {\Delta f} & = & 0 \\ {\mathbf M}^{-1} \mathbf L \mathbf x & = & [ 0 ] \\ \mathbf L \mathbf x & = & [ 0 ] \\ \end{array} $$ \( [ 0 ] \in \mathbb R^n \) is a column vector of zeros. The nice thing is that since \( \mathbf L \) is symmetric by construction, multiplying through by \( \mathbf M \) makes the linear system of equations simpler. \( \mathbf L \) is also a sparse matrix, so we can take advantage of sparse solvers to solve the system even quicker.
Boundary conditions: you need to set a closed region such as:
TODO image
To fix a value \( v \) at a specific vertex \( i \), simply set the row \( i \) to 0 and the diagonal element \( \mathbf L_{i,i} = 1\). Then, in the column vector of zeros, set the entry corresponding to the \(i\)th vertex to \( v \).
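Before the C++ version below, here is a minimal Python/SciPy sketch of the same boundary-condition recipe (my own illustration; to stay tiny it uses a path graph with uniform weights as a stand-in for a triangle mesh with cotangent weights):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 5
L = sp.lil_matrix((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:   # edges of a tiny 'mesh'
    w = 1.0                                     # uniform stand-in for a cotan weight
    L[i, j] += w; L[j, i] += w
    L[i, i] -= w; L[j, j] -= w

rhs = np.zeros(n)
for i, v in [(0, 0.0), (4, 1.0)]:               # fix f_0 = 0 and f_4 = 1
    L[i, :] = 0.0                               # zero out row i
    L[i, i] = 1.0                               # diagonal element to 1
    rhs[i] = v                                  # right-hand side to the fixed value

f = spla.spsolve(L.tocsr(), rhs)
print(f)   # [0.  0.25  0.5  0.75  1.] -- the harmonic (here linear) ramp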
Note: go here to find harmonic weights on a regularly spaced grid.
C++ Code
I provide [ code ] to solve the Laplace equation with Eigen; results are displayed with freeGLUT. In
solve_laplace_equation.cpp you will find code similar to the snippet below, which builds the Laplacian matrix with cotan weights, then sets the boundary conditions by zeroing out rows of the final matrix, setting each such diagonal element to 1, and setting the right hand side of the equation to the desired value.
If your mesh is represented with a half-edge data structure (each vertex knows its direct neighbors), the pseudocode to compute \( \mathbf L \) is:
// angle(i,j,t) -> angle at the vertex opposite to the edge (i,j)
for(int i : vertices) {
    for(int j : one_ring(i)) {
        w = 0;
        for(int t : triangle_on_edge(i,j))
            w += cot(angle(i,j,t));
        L(i,j) = w;
        L(i,i) -= w; // accumulates -w_sum_i over the one-ring
    }
}
On the other hand the Laplacian \( \mathbf L \) may be built by summing together contributions for each triangle, this way only the list of triangles is needed:
for(triangle t : triangles) {
    for(edge i,j : t) {
        w = cot(angle(i,j,t));
        L(i,j) += w;
        L(j,i) += w;
        L(i,i) -= w;
        L(j,j) -= w;
    }
}

References:
Book: Polygon Mesh Processing, Mario Botsch et al.
Discrete Differential-Geometry Operators for Triangulated 2-Manifolds, Meyer et al.
Laplace-Beltrami: The Swiss Army Knife of Geometry Processing
|
Is there a not identically zero, real-analytic function $f:\mathbb{R}\rightarrow\mathbb{R}$, which satisfies
$$f(n)=f^{(n)}(0),\quad n\in\mathbb{N} \text{ or } \mathbb N^+?$$
What I got so far:
Set
$$f(x)=\sum_{n=0}^\infty\frac{a_n}{n!}x^n,$$
then for $n=0$ this works anyway and else we have
$$a_n=f^{(n)}(0)=f(n)=\sum_{k=0}^\infty\frac{a_k}{k!}n^k.$$
Now $a_1=\sum_{k=0}^\infty\frac{a_k}{k!}1^k=a_0+a_1+\sum_{k=2}^\infty\frac{a_k}{k!},$ so
$$\sum_{k=2}^\infty\frac{a_k}{k!}=-a_0.$$
For $n=2$ we find
$$a_2=\sum_{k=0}^\infty\frac{a_k}{k!}2^k=a_0+a_1+2a_2+\sum_{k=3}^\infty\frac{a_k}{k!}2^k.$$
The first case was somehow special since $a_1$ cancelled out, but now I have to juggle around with more and more stuff.
I could express $a_1$ in terms of the higher $a$'s, and then for $n=3$ search for $a_2$ and so on. I didn't get far, however. Is there a closed expression? My plan was to argue somehow that, if I find such an algorithm to express $a$'s in terms of higher $a$'s, then, in the limit, the series of remaining sums would go to $0$ and I'd eventually find my function.
Or maybe there is a better approach to such a problem.
|
If you design trading strategies in a $ \sigma $-algebra space $ \mathcal {N}(\mu , \sigma^2) $, meaning using averages and standard deviations for data analysis and trading decisions, then all you will see will be confined within that space. It implies that you will be dealing most often with a normal distribution of something of your own making. This allows setting up stochastic processes with defined properties, things like mean-reversion and quasi-martingales. But it also reduces the value of outliers in the equation. They will have been smoothed out over the chosen look-back period. It will also ignore any pre-look-back-period data for the simple reason that none of it will be taken into account.
There is a big problem with this point of view when applied to stock prices as these do not quite stay within those boundaries. The data itself is more complicated and quite often moves out of the confines of that self-made box. For example, a Paretian distribution (which would better represent stock prices) will have fat tails (outliers) which can more than distort this $ \sigma $-algebra.
Stock prices, just as any derivative information from them, are not that neat! So, why would we treat them as if they were? The probability density function of a normal distribution has been known for quite some time:
$\quad \quad \displaystyle f(x\mid \mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}$
but does it really describe what we see in stock prices where $\mu$ is in itself a stochastic process, and $\sigma$ is another stochastic process scaling a Wiener process $\sigma dW$ of its own in an environment where skewness, kurtosis or fat tails are prevalent. It is like wanting to model, with everything else, some "black swans" which by their very nature are rare events, 10 to 20+ $\sigma$s away from their mean $\mu$. Consider something like the "Flash crash" of May 2010, for instance. There were price moves there that should not have happened in 200 million years, and yet, there they were. Those are things you do not see coming and for which you are not prepared in the sense that your program might not have been designed to handle such situations. Knowing some $\mu$ and $\sigma$ does not give predictability to tomorrow's price.
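To quantify how severely a purely Gaussian model underrates such moves, here is a short sketch (Python with SciPy; the 10-sigma threshold and the 252 trading days per year are illustrative conventions, not figures from this article):

from scipy.stats import norm

p = norm.sf(10.0)            # P(Z > 10) for a standard normal, ~7.6e-24
years = 1.0 / (p * 252)      # expected wait for one such daily move
print(p, years)              # on the order of 1e20 years, far beyond '200 million years'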
Some will simply remove the outliers from their datasets. Thereby building an unrealistic data environment where variance ($\sigma^2$) will be more subdued and produce smoother backtest equity curves until the real world "black swan" comes knocking, and it will. This will screw up all those aligned $\mu$s and $\sigma$s. On the other hand, leaving all the outliers in will result in averaging up all the non-outliers giving a false sense of their real values, and again give a distorted image of the very thing they are analyzing.
Randomness
Due to the quasi-randomness features of stock price movements, the sense of their forward probabilities will also be distorted due to the stochastic nature of those same $\mu$s and $\sigma$s. And since this stochastic process is still looking at a stochastically scaled Wiener process, you enter into the realm of uncertain predictability. It is like playing heads or tails with a randomly biased coin. You thought the probability was 0.50 and made your bets accordingly, but it was not, the probability was randomly distorted allowing longer winning and losing streaks of larger magnitude (fat tails).
You do some $\mu$s and $\sigma$s over a selected dataset, say some 200 stocks, and get some numbers. But those numbers apply only to that particular dataset, and over that particular look-back period, not necessarily to its future. Therefore, the numbers you obtained might not be that representative going forward. Yet, even if you know this, why do you still use those numbers to make predictions as to what is coming next and have any kind of confidence in those expected probabilities?

The structure of the data itself is made to help you win the game. And if you do not see the inner workings of these huge seemingly random-like data matrices, how will you be able to design trading systems on fat-tailed quasi-martingale or semi-martingale structures?

There is no single factor that will be universal for all stocks. Period. That you keep searching might be just a waste of time. If there ever was one, it has been arbitraged out a long time ago, even prior to the computer age. Why is it that studies show that results get worse when you go beyond linear regressions? Or that adding factors beyond 5 or 6 does not seem to improve future results that much? The real questions then are: why keep on doing it if it does not work that well? Are we supposed to limit ourselves to the point of not even beating the expected returns of long-term index funds?
The Game
The game itself should help you beat the game. But you need to know how the data is organized and what you can do with it. To know that, you need to know what the data is, and there is a lot of it. It is not just the price matrix $P$ that you have to deal with, it is also all the information related to it. And that information matrix $\mathcal {I}$ is a much larger matrix. It includes all the information you can gather about all the stocks in your $P$ matrix. So, if you analyze some 30 factors in your score ranking composite, you get an information matrix $\mathcal {I}$ that is 30 times the size of $P$.
But, it is not all. Having the information is only part of the game. You now need to interpret all that data, make decisions on what is valuable and make projections on where that data is leading you, even within these randomly changing biases and expectations. All this makes even larger matrices to analyze.
You are also faced with the problem that all those data interpretations need to be quantifiable by conditionals and equations. Our programs do not have partial or scalable opinions, feelings or prejudices. They just execute the code they were given and oftentimes to the 12$^{th}$ decimal digit. This in itself should raise other problems. When you are at the 12$^{th}$ decimal digit to differentiate the z-scores, those last digits start to be close to random numbers. And the ranking you assign to those stocks also starts to exhibit rank positioning randomness which should affect their portfolio weights and reverberate in your overall results.
Another problem is the periodic rebalancing on those numbers. Say your portfolio has 200 stocks or more, and when it rebalances some 50 stocks are liquidated for whatever reason and replaced by 50 new ones. All the portfolio weights have changed. It is not only the 50 new stocks that have new weights; all 200 stocks see their weights moved up or down even if there might have been no need or reason to do so. The stock selection method itself is forcing the churning of the account. And there are monetary consequences to be paid for this.
If we ignore the randomness in price series, it does not make it go away! If we ignore outliers, they do not disappear; they will just eat your lunch, whether you like it or not. It is why, I think, there is a need to strategize your trading strategy to make it do more even in the face of unreliable uncertainty.

If the inner workings of your trading strategies do not address these issues, do you think the issues go away?
|
G.9 Linear Independence Worked Example
This video gives some more details behind the example for the following four vectors in \(\mathbb{R}^{3}\). Consider the following vectors in \(\Re^{3}\):
\[ v_{1}=\begin{pmatrix}4\\-1\\3\end{pmatrix}, \qquad v_{2}=\begin{pmatrix}-3\\7\\4\end{pmatrix}, \qquad v_{3}=\begin{pmatrix}5\\12\\17\end{pmatrix}, \qquad v_{4}=\begin{pmatrix}-1\\1\\0\end{pmatrix}. \] The example asks whether they are linearly independent, and the answer is immediate: NO, four vectors can never be linearly independent in \(\mathbb{R}^{3}\). This vector space is simply not big enough for that, but you need to understand the notion of the dimension of a vector space to see why. So we think the vectors \(v_{1}\), \(v_{2}\), \(v_{3}\) and \(v_{4}\) are linearly dependent, which means we need to show that there is a solution to $$ \alpha_{1} v_{1} + \alpha_{2} v_{2} + \alpha_{3} v_{3} + \alpha_{4} v_{4} = 0 $$ for the numbers \(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\) and \(\alpha_{4}\) not all vanishing.
To find this solution we need to set up a linear system. Writing out the above linear combination gives
$$ \begin{array}{cccccc} 4\alpha_{1}&-3\alpha_{2}&+5\alpha_{3}&-\alpha_{4} &=&0\, ,\\ -\alpha_{1}&+7\alpha_{2}&+12\alpha_{3}&+\alpha_{4} &=&0\, ,\\ 3\alpha_{1}&+4\alpha_{2}&+17\alpha_{3}& &=&0\, .\\ \end{array} $$ This can be easily handled using an augmented matrix whose columns are just the vectors we started with $$ \left( \begin{array}{cccc|c} 4&-3&5&-1 &0\, ,\\ -1&7&12&1 &0\, ,\\ 3&4&17& 0&0\, .\\ \end{array}\right)\, . $$ Since there are only zeros in the right-hand column, we can drop it. Now we perform row operations to achieve RREF $$ \begin{pmatrix} 4&-3&5&-1 \\ -1&7&12&1 \\ 3&4&17& 0\\ \end{pmatrix}\sim \begin{pmatrix} 1 & 0 & \frac{71}{25}& -\frac{4}{25}\\ 0&1&\frac{53}{25}&\frac{3}{25}\\ 0&0&0&0 \end{pmatrix}\, . $$ This says that \(\alpha_{3}\) and \(\alpha_{4}\) are not pivot variables, so they are arbitrary; we set them to \(\mu\) and \(\nu\), respectively. Thus $$ \alpha_{1}=\Big(-\frac{71}{25}\, \mu+\frac{4}{25}\, \nu\Big)\, ,\qquad \alpha_{2}=\Big(-\frac{53}{25}\, \mu-\frac{3}{25}\, \nu\Big)\, ,\qquad \alpha_{3}=\mu\, ,\qquad \alpha_{4}= \nu\, . $$ Thus we have found a relationship among our four vectors $$ \Big(-\frac{71}{25}\, \mu+\frac{4}{25}\, \nu\Big)\, v_{1}+\Big(-\frac{53}{25}\, \mu-\frac{3}{25}\, \nu\Big)\, v_{2} +\mu\, v_{3}+ \nu\, v_{4}=0\, . $$ In fact this is not just one relation, but infinitely many, for any choice of \(\mu,\nu\). The relationship quoted in the notes is just one of those choices.
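If you want to double-check the row reduction by machine, here is a small sketch (Python with SymPy; my addition, not part of the lecture):

from sympy import Matrix, Rational

M = Matrix([[ 4, -3,  5, -1],
            [-1,  7, 12,  1],
            [ 3,  4, 17,  0]])
R, pivots = M.rref()
print(R)          # rows (1, 0, 71/25, -4/25), (0, 1, 53/25, 3/25), (0, 0, 0, 0)
print(pivots)     # (0, 1): alpha_3 and alpha_4 are the free variables

# verify one resulting relation: v3 = (71/25) v1 + (53/25) v2
v1, v2, v3 = M[:, 0], M[:, 1], M[:, 2]
print(v3 == Rational(71, 25) * v1 + Rational(53, 25) * v2)   # True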
Finally, since the vectors \(v_{1}\), \(v_{2}\), \(v_{3}\) and \(v_{4}\) are linearly dependent, we can try to eliminate some of them. The pattern here is to keep the vectors that correspond to columns with pivots. For example, setting \(\mu=-1\) (say) and \(\nu=0\) in the above allows us to solve for \(v_{3}\) while \(\mu=0\) and \(\nu=-1\) (say) gives \(v_{4}\), explicitly we get
$$ v_{3}=\frac{71}{25}\, v_{1} + \frac{53}{25}\, v_{2}\, ,\qquad v_{4}=-\frac{4}{25}\, v_{1} + \frac{3}{25} \, v_{2}\, . $$ This eliminates \(v_{3}\) and \(v_{4}\) and leaves a pair of linearly independent vectors \(v_{1}\) and \(v_{2}\).

Worked Proof
Here we will work through a quick version of the proof of Theorem 10.1.1. Let \(\{ v_{i} \}\) denote a set of linearly dependent vectors, so \(\sum_{i} c^{i} v_{i} = 0\) where there exists some \(c^{k} \neq 0\). Now without loss of generality we order our vectors such that \(c^{1} \neq 0\), and we can do so since addition is commutative (i.e. \(a + b = b + a\)). Therefore we have
\begin{align*} c^{1} v_{1} & = -\sum_{i=2}^{n} c^{i} v_{i} \\ v_{1} & = -\sum_{i=2}^{n} \frac{c^i}{c^1} v_{i} \end{align*} and we note that this argument is completely reversible since every \(c^{i} \neq 0\) is invertible and \(0 / c^{i} = 0\).

Hint for Review Problem 1
Let's first remember how \(\mathbb{Z}_{2}\) works. The only two elements are \(1\) and \(0\), which means when you add \(1+1\) you get \(0\). It also means when you have a vector \(\vec{v} \in B^{n}\) and you want to multiply it by a scalar, your only choices are \(1\) and \(0\). This is kind of neat because it means that the possibilities are finite, so we can look at an entire vector space.

Now let's think about \(B^{3}\): there is a choice you have to make for each coordinate, you can either put a \(1\) or a \(0\), and there are three places where you have to make a decision between two things. This means that you have \(2^{3}= 8\) possibilities for vectors in \(B^{3}\).

When you want to think about finding a set \(S\) that will span \(B^{3}\) and is linearly independent, you want to think about how many vectors you need. You will need enough so that you can make every vector in \(B^{3}\) using linear combinations of elements in \(S\), but you don't want so many that some of them are linear combinations of each other. I suggest trying something really simple, perhaps something that looks like the columns of the identity matrix.
For part (c) you have to show that you can write every one of the elements as a linear combination of the elements in \(S\), this will check to make sure \(S\) actually spans \(B^{3}\).
For part (d), if you have two vectors that you think will span the space, you can prove that they do by repeating what you did in part (c): check that every vector can be written using only copies of these two vectors. If you don't think it will work, you should show why, perhaps using an argument that counts the number of possible vectors in the span of two vectors.
G.10 Basis and Dimension Proof Explanation
Let's walk through the proof of Theorem 11.0.1. We want to show that for \(S=\{v_{1}, \ldots, v_{n} \}\) a basis for a vector space \(V\), every vector \(w \in V\) can be written \(\textit{uniquely}\) as a linear combination of vectors in the basis \(S\):
\[w=c^{1}v_{1}+\cdots + c^{n}v_{n}.\]
We should remember that since \(S\) is a basis for \(V\), we know two things
\(V = \mathrm{span}\, S\), and \(v_{1}, \ldots , v_{n}\) are linearly independent, which means that whenever we have \(a^{1}v_{1}+ \ldots + a^{n} v_{n} = 0\) this implies that \(a^{i} =0\) for all \(i=1, \ldots, n\).
This first fact makes it easy to say that there exist constants \(c^{i}\) such that \(w=c^{1}v_{1}+\cdots + c^{n}v_{n}\). What we don't yet know is that these \(c^{1}, \ldots c^{n}\) are unique.
In order to show that these are unique, we will suppose that they are not, and show that this causes a contradiction. So suppose there exists a second set of constants \(d^{i}\) such that
$$w=d^{1}v_{1}+\cdots + d^{n}v_{n}\, .$$
For this to be a contradiction we need to have \(c^{i} \neq d^{i}\) for some \(i\). Then look what happens when we take the difference of these two versions of \(w\):
\begin{eqnarray*}0_{V}&=&w-w\\&=&(c^{1}v_{1}+\cdots + c^{n}v_{n})-(d^{1}v_{1}+\cdots + d^{n}v_{n} )\\&=&(c^{1}-d^{1})v_{1}+\cdots + (c^{n}-d^{n})v_{n}. \\\end{eqnarray*}
Since the \(v_{i}\)'s are linearly independent this implies that \(c^{i} - d^{i} = 0\) for all \(i\), this means that we cannot have \(c^{i} \neq d^{i}\), which is a contradiction.
Worked Example
In this video we will work through an example of how to extend a set of linearly independent vectors to a basis. For fun, we will take the vector space
$$V=\{(x,y,z,w)\,|\,x,y,z,w\in \mathbb{Z}_{5}\}\, .$$
This is like four dimensional space \(\mathbb{R}^{4}\) except that the numbers can only be \(\{0,1,2,3,4\}\). This is like bits, but now the rule is
$$0=5\, .$$
Thus, for example, \(\frac{1}{4}=4\) because \(4\times 4=16=1+3\times 5=1\). Don't get too caught up on this aspect; it's a choice of base field designed to make computations go quicker!
Now, here's the problem we will solve:
$$\bf{Find~ a~ basis~ for~ V~ that~ includes ~the ~vectors~ \begin{pmatrix}1\\2\\3\\4\end{pmatrix}~ and ~\begin{pmatrix}0\\3\\2\\1\end{pmatrix}.}$$
The way to proceed is to add a known (and preferably simple) basis to the vectors given, thus we consider
\[v_{1}=\begin{pmatrix}1\\2\\3\\4\end{pmatrix},v_{2}=\begin{pmatrix}0\\3\\2\\1\end{pmatrix},e_{1}=\begin{pmatrix}1\\0\\0\\0\end{pmatrix},e_{2}=\begin{pmatrix}0\\1\\0\\0\end{pmatrix},e_{3}=\begin{pmatrix}0\\0\\1\\0\end{pmatrix},e_{4}=\begin{pmatrix}0\\0\\0\\1\end{pmatrix}.\]
The last four vectors are clearly a basis (make sure you understand this....) and are called the \(\textit {canonical basis}\). We want to keep \(v_{1}\) and \(v_{2}\) but find a way to turf out two of the vectors in the canonical basis leaving us a basis of four vectors. To do that, we have to study linear independence, or in other words a linear system problem defined by
$$0=\alpha_{1} v_{1} + \alpha_{2} v_{2} + \alpha_{3} e_{1} + \alpha_{4} e_{2} + \alpha_{5} e_{3} + \alpha_{6} e_{4} \, .$$

We want to find solutions for the \(\alpha\)'s which allow us to eliminate two of the \(e\)'s. For that we use an augmented matrix
$$\left(\begin{array}{cccccc|c}1&0&1&0&0&0&0\\2&3&0&1&0&0&0\\3&2&0&0&1&0&0\\4&1&0&0&0&1&0\end{array}\right)\, .$$
Next comes a bunch of row operations. Note that we have dropped the last column of zeros since it has no information--you can fill in the row operations used above the \(\sim\)'s as an exercise:
$$\begin{pmatrix}1&0&1&0&0&0\\2&3&0&1&0&0\\3&2&0&0&1&0\\4&1&0&0&0&1\end{pmatrix}\sim\begin{pmatrix}1&0&1&0&0&0\\0&3&3&1&0&0\\0&2&2&0&1&0\\0&1&1&0&0&1\end{pmatrix}$$
$$\sim\begin{pmatrix}1&0&1&0&0&0\\0&1&1&2&0&0\\0&2&2&0&1&0\\0&1&1&0&0&1\end{pmatrix}\sim\begin{pmatrix}1&0&1&0&0&0\\0&1&1&2&0&0\\0&0&0&1&1&0\\0&0&0&3&0&1\end{pmatrix}$$
$$\sim\begin{pmatrix}1&0&1&0&0&0\\0&1&1&0&3&0\\0&0&0&1&1&0\\0&0&0&0&2&1\end{pmatrix}\sim\begin{pmatrix}1&0&1&0&0&0\\0&1&1&0&3&0\\0&0&0&1&1&0\\0&0&0&0&1&3\end{pmatrix}$$
$$\sim\begin{pmatrix}\underline1&0&1&0&0&0\\0&\underline1&1&0&0&1\\0&0&0&\underline1&0&2\\0&0&0&0&\underline1&3\end{pmatrix}$$
The pivots are underlined. The columns corresponding to non-pivot variables are the ones that can be eliminated--their coefficients (the \(\alpha\)'s) will be arbitrary, so set them all to zero save for the one next to the vector you are solving for which can be taken to be unity. Thus that vector can certainly be expressed in terms of previous ones. Hence, altogether, our basis is
$$\left\{\begin{pmatrix}1\\2\\3\\4\end{pmatrix} \, , \begin{pmatrix}0\\3\\2\\1\end{pmatrix} ,\begin{pmatrix}0\\1\\0\\0\end{pmatrix} ,\begin{pmatrix}0\\0\\1\\0\end{pmatrix}\right\}\, .$$
Finally, as a check, note that \(e_{1}=v_{1}+v_{2}\) which explains why we had to throw it away.
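That final check is easy to reproduce in code; a tiny sketch (Python, my addition), with the arithmetic done mod 5:

import numpy as np

v1 = np.array([1, 2, 3, 4])
v2 = np.array([0, 3, 2, 1])
print((v1 + v2) % 5)   # [1 0 0 0] = e1 (mod 5), so e1 depends on v1 and v2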
Hint for Review Problem 2
Since there are two possible values for each entry, we have \(|B^{n}| = 2^{n}\). We note that \(\dim B^{n} = n\) as well. Explicitly we have \(B^{1} = \{(0), (1)\}\) so there is only 1 basis for \(B^{1}\). Similarly we have
$$B^{2}=\left\{\begin{pmatrix}0\\0\end{pmatrix} \, , \begin{pmatrix}1\\0\end{pmatrix} ,\begin{pmatrix}0\\1\end{pmatrix} ,\begin{pmatrix}1\\1\end{pmatrix}\right\}\, .$$
and so choosing any two non-zero vectors will form a basis. Now in general we note that we can build up a basis \(\{e_{i}\}\) by arbitrarily (independently) choosing the first \(i-1\) entries, then setting the \(i\)-th entry to \(1\) and all higher entries to \(0\).
|
Practice Paper 1 Question 20
Evaluate \(\lim\limits_{n\rightarrow\infty} \left(\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n}\right)\).
Hint: Graph sketching may help.

Warm-up Questions:
Sketch \(y=\frac{x}{x^2-a^2}\) for various values of \(a\).
Integrate \(y=x\ln x\).
Evaluate \(\lim\limits_{x\to\infty} \big(\ln x-\ln(2x-1)\big).\)

Hints:
Hint 1: What does the graph of \(\frac{1}{x}\) look like?
Hint 2: On the above graph, try representing each term of the sum as a (very basic) shape with an area equal to the term's value.
Hint 3: Which 2 continuous functions can you use as upper and lower bounds for the terms of the sum, having this new representation? Hint: one of them you already used.
Hint 4: Consider the graph below. How do you relate the areas underneath the two functions to the sum?
Hint 5: Squeeze theorem?

Solution
Each number \(\frac{1}{k}\) is equal to the area of the rectangle extending to the right of the number, having height \(\frac{1}{k}\) and width 1. By sketching, notice there are two continuous functions bounding these rectangles, one above, i.e. \(\frac{1}{x-1}\), and one below, i.e. \(\frac{1}{x}\), whose integrals will also bound the initial sum under the limit.
Concretely,
\[ \int_{n+1}^{2n+1} \frac{dx}{x} < \sum_{k=n+1}^{2n}\frac{1}{k} < \int_{n+1}^{2n+1}\frac{dx}{x-1}. \] At \(n\rightarrow\infty\), both integrals become \(\ln2,\) and by the squeeze theorem, so must the sum.
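For the skeptical, a quick numerical sanity check of this limit (Python with NumPy, my addition):

import numpy as np

for n in (10, 100, 10_000, 1_000_000):
    k = np.arange(n + 1, 2 * n + 1)
    print(n, (1.0 / k).sum())   # approaches ln 2 = 0.693147...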
|
$SL(2,R)xSU(2)/R^2$ string model in curved spacetime and exact conformal results
Bars, I and Sfetsos, K (1992)
$SL(2,R)xSU(2)/R^2$ string model in curved spacetime and exact conformal results. Phys. Lett. B, 301, pp. 183-190.

Abstract
Pursuing further the recent methods in the algebraic Hamiltonian approach to gauged WZW models, we apply them to the bosonic SL(2,R) X SU(2)/R^2 model recently investigated by Nappi and Witten. We find the global space and compute the conformally exact metric and dilaton fields to all orders in the $1/k$ expansion. The semiclassical limit $k',k\to \infty$ of our exact results agree with the lowest order perturbation computation which was done in the Lagrangian formalism. We also discuss the supersymmetric type-II and heterotic versions of this model and verify the non-renormalization of $e^\Phi\sqrt{-G}$.
Item Type: Article. Date: 2 August 1992. DOI: 10.1016/0370-2693(93)90686-C. URI: http://epubs.surrey.ac.uk/id/eprint/835376
|
" Find the points $z \in \mathbb{C}$ at which the function $g(z) = \cos(\bar{z})$ satisfies the Cauchy-Riemann equations. "
So I've written $g(z) = g(x,y) = \cos(x-iy)=\cos(x)\cosh(y)+i\sin(x)\sinh(y)$. So that $u(x,y)=\cos(x)\cosh(y)$ and $v(x,y)=\sin(x)\sinh(y)$.
From the first C-R equation I get $-\sin(x)\cosh(y) = \sin(x)\cosh(y)$ which is true when $\sin(x)\cosh(y)=0$.
Now here I thought this implied $\sin(x) = 0$ or $\cosh(y) = 0$. However the solutions give only $\sin(x) = 0$, why?
Also, when doing the second equation, you get $\cos(x)\sinh(y) = 0$, but this time the solutions split the case into $\cos(x)=0$ and $\sinh(y)=0$ as I did above.
Why is this?
EDIT
Fair enough; I've found this document explaining why it is never zero, as was said in the comments: Hyperbolic Functions
|
Question:
The closed feedwater heater is to heat feedwater from {eq}5000 kPa {/eq} and {eq}220 ^oC {/eq} (state 1) to a saturated liquid (state 2). The turbine supplies bleed steam at {eq}6000 kPa {/eq} and {eq}320^oC {/eq} (state 3) to this unit. This steam is condensed to a liquid-vapor mixture (97.352% vapor and 2.648% liquid, state 4) before entering the pump. The states are numbered {eq}1, 2, 3 {/eq} and {eq}4 {/eq}. Assume no kinetic and potential energy changes, steady-state conditions in all control volumes, no heat loss from this closed feedwater heater (heat exchanger) to the ambient, and no pressure loss in either stream ({eq}P_1=P_2 {/eq} and {eq}P_3=P_4 {/eq}).
Determine:
a) The exit temperature of feedwater ({eq}T_2 {/eq} in {eq}^oC {/eq})
b) The exit temperature of the bleed steam after passing the closed feedwater heater ({eq}T_4 {/eq}in{eq}^o C {/eq} )
c) The mass flow rate of bleed steam required to heat flowing 1 kg/s of feedwater in the unit in kg/s.
Feed Water Heater:
A feed water heater is a power-plant component that uses bleed steam to heat the pumped water before it enters the boiler, reducing the irreversibility of steam generation.
Answer and Explanation:

Given data: the inlet pressure of the feed water heater is {eq}P_1 = 5000\;{\rm{kPa}} {/eq}; the inlet temperature of the feed water heater is {eq}T_1 = 220^\circ {\rm{C}} {/eq}; the pressure of the bleed steam is {eq}P_3 = 6000\;{\rm{kPa}} {/eq}; the temperature of the bleed steam is {eq}T_3 = 320^\circ {\rm{C}} {/eq}; the quality of the bleed steam at the exit of the feed water heater is {eq}x = 0.97352 {/eq}; the mass flow rate of feed water entering the feed water heater is {eq}\dot m_1 = 1\;{\rm{kg/s}} {/eq}.
(The accompanying figure, not reproduced here, shows the feed water heater schematic.)
(a)
From the steam table of saturated water
The enthalpy of feed water entering the feed water heater is
{eq}h_1 = 944.383\;{\rm{kJ/kg}} {/eq}
The saturated temperature of steam at {eq}P_1 = 5000\;{\rm{kPa}} {/eq} is
{eq}T_{sat,2} = 264^\circ {\rm{C}} {/eq}
The temperature of feed water leaving the feed water heater is
{eq}T_2 = T_{sat,2} {/eq}
Substitute the value in above expression
{eq}T_2 = 264^\circ {\rm{C}} {/eq}
The enthalpy of saturated water at {eq}P_1 = 5000\;{\rm{kPa}} {/eq} is
{eq}h_{f,2} = 1154.5\;{\rm{kJ/kg}} {/eq}
The enthalpy of feed water leaving the feed water heater is
{eq}h_2 = h_{f,2} {/eq}
Substitute the value in above expression
{eq}h_2 = 1154.5\;{\rm{kJ/kg}} {/eq}
Hence the exit temperature of the feed water is {eq}264^\circ {\rm{C}} {/eq}.
(b)
From the steam table of super heated steam
The enthalpy of bleed steam is
{eq}h_3 = 2953.6\;{\rm{kJ/kg}} {/eq}
The saturated temperature of bleed steam is
{eq}T_{sat,4} = 275.5^\circ {\rm{C}} {/eq}
The enthalpy of saturated steam at {eq}6000\;{\rm{kPa}} {/eq} is
{eq}h_{g,4} = 2784.6\;{\rm{kJ/kg}} {/eq}
The enthalpy of saturated liquid at {eq}6000\;{\rm{kPa}} {/eq} is
{eq}h_{f,4} = 1213.9\;{\rm{kJ/kg}} {/eq}
The expression for enthalpy of bleed steam is
{eq}h_4 = h_{f,4} + x\left( {h_{g,4} - h_{f,4} } \right) {/eq}
Substitute the value in above expression
{eq}\begin{align*} h_4 &= 1213.9\;{\rm{kJ/kg}} + 0.97352\left( {2784.6\;{\rm{kJ/kg}} - 1213.9\;{\rm{kJ/kg}}} \right) \\ &= 1213.9\;{\rm{kJ/kg}} + 1529.1\;{\rm{kJ/kg}} \\ &= 2743\;{\rm{kJ/kg}} \\ \end{align*} {/eq}
Hence the exit temperature of the bleed steam after passing through the closed feed water heater is {eq}275.5^\circ {\rm{C}} {/eq}.
(c)
The expression for energy balance in feed water heater is
{eq}\dot m_1 \left( {h_2 - h_1 } \right) = \dot m\left( {h_3 - h_4 } \right) {/eq}
Here, mass flow rate of bleed steam is {eq}\dot m {/eq}.
Substitute the value in above expression
{eq}\begin{align*} 1\;{\rm{kg/s}}\left( {1154.5\;{\rm{kJ/kg}} - 944.383\;{\rm{kJ/kg}}} \right) &= \dot m\left( {2953.6\;{\rm{kJ/kg}} - 2743\;{\rm{kJ/kg}}} \right) \\ \dot m &= \dfrac{{1\;{\rm{kg/s}} \times 210.117\;{\rm{kJ/kg}}}}{{210.6\;{\rm{kJ/kg}}}} \\ \dot m &= 0.998\;{\rm{kg/s}} \\ \end{align*} {/eq}

Hence the mass flow rate of bleed steam is {eq}0.998\;{\rm{kg/s}} {/eq}.
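The same energy balance in code, as a minimal sketch (the enthalpies are the steam-table values quoted above, hard-coded here rather than looked up):

```python
# Steam-table values quoted in the solution above (kJ/kg)
h1 = 944.383               # feedwater inlet: 5000 kPa, 220 C
h2 = 1154.5                # saturated liquid at 5000 kPa
h3 = 2953.6                # bleed steam: 6000 kPa, 320 C
hf4, hg4 = 1213.9, 2784.6  # saturated liquid / vapor at 6000 kPa
x4 = 0.97352               # exit quality of the bleed steam

h4 = hf4 + x4 * (hg4 - hf4)               # enthalpy of the exiting mixture
m_feed = 1.0                              # kg/s of feedwater
m_bleed = m_feed * (h2 - h1) / (h3 - h4)  # steady-state energy balance
print(f"h4 = {h4:.1f} kJ/kg, bleed flow = {m_bleed:.3f} kg/s")
# -> h4 = 2743.0 kJ/kg, bleed flow = 0.998 kg/s
```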
|
There isn't a single way in which one can approach a discrete optimization problem using Differential Evolution (DE).
Widespread techniques listed under the Discrete Differential Evolution label aren't DE-specific.
You can allow variables to take values in a continuous range and use penalty functions to enforce integer values:
$$ \bar{f}(w) = f(w) - \sum_i{k_i \cdot (w_i - \operatorname{round}(w_i))^2} $$
$w$ is the vector of parameters (chromosome values), $f: \mathbb R^n \rightarrow \mathbb R$ the basic fitness function (here assuming "greater is better"), $k$ a problem-specific scaling vector, $\bar{f}(\cdot)$ the "penalized" fitness function.
In this way the DE algorithm (DE/rand/1) stays the same:
$$\begin{align}X_{j,r2}^G - X_{j,r3}^G & = \{2,2,3,0,4,2\} - \{1,2,3,3,0,1\} = \{1,0,0,-3,4,1\} \\F \cdot (X_{j,r2}^G - X_{j,r3}^G) & = 0.5 \cdot \{1,0,0,-3,4,1\} = \{0.5,0,0,-1.5,2,0.5\} \\V_{j,i}^{G+1} & = \{4,1,3,2,2,0\} + \{0.5,0,0,-1.5,2,0.5\} = \{4.5,1,3,0.5,4,0.5\}\end{align}$$
The trial vector $U$ is obtained via crossover between the donor vector $V_{j,i}^{G+1}$ and a target vector $X$:
$$U_{j,i}^{G+1} = \operatorname{crossover}(V_{j,i}^{G+1}, X_{j,i}^{G})$$
The target vector is compared with the trial vector and the better one is admitted to the next generation.
This is the recommended procedure with the R DEoptim package (via the optional fnMap parameter).
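For concreteness, here is a minimal sketch of the penalty approach using SciPy's differential_evolution; the toy objective, bounds, and penalty scale are illustrative assumptions, and since SciPy minimizes, the penalty is added rather than subtracted:

```python
import numpy as np
from scipy.optimize import differential_evolution

K = 10.0  # illustrative penalty scale (problem-specific in practice)

def penalized(w):
    f = np.sum((w - np.array([1.0, 3.0, 5.0])) ** 2)  # toy objective to minimize
    penalty = K * np.sum((w - np.round(w)) ** 2)      # pushes w toward integers
    return f + penalty

result = differential_evolution(penalized, bounds=[(0, 10)] * 3, seed=1)
print(np.round(result.x), result.fun)  # near-integer optimum, here [1. 3. 5.]
```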
You can round all the real-valued parameters before evaluating the fitness function:
$$\bar{f}(w) = f(\operatorname{round}(w))$$
(round acts as a repair operator)
This is the technique used by Mathematica's functions NMinimize / NMaximize with the options Method → "DifferentialEvolution" and Element[w, Integers].
There are also many variations of DE named something-Discrete-DE:

- Binary Discrete Differential Evolution: the solution of a problem is represented as a binary string instead of a real-valued vector.
- Real-value-based Discrete Differential Evolution: introduces forward/backward transformations to map integers into real numbers and vice versa.
- Exchange-based Discrete Differential Evolution: here the crossover operator doesn't change, but mutation, being the primary operator acting on elements of a vector in continuous space, is replaced.
- ...
So you should specify which form of Discrete DE you're interested in for a step-by-step example.
Meanwhile, A Comparative Study of Discrete Differential Evolution on Binary Constraint Satisfaction Problems by Qingyun Yang (2008 IEEE Congress on Evolutionary Computation) is a good starting point with many references.
|
Let $S $ be the spectrum of a complete dvr with algebraically closed residue field. Let $\eta$ be its generic point and let $s$ be its closed point with $k(s)$ of positive characteristic. Let $X\to S$ be a smooth proper morphism of schemes whose geometric fibres are connected.
Edit: For all $b$ in $S$, assume that $\mathrm{H}^1(X_b,\mathcal{O}_{X_b}) = \mathrm{H}^0(X_b,\Omega^1_{X_b})=0$ and that $\mathrm{Pic}(X_b)$ is torsion-free. Moreover, assume that $\mathrm{rk}(\mathrm{Pic}(X_{\eta})) = \mathrm{rk}(\mathrm{Pic}(X_s))$. Also, assume the fibres of $X\to S$ are Fano varieties. (Some of these assumptions are redundant.)
Let $L$ be a line bundle on $X_s$ such that $L^{\otimes p}\cong \omega_{X_s/k(s)}$, where $p$ equals the characteristic of $k(s)$.
Question.Does $L$ lift to $X$?
The motivation for this question comes from a question about the index of a Fano variety. Namely, it seems reasonable to suspect that the index of a Fano variety doesn't jump upon specialization. This is clear in characteristic zero, as $\mathrm{H}^1(X,\mathcal{O}_X)=\mathrm{H}^2(X,\mathcal{O}_X)=0$ for a Fano variety $X$ over $\mathbb C$. However, the vanishing of these cohomology groups is currently not known for Fano varieties over fields of positive characteristic. (The vanishing of $\mathrm{H}^1(X,\mathcal{O}_X)$ is known for SRC Fano varieties.)
|
Based on such demands, I designed a co-pilot which can:
1. Automatically cluster the NPs into aggregates;
2. Remove periodic boundary conditions and place the center-of-mass at $(0,0,0)^T$;
3. Adjust the view vector along the minor axis of the aggregate;
4. Classify the aggregate.
After obtaining clusters in step 1, we must remove the periodic boundary conditions from each cluster. If in step 1 one uses BFS or DFS plus the linked cell list method, then one can remove periodic boundary conditions during clustering; but this method has limitations: it does not work properly if the cluster percolates throughout the box. Therefore, in this step, I use circular statistics to deal with the clusters. In a simulation box with periodic boundary conditions, the distance between an NP in an aggregate and the midpoint never exceeds $L/2$ in the corresponding box dimension. The midpoint here is not the center-of-mass; e.g., the distance between the nozzle point and the center-of-mass of an ear-syringe bulb is clearly larger than its half length. The midpoint is actually a "de-duplicated" center-of-mass. Besides, the circular mean also puts most points in the center in case of percolation. Therefore, in part 2, we have the following steps:
1. Choose an $r_\text{cut}$ to test whether the aggregate percolates;
2. If the aggregate percolates, evaluate the circular mean $r_c$ of all points;
3. Set all coordinates $r$ as $r\to \operatorname{pbc}(r-r_c)$;
4. If the aggregate does not percolate, evaluate the midpoint as the circular mean of the coordinates $r$ where $\rho(r)>0$, with $\rho(r)$ computed using a bin size smaller than the $r_\text{cut}$ used in step 1;
5. Same as step 3, update the coordinates;
6. After step 5 the aggregates are unwrapped from the box; set $r\to r-\overline{r}$ to place the center-of-mass at $(0,0,0)^T$.
Here the circular mean of angles $\alpha_i$ (each coordinate mapped onto a circle via $\alpha_i = 2\pi r_i/L$) is defined as

$$\alpha=\underset{\beta}{\operatorname{argmin}}\sum_i (1-\cos(\alpha_i-\beta))$$
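A minimal sketch of this circular-mean unwrapping for one box dimension (coordinates assumed to lie in $[0, L)$; the closed form via the mean sine and cosine solves the argmin above):

```python
import numpy as np

def circular_mean_pbc(x, L):
    """Midpoint of points x in a periodic dimension of length L,
    via circular statistics (Bai & Breen style)."""
    theta = 2.0 * np.pi * x / L  # map coordinates onto a circle
    beta = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
    return (beta % (2.0 * np.pi)) * L / (2.0 * np.pi)

def unwrap(x, L):
    """Shift by the circular mean and wrap into [-L/2, L/2)."""
    rc = circular_mean_pbc(x, L)
    return (x - rc + L / 2.0) % L - L / 2.0

x = np.array([0.1, 0.2, 9.8, 9.9])  # a cluster split across the boundary, L = 10
print(unwrap(x, 10.0))              # contiguous coordinates around 0
```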
Adjusting the view vector is simple: evaluate the eigendecomposition of the gyration tensor $rr^T/n$ and sort the eigenvectors by eigenvalue, i.e., $\lambda_1\ge\lambda_2\ge\lambda_3$. The minor axis is then the corresponding eigenvector $v_3$, and the aggregate can be rotated by $[v_1, v_2, v_3]$ so that the minor axis becomes the $z$-axis.
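And a sketch of that alignment step (assuming centered coordinates, e.g. as produced by the unwrapping above, stored as an $n\times 3$ array):

```python
import numpy as np

def align_minor_axis(r):
    """Rotate centered coordinates r (n x 3) so the minor axis of the
    gyration tensor becomes the z-axis."""
    G = r.T @ r / len(r)                     # gyration tensor, 3 x 3
    eigvals, eigvecs = np.linalg.eigh(G)     # ascending eigenvalues
    R = eigvecs[:, eigvals.argsort()[::-1]]  # columns [v1, v2, v3]
    return r @ R                             # component along v3 becomes z

rng = np.random.default_rng(0)
r = rng.normal(scale=[3.0, 2.0, 0.5], size=(500, 3))  # flat, disk-like aggregate
aligned = align_minor_axis(r - r.mean(axis=0))
print(aligned.std(axis=0))  # smallest spread now along the last (z) axis
```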
The last step is a bit more tricky; the best approach I tried was an SVC, a binary classification method. I used about 20 samples labeled as "desired"; these were extended to 100 samples by adding some noise, e.g., moving coordinates a little, or adding or removing several NPs at random, without "breaking" the category of the morphology. Together with 100 "undesired" samples, I trained the SVC with a Gaussian kernel. The result turned out to be pretty good. I also tried to use an ANN to classify all 5 categories of morphologies obtained from simulations, but the ANN model did not work very well; perhaps the reason was a lack of samples, or the model I built was too rough. I didn't try other multi-class methods; anyway, that part of the work was done, and I stopped developing this co-pilot a long time ago.
|
Consider the finite multisets $\mathbf{Bag}\:X$. Its elements are given by $\{x_1,\ldots,x_n\}$ quotiented by permutations, so that $\{x_1,\ldots,x_n\}=\{x_{\pi 1},\ldots,x_{\pi n}\}$ for any $\pi\in\mathbf{S}_n$. What is a one-hole context for an element in such a thing? Well, we must have had $n>0$ to select a position for the hole, so we are left with the remaining $n-1$ elements, but we are none the wiser about which is where. (That's unlike lists, where choosing a position for the hole cuts one list into two sections, and the second derivative cuts selects one of those sections and cuts it further, like "point" and "mark" in an editor, but I digress.) A one-hole context in a $\mathbf{Bag}\:X$ is thus a $\mathbf{Bag}\:X$, and every $\mathbf{Bag}\:X$ can arise as such. Thinking spatially, the derivative of $\mathbf{Bag}\:X$ ought to be itself.
Now,
$$\mathbf{Bag}\:X = \sum\limits_{n\in\mathbb{N}}X^n/\mathbf{S}_n$$
a choice of tuple size $n$, with a tuple of $n$ elements up to a permutation group of order $n!$, giving us exactly the power series expansion of $e^x$.
Naively, we can characterize container types by a set of shapes $S$ and a shape-dependent family of positions $P$:$$\sum\limits_{s:S}X^{(P\,s)}$$so that a container is given by a choice of shape and a map from positions to elements. With bags and the like, there's an extra twist.
The "shape" of a bag is some $n\in\mathbb{N}$; the "positions" are $\{1,\ldots,n\}$, the finite set of size $n$, but the map from positions to elements must be invariant under permutations from $\mathbf{S}_n$. There should be no way to access a bag that "detects" the arrangement of its elements.
The East Midlands Container Consortium wrote about such structures in Constructing Polymorphic Programs with Quotient Types, for Mathematics of Program Construction 2004. Quotient containers extend our usual analysis of structures by "shapes" and "positions" by allowing an automorphism group to act on the positions, allowing us to consider structures such as unordered pairs $X^2/2$, with derivative $X$. An unordered $n$-tuple is given by $X^n/n!$, and its derivative (when $n>0$) is an unordered $(n-1)$-tuple. Bags take the sum of these. We can play similar games with cyclic $n$-tuples, $X^n/n$, where choosing a position for the hole nails the rotation to one spot, leaving $X^{n-1}$, a tuple one smaller with no permutation.
"Type division" is hard to make sense of in general, but quotienting by permutation groups (as in combinatorial species) does make sense, and is fun to play with. (Exercise: formulate a structural induction principle for unordered pairs of numbers, $\mathbb{N}^2/2$, and use it to implement addition and multiplication so that they're commutative by construction.)
The "shapes-and-positions" characterization of containers imposes finiteness on neither. Combinatorial species tend to organised by
size, rather than shape, which amounts to collecting terms in and computing the coefficient for each exponent. Quotient-containers-with-finite-position-sets and combinatorial species are basically different spins on the same substance.
|
Contributors: Bianchi, S., Saglimbeni, F., Lepore, A., Di Leonardo, R.
Date: 2015-06-30
... E. coli bacteria swim following a run and tumble pattern. In the run state all flagella join in a single helical bundle that propels the cell body along approximately straight paths. When one or more flagellar motors reverse direction the bundle unwinds and the cell randomizes its orientation. This basic picture represents an idealization of a much more complex dynamical problem. Although it has been shown that bundle formation can occur at either pole of the cell, it is still unclear whether these two run states correspond to asymmetric propulsion features. Using holographic microscopy we record the 3D motions of individual bacteria swimming in optical traps. We find that most cells possess two run states characterised by different propulsion forces, total torque and bundle conformations. We analyse the statistical properties of bundle reversal and compare the hydrodynamic features of forward and backward running states. Our method is naturally multi-particle and opens up the way towards controlled hydrodynamic studies of interacting swimming cells.
Contributors: Bodendorfer, Norbert, Lewandowski, Jerzy, Świeżewski, Jedrzej
Date: 2015-06-30
... Firstly, we present a reformulation of the standard canonical approach to spherically symmetric systems in which the radial gauge is imposed. This is done via the gauge unfixing technique, which serves as the exposition in the context of the radial gauge. Secondly, we apply the same techniques to the full theory, without assuming spherical symmetry, resulting in a reduced phase space description of general relativity. The canonical structure of the theory is analyzed.
Contributors: Kim, Jongpil, Pavlovic, Vladimir
Date: 2015-06-30
... In this paper, we propose a novel method to find characteristic landmarks on ancient Roman imperial coins using deep convolutional neural network models (CNNs). We formulate an optimization problem to discover class-specific regions while guaranteeing specific controlled loss of accuracy. Analysis on visualization of the discovered region confirms that not only can the proposed method successfully find a set of characteristic regions per class, but also the discovered region is consistent with human expert annotations. We also propose a new framework to recognize the Roman coins which exploits hierarchical structure of the ancient Roman coins using the state-of-the-art classification power of the CNNs adopted to a new task of coin classification. Experimental results show that the proposed framework is able to effectively recognize the ancient Roman coins. For this research, we have collected a new Roman coin dataset where all coins are annotated and consist of obverse (head) and reverse (tail) images.
Contributors: Kim, Jihn E., Mo, Doh Young, Seo, Min-Seok
Date: 2015-06-30
... We estimate the CKM matrix elements in the recently proposed minimal model, anti-SU(7) GUT for the family unification, $[\,3\,]+2\,[\,2\,]+8\,[\,\bar{1}\,]$+\,(singlets). It is shown that the real angles of the right-handed unitary matrix diagonalizing the mass matrix can be determined to fit the Particle Data Group data. However, the phase in the right-handed unitary matrix is not constrained very much. We also include an argument about allocating the Jarlskog phase in the CKM matrix. Phenomenologically, there are three classes of possible parametrizations, $\delq=\alpha,\beta,$ or $\gamma$ of the unitarity triangle. For the choice of $\delq=\alpha$, the phase is close to a maximal one.
Contributors: Joshi, Chaitanya, Larson, Jonas
Date: 2015-06-30
... Prospects for reaching persistent entanglement between two spatially separated atomic Bose-Einstein condensates are outlined. The system set-up comprises of two condensates loaded in an optical lattice, which, in return, is confined within a high-Q optical resonator. The system is driven by an external laser that illuminates the atoms such that photons can scatter into the cavity. In the superradiant phase a cavity field is established and we show that the emerging cavity mediated interactions between the two condensates is capable of entangling them despite photon losses. This macroscopic atomic entanglement is sustained throughout the time-evolution apart from occasions of sudden deaths/births. Using an auxiliary photon mode and coupling it to a collective quadrature of the two condensates we demonstrate that the auxiliary mode's squeezing is proportional to the atomic entanglement and as such it can serve as a probe field of the macroscopic entanglement.
Coarse-grained modelling of strong DNA bending I: Thermodynamics and comparison to an experimental "molecular vice"
Contributors: Harrison, Ryan M., Romano, Flavio, Ouldridge, Thomas E., Louis, Ard A., Doye, Jonathan P. K.
Date: 2015-06-30
... DNA bending is biologically important for genome regulation and is relevant to a range of nanotechnological systems. Recent results suggest that sharp bending is much easier than implied by the widely-used worm-like chain model; many of these studies, however, remain controversial. We use a coarse-grained model, previously fitted to DNA's basic thermodynamic and mechanical properties, to explore strongly bent systems. We find that as the end-to-end distance is decreased sufficiently short duplexes undergo a transition to a state in which the bending strain is localized at a flexible kink that involves disruption of base-pairing and stacking. This kinked state, which is not well-described by the worm-like chain model, allows the duplex to more easily be sharply bent. It is not completely flexible, however, due to constraints arising from the connectivity of both DNA backbones. We also perform a detailed comparison to recent experiments on a "molecular vice" that probes highly bent DNA. Close agreement between simulations and experiments strengthens the hypothesis that localised bending via kinking occurs in the molecular vice and causes enhanced flexibility of duplex DNA. Our calculations therefore suggests that the cost of kinking implied by this experiment is consistent with the known thermodynamic and mechanical properties of DNA.
Contributors: Holt, Nathan P. M., Hohler, Paul M., Rapp, Ralf
Date: 2015-06-30
... We investigate thermal photon emission rates in hot hadronic matter from a system consisting of pi, rho, and omega mesons. The rates are calculated using both relativistic kinetic theory with Born diagrams as well as thermal field theory at the two-loop level. This enables us to cross-check our calculations and to manage a pole contribution that arises in the Born approximation corresponding to the omega -> pi^0 gamma radiative decay. After implementing hadronic form factors to account for finite-size corrections, we find that the resulting photo-emission rates are comparable to existing results from pi rho -> pi gamma processes in the energy regime of 1-3 GeV. We expect that our new sources will provide a non-negligible contribution to the total hadronic rates, thereby enhancing calculated thermal photon spectra from heavy-ion collisions, which could improve the description of current direct-photon data from experiment.
Contributors: Omodei, Elisa, De Domenico, Manlio, Arenas, Alex
Date: 2015-06-30
... Nowadays, millions of people interact on a daily basis on online social media like Facebook and Twitter, where they share and discuss information about a wide variety of topics. In this paper, we focus on a specific online social network, Twitter, and we analyze multiple datasets each one consisting of individuals' online activity before, during and after an exceptional event in terms of volume of the communications registered. We consider important events that occurred in different arenas that range from policy to culture or science. For each dataset, the users' online activities are modeled by a multilayer network in which each layer conveys a different kind of interaction, specifically: retweeting, mentioning and replying. This representation allows us to unveil that these distinct types of interaction produce networks with different statistical properties, in particular concerning the degree distribution and the clustering structure. These results suggests that models of online activity cannot discard the information carried by this multilayer representation of the system, and should account for the different processes generated by the different kinds of interactions. Secondly, our analysis unveils the presence of statistical regularities among the different events, suggesting that the non-trivial topological patterns that we observe may represent universal features of the social dynamics on online social networks during exceptional events.
Contributors: Alayrac, Jean-Baptiste, Bojanowski, Piotr, Agrawal, Nishant, Sivic, Josef, Laptev, Ivan, Lacoste-Julien, Simon
Date: 2015-06-30
... We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. First, we develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method solves two clustering problems, one in text and one in video, applied one after each other and linked by joint constraints to obtain a single coherent sequence of steps in both modalities. Second, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains about 800,000 frames for five different tasks that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings. Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos.
Contributors: Evans, Tyler J., Fialowski, Alice
Date: 2015-06-30
... In this paper, we describe restricted one-dimensional central extensions of all finite dimensional simple restricted Lie algebras defined over fields of characteristic $p\ge 5$.
|
Errors and residuals

In statistics and optimization, statistical errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "theoretical value". The error (or disturbance) of an observed value is the deviation of the observed value from the (unobservable) true function value, while the residual of an observed value is the difference between the observed value and the estimated function value.

Introduction
Suppose there is a series of observations from a univariate distribution and we want to estimate the mean of that distribution (the so-called location model). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean.
A statistical error (or disturbance) is the amount by which an observation differs from its expected value, the latter being based on the whole population from which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being the mean of the entire population, is typically unobservable, and hence the statistical error cannot be observed either.
A residual (or fitting error), on the other hand, is an observable estimate of the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample of n people. The sample mean could serve as a good estimator of the population mean. Then we have: the difference between the height of each man in the sample and the unobservable population mean is a statistical error, whereas the difference between the height of each man in the sample and the observable sample mean is a residual.
Note that the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarily not independent. The statistical errors, on the other hand, are independent, and their sum within the random sample is almost surely not zero.

Example

Suppose we have a random sample $X_1, \dots, X_n\sim N(\mu,\sigma^2)$. Then the sample mean

$$\overline{X}=\frac{X_1 + \cdots + X_n}{n}$$

is a random variable distributed as

$$\overline{X}\sim N(\mu, \sigma^2/n).$$
The statistical errors are then

$$\varepsilon_i=X_i-\mu,$$

whereas the residuals are

$$\widehat{\varepsilon}_i=X_i-\overline{X}.$$

(As is often done, the "hat" over the letter ε indicates an observable estimate of an unobservable quantity called ε.) The sum of squares of the statistical errors, divided by $\sigma^2$, has a chi-squared distribution with $n$ degrees of freedom:

$$\sum_{i=1}^n \left(X_i-\mu\right)^2/\sigma^2\sim\chi^2_n.$$

This quantity, however, is not observable. The sum of squares of the residuals, on the other hand, is observable. The quotient of that sum by $\sigma^2$ has a chi-squared distribution with only $n-1$ degrees of freedom:

$$\sum_{i=1}^n \left(\,X_i-\overline{X}\,\right)^2/\sigma^2\sim\chi^2_{n-1}.$$
This difference between $n$ and $n-1$ degrees of freedom results in Bessel's correction for the estimation of the sample variance of a population with unknown mean and unknown variance, though if the mean is known, no correction is necessary.
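A quick simulation sketch (hypothetical numbers) of both facts: residuals sum to zero exactly, and their sum of squares has mean $(n-1)\sigma^2$ rather than $n\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n, trials = 10.0, 2.0, 25, 100_000

X = rng.normal(mu, sigma, size=(trials, n))
errors = X - mu                                # unobservable in practice
residuals = X - X.mean(axis=1, keepdims=True)  # observable

print(np.abs(residuals.sum(axis=1)).max())           # ~0: residuals sum to zero
print((errors**2).sum(axis=1).mean() / sigma**2)     # ~n   = 25
print((residuals**2).sum(axis=1).mean() / sigma**2)  # ~n-1 = 24
```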
It is remarkable that the sum of squares of the residuals and the sample mean can be shown to be independent of each other, using, e.g., Basu's theorem. That fact, and the normal and chi-squared distributions given above, form the basis of calculations involving the quotient

$$\frac{\overline{X}_n - \mu}{S_n/\sqrt{n}}.$$

The probability distributions of the numerator and the denominator separately depend on the value of the unobservable population standard deviation $\sigma$, but $\sigma$ appears in both the numerator and the denominator and cancels. That is fortunate because it means that even though we do not know $\sigma$, we know the probability distribution of this quotient: it has a Student's t-distribution with $n-1$ degrees of freedom. We can therefore use this quotient to find a confidence interval for $\mu$.

Regressions
In regression analysis, the distinction between errors and residuals is subtle and important, and leads to the concept of studentized residuals. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from the fitted function are the residuals.
However, a terminological difference arises in the expression mean squared error (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed residuals, and not of the unobservable errors. If that sum of squares is divided by $n$, the number of observations, the result is the mean of the squared residuals. Since this is a biased estimate of the variance of the unobserved errors, the bias is removed by multiplying the mean of the squared residuals by $n/df$, where $df$ is the number of degrees of freedom ($n$ minus the number of parameters being estimated); equivalently, one divides the sum of squared residuals by $df$. The resulting formula serves as an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error. [1]
Another way to calculate the mean square of error arises when analyzing the variance of linear regression using a technique like that used in ANOVA (they are the same because ANOVA is a type of regression): the sum of squares of the residuals (aka the sum of squares of the error) is divided by the degrees of freedom, where the degrees of freedom equal $n-p-1$ and $p$ is the number of "parameters" or predictors used in the model (i.e. the number of variables in the regression equation). One can then also calculate the mean square of the model by dividing the sum of squares of the model by its degrees of freedom, which is just the number of parameters. Then the F value can be calculated by dividing MS(model) by MS(error), from which one can determine significance (which is why you want the mean squares to begin with). [2]
However, because of the behavior of the process of regression, the distributions of residuals at different data points (of the input variable) may vary even if the errors themselves are identically distributed. Concretely, in a linear regression where the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will be higher than the variability of residuals at the ends of the domain: linear regressions fit endpoints better than the middle. This is also reflected in the influence functions of various data points on the regression coefficients: endpoints have more influence.
Thus to compare residuals at different inputs, one needs to adjust the residuals by the expected variability of residuals, which is called studentizing. This is particularly important in the case of detecting outliers: a large residual may be expected in the middle of the domain, but considered an outlier at the end of the domain.

Stochastic error
The stochastic error in a measurement is the error that is random from one measurement to the next. Stochastic errors tend to be Gaussian (normal) in their distribution. That's because the stochastic error is most often the sum of many random errors, and when many random errors are added together, the distribution of their sum looks Gaussian, as shown by the Central Limit Theorem. A stochastic error is added to a regression equation to introduce all the variation in Y that cannot be explained by the included Xs. It is, in effect, a symbol of our inability to model all the movements of the dependent variable.
Other uses of the word "error" in statistics
The use of the term "error" as discussed in the sections above is in the sense of a deviation of a value from a hypothetical unobserved value. At least two other uses also occur in statistics, both referring to observable prediction errors:
Mean square error or mean squared error (abbreviated MSE) and root mean square error (RMSE) refer to the amount by which the values predicted by an estimator differ from the quantities being estimated (typically outside the sample from which the model was estimated). Sum of squared errors, typically abbreviated SSE or SSe, refers to the residual sum of squares (the sum of squared residuals) of a regression; this is the sum of the squares of the deviations of the actual values from the predicted values, within the sample used for estimation. Likewise, the sum of absolute errors (SAE) refers to the sum of the absolute values of the residuals, which is minimized in the least absolute deviations approach to regression.

See also: Absolute deviation; Consensus forecasts; Deviation (statistics); Error detection and correction; Explained sum of squares; Innovation (signal processing); Innovations vector; Lack-of-fit sum of squares; Margin of error; Mean absolute error; Propagation of error; Regression dilution; Root mean square deviation; Sampling error; Studentized residual; Type I and type II errors.

References
[1] Steel, Robert G. D.; Torrie, James H. (1960). Principles and Procedures of Statistics, with Special Reference to Biological Sciences. McGraw-Hill. p. 288.
[2] Zelterman, Daniel (2010). Applied Linear Models with SAS. Cambridge: Cambridge University Press.
Cook, R. Dennis; Weisberg, Sanford (1982). Residuals and Influence in Regression. New York.
Weisberg, Sanford (1985). Applied Linear Regression (2nd ed.). New York: Wiley.
Hazewinkel, Michiel, ed. (2001), "Errors, theory of".
|
I am generating random 64-bit numbers, in groups of 1000 numbers. How many groups can I expect to generate before there is a collision within one of the groups? Again, I don't care if anything in the first group collides with anything from the second group, only if numbers collide within the same group.
There are $2^{64 \cdot 1000}$ combinations for the batch of 1000. Probability of having a collision within this batch is
$$ 1 - \frac{1}{2^{64 \cdot 1000}} 2^{64} ( 2^{64}-1 ) \ldots (2^{64} - 999) \approx 2.7 \cdot 10^{-14} $$
The expected number of batches is the reciprocal of this probability, which is about $3.7 \cdot 10^{13}$.
Working approximately, there are ${1000 \choose 2} = \frac{1000\cdot 999}{2}=499500$ pairs in a group, with each pair having a collision probability of $2^{-64}$, so the chance in one group is $499500 \cdot 2^{-64} \approx 2.7 \cdot 10^{-14}$. We can argue about factors like $e$, but you should expect to generate of order $3.7 \cdot 10^{13}$ groups before a collision.
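A minimal numerical check of both answers (exact product in log space versus the pairs approximation):

```python
from math import comb, expm1, log1p

BITS, GROUP = 64, 1000
space = 2 ** BITS

# exact probability of at least one collision in a group, computed in log space
log_p_no_collision = sum(log1p(-k / space) for k in range(GROUP))
p_collision = -expm1(log_p_no_collision)   # 1 - exp(...), without underflow

approx = comb(GROUP, 2) / space            # birthday "pairs" approximation
print(p_collision, approx)                 # both ~ 2.7e-14
print(1 / p_collision)                     # expected groups ~ 3.7e13
```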
|
There is a beautiful formula to count the number of ideals $I$ in the ring of integers $\mathcal{O}_K$ of a number field $K$, given by
$$\sum_{n \leq X} a_n \sim C_K X,$$
where $a_n$ is the number of ideals in $\mathcal{O}_K$ of norm $n$ and $C_K$ is the residue at $s = 1$ of the Dedekind zeta function $\zeta_K$ of $K$. The quantity $C_K$ is used in the class number formula, see: https://en.wikipedia.org/wiki/Class_number_formula
Indeed, one can further refine the result by counting principal ideals. Let $b_n$ denote the number of principal ideals in $\mathcal{O}_K$ of norm $n$. Then
$$\displaystyle \sum_{n \leq X} b_n \sim \frac{C_K}{h_K} X,$$
where $h_K$ is the class number of $K$.
Now let $K = \mathbb{Q}(\sqrt{d})$, where $d$ is a positive fundamental discriminant. Let $\epsilon_d = u_0 + v_0 \sqrt{d}$, where $(u_0, v_0)$ is the smallest positive solution to the Pell equation $x^2 - dy^2 = \pm 4$. Then the regulator of $K$ is given by $\log \epsilon_d$, so the class number formula yields
$$\displaystyle C_K = \frac{2 h(d) \log \epsilon_d}{\sqrt{d}}.$$
Therefore, the number of principal ideals up to $X$ is given by
$$\displaystyle \sum_{n \leq X} b_n \sim \frac{2 \log \epsilon_d}{\sqrt{d}} X.$$
My question is, what about the error term? According to these notes due to Andrew Granville, one gets an error term of $O_d(X^{1/2})$. He did not specify the dependence on $d$. I believe it is not too difficult to obtain $O(\max\{u_0, v_0\} X^{1/2})$ from Granville's arguments, but this seems very large (at least, the dependence on $d$ of the error term is far worse than the dependence on $d$ of the main term). Is there a way to show that the dependence on $d$ is small, say $O(X^{1/2} \log \epsilon_d)$?
|
1. About Unit-selection Speech Synthesis
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech.
And for unit selection: unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, diphones, half-phones, syllables, morphemes, words, phrases, and sentences.

2. Unit-selection Speech Synthesis
In unit-selection speech synthesis, we use a large speech database which provides all of the syllables in a language. For Japanese, for example, we use these pronunciations:
あ か さ た な あ ま や ら わ ん が ざ だ ば ぱ い き し ち に ひ み り ぎ じ び ぴ う く す つ ぬ ふ む ゆ る ぐ ず ぶ ぷ え け せ て ね へ め れ げ ぜ で べ ぺ お こ そ と の ほ も よ ろ を ご ぞ ど ぼ ぽ きゃ しゃ ちゃ にゃ ひゃ みゃ りゃ ぎゃ じゃ びゃ ぴゃ きゅ しゅ ちゅ にゅ ひゅ みゅ りゅ ぎゅ じゅ びゅ ぴゅ きょ しょ ちょ にょ ひょ みょ りょ ぎょ じょ びょ ぴょ きぇ しぇ ちぇ にぇ ひぇ みぇ りぇ ぎぇ じぇ びぇ ぴぇ ふぁ ふぃ ふぇ ふぉ いぁ うぃ うぇ うぉ つぁ つぃ つぇ つぉ すぃ てぃ てゅ とぅ ずぃ でぃ でゅ どぅ ゔぁ ゔぃ ゔ ゔぇ ゔぉ
We need to collect and record the pronunciation of each of these syllables. After that, we add labels for the initials and finals, then perform prosodic parameter extraction and spectral parameter extraction. The results are then loaded into the unit-selection synthesis model, ready for synthesis.
The Unit-selection Speech Synthesis Algorithm
$$ \begin{aligned} C^{c}\left(u_{i-1}, u_{i}\right) &=\sum_{k=1}^{q} w_{k}^{c} C_{k}^{c}\left(u_{i-1}, u_{i}\right) \\ C^{t}\left(t_{i}, u_{i}\right) &=\sum_{j=1}^{p} w_{j}^{t} C_{j}^{t}\left(t_{i}, u_{i}\right) \end{aligned} $$

The selected unit sequence minimizes the total cost:

$$ \hat{u}_{1}^{n}=\arg \min _{u_{1}^{n}}\left\{C\left(t_{1}^{n}, u_{1}^{n}\right)\right\} $$

$$ C\left(t_{1}^{n}, u_{1}^{n}\right)=\sum_{i=1}^{n} C^{t}\left(t_{i}, u_{i}\right)+\sum_{i=2}^{n} C^{c}\left(u_{i-1}, u_{i}\right) $$

$$ C^{c}\left(u_{i-1}, u_{i}\right) : \text { Concatenation cost } $$

$$ C^{t}\left(t_{i}, u_{i}\right) : \text { Target cost } $$
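The argmin above is typically computed with Viterbi-style dynamic programming over the candidate units for each target; a minimal sketch with toy stand-in cost functions (real systems use acoustic and prosodic features for the costs):

```python
def select_units(targets, candidates, target_cost, concat_cost):
    """Viterbi search: candidates[i] lists the units available for target i."""
    # best[i][u] = (cost of the cheapest path ending in unit u, backpointer)
    best = [{u: (target_cost(targets[0], u), None) for u in candidates[0]}]
    for i in range(1, len(targets)):
        layer = {}
        for u in candidates[i]:
            prev, cost = min(((v, c + concat_cost(v, u))
                              for v, (c, _) in best[i - 1].items()),
                             key=lambda t: t[1])
            layer[u] = (cost + target_cost(targets[i], u), prev)
        best.append(layer)
    u, (total, _) = min(best[-1].items(), key=lambda kv: kv[1][0])
    path = [u]                              # backtrack from the cheapest end
    for i in range(len(targets) - 1, 0, -1):
        u = best[i][u][1]
        path.append(u)
    return path[::-1], total

# toy example: "units" are numbers, costs are absolute differences
path, cost = select_units([1.0, 2.0, 3.0],
                          [[0.8, 1.5], [1.9, 2.6], [2.7, 3.2]],
                          target_cost=lambda t, u: abs(t - u),
                          concat_cost=lambda v, u: 0.1 * abs(u - v))
print(path, round(cost, 3))  # [0.8, 1.9, 3.2]
```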
REFERENCES
1. https://en.wikipedia.org/wiki/Speech_synthesis
2. Expressive Prosody for Unit-selection Speech Synthesis
|
Finding the lengths of the arcs
If the perimeter of the square is 20 cm, then each side must be 5 cm long.
The sides of the square are the diameters of the semicircles, so the circumferences of the full circles would be $5\times\pi=5\pi$ cm.
As shown in the diagram below, the 4 semicircles make up 2 full circles.
So the total perimeter is $5\pi+5\pi=10\pi$ cm.
Using scale factors
The sides of the square are the diameters of the semicircles, and so the circumferences of the full circles would be $\pi\times\text{diameter}=\pi\times\text{side length}$.
Each semicircle has only half the circumference of a full circle, so its length is $\frac{1}{2}\pi\times\text{side length}$.
So to go from a square to a semicircle, each side length is multiplied by a scale factor of $\frac{1}{2}\pi$. So the perimeter must also be multiplied by this scale factor, and the perimeter of the new shape will be $20\times\frac{1}{2}\pi=10\pi$ cm.
|
Why doesn't the von Neumann hierarchy up to $V_{\omega_1}$ exist in Zermelo set theory, if with Scott's trick you can "count" to $\omega_1$?
To see what goes wrong, let's try to prove something even simpler: that $\omega_1$ exists.
We definitely have the set of all well-orderings of $\omega$ (via Powerset and Separation). Now, via Separation and Choice, we can pick a family $\{R_i: i\in I\}$ of well-orderings of $\omega$ such that each well-ordering of $\omega$ is order-isomorphic with exactly one $R_i$. The usual proof that well-orderings are well-ordered by initial segment embedding goes through, so we can build a well-ordering $W$ of order-type $\omega_1$ by "adding all the $R_i$s together." Of course, this is not an ordinal, it's just a binary relation on a set (specifically, on $\omega\times I$) which is a well-ordering and which has "length $\omega_1$" intuitively.
OK, so we can build a thing of length $\omega_1$; why can't we build $\omega_1$ itself?
Well, what we would want to do is show: "For every well-ordering, there is an ordinal of the same length." (Appropriately formalized.) However, this fact uses Replacement!
Specifically, the usual proof goes roughly as: "Suppose there was a well-ordering $R$ of a set $X$ which is not in order-preserving bijection with any ordinal. We can assume that for each $x\in X$, the set $X_x=\{y\in X: y<_R x\}$ is in order-preserving bijection with some ordinal. (Otherwise, if there is some $x\in X$ such that $\{y\in X: y<_R x\}$ is not in order-preserving bijection with any ordinal, then since $R$ is a well-ordering there is a least such $x$; so just replace $X$ and $R$ with $X_x$ and $R\upharpoonright X_x\times X_x$.) So we have a map from elements of $X$ to ordinals. By Replacement, the image of that map exists and is a downwards-closed set of ordinals, and hence an ordinal; and it's easy to show that this ordinal is in order-preserving bijection with $R$."
In $ZC$, however, there is no reason for the class of ordinals corresponding to elements of $X$ to be a set! So the proof breaks down here.
In fact, it’s worse: in $ZC$ we can’t even prove that $\omega+\omega$ exists! (And the only way $ZC$ knows $\omega$ exists is by having it "built in" via the Axiom of Infinity.)
Technically, this doesn't answer your question: all I've explained is why the usual proof that $\omega_1$ exists breaks down. I haven't shown that no proof exists. In order to prove that, really, $ZC$ can't prove $\omega+\omega$ exists, it suffices to show that $V_{\omega+\omega}$ is a model of $ZC$. (This is a fun exercise!)
EDIT: You may be interested in the paper "Slim models of Zermelo set theory" by Mathias (https://www.dpmms.cam.ac.uk/~ardm/slim.dvi) - it focuses specifically on recursive definitions in Z(C), and where they break. Note that the first sentence of the abstract confirms my statement that Z alone cannot prove that $V_\omega$ exists.
|
I am wondering about generalisations of the concept of equivalence relations to suplattices.
Here is my motivation: Given a set $X$. The powerset $\mathcal{P}(X)$ is a suplattice. For suplattices there is a tensor product, giving $\mathcal{P}(X)\otimes\mathcal{P}(X)=\mathcal{P}(X\times X)$. Now we can define the diagonal $\Delta:=\left\{(x,x)\mid x\in X\right\}\in\mathcal{P}(X\times X)$, which is an equivalence relation. Take a general suplattice $M$. There is a tensor product $M\otimes M$, which can be embedded into the suplattice $\mathcal{P}(M\times M)$ (reference). The elements of $\mathcal{P}(M\times M)$ can be projected to the first (or to the second) component and you can take the supremum of the resulting subset of $M$. Thus you get projections $\pi_0,\pi_1\colon M\otimes M\to M$, preserving suprema.
In this context you could call a $\Delta\in M\otimes M$ an "equivalence relation in $M$" if it satisfies these conditions:
Symmetry: for all $a,b\in M$ such that $a\otimes b\le\Delta$ we have $b\otimes a\le\Delta$.
Transitivity: for all $a,b,c\in M$ such that $a\otimes b\le\Delta$ and $b\otimes c\le\Delta$ we have $a\otimes c\le\Delta$.
Reflexivity: the projections of the relation are maximal: $\pi_0(\Delta)=\pi_1(\Delta)=\top$, where $\top\in M$ is the top of the lattice.
Clearly, the diagonal is an equivalence relation in $\mathcal{P}(X)$. However, I am interested in ways to construct such relations in non-trivial cases (i.e. where the lattice is not the powerset of a set and $\Delta$ should not be the top itself). Is there such a concept in the literature? Has it been studied? Or do you have any comment? Non-trivial examples (maybe there are even canonical examples?) or objections regarding the soundness of the definition are welcome.
My original motivation for regarding these concepts is the definition of the semantics of equality in certain logics interpreted in $\mathcal{P}(X\times\ldots\times X)$ using the diagonal—I am not an expert in the theory of suplattices etc. Best regards
|
The question:
(a) Let $f$ be a continuous periodic complex valued function with period $2\pi $.Then prove that given $\epsilon >0,$ there exists a function $Q(x)=\sum_{k=-M}^{M}c_{k}e^{ikx}$ with $c_k\in \mathbb{C}$ and $M\in \mathbb{N}$ defined on $[-\pi ,\pi ]$ such that $\int_{-\pi }^{\pi }\left | f(x)-Q(x) \right |dx<\epsilon $.
(b) Suppose $f$ is a continuous function on $R^{1}$, $f(x+2\pi )=f(x)$, and $\frac{\alpha }{\pi }$ is irrational. Prove that $\lim_{N\rightarrow \infty }\frac{1}{N}\sum_{n=1}^{N}f(x+n\alpha )=\frac{1}{2\pi }\int_{-\pi }^{\pi }f(t)dt$
for every $x$. Hint: Do it first for $f(x)=e^{ikx}$.
My attempt:
(a) $f$ is continuous on $[-\pi ,\pi ]$, so $f$ is Riemann integrable on $[-\pi ,\pi ]$.
Given $\epsilon >0$, there exists a continuous function $g$ such that $\int_{-\pi}^{\pi}\left | f(x)-g(x) \right |dx<\epsilon $. I wonder how $Q(x)$ can be chosen to approximate $g(x)$ here.
(b) I used the hint and put $f(x)=e^{ikx}$ and found that both sides of the equality are $0$, but I don't know how to extend this to trigonometric polynomials, or why the condition "$\frac{\alpha }{\pi }$ is irrational" is needed.
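For what it's worth, here is a sketch of the key computation behind the hint, which shows exactly where irrationality enters. For $f(x)=e^{ikx}$ with $k\neq 0$,

$$\frac{1}{N}\sum_{n=1}^{N}e^{ik(x+n\alpha)}=\frac{e^{ikx}}{N}\cdot e^{ik\alpha}\,\frac{e^{ikN\alpha}-1}{e^{ik\alpha}-1}\longrightarrow 0=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{ikt}\,dt,$$

where the geometric sum is legitimate only because $e^{ik\alpha}\neq 1$: if $\frac{\alpha}{\pi}$ were rational, then $k\alpha$ would be a multiple of $2\pi$ for some $k\neq 0$, and the average would instead converge to $e^{ikx}\neq 0$. For $k=0$ both sides equal $1$; general $f$ then follows by approximating with the trigonometric polynomials from part (a).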
|