Difference between revisions of "Image Dimensions"
(Added a bit of description of the properties dialog)
Revision as of 14:52, 18 September 2008 Describing the fields of the Canvas Properties Dialog
The user accesses the image dimensions in the Canvas Properties Dialog.
The 'Others' tab
Here some properties can simply be locked (such that they can't be changed) and linked (so that changes in one entry simultaneously change other entries as well).
The 'Image' tab
Obviously here the image dimensions can be set. There seem to be basically three groups of fields to edit:
;The on-screen size(?): The fields ''Width'' and ''Height'' tell synfigstudio how many pixels the image shall cover at a zoom level of 100%.
;The physical size: The physical width and height should tell how big the image is on some physical media. That could be when printing out images on paper, or maybe even on transparencies or film. Not all file formats can save this on exporting/rendering images.
;The mysterious ''Image Area'': Given as two points (upper-left and lower-right corner) which also define the image span (Pythagoras: <math>\text{span}=\sqrt{\Delta x^2 + \Delta y^2}</math>). The unit seems to be not pixels but ''unit''s, which are at [[Unit System|60 pixels each]].
|
Thanks all for participating in POW actively. Here's the list of winners:

1st prize: Lee, Myeongjae (이명재) – Class of 2012
2nd prize: Kim, Taeho (김태호) – Dept. of Mathematical Sciences, Class of 2011
3rd prize: Park, Minjae (박민재) – Class of 2011
4th prize: Suh, Gee Won (서기원) – Dept. of Mathematical Sciences, Class of 2009
5th prize: Lim, Hyunjin (임현진) – Dept. of Physics, Class of 2010

Congratulations! We again have very good prizes this semester – iPad 16GB for the 1st prize, iPad Mini 16GB for the 2nd prize, etc.
Consider all non-empty subsets \(S_1,S_2,\ldots,S_{2^n-1}\) of \(\{1,2,3,\ldots,n\}\). Let \(A=(a_{ij})\) be a \((2^n-1)\times(2^n-1)\) matrix such that \[a_{ij}=\begin{cases}1 & \text{if }S_i\cap S_j\ne \emptyset,\\0&\text{otherwise.}\end{cases}\] What is \(\lvert\det A\rvert\)?
The best solution was submitted by Kim, Taeho (김태호), Dept. of Mathematical Sciences, Class of 2011. Congratulations!
Here is his Solution of Problem 2012-24.
Alternative solutions were submitted by 이명재 (Class of 2012, +3), 임현진 (Dept. of Physics, Class of 2010, +3), 정종헌 (Class of 2012, +2), 어수강 (M.S. student, Dept. of Mathematical Sciences, Seoul National University, +3).
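A brute-force check of small cases can guide a conjecture before attempting the proof. The sketch below (plain Python; the helper names are mine, not part of the problem, and exact `Fraction` arithmetic avoids any floating-point doubt) builds the matrix $A$ for small $n$ and prints $\lvert\det A\rvert$:

```python
from fractions import Fraction
from itertools import combinations

def intersection_matrix(n):
    # Rows/columns indexed by the non-empty subsets of {1, ..., n}
    subsets = [frozenset(c)
               for k in range(1, n + 1)
               for c in combinations(range(1, n + 1), k)]
    return [[1 if s & t else 0 for t in subsets] for s in subsets]

def det(matrix):
    # Exact determinant by Gaussian elimination over the rationals
    m = [[Fraction(x) for x in row] for row in matrix]
    size, d = len(m), Fraction(1)
    for i in range(size):
        pivot = next((r for r in range(i, size) if m[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)          # singular matrix
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            d = -d                      # row swap flips the sign
        d *= m[i][i]
        for r in range(i + 1, size):
            f = m[r][i] / m[i][i]
            for c in range(i, size):
                m[r][c] -= f * m[i][c]
    return d

for n in (1, 2, 3):
    print(n, abs(det(intersection_matrix(n))))
```

For $n=1$ and $n=2$ the matrices are small enough to check by hand ($\lvert\det A\rvert = 1$ in both cases), which the code confirms.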
Prove that for each positive integer \(n\), there exist \(n\) real numbers \(x_1,x_2,\ldots,x_n\) such that \[\sum_{j=1}^n \frac{x_j}{1-4(i-j)^2}=1 \text{ for all }i=1,2,\ldots,n\] and \[\sum_{j=1}^n x_j=\binom{n+1}{2}.\]
The best solution was submitted by Taehyun Eom (엄태현), Class of 2012. Congratulations!
Here is his Solution of Problem 2012-23.
Alternative solutions were submitted by 박민재 (Class of 2011, +3, Solution), 김태호 (Dept. of Mathematical Sciences, Class of 2011, +2), 이명재 (Class of 2012, +2).
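The claimed identity can be verified numerically for small $n$. The coefficient matrix $M_{ij} = 1/(1-4(i-j)^2)$ is strictly diagonally dominant (its off-diagonal entries have absolute values $1/(4k^2-1)$, which telescope to a sum below $1$), so the system has a unique solution; solving it exactly with rationals (a sketch of mine, not part of the problem) confirms that the sum equals $\binom{n+1}{2}$:

```python
from fractions import Fraction
from math import comb

def solve_system(n):
    # Augmented matrix: M[i][j] = 1 / (1 - 4 (i - j)^2), right-hand side 1
    m = [[Fraction(1, 1 - 4 * (i - j) ** 2) for j in range(n)] + [Fraction(1)]
         for i in range(n)]
    for i in range(n):                  # exact Gauss-Jordan elimination
        p = next(r for r in range(i, n) if m[r][i] != 0)
        m[i], m[p] = m[p], m[i]
        for r in range(n):
            if r != i:
                f = m[r][i] / m[i][i]
                for c in range(i, n + 1):
                    m[r][c] -= f * m[i][c]
    return [m[i][n] / m[i][i] for i in range(n)]

for n in range(1, 6):
    x = solve_system(n)
    print(n, sum(x))
    assert sum(x) == comb(n + 1, 2)     # the claimed identity, exactly
```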
|
Ab Initio “A journey of a thousand miles begins with a single step.” –Lao Tzu.
When someone is studying competitive programming from the beginning, he or she is said to be a student of competitive programming
ab initio. Students should be given relatively easy problems to begin their journeys, before they are gradually given more difficult problems.
Here is an example of an easy problem that does not even need a description.
Input
The first line of input contains three integers $V$ ($2 \leq V \leq 2\, 000$), $E$ ($1 \leq E \leq 200\, 000$), and $Q$ ($1 \leq Q \leq 2\, 000$), the number of vertices, the number of edges and the number of queries that must be performed on a
directed, unweighted graph $G$, respectively. For simplicity, we label the vertices $0, 1, 2, \dots , V-1$.
The next $E$ lines describe the edges of $G$, given in the format of an
edge list. In particular, the $i^\text {th}$ of these lines contains two integers $A_ i$ and $B_ i$ ($0\leq A_ i, B_ i < V$; $A_ i \neq B_ i$), denoting that the $i^\text {th}$ edge connects vertex $A_ i$ to vertex $B_ i$. It is guaranteed that for any pair of vertices $(a, b)$, there is at most one edge from $a$ to $b$.
The next $Q$ lines contain the queries. In particular, the $i^\text {th}$ of these lines begins with a single integer:
If this integer is $1$, no integers follow.
You should add a new vertex labeled $V$ to $G$. This vertex should not have edges to or from any other vertex. $V$ – the current size of $G$ – now increases by 1.
If this integer is $2$, two integers $X_ i$ and $Y_ i$ ($0\leq X_ i, Y_ i < V$; $X_ i \neq Y_ i$) follow.
You should add a new directed edge connecting vertex $X_ i$ to vertex $Y_ i$ in $G$. It is guaranteed that this edge does not currently exist.
If this integer is $3$, a single integer $X_ i$ ($0\leq X_ i < V$) follows.
You should delete all the incoming and outgoing edges of $X_ i$ from the graph $G$.
If this integer is $4$, two integers $X_ i$ and $Y_ i$ ($0\leq X_ i, Y_ i < V$; $X_ i \neq Y_ i$) follow.
You should remove the directed edge connecting vertex $X_ i$ to vertex $Y_ i$ from $G$. It is guaranteed that this edge currently exists.
If this integer is $5$, no integers follow.
You should replace $G$ with its transpose $G’$, defined as follows:
For every pair of vertices $(a, b)$, the edge from $a$ to $b$ exists in $G’$ if and only if $a \neq b$ and the edge from $b$ to $a$ exists in $G$.
If this integer is $6$, no integers follow.
You should replace $G$ with its complement $\bar{G}$, defined as follows:
For every pair of vertices $(a, b)$, the edge from $a$ to $b$ exists in $\bar{G}$ if and only if $a \neq b$ and the edge from $a$ to $b$ does not exist in $G$.
Output
After performing all the queries, you will have a final graph $G$. Since this graph can be very large, we will not ask you to output the entire graph. Instead, by doing the below, you can convince us that you indeed have the required graph.
On the first line, output a single integer $V$, the number of vertices in the graph $G$.
For each of the next $V$ lines, output two integers. In particular, in the $i^\text {th}$ of these lines, output:
$d_ i$, the outdegree of vertex $i$, and
$h_ i$, the hash of the adjacency list of vertex $i$, defined as follows. Suppose the vertices in the out-neighborhood of vertex $i$ are $n_1 < n_2 < \dots < n_{d_ i}$. Then\begin{equation*} h_ i = 7^0\cdot n_1 + 7^1\cdot n_2 + 7^2\cdot n_3 + \dots + 7^{d_ i-1}\cdot n_{d_ i} \end{equation*}
Since $h_ i$ can be quite large, you should output only the remainder after dividing this number by $10^9+7$.
Sample Input 1
3 2 8
0 1
0 2
2 1 2
1
2 3 0
4 0 2
3 0
2 0 3
6
5

Sample Output 1
4
3 162
3 161
2 21
2 15
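The queries above can be prototyped directly. Below is a plain reference implementation (my own sketch, assumed Python): it performs each operation literally on an edge set, which is far too slow for the largest stated limits (a contest solution would keep adjacency bitsets with lazy transpose/complement flags), but it reproduces the sample:

```python
MOD = 10**9 + 7

def solve(tokens):
    it = iter(tokens)
    V, E, Q = int(next(it)), int(next(it)), int(next(it))
    edges = set()
    for _ in range(E):
        a, b = int(next(it)), int(next(it))
        edges.add((a, b))
    for _ in range(Q):
        t = int(next(it))
        if t == 1:                     # add an isolated vertex labeled V
            V += 1
        elif t == 2:                   # add edge X -> Y
            x, y = int(next(it)), int(next(it))
            edges.add((x, y))
        elif t == 3:                   # delete all edges incident to X
            x = int(next(it))
            edges = {(a, b) for (a, b) in edges if x not in (a, b)}
        elif t == 4:                   # remove edge X -> Y
            x, y = int(next(it)), int(next(it))
            edges.discard((x, y))
        elif t == 5:                   # transpose
            edges = {(b, a) for (a, b) in edges}
        else:                          # t == 6: complement (no self-loops)
            edges = {(a, b) for a in range(V) for b in range(V)
                     if a != b and (a, b) not in edges}
    out_nbrs = {v: [] for v in range(V)}
    for a, b in edges:
        out_nbrs[a].append(b)
    lines = [str(V)]
    for v in range(V):
        h, p = 0, 1
        for nb in sorted(out_nbrs[v]):  # n_1 < n_2 < ... < n_d
            h = (h + p * nb) % MOD      # h += 7^(k-1) * n_k
            p = (p * 7) % MOD
        lines.append(f"{len(out_nbrs[v])} {h}")
    return "\n".join(lines)

sample = "3 2 8 0 1 0 2 2 1 2 1 2 3 0 4 0 2 3 0 2 0 3 6 5".split()
print(solve(sample))   # matches Sample Output 1
```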
|
The Gell-Mann matrices $\lambda^\alpha$ are the generators of $SU(3)$.
Applying an SU(3) - transformation on the triple $q = ( u , d, s )$ of 4-spinors looks like this:
$$ q \rightarrow q' = e^{i \Phi_\alpha \lambda^\alpha / 2} q.$$
So far I can follow and I also understand why the expression $\bar{q}q$ is invariant under this transformation.
Now my book defines axial transformations as $q \rightarrow q' = e^{i \Phi_\alpha \lambda^\alpha \gamma_5 / 2} q$ and states that the expression $\bar{q}q$ is no longer invariant under this transformation.
What confuses me is the fact that the $\lambda$ generators of $SU(3)$ and $\gamma$ matrices are being multiplied in the exponent, even though the $\lambda$ have 3 and the $\gamma$ have 4 dimensions.
Maybe this is not a matrix product but some sort of tensor product? In that case, how should the exponential expression be understood? I suspect $\lambda$ and $\gamma$ commute as they act on different vector spaces.
Or maybe it is a typo?
Or maybe the $\gamma_5$ is not 4-dimensional in this context?
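The tensor-product reading can be checked concretely. In the sketch below (the matrices are generic stand-ins, not the actual Gell-Mann $\lambda$'s; only the block structure matters), a hand-rolled Kronecker product confirms that $\lambda \otimes 1$ and $1 \otimes \gamma_5$ commute and that their product is $\lambda \otimes \gamma_5$:

```python
# Idea: lambda acts on the 3-dim flavour space, gamma_5 on the 4-dim spinor
# space, so "lambda gamma_5" in the exponent means the Kronecker (tensor)
# product (lambda x 1)(1 x gamma_5) = lambda x gamma_5 on a 12-dim space.

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

# Stand-ins: a symmetric 3x3 "flavour" matrix, and gamma_5 in the Dirac basis
lam = [[0, 1, 2], [1, 0, 3], [2, 3, 1]]
g5  = [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]

L = kron(lam, identity(4))   # lambda x 1 : acts on flavour indices only
G = kron(identity(3), g5)    # 1 x gamma_5: acts on spinor indices only

assert matmul(L, G) == matmul(G, L)   # operators on different factors commute
assert matmul(L, G) == kron(lam, g5)  # and their product is lambda x gamma_5
```

So the exponent is well defined as a $12\times 12$ matrix, and the ordering of $\lambda$ and $\gamma_5$ is immaterial.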
|
I was told that the general definition of a derivative is
[tex]f'(x) = \lim_{\Delta x \rightarrow 0} \frac{\Delta y}{\Delta x}[/tex] (supposed to be delta y over delta x, but I can't make the latex work )
but why can't it work when [itex]\Delta y \rightarrow 0[/itex]?
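A quick numerical illustration (my own toy example, not from the thread) of why the limit is parameterized by Δx rather than Δy:

```python
# f(x) = x^2 at x = 3: Δy is determined by the choice of Δx, not the
# other way around, so it is Δx that we send to 0.
def f(x):
    return x * x

x = 3.0
for dx in (0.1, 0.01, 0.001):
    dy = f(x + dx) - f(x)
    print(dx, dy / dx)        # quotient tends to f'(3) = 6 as dx -> 0

# For a constant function Δy = 0 for *every* Δx, so a condition
# "Δy -> 0" is satisfied trivially and cannot drive the limiting process.
g = lambda t: 5.0
assert g(x + 0.1) - g(x) == 0.0
```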
|
In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel. However, two lines in three-dimensional space which do not meet must be in a common plane to be considered parallel; otherwise they are called skew lines. Parallel planes are planes in the same three-dimensional space that never meet.
Parallel lines are the subject of Euclid's parallel postulate.
[1] Parallelism is primarily a property of affine geometries and Euclidean space is a special instance of this type of geometry. Some other spaces, such as hyperbolic space, have analogous properties that are sometimes referred to as parallelism.
Symbol

The parallel symbol is \parallel. For example, AB \parallel CD indicates that line AB is parallel to line CD.

In the Unicode character set, the "parallel" and "not parallel" signs have codepoints U+2225 (∥) and U+2226 (∦), respectively. In addition, U+22D5 (⋕) represents the relation "equal and parallel to". [2]

Euclidean parallelism

Two lines in a plane

Conditions for parallelism

As shown by the tick marks, lines a and b are parallel. This can be proved because the transversal t produces congruent corresponding angles \theta, shown here both to the right of the transversal, one above and adjacent to line a and the other above and adjacent to line b.

Given parallel straight lines l and m in Euclidean space, the following properties are equivalent:

Every point on line m is located at exactly the same (minimum) distance from line l (equidistant lines).
Line m is in the same plane as line l but does not intersect l (recall that lines extend to infinity in either direction).
When lines m and l are both intersected by a third straight line (a transversal) in the same plane, the corresponding angles of intersection with the transversal are congruent.

Since these are equivalent properties, any one of them could be taken as the definition of parallel lines in Euclidean space, but the first and third properties involve measurement, and so are "more complicated" than the second. Thus, the second property is the one usually chosen as the defining property of parallel lines in Euclidean geometry. [3] The other properties are then consequences of Euclid's Parallel Postulate. Another property that also involves measurement is that lines parallel to each other have the same gradient (slope).

History

The definition of parallel lines as a pair of straight lines in a plane which do not meet appears as Definition 23 in Book I of Euclid's Elements. [4] Alternative definitions were discussed by other Greeks, often as part of an attempt to prove the parallel postulate. Proclus attributes a definition of parallel lines as equidistant lines to Posidonius and quotes Geminus in a similar vein. Simplicius also mentions Posidonius' definition as well as its modification by the philosopher Aganis. [4]

At the end of the nineteenth century, in England, Euclid's Elements was still the standard textbook in secondary schools. The traditional treatment of geometry was being pressured to change by the new developments in projective geometry and non-Euclidean geometry, so several new textbooks for the teaching of geometry were written at this time. A major difference between these reform texts, both among themselves and between them and Euclid, is the treatment of parallel lines. [5] These reform texts were not without their critics, and one of them, Charles Dodgson (a.k.a. Lewis Carroll), wrote a play, Euclid and His Modern Rivals, in which these texts are lambasted. [6]

One of the early reform textbooks was James Maurice Wilson's Elementary Geometry of 1868. [7] Wilson based his definition of parallel lines on the primitive notion of direction.

Other properties, proposed by other reformers, used as replacements for the definition of parallel lines, did not fare much better. The main difficulty, as pointed out by Dodgson, was that to use them in this way required additional axioms to be added to the system. The equidistant line definition of Posidonius, expounded by Francis Cuthbertson in his 1874 text Euclidean Geometry, suffers from the problem that the points that are found at a fixed given distance on one side of a straight line must be shown to form a straight line. This cannot be proved and must be assumed to be true. [11] The corresponding angles formed by a transversal property, used by W. D. Cooley in his 1860 text, The Elements of Geometry, simplified and explained, requires a proof of the fact that if one transversal meets a pair of lines in congruent corresponding angles then all transversals must do so. Again, a new axiom is needed to justify this statement. [10]

Construction

The three properties above lead to three different methods of construction [12] of parallel lines.

Constructing a parallel line through a given point with compass and straightedge.

The problem: Draw a line through a parallel to l.

Property 1: Line m has everywhere the same distance to line l.
Property 2: Take a random line through a that intersects l in x. Move point x to infinity.
Property 3: Both l and m share a transversal line through a that intersects them at 90°.

Distance between two parallel lines

Because parallel lines in a Euclidean plane are equidistant, there is a unique distance between the two parallel lines. Given the equations of two non-vertical, non-horizontal parallel lines,

y = mx+b_1
y = mx+b_2,

the distance between the two lines can be found by locating two points (one on each line) that lie on a common perpendicular to the parallel lines and calculating the distance between them. Since the lines have slope m, a common perpendicular would have slope −1/m, and we can take the line with equation y = −x/m as a common perpendicular. Solve the linear systems

\begin{cases} y = mx+b_1 \\ y = -x/m \end{cases}

and

\begin{cases} y = mx+b_2 \\ y = -x/m \end{cases}

to get the coordinates of the points. The solutions to the linear systems are the points

\left( x_1,y_1 \right) = \left( \frac{-b_1m}{m^2+1},\frac{b_1}{m^2+1} \right)

and

\left( x_2,y_2 \right) = \left( \frac{-b_2m}{m^2+1},\frac{b_2}{m^2+1} \right).

These formulas still give the correct point coordinates even if the parallel lines are horizontal (i.e., m = 0). The distance between the points is

d = \sqrt{\left(\frac{b_1m-b_2m}{m^2+1}\right)^2 + \left(\frac{b_2-b_1}{m^2+1}\right)^2},

which reduces to

d = \frac{|b_2-b_1|}{\sqrt{m^2+1}}.

When the lines are given by the general form of the equation of a line (horizontal and vertical lines are included):

ax+by+c_1=0
ax+by+c_2=0,

their distance can be expressed as

d = \frac{|c_2-c_1|}{\sqrt{a^2+b^2}}.

Two lines in three-dimensional space

Two lines in the same three-dimensional space that do not intersect need not be parallel. Only if they are in a common plane are they called parallel; otherwise they are called skew lines.

Two distinct lines l and m in three-dimensional space are parallel if and only if the distance from a point P on line m to the nearest point on line l is independent of the location of P on line m. This never holds for skew lines.

A line and a plane

A line m and a plane q in three-dimensional space, the line not lying in that plane, are parallel if and only if they do not intersect.

Equivalently, they are parallel if and only if the distance from a point P on line m to the nearest point in plane q is independent of the location of P on line m.

Two planes

Similar to the fact that parallel lines must be located in the same plane, parallel planes must be situated in the same three-dimensional space and contain no point in common.

Two distinct planes q and r are parallel if and only if the distance from a point P in plane q to the nearest point in plane r is independent of the location of P in plane q. This will never hold if the two planes are not in the same three-dimensional space.

Extension to non-Euclidean geometry

In non-Euclidean geometry, it is more common to talk about geodesics than (straight) lines. A geodesic is the shortest path between two points in a given geometry. In physics this may be interpreted as the path that a particle follows if no force is applied to it. In non-Euclidean geometry (elliptic or hyperbolic geometry) the three Euclidean properties mentioned above are not equivalent and only the second one, since it involves no metrics, is useful in non-Euclidean geometries. In general geometry the three properties above give three different types of curves: equidistant curves, parallel geodesics and geodesics sharing a common perpendicular, respectively.

While in Euclidean geometry two geodesics can either intersect or be parallel, in general, and in hyperbolic space in particular, there are three possibilities. Two geodesics can be:

intersecting, if they intersect in a common point in the plane,
parallel, if they do not intersect in the plane, but have a common limit point at infinity, or
ultra parallel, if they do not have a common limit point at infinity.

In the literature, ultra parallel geodesics are often called non-intersecting. Geodesics intersecting at infinity are then called limit geodesics.

Spherical

In spherical geometry, all geodesics are great circles. Great circles divide the sphere in two equal hemispheres and all great circles intersect each other. Thus, there are no parallel geodesics to a given geodesic, as all geodesics intersect. Equidistant curves on the sphere are called parallels of latitude, analogous to the latitude lines on a globe. Parallels of latitude can be generated by the intersection of the sphere with a plane parallel to a plane through the center of the sphere.

On the sphere there is no such thing as a parallel line. Line a is a great circle, the equivalent of a straight line in spherical geometry. Line c is equidistant to line a but is not a great circle. It is a parallel of latitude. Line b is another geodesic which intersects a in two antipodal points. They share two common perpendiculars (one shown in blue).

Reflexive variant

In synthetic, affine geometry the relation of two parallel lines is a fundamental concept that is modified from the usage in Euclidean geometry. It is clear that the relation of parallelism is a symmetric relation and a transitive relation. These are two properties of an equivalence relation. In Euclidean geometry a line is not considered to be parallel to itself, but in affine geometry [13] [14] it is convenient to hold a line as parallel to itself, thus yielding parallelism as an equivalence relation.

Another way of describing this type of parallelism is the requirement that the lines' intersection is not a singleton. Two lines are then parallel when they have all or none of their points in common. It has been noted that Playfair's axiom used in affine and Euclidean geometry is then equivalent to the statement that parallelism forms a transitive relation on the set of lines in the plane. [15]

Notes

^ Although this postulate only refers to when lines meet, it is needed to prove the uniqueness of parallel lines in the sense of Playfair's axiom.
^ "Mathematical Operators – Unicode Consortium". Retrieved 2013-04-21.
^ Wylie, Jr. 1964, pp. 92–94
^ a b Heath 1956, pp. 190–194
^ Richards 1988, Chap. 4: Euclid and the English Schoolchild. pp. 161–200
^ Carroll, Lewis (2009) [1879], Euclid and His Modern Rivals, Barnes & Noble
^ Wilson 1868
^ Einführung in die Grundlagen der Geometrie, I, p. 5
^ Heath 1956, p. 194
^ Richards 1988, pp. 180–184
^ Heath 1956, p. 194
^ Only the third is a straightedge and compass construction; the first two are infinitary processes (they require an "infinite number of steps").
^ H. S. M. Coxeter (1961) Introduction to Geometry, p. 192, John Wiley & Sons
^ Wanda Szmielew (1983) From Affine to Euclidean Geometry, p. 17, D. Reidel ISBN 90-277-1243-3
^ Andy Liu (2011) "Is parallelism an equivalence relation?", The College Mathematics Journal 42(5):372

References

Heath, Thomas L. (1956), The Thirteen Books of Euclid's Elements, New York: Dover Publications (3 vols.): ISBN 0-486-60088-2 (vol. 1), ISBN 0-486-60089-0 (vol. 2), ISBN 0-486-60090-4 (vol. 3). Heath's authoritative translation plus extensive historical research and detailed commentary throughout the text.
Richards, Joan L. (1988), Mathematical Visions: The Pursuit of Geometry in Victorian England, Boston: Academic Press
Wilson, James Maurice (1868), Elementary Geometry (1st ed.), London: Macmillan and Co.
Wylie, Jr., C.R. (1964), Foundations of Geometry, McGraw–Hill

Further reading

Papadopoulos, Athanase; Théret, Guillaume (2014), La théorie des parallèles de Johann Heinrich Lambert : Présentation, traduction et commentaires, Paris: Collection Sciences dans l'histoire, Librairie Albert Blanchard
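The distance formulas above can be sanity-checked numerically. A small sketch (function names are mine; plain Python, no libraries):

```python
import math

def dist_slope_intercept(m, b1, b2):
    # d = |b2 - b1| / sqrt(m^2 + 1) for the lines y = m x + b1, y = m x + b2
    return abs(b2 - b1) / math.sqrt(m * m + 1)

def dist_general(a, b, c1, c2):
    # d = |c2 - c1| / sqrt(a^2 + b^2) for a x + b y + c1 = 0, a x + b y + c2 = 0
    return abs(c2 - c1) / math.sqrt(a * a + b * b)

# The two forms describe the same pair of lines: y = m x + b  <=>  m x - y + b = 0
m, b1, b2 = 2.0, 1.0, 6.0
assert math.isclose(dist_slope_intercept(m, b1, b2), dist_general(m, -1.0, b1, b2))

# Horizontal case (m = 0): the distance is simply |b2 - b1|
assert dist_slope_intercept(0.0, 1.0, 4.0) == 3.0
```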
|
The eigenstates of a time-independent Hamiltonian, $$ H |\psi_j \rangle = E_j|\psi_j\rangle,$$ have the usual rotating-phase time dependence in the Schrödinger picture: $$ |\psi_j(t)\rangle = |\psi_j(t_0)\rangle \cdot \exp(E_j(t-t_0)/i\hbar). $$ However, your formula indicates that $H_0$ and $V$ are time-dependent. So if the state vector is an eigenvector of $H_0(t)$ at one moment $t$, it will almost certainly not be an eigenstate of $H_0(t')$ at another moment $t'$ – because the operator is probably changing in a way that doesn't preserve the eigenvectors' being eigenvectors. For that reason, the simple "changing phase" Ansatz isn't a solution to the Schrödinger equation.
For a generic Hamiltonian $H_0(t)$, there can't exist any state vector $|\psi(t)\rangle$ that simultaneously solves the Schrödinger equation (with the Hamiltonian $H$, or even with $H_0(t)$) and remains an eigenstate of $H_0(t)$ at each moment of time.
Because you say that $H$ is time-independent and $H_0,V$ seem to be time-dependent, it seems that all the formulae you are describing are in the Heisenberg picture, not Schrödinger's picture. In the Heisenberg picture, the state vector is independent of time (a constant function of $t$). In the Schrödinger's picture, the dependence of $H_0$ and $V$ on time would have to be "explicit" (the dynamical dependence is encoded in the evolution of the state vector in this picture) and in that case, it would be unreasonable for the explicitly time-dependent parts of $H_0,V$ to cancel in $H$.
In the Heisenberg picture, the dynamical time dependence is included in the evolution of the operators,$$H_0(t) = \exp(-Ht/i\hbar) H_0(0) \exp(Ht/i\hbar) $$and similarly for $V$ instead of $H_0$ (and for any operator).
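A toy 2-level example (my own illustration, with $\hbar = 1$ and a hand-picked $H$ and $H_0(0)$, not anything from the question) makes the eigenvector drift concrete:

```python
import cmath

# Toy model: H = sigma_z (time-independent), H0(0) = sigma_x.
# In the Heisenberg picture H0(t) = e^{iHt} H0(0) e^{-iHt}; since
# e^{i sigma_z t} = diag(e^{it}, e^{-it}), conjugating sigma_x gives the
# matrix below.  Its eigenvectors rotate with t, so a fixed state vector
# cannot remain an eigenstate of H0(t) at every moment.

def H0(t):
    return [[0, cmath.exp(2j * t)], [cmath.exp(-2j * t), 0]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

v = [1 / 2 ** 0.5, 1 / 2 ** 0.5]   # eigenvector of H0(0) = sigma_x, eigenvalue +1

w = apply(H0(0.0), v)
assert abs(w[0] - v[0]) < 1e-12 and abs(w[1] - v[1]) < 1e-12  # eigenstate at t = 0

w = apply(H0(1.0), v)              # at t = 1, H0(t) v is no longer proportional to v
assert abs(w[0] / w[1] - 1.0) > 0.1
```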
|
The implication here is that each individual discoverer must start from nothing but a bag of crying cells, and build up knowledge in a linear order before making a discovery in a vacuum.
In reality, I find we have an entire interwoven society trying to make the discoveries, not independent individuals. There is an entire section of society dedicated to distilling the human essence into teaching. There is an entire section devoted to building infrastructure to make it easier to step beyond. There is an entire section devoted to getting discoverers together, so that they don't ALL have to learn ALL of the knowledge; they merely need to have all of the knowledge when they put their minds together.
Consider that the trade knowledge needed to run a particle accelerator is equally essential to discovery as the quantum physics models used to point the accelerator in new and exciting directions. The physicists probably don't know how to correctly shim the hundreds of segments of the accelerator into a perfect shape (and don't have the time to learn). The physicists probably haven't spent enough time with high voltage to wire up thousands of electromagnets without a short taking the entire accelerator down. This knowledge, held in the minds of the tradesmen who support the physicists, is equally essential, but the physicists never had to learn it; these skills were learned in parallel by all of humanity.
The only thing I have found which can leave us with no time to discover is society itself. If society dulls, and our lives suddenly require an entire lifetime of learning just to survive, that could be the cusp where humanity simply cannot learn any further.
However, even then there is a light at the end of the tunnel. The poets have a long list of skills like "how to love" which take a lifetime to learn, and yet we keep working on them day after day. Perhaps one day, discovery will simply take the form of loving the universe and seeing what it wishes to tell us today. Oh fine! Let's see some math.
Let's try to put some mathematical equations down to make sure we're all on the same page. I'll use them to show how a rather boring society resembling the Vulcans could go about never-ending discovery.
First off, I am going to assume there is a never-ending supply of things to discover in the universe. If there is a finite number of things to discover, then it is trivial to show that the number of discoveries humankind can make is finite. Let us define the universe of potential discoveries to be $\mathbb{D}$.
I am going to assume the only thing in our brain that matters in the long run are structures. These are structures you have to learn over time in order to effectively do a task, such as discovering a new direction. I believe there is more to the brain, but I think this is close enough to model your question of learning and technology. Let us define these structures to be $\mathbb{S}$, the set of all helpful structures that the human brain can possibly organize into, and let $\text{Fits}(S), S\in \mathbb{S}$ to be a predicate that returns true if the set of structures $S$ would fit into a single human brain, and false otherwise. Because entering the world with new structures makes it trivial to prove we can keep discovering, we can assume $S$ of a newborn is $\emptyset$.
Now we need a notation for learning. I will assume, for simplicity, that people learn at a constant rate through their entire lives. I leave it to the reader to show that handling the case where learning rate is variable is a trivial transform from this simpler case. Because I am arguing that we will never run out of things to learn, I can assume the worst case of "you can only learn one thing at a time" without loss of generality. Consider the universe of learning activities, $\mathbb{L}$. For any learning activity $l \in \mathbb{L}$, we can define a function $\text{cost}_{\text{learn}}(l, S)$ which defines the cost (in time) of doing learning activity $l$ given that you already have all of the structures $S$ in your head. Let $\text{results}_{\text{learn}}(l, S)$ be a function which returns a set of structures in your brain after doing a learning activity.
Finally, we need a notation for discovery. $\text{cost}_{\text{discover}}(d, S)$ is the cost of discovering a particular element of $\mathbb{D}$.
Now we can define the goals. Let us define $\text{cost}_{\text{schooling}}(L)$ and $\text{results}_{\text{schooling}}(L)$, where $L$ is an ordered set of learning activities, to be the cost and results of raising an individual up from $S = \emptyset$ through a sequence of learning activities. Thus $\text{cost}_{\text{schooling}}$ will be the sum of $\text{cost}_{\text{learn}}$, and $\text{results}_{\text{schooling}}$ will be the final result at the end of iterating $\text{results}_{\text{learn}}$. Our goal is to prove that there can always be a $\text{cost}_{\text{schooling}}(L) + \text{cost}_{\text{discover}}(d, \text{results}_{\text{schooling}}(L)) < \text{lifespan}$. Let us assign this a predicate: $\text{DiscoveryCapable}(L, D_{prev}) \Leftrightarrow \exists_{d\in\mathbb{D},L^\prime}[(\forall_{l\in L^\prime}\, l\in L)\land d\notin D_{prev}\land \text{Discoverable}(L^\prime, d)]$, which is a mouthful to say: "A society is DiscoveryCapable if, for their set of known learning activities and previously discovered discoveries, there exists a discoverable thing." Let us also add $\text{Discoverable}(L, d) \Leftrightarrow \exists_{L^\prime} \text{cost}_{\text{schooling}}(L^\prime) + \text{cost}_{\text{discover}}(d, \text{results}_{\text{schooling}}(L^\prime)) < \text{lifespan}$, or "A discovery is discoverable if, given the known set of learning activities, someone can discover it in a lifetime."
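The schooling definitions above amount to a fold over an ordered list of learning activities: accumulate cost, update structures. A small sketch (the cost model and activity format are invented purely for illustration, not part of the argument):

```python
# Toy model: an activity is (name, prerequisites, taught structures).
# cost_learn is cheaper when prerequisites are already in your head,
# results_learn adds the taught structures, and schooling folds both
# over the curriculum -- so curriculum ORDER matters.

def cost_learn(activity, structures):
    name, prereqs, taught = activity
    missing = len(set(prereqs) - structures)
    return 1 + missing          # base cost 1, plus 1 per missing prerequisite

def results_learn(activity, structures):
    name, prereqs, taught = activity
    return structures | set(taught)

def schooling(activities):
    structures, total = set(), 0
    for act in activities:
        total += cost_learn(act, structures)
        structures = results_learn(act, structures)
    return total, structures

curriculum = [
    ("arithmetic", [], ["numbers"]),
    ("algebra", ["numbers"], ["symbols"]),
    ("calculus", ["numbers", "symbols"], ["limits"]),
]
cost, learned = schooling(curriculum)
print(cost, sorted(learned))
print(schooling(list(reversed(curriculum)))[0])  # a worse ordering costs more
```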
Now here we will note that $\forall_{l\in\mathbb{L}}\, l \in \mathbb{D}$, or in words: every learning activity is something which can be discovered. This leads to a "Lotus Eaters" situation, where we could simply develop new ways to learn continuously without going anywhere, so let's fix that. Let's define $\text{Trivial}(l)$ to be true if $\forall_{S\in \text{populace}}\exists_{L_0} (\forall_s\, s\in \text{results}_{\text{learn}}(l, S) \to s\in \text{results}_{\text{schooling}}(L_0)) \land \text{cost}_{\text{learn}}(l, S) \ge \text{cost}_{\text{schooling}}(L_0)$. In other words, it is trivial to develop a new learning activity which doesn't teach anything new and costs more than an existing schooling!
Now we do a proof by contradiction. We assume $\text{DiscoveryCapable}(L, D_{prev})$ is false for our society. We will prove this is contradictory, meaning there is no such society that cannot find a discovery.
If $\text{DiscoveryCapable}$ is false, then that means there are no new non-trivial learning activities which are discoverable. If we find that there must be a non-trivial learning activity to discover, we have a proof by contradiction. This means we must prove $\forall_{L, D_{prev}}\exists_l [\lnot \text{Trivial}(l) \land \text{Discoverable}(L, l)]$
Consider the Turing machine, which is accepted to be far simpler than even a human. If we can prove that, at this time, a Turing machine can develop a new useful learning activity for us, then we can make a discovery simply by following that program. We are, after all, at least as impressive as computers.
Let us devise a Turing machine to help. Select a subset of $L$ called $L_T$, consisting of the learning activities which can be analyzed by a Turing machine. We want to find a program which finds an $l \notin L$ such that $\lnot \text{Trivial}(l)$. The first step is easy. It is trivial for computers to find an activity $l\in 2^{L_T}$ with $l \notin L$. Such power-set behaviors occur all the time in NP problems.
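As an illustration, the power-set search described above can be sketched in a few lines of Python. The activity names and the toy sets $L_T$ and $L$ below are hypothetical, purely for demonstration:

```python
from itertools import chain, combinations

def powerset(iterable):
    """All subsets of the input, smallest first."""
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def find_new_activity(L_T, L):
    """Search 2^{L_T} for a composite activity not already in the known set L."""
    for combo in powerset(L_T):
        candidate = frozenset(combo)
        if candidate and candidate not in L:
            return candidate
    return None  # every combination is already a known activity

# Toy example: machine-analyzable activities, and the society's known set L.
L_T = ["read", "experiment", "simulate"]
L = {frozenset({"read"}), frozenset({"experiment"})}
print(find_new_activity(L_T, L))  # frozenset({'simulate'})
```

The search is exponential in $|L_T|$, which is exactly the power-set blow-up the text alludes to.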
Now what if the computer can't do this? The next step is to gather some data about the universe. If we can't find any new data, then we are literally out of things to discover. If we find new data, we can have the computers crunch it harder, to find things that we don't understand, but computers can find. If they cannot, then all Turing-capable learning methods are exhausted, and we have covered the universe with our computational prowess. We, in effect, used computers to extend our life, crunching a subset of our possible learning activities, in hopes of finding a new one.
And now we sit back and look at the non Turing learning activities. It is not easy to tell if there is a faster way to learn such things. In fact, the only limit seems to be creativity.
The only limit for our capacity to discover is our own creativity.
|
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: "This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'" Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think you can get a rough estimate: COVFEFE is 7 characters, the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type approximately a billion characters to start getting a good chance that COVFEFE appears.
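The back-of-the-envelope number is easy to check under the idealized monkey-at-a-typewriter assumptions (26 equiprobable letters, independent draws):

```python
p = (1 / 26) ** 7   # chance that a given 7-letter window is exactly "COVFEFE"
print(p)            # ≈ 1.245e-10
print(26 ** 7)      # 8031810176 possible 7-letter strings
```

The expected wait before a given window matches is on the order of $1/p \approx 8\times 10^9$ draws, the same ballpark as the estimate above.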
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? My sense is: you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
|
Subset is Right Compatible with Ordinal Exponentiation

Theorem
Let $x, y, z$ be ordinals.
Then:

$x \le y \implies x^z \le y^z$

Proof
The proof shall proceed by Transfinite Induction on $z$.
Basis for the Induction
If $z = \varnothing$, then by the definition of ordinal exponentiation:

$$x^z = 1 \quad \text{and} \quad y^z = 1$$

so $x^z \le y^z$ by Set is Subset of Itself.

This proves the basis for the induction.
$\Box$
Induction Step
The inductive hypothesis states that $x^z \le y^z$ for the ordinal $z$.
Then:
$$x^{z^+} = x^z \times x \le x^z \times y \le y^z \times y = y^{z^+}$$

where the two equalities hold by the definition of ordinal exponentiation, the first inequality by Membership is Left Compatible with Ordinal Multiplication, and the second by Subset is Right Compatible with Ordinal Multiplication.

This proves the induction step.
$\Box$
Limit Case
The inductive hypothesis for the limit case states that:
$\forall w \in z: x^w \le y^w$ where $z$ is a limit ordinal.
$$\forall w \in z: x^w \subseteq y^w \quad \text{(Inductive Hypothesis)}$$
$$\implies \bigcup_{w \mathop\in z} x^w \subseteq \bigcup_{w \mathop\in z} y^w \quad \text{(Indexed Union Subset)}$$
$$\implies x^z \subseteq y^z \quad \text{(Definition of Ordinal Exponentiation)}$$
This proves the limit case.
$\blacksquare$
|
RGPV First Year Engineering (Set A) (Semester 1)
Engineering Mathematics -I December 2011
Total marks: --
Total time: --
INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
Answer any one question from Q1 & Q2
1 (a) Expand \( e^{a \sin^{-1} x} \) in ascending powers of x.

7 M

1 (b) If \( p= x \cos \alpha + y \sin \alpha \) touches the curve \[ \left ( \dfrac {x}{a} \right )^{\frac {n}{n-1}} + \left ( \dfrac {y}{b} \right )^{\frac {n}{n-1}}=1 \] prove that: \( p^n=(a \cos \alpha)^n + (b \sin \alpha)^n \)

7 M
2 (a) Show that the radius of curvature at any point of the cycloid \[ x=a (\theta + \sin \theta ), y=a (1-\cos \theta) \ is \ 4 a \cos \left ( \dfrac {\theta}{2} \right ) \]
7 M
2 (b) \[ If \ u=\sin^{-1} \left ( \dfrac {x+y}{\sqrt{x}+ \sqrt{y}} \right ), \ prove \ that : \\ i) \ x\dfrac {\partial u}{\partial x} + y \dfrac {\partial u}{\partial y} = \dfrac {1}{2} \tan u \\ ii) \ x^2 \dfrac {\partial^2 u}{\partial x^2}+ 2xy \dfrac {\partial^2 u}{\partial x \partial y} + y^2 \dfrac {\partial^2 u}{\partial y^2} = - \dfrac {\sin u \cos 2 u}{4 \cos^3 u} \]
7 M
Answer any one question from Q3 & Q4
3 (a) Find the limit as n→∞ of the series: \[ \dfrac {1}{n+1} + \dfrac {1}{n+2}+ \dfrac {1}{n+3} + \cdots \ \cdots + \dfrac {1}{2n} \]
7 M
3 (b) Find the volume common to the cylinders \( x^2+y^2=a^2 \) and \( x^2+z^2=a^2 \).
7 M
4 (a) Evaluate: \[ \displaystyle \int^\infty_0 \int^{x}_0 xe^{-x^2/y}dy \ dx \] by changing the order integration.
7 M
4 (b) Prove that:
\((i)\ \dfrac{\beta (m+1, n)}{m} = \dfrac{\beta (m,n+1)}{n} = \dfrac{\beta (m,n)}{m+n}\\ (ii)\ \Gamma (m) \Gamma \left(m+\dfrac {1}{2} \right) = \dfrac {\sqrt{\pi}}{2^{2m-1}} \Gamma(2m)\)
7 M
Answer any one question from Q5 & Q6
5 (a) Solve the equation: \[ (y-x) \dfrac{dy}{dx} = a^2 \]
7 M
5 (b) Solve the equation: \[ \dfrac {d^2 y}{dx^2} + 4y = \sec 2x \\ \] by the method of variation of parameters.
7 M
6 (a) Solve the equation: \[ x^2 \dfrac {d^2 y}{dx^2} -2x \dfrac {dy}{dx} - 4y = x^2 + \log x \]
7 M
6 (b) Solve the simultaneous equations: \[ \dfrac {dx}{dt} + y = \sin t \\ \dfrac {dy}{dt}+x \cos t \]
7 M
Answer any one question from Q7 & Q8
7 (a) Reduce the matrix: \[ A= \begin{bmatrix} 2 &3 &4 &5 \\3 &4 &5 &6 \\4 &5 &6 &7 \\9 &10 &11 &12 \end{bmatrix} \] to normal form and find its rank.
7 M
7 (b) Find the eigen values and eigen vectors of the matrix: \[ A=\begin{bmatrix}2 &1 &1 \\1 &2 &1 \\0 &0 &1 \end{bmatrix} \]
7 M
8 (a) Test for consistency and solve:
5x+3y+7z=4
3x+26y+2z=9
7x+2y+10z=5
7 M
8 (b) Verify Cayley-Hamilton theorem for the matrix: \[ A=\begin{bmatrix} 1 &2 &1 \\0 &1 &-1 \\3 &-1 &1 \end{bmatrix} \] and find its inverse.
7 M
Answer any one question from Q9 & Q10
9 (a) Define the following terms with examples:
i) Simple graph
ii) Degree of a vertex
iii) Isomorphic graphs
iv) Spanning tree
7 M
9 (b) Express the following function into disjunctive normal form:
f(x,y,z)=(x+y+z)(x·y+x'·z)'
7 M
10 (a) Let X={a, b, c, d} be a universe of discourse and A, B be the fuzzy sets on X defined by: \[ A= \left \{ \dfrac {0.3}{a}, \dfrac {0.5}{b}, \dfrac {0.6}{c}, \dfrac {0.4}{d} \right \} \\ B= \left \{ \dfrac {0.2}{a}, \dfrac {0.6}{b}, \dfrac {0.3}{c}, \dfrac {0.7}{d} \right \} \] Find:
i) Height of A ∪ B

ii) α-cut of A ∩ B for α = 0.4

iii) (A ∪ B)'

iv) A' ∩ B'
7 M
10 (b) Prove that the number of vertices of odd degree in a graph is always even.
7 M
|
As @EricWofsey pointed out, your construction doesn't work because you're taking a possibly infinite disjunction, but first-order logic only has finite disjunctions.
Here's one way to go about solving the problem:
First, we can assume without loss of generality that $\varphi$ has no constant symbols. (If it has constant symbols $c_1,\dots,c_n,$ just replace $\varphi$ with the sentence $(\exists x_1)\dots(\exists x_n)\varphi(c_1/x_1,\dots,c_n/x_n),$ where the $x_i$ are variables that do not appear in $\varphi$ and where $c/x$ means to replace all occurrences of $c$ with $x.$ The resulting formula has the same spectrum as the original $\varphi,$ but has no constant symbols.)
Expand the language that $\varphi$ is written in by adding a new two-place relation symbol $R.$
Let $u$ be a variable that doesn't appear in the sentence $\varphi.$ For every formula $\chi$ in which the variable $u$ doesn't appear, we'll define a formula $\chi^*$ which has as its free variables all the free variables that appear in $\chi$ and possibly also $u.$ The definition of $\chi^*$ proceeds by induction, as follows:
If $\chi$ is atomic, then $\chi^*$ is $\chi.$
$(\lnot \chi)^*$ is $\lnot (\chi^*).$
$(\chi_1 \lor \chi_2)^*$ is $\chi_1^* \lor \chi_2^*.$
$(\chi_1 \land \chi_2)^*$ is $\chi_1^* \land \chi_2^*.$
$(\exists y \,\chi)^*$ is $\exists y \,(R(u,y)\land(\chi^*)).$
$(\forall y \,\chi)^*$ is $\forall y \,(R(u,y)\rightarrow (\chi^*)).$
Now define the sentence $\psi$ to be $$(\forall y)(\exists!u)R(u,y)\,\land\,(\forall u)\big((\exists yR(u,y))\rightarrow \varphi^*\big).$$ (As usual, $"\!\exists!u \,P(u)\!"$ is an abbreviation for $"\!\exists u \,(P(u) \land \forall v\,(P(v)\rightarrow v=u))\!",$ where $v$ is a variable that doesn't appear in $P.)$
Let $\mathscr{M}=\langle M, \dots\rangle$ be a finite model of $\psi,$ and let $\,C\,$ be the finite set $\,\{x \in M \mid \mathscr{M}\models (\exists y)R(x,y)\}.$ For each $c\in C,$ let $\mathscr{M}_c$ be the submodel of $\mathscr{M}$ with domain $M_c=\{y\in M \mid \mathscr{M}\models R(c,y) \}.$ (Each $M_c$ is non-empty, by the definition of $C.)$
The $M_c$ are pairwise disjoint, and $\bigcup_{c\in C}M_c=M,$ because $\mathscr{M}\models (\forall y)(\exists!u)\big(R(u,y)\big).$
Next observe that the following is true by induction on $\chi\!:\;$ If $\chi$ is any formula with free variables among $x_1, \dots, x_n$ in which the variable $u$ does not appear, then, for any $c\in C$ and any $b_1,\dots,b_n\in M_c,$ we have
$$\mathscr{M}_c\models\chi(b_1,\dots, b_n) \;\text{ iff }\; \mathscr{M}\models\chi^*(b_1,\dots,b_n,u/c),$$where $u/c$ means that $c$ is substituted for the free variable $u$ in $\chi^*.$
We know that $\mathscr{M}\models (\forall u)\big((\exists y\,R(u,y))\rightarrow \varphi^*\big),$ so, for every $c\in C,$ $\mathscr{M}\models \varphi^*(u/c),$ and it follows that each $\mathscr{M}_c$ is a model of $\varphi.$
So each $M_c$ has cardinality in $\operatorname{spec}(\varphi).$ Since the $M_c$ form a partition of $M,$ we can conclude that the cardinality of $M$ is a finite sum of members of $\operatorname{spec}(\varphi),$ as desired.
Conversely, if $a_1, \dots, a_n \in \operatorname{spec}(\varphi),$ we can let $\mathscr{M}_k=\langle M_k,\dots\rangle$ be a model of $\varphi$ of cardinality $a_k,$ for $1\le k \le n.$ Without loss of generality, we can assume that the $M_k$ are pairwise disjoint (by replacing each $\mathscr{M}_k$ by an isomorphic copy). Let $\mathscr{M}$ be the union of the models $\mathscr{M}_k,$ with the interpretation of relation and function symbols defined to extend the corresponding relations and functions on the $M_k.$ (If a relation or function is presented with arguments from more than one $M_k,$ define the relation or function on those arguments in any way you want — it doesn't matter. As for constant symbols, we eliminated all those in our language earlier precisely because there wouldn't be any way to define interpretations of any constant symbols in $\mathscr{M}$ in a way compatible with each $\mathscr{M}_k.)$
$\mathscr{M}$ is a model for the language of $\varphi;$ we still need to specify an interpretation for $R.$ For $1\le k \le n,$ pick $c_k\in M_k.$ Define $R(x,y)$ to be true iff $x$ is some $c_k$ and $y\in M_k.$
You can check that $\mathscr{M}$ is a model of $\psi,$ and $M$ has cardinality $a_1+\dots+a_n,$ so we're done.
|
Let $s$ be a real number greater than 1. Let the function $f$ be defined as
$f(x)=\frac{\ln x}{x^s}, \quad x>0$.
a) Show that $\int_{a}^{\infty}\frac{\ln x}{x^s}dx$ converges for $a>0$ and find its value.
b) Show that the infinite series $\sum_{n=1}^{\infty} \frac{\ln n}{n^s}$ converges.
I believe I can show that (a) converges but I'm not certain how to find its value. Any help is appreciated.
For (a) I note that for $f(x)=\frac{\ln x}{x^s}$, $x>0$, $f$ is eventually a positive decreasing function, because $f'(x)=\frac{1}{x^{s+1}}-\frac{s \ln x}{x^{s+1}}=\frac{1-s\ln x}{x^{s+1}}<0$ for $x>e^{1/s}$.
Therefore
$\int_{a}^{\infty}f(x)\,dx=\lim_{t \to \infty}\int_{a}^{t}\frac{\ln x}{x^s}dx=\frac{\ln a}{a^{s-1}(s-1)}+\frac{1}{a^{s-1}(s-1)^2}=\frac{(s-1)\ln a+1}{a^{s-1}(s-1)^2}<\infty$.
Which implies that $\int_{a}^{\infty}\frac{ln(x)}{x^s}dx$ is convergent.
But I'm uncertain how to find its value.
For (b), I will argue that it converges because $f$ is positive and eventually decreasing with $\lim_{x \to \infty}f(x)=0$. By the integral test, the convergence of $\int_{a}^{\infty}\frac{\ln x}{x^s}dx$ implies that $\sum_{n=1}^{\infty} \frac{\ln n}{n^s}$ converges.
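The closed form obtained in (a), $\frac{(s-1)\ln a+1}{a^{s-1}(s-1)^2}$, can be sanity-checked numerically. Below is a minimal sketch; the parameter choices ($s=3$, $a=2$), the cutoff $T$, and the helper names are illustrative assumptions:

```python
import math

def closed_form(a, s):
    # ((s-1)*ln(a) + 1) / (a**(s-1) * (s-1)**2), the value derived above
    return ((s - 1) * math.log(a) + 1) / (a ** (s - 1) * (s - 1) ** 2)

def numeric(a, s, T=1000.0, n=100000):
    # Simpson's rule on [a, T]; the neglected tail is O(log(T) / T**(s-1))
    f = lambda x: math.log(x) / x ** s
    h = (T - a) / n
    total = f(a) + f(T)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

print(closed_form(2, 3))  # 0.14914...
print(numeric(2, 3))      # agrees to about 4 decimal places
```

For $a=1$, $s=2$ the closed form gives exactly 1, which matches $\int_1^\infty \ln x / x^2 \, dx = 1$.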
|
Efficiently estimating recall June 09, 2019 tl;dr Factorize recall measurement into a cheaper precision measurement problem and profit. Measuring precision tells you where your model made a mistake but measuring recall tells you where your model can improve. Estimating precision directly is relatively easy but estimating recall directly is quintessentially hard on "open-world" domains because you don't know what you don't know. As a result, recall-oriented annotation can cost an order of magnitude more than the analogous precision-oriented annotation. By combining cheaper precision-oriented annotations on several models' predictions with an importance-reweighted estimator, you can triple the data efficiency of getting an unbiased estimate of true recall.
If you're trying to detect or identify an event with a machine learning system, the metrics you really care about are precision and recall: if you think of the problem as finding needles in a haystack, precision tells you how often your system mistakes a straw of hay for a needle, while recall helps you measure
how often your system misses needles entirely. Both of these metrics are important in complementary ways. The way I like to think about it is that measuring precision tells you where your model made a mistake while measuring recall tells you where your model can improve.
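In the haystack terms above, with true positives (needles found), false positives (hay mistaken for needles) and false negatives (needles missed), the two metrics are computed as follows; the counts are made up for illustration:

```python
def precision(tp, fp):
    # Of the needles the system predicted, how many were real?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of the real needles in the haystack, how many did we find?
    return tp / (tp + fn)

print(precision(tp=8, fp=2))   # 0.8
print(recall(tp=8, fn=32))     # 0.2
```

Note the asymmetry: precision needs only the system's own predictions, while recall needs the count of missed needles, which is exactly the expensive part.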
In most real applications (and in accordance with our haystack metaphor) the positive class we care about tends to be much rarer than the negative class. As a result, recall is quintessentially hard to measure because
no one really knows how many needles are in the haystack and there's too much hay to go through and count it manually. A naive solution is to start counting straws of hay until you have enough samples to get a reasonable estimate of recall (what I'll call exhaustive or recall-oriented annotation) -- however, because there's an order of magnitude more hay than there are needles, you'll have to go through a lot of hay to get a good estimate. On the other hand, precision tends to be relatively easy to measure: instead of sifting through the haystack to find needles, you just need to inspect (or have paid annotators inspect) the "needles" predicted by your system and track how many times it got it right or made a mistake (I'll call this precision-oriented annotation). It can cost roughly 10 times as much to annotate for recall as it does to annotate for precision! Is there a more cost-effective way to measure recall than exhaustive annotation? In this article, I'll describe a simple but effective technique Ashwin Paranjape and I came up with to leverage relatively cheap precision-oriented annotations to reduce the costs of measuring recall by about a factor of three. This post will stay high-level and focus on the key ideas, but you can find all the gory details and proofs in our EMNLP 2017 paper.

Case study: extracting relational facts from documents
Before we proceed, I'd like to ground the problem into a concrete use-case. The one that I've studied most extensively has been that of
knowledge base population (KBP) or relation extraction: given a document corpus, we'd like to identify relationships between people and organizations mentioned in the dataset.
Consider the following sentence about the late actress, Carrie Fisher:
[Fisher]’s mother, [Debbie Reynolds], also an [actress], said on Twitter on Sunday that her daughter was stabilizing.
A KBP system must identify that (a) 'Fisher' refers to Carrie Fisher, (b) Carrie Fisher and Debbie Reynolds are related by a "parent-child" relationship and (c) that Carrie Fisher was an actress. While this example is relatively simple, there are lots and lots of unique ways in which facts are described in text making this a very challenging task.
Given a corpus of (say) 100,000 news articles, how do we know how much information about Carrie Fisher our system has identified from these articles? How many such bits of information are there even? These are precisely the types of questions measuring recall will help us answer.
Bonus case study: measuring conversation success
The rest of this article will use the setup in the case study above since that's what's in our paper, but I wanted to take this opportunity to highlight the ubiquity of the recall measurement problem. At Eloquent Labs, a conversational AI company, a fundamental problem we needed to tackle was measuring how well our chatbot was able to respond to the questions or requests (intents) customers asked. Our business value could be measured as the percentage of issues or customer intents we were able to successfully resolve
1.
While it's easy to measure the absolute number of intents the chatbot successfully handled, how do you count the intents that the chatbot missed to get the total number of customer intents in the first place? This is a recall measurement problem.
Why is estimating recall so hard?
In both the scenarios described above, the key reason that estimating recall is hard is that "you don't know what you don't know." If you want to collect the data needed to measure recall it's hard to avoid having a person identify needles that the model was not able to. In most practical scenarios, that just requires a substantial amount of effort.
Let's look at our case study as an example. We spent almost a month iterating on interface designs to help annotators (a) verify the relation between two entities extracted by a system (precision-oriented annotations) and (b) identify all possible relations within a single document (recall-oriented annotations). In order to measure recall, annotators had to first identify all the possible entities in the document and then ascertain if the text described a relationship between any of them (Figure 1).
We found that recall-oriented annotation took about ten times longer per fact as compared to precision-oriented data annotation. The actual difference in annotation time (and, as a corollary, cost) will vary by task and interface, but the difference is typically substantial.

Can we use a model to reduce the costs of recall-oriented annotation?
It is tempting to think that we can make this process more efficient by using the model: only show people the things that the model finds "likely"
2. This mode of annotation looks more similar to what's needed to measure precision, which we've seen can be substantially cheaper 3. Unfortunately, this approach can introduce substantial bias in our measurements: we are unlikely to annotate any facts that our model places low probability mass on, and consequently we may never realize that we've missed those facts, inflating our estimates of recall.
A popular workaround is to combine the "likely" facts from many different systems with the hope that using the union of their predictions gives us enough coverage to approximate true recall (Figure 2). This measure is called
pooled recall and is one of the official evaluation measures used by the TAC-KBP shared task: pooled recall is measured by comparing how many facts a single system found with the union of all the facts found by the teams that participated that year 4.
However, in our paper we show that there are two key problems with pooled recall:
1. In practice, even the union of 20--30 systems does not provide sufficient coverage to approximate true recall.
2. Naive measurements of pooled recall are biased because of correlations between the systems that are combined.
In essence,
naive pooled recall, while significantly cheaper to measure, is a biased approximation of true recall.
In the next two sections, we'll first show how to combine exhaustive annotations with pooled recall to fix the coverage problem, and then we'll provide a new estimator to fix the bias problem. Put together, we'll have a simple, correct (unbiased) and efficient estimator for recall.
Key idea 1: Use exhaustive annotations to extrapolate the coverage of pooled recall.
As we saw in the previous section, pooled recall underestimates true recall because it simply doesn't cover all the facts in our universe (see Figure 2 for a visual aid). Our fix for this problem is very simple: we will extrapolate the true recall of a system (\(S_i\)) from its pooled recall by using a separate estimate of
the true recall of the pool of systems (\(\bigcup_j S_j\)): \[{\text{TrueRecall}}(S_i) = {\text{TrueRecall}}\left( \bigcup_j S_j \right) \times {\text{PooledRecall}}(S_i).\]
Explained via analogy, suppose we are trying to measure what fraction of PhDs in the world are doing computer science, \({\text{TrueRecall}}(\text{CS PhDs})\). For the purposes of this example, let's assume that every country has an equal distribution of PhDs. The above equation essentially says we can break up our computation into first measuring how many CS PhDs there are in the United States, \({\text{PooledRecall}}(\text{US CS PhDs})\), and then multiplying it by the fraction of PhDs from the United States, \({\text{TrueRecall}}(\text{US PhDs})\). This roundabout way of measuring \({\text{TrueRecall}}\) works if we can measure \({\text{PooledRecall}}\) to a greater precision than \({\text{TrueRecall}}\), e.g. if the NSF tracks doctorate recipients in the US better than other countries do.
Why does this help us? Intuitively, we're able to exploit the fact that, for the same cost, we can collect ten times as many precision-oriented annotations to estimate pooled recall and thus significantly reduce the variance of our estimate of pooled recall.
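Numerically, the extrapolation is just a product; the measurements below are hypothetical numbers, purely for illustration:

```python
# Hypothetical measurements:
pool_true_recall = 0.60  # TrueRecall(∪_j S_j), from a small exhaustive-annotation sample
pooled_recall_si = 0.45  # PooledRecall(S_i), from cheap precision-oriented annotations
true_recall_si = pool_true_recall * pooled_recall_si
print(true_recall_si)    # ≈ 0.27
```

The win is that only the first factor needs expensive exhaustive annotation; the second can be driven down in variance with many cheap precision-oriented annotations.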
Key idea 2: Use importance sampling to correct bias when estimating pooled recall.
The final problem we'll need to solve is correcting the bias introduced when naively combining the output from different systems. Figure 3 shows a simple example of how this bias is created: when two systems have correlated output (e.g. they were trained on similar data and hence predict similar relations), sampling
independently from each component system will lead to some regions (\(A \cup B\), the dark blue region) being over-represented. As a result, systems that predict more unique or novel relations (e.g. \(C\) in the figure) would measure lower on pooled recall than the other systems (\(A\) and \(B\)) even if they all have the same true recall. 5
One way to resolve this problem is to sample uniformly from the whole pool instead of sampling each system independently. Unfortunately, this approach doesn't really let you reuse your annotations when adding new systems into the pool: the distribution from which you sampled output has now changed.
Instead, we propose what we call an importance-reweighted estimator that uses independent per-system samples, but re-weights them for each pooled-recall computation. When evaluating a new system, you can still reuse the old annotations with new weights. The resultant estimator looks like this: \[\text{PooledRecall}(S_i) = \frac{\sum_{j} w_{ij} \sum_{x \in \text{Correct}(S_j)} \mathbb{I}[x \in \text{Correct}(S_i)] {q_i(x)}^{-1}}{\sum_{j} w_{ij} \sum_{x \in \text{Correct}(S_j)} q_i(x)^{-1}},\] where \(w_{ij}\) is an arbitrary weighting factor, \(p_j(x)\) is the probability of drawing a particular instance or relation \(x\) from \(S_j\)'s output, \(q_i(x) = \sum_{j} w_{ij} p_{j}(x)\) and \(\text{Correct}(S_j)\) is shorthand for the correct subset of \(S_j\)'s output. I wish I could provide a simplified, non-mathematical explanation of our importance-reweighted estimator, but the details are quite nuanced and I'll have to refer you to the paper for a complete explanation.
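To make the estimator concrete, here is a toy Python sketch of the formula above. The setup (two overlapping systems, unit weights, uniform sampling probabilities, and every sampled item annotated) is an illustrative assumption, not the paper's experimental configuration:

```python
def pooled_recall_estimate(i, correct, samples, w, p):
    """Importance-reweighted pooled-recall estimate for system i.

    correct[j]: set of system j's outputs judged correct
    samples[j]: items sampled (and annotated) from system j's output
    w[i][j]:    weighting factors; p[j](x): probability of sampling x from S_j
    """
    def q(x):
        return sum(w[i][j] * p[j](x) for j in range(len(samples)))
    num = den = 0.0
    for j, xs in enumerate(samples):
        for x in xs:
            if x in correct[j]:           # only correct items count...
                den += w[i][j] / q(x)
                if x in correct[i]:       # ...and the numerator keeps those S_i also found
                    num += w[i][j] / q(x)
    return num / den

# Two systems with overlapping correct output; annotate everything, sample uniformly.
correct = [{1, 2, 3, 4}, {3, 4, 5, 6}]
samples = [sorted(c) for c in correct]
w = [[1.0, 1.0], [1.0, 1.0]]
p = [lambda x, c=c: 1.0 / len(c) if x in c else 0.0 for c in correct]
print(pooled_recall_estimate(0, correct, samples, w, p))  # 0.666..., i.e. 4 of the 6 pooled facts
```

With full annotation the estimate is exact; the reweighting by \(q_i(x)\) is what keeps the overlap region from being double-counted when only samples are annotated.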
Concluding notes
With the two components described above, we finally have an
unbiased estimator of true recall that is guaranteed to be more efficient than naive sampling. Figure 4 provides a teaser for how our estimator performs in practice on data obtained through the TAC-KBP shared task. The key takeaways are that a naive pooled estimator is significantly biased, while the importance reweighted estimator presented here has about a third the variance of the naive true recall estimator without introducing any bias.
That's it for this post: I hope you've found at least some of the ideas presented here useful!
Note that our measure of intent satisfaction is a bit more nuanced than just call deflection: we want to know how often we've
helped customers and not just how often we've prevented them from talking to a person.↩
I put "likely" in quotes because getting meaningful confidence estimates from a statistical model is non-trivial and most of the time the scores returned by the model are quite arbitrary.↩
In fact, the annotations collected to measure system precision can be reused to measure pooled recall, which further amortizes the costs of measuring pooled recall.↩
As an extra measure to try to reduce the bias of pooled recall, the TAC-KBP organizers also include a
human team that tries to identify facts from the corpus within a certain amount of time. Our analysis always incorporates these annotations, and while they help, it's not by much.↩
It's worth noting that the bias effect described here is an artifact of the sampling process. If one could annotate
all of the system's output, the problem would go away. However, in most real scenarios the system output is large enough that it is impractical if not prohibitively expensive to annotate it all.↩
|
ZFITTER
ZFITTER v. 6.21: A semi-analytical program for fermion pair production in e+e− annihilation. The characteristics of fermion pair production are important for the study of the properties of the Z boson and for precision tests of the Standard Model at linear colliders at higher energies. High-precision computations of corrections are needed to obtain the necessary data. ZFITTER is a Fortran 77 program based on a semi-analytical approach to fermion pair production in e+e− annihilation over a wide range of centre-of-mass energies. The paper gives detailed descriptions of all physical parameters which can be calculated by the ZFITTER program. (
Source: http://cpc.cs.qub.ac.uk/summaries/) References in zbMATH (referenced in 12 articles, 1 standard article):
Bernreuther, Werner; Chen, Long; Dekkers, Oliver; Gehrmann, Thomas; Heisler, Dennis: The forward-backward asymmetry for massive bottom quarks at the (Z) peak at next-to-next-to-leading order QCD (2017)
Ghezzi, Margherita; Gomez-Ambrosio, Raquel; Passarino, Giampiero; Uccirati, Sandro: NLO Higgs effective field theory and (\kappa)-framework (2015)
Blanke, Monika; Buras, Andrzej J.; Gemmler, Katrin; Heidsieck, Tillmann: (\Delta F = 2) observables and (B \to X_q\gamma) decays in the left-right model: Higgs particles striking back (2012)
Holthausen, Martin; Lim, Kher Sham; Lindner, Manfred: Planck scale boundary conditions and the Higgs mass (2012)
Domingo, Florian; Lenz, Teresa: (W) mass and leptonic (Z)-decays in the NMSSM (2011)
Casagrande, S.; Goertz, F.; Haisch, U.; Neubert, M.; Pfoh, T.: The custodial Randall-Sundrum model: from precision tests to Higgs physics (2010)
Del Aguila, F.; De Blas, J.; Pérez-Victoria, M.: Electroweak limits on general new vector bosons (2010)
Awramik, M.; Czakon, M.; Freitas, A.; Kniehl, B. A.: Two-loop electroweak fermionic corrections to (\sin^2\Theta_{\mathrm{eff}}^{b\bar b}) (2009)
Dutta, Sukanta; Hagiwara, Kaoru; Yan, Qi-Shu; Yoshida, Kentaroh: Constraints on the electroweak chiral Lagrangian from the precision data (2008)
Actis, Stefano; Passarino, Giampiero: Two-loop renormalization in the Standard Model. III: Renormalization equations and their solutions (2007)
Passarino, Giampiero; Uccirati, Sandro: Algebraic-numerical evaluation of Feynman diagrams: two-loop self-energies (2002)
Bardin, D.; Bilenky, M.; Christova, P.; Jack, M.; Kalinovskaya, L.; Olchevski, A.; Riemann, S.; Riemann, T.: ZFITTER v. 6.21: A semi-analytical program for fermion pair production in (e^+e^-) annihilation (2001)
|
If two topological spaces are weak homotopy equivalent to each other, are their Čech cohomology groups the same?
$$T = \left\{ \left( x, \sin \frac{1}{x} \right ) : x \in (0,1] \right\} \cup \{(0,y)\mid y\in[-1,1]\}$$
This has trivial homotopy groups in degrees $\ge1$ but according to Wikipedia nontrivial Čech cohomology in degree 1.
To give a more enlightening answer to the question:
Čech cohomology is not the same as singular cohomology. However, they agree on CW-complexes. Since every topological space has a CW approximation and singular cohomology is a weak homotopy invariant, Čech cohomology can't be one.
|
Let $T$ be the time-ordering operator which orders operators $A_1(t_1), A_2(t_2), \ldots$ such that the time parameter decreases from left to right:
$$T[A_1(t_1) A_2(t_2)] = A_2(t_2) A_1(t_1) \text{ if } t_2 > t_1 \text{ and }= A_1(t_1)A_2(t_2) \text{ otherwise. } $$
The time $t_i$ does not have to be a physical time, it can also be an imaginary time, etc.
Question:I would like to know why the following equation holds: for $t_i \leq t_1, t_2 \leq t_f$ it holds that $$T\left[A_1(t_1) A_2(t_2) \exp\left(-i\int_{t_i}^{t_f}H(t) dt\right)\right] \\ = T\left[\exp\left(-i\int_{t_{\pi_1}}^{t_f}H(t) dt\right)\right] A_{\pi_1}(t_{\pi_1}) \cdot T\left[\exp\left(-i\int_{t_{\pi_2}}^{t_{\pi_1}}H(t) dt\right)\right] A_{\pi_2}(t_{\pi_2}) \cdot T\left[\exp\left(-i\int_{t_i}^{t_{\pi_2}}H(t) dt\right)\right] ,$$ where $\pi$ is a permutation such that the times are ordered.
I encountered this equation in Negele & Orland (1998) in eq. (2.49) on p. 63 and in eq. (2.67b) on p. 70, where they split the integral
$$\int_{t_i}^{t_f} dt = \int_{t_i}^{t_{\pi_2}} dt + \int_{t_{\pi_2}}^{t_{\pi_1}} dt + \int_{t_{\pi_1}}^{t_f} dt$$
and used time-ordering. It appears in calculations of Green's functions and correlation functions.
I tried to prove this equation in an elementary way by using
$$ T\left[\exp\left(-i\int_{t_i}^{t_f}H(t) dt\right)\right] = 1 + \sum_{n=1}^\infty \frac{(-i)^n}{n!} \int_{t_i}^{t_f} d\tau_1 \ldots \int_{t_i}^{t_f} d\tau_n T \left[ H(\tau_1) \ldots H(\tau_n) \right]$$
[cf. eq. (2.10) on p. 50] and applying the $T$-operator on the expression, but I did not succeed yet. If someone can show me a valid proof or point out some literature where it is proven, I'd be thankful.
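The splitting in eq. (2.49) rests on two facts: the composition property \(U(t_f,t_i)=U(t_f,t_1)U(t_1,t_2)U(t_2,t_i)\) of the time-ordered exponential, and that time-ordering places each \(A_i\) at its time slot. The composition property can be checked numerically with a toy non-commuting Hamiltonian; the following sketch is a hypothetical illustration (a 2×2 traceless \(H(t)\) of my own choosing, not from the book), using the discretized ordered product that defines the time-ordered exponential.

```python
import numpy as np

# Hypothetical traceless, time-dependent Hamiltonian with [H(t), H(t')] != 0.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda t: np.cos(t) * sx + t * sz

def step(t, dt):
    """exp(-i H(t) dt) in closed form for a traceless Hermitian 2x2 H."""
    a = np.sqrt(np.cos(t) ** 2 + t ** 2)      # H(t)^2 = a^2 * identity
    return np.cos(a * dt) * np.eye(2) - 1j * np.sin(a * dt) * H(t) / a

def U(tb, ta, n_per_unit=4000):
    """Discretized T exp(-i int_ta^tb H(t) dt); later factors act on the left."""
    steps = max(1, int(round((tb - ta) * n_per_unit)))
    ts = np.linspace(ta, tb, steps + 1)
    dt = (tb - ta) / steps
    out = np.eye(2, dtype=complex)
    for t in 0.5 * (ts[:-1] + ts[1:]):        # midpoint of each time slice
        out = step(t, dt) @ out
    return out

ti, t2, t1, tf = 0.0, 0.4, 1.1, 2.0
full = U(tf, ti)
split = U(tf, t1) @ U(t1, t2) @ U(t2, ti)     # the split product in eq. (2.49)
# full and split agree up to discretization error
```

Inserting \(A_1\) between `U(tf, t1)` and `U(t1, t2)` (and \(A_2\) between the next pair) then reproduces the right-hand side of the identity, since the discretized time-ordered product places each operator in exactly that slot.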
|
In my lecture notes I have that the distribution of a random variable $Y$ is said to be in the exponential family if it can be written as $f(y;\theta)=exp(a(y)b(\theta)+c(\theta)+d(y))$, where $a,b,c$ and $d$ are fixed functions. If we have (in the exponential family form) that $a(y)=y$, then the distribution is said to be in canonical form and then $b(\theta)$ is the called the natural parameter and $b$ is called the natural link function. Then we have shown that for a random variable $Y$ in exponential family form we have $\mathbb{E}(a(Y))=\frac{-c'(\theta)}{b'(\theta)}$ and $var(a(Y))=\frac{b''(\theta)c'(\theta)-c''(\theta)b'(\theta)}{[b'(\theta)]^{3}}$.
I am now working on the following problem: the gamma pdf of r.v. $Y$ is given by
$\begin{equation} f(y) = (s^{a}\Gamma (a))^{-1}y^{a-1}e^{-y/s} \end{equation}$,
where $y \geq 0$, $s$ is the scale parameter, $a$ the shape parameter. The first question asks me to reparameterise this pdf by setting $a=1/\phi$ and $s=\mu\phi$, and hence show that it is a member of the exponential family. So after introducing $a=1/\phi$ and $s=\mu\phi$ into the pdf and rearranging I get
$ f(y) = \exp\left(\left(\frac{1}{\phi}-1\right)\log(y)-\frac{y}{\mu\phi}-\frac{1}{\phi}\log(\mu\phi)-\log(\Gamma(1/\phi))\right)=\exp(a(y)b(\theta)+c(\theta)+d(y))$,
where $a(y)=y$, $b(\mu)=-\frac{1}{\mu\phi}$, $c(\mu)=-\frac{1}{\phi}\log(\mu\phi)$ and $d(y)=(\frac{1}{\phi}-1)\log(y) - \log(\Gamma(\frac{1}{\phi}))$, where we treat the (dispersion) parameter $\phi$ as a nuisance parameter. Is that correct?
The next question says: deduce that the canonical link for the gamma is $\theta=\frac{1}{\mu}=\eta=\textbf{X}\beta$. So I'm thinking since $f$ is in a canonical form, the canonical parameter is $b(\mu)=-\frac{1}{\mu\phi}$ and so ignoring all the constants of proportionality we have that the canonical link, in its simplest form, is $\frac{1}{\mu}$, as required. Does that make sense? I still don't have a good enough understanding of link/canonical link functions, I'm afraid.
Then the next question asks me to deduce further that the variance function is $b''(\theta)=-1/\theta^{2}=-\mu^{2}$. I don't really know how to do this. Why is this the variance function? What is $\theta$ here? A canonical link? That clearly doesn't make sense. Or is just the dummy variable for our parameter of interest ( $\mu$ in our case )? I've tried to differentiate $b(\mu)$ w.r.t. to $\mu$ but it doesn't work. I'd really appreciate some help.
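One quick way to gain confidence in the identification of $b$ and $c$ above is to plug hand-computed $\mu$-derivatives into the lecture-notes formulas for the mean and variance and check that they reproduce the known gamma moments (mean $\mu$, variance $\mu^2\phi$ for shape $1/\phi$, scale $\mu\phi$). The following sketch is my own check, not part of the question:

```python
# Hand-computed mu-derivatives of b(mu) = -1/(mu*phi) and c(mu) = -log(mu*phi)/phi,
# matching the identification in the question (phi is a fixed nuisance parameter).
b1 = lambda mu, phi:  1.0 / (mu ** 2 * phi)   # b'(mu)
b2 = lambda mu, phi: -2.0 / (mu ** 3 * phi)   # b''(mu)
c1 = lambda mu, phi: -1.0 / (mu * phi)        # c'(mu)
c2 = lambda mu, phi:  1.0 / (mu ** 2 * phi)   # c''(mu)

def mean(mu, phi):
    """E[a(Y)] = -c'/b' from the lecture notes."""
    return -c1(mu, phi) / b1(mu, phi)

def variance(mu, phi):
    """Var[a(Y)] = (b'' c' - c'' b') / (b')^3 from the lecture notes."""
    return (b2(mu, phi) * c1(mu, phi) - c2(mu, phi) * b1(mu, phi)) / b1(mu, phi) ** 3

# For Gamma(shape 1/phi, scale mu*phi): mean = mu and variance = mu^2 * phi.
```

Both formulas come out as $\mathbb{E}(Y)=\mu$ and $\operatorname{var}(Y)=\mu^2\phi$, which is consistent with the identification being correct.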
|
In Appendix C of a paper by Michael E. Tipping and Christopher M. Bishop about mixture models for probabilistic PCA, the probability of a single data vector $\mathbf{t}$ is expressed as a mixture of PCA models (equation 69):
$$ p(\mathbf{t}) = \sum_{i=1}^M\pi_i p(\mathbf{t}|i) $$
where $\pi_i$ is the mixing proportion and $p(\mathbf{t}|i)$ is a single probabilistic PCA model.
The model underlying the probabilistic PCA method is (equation 2)
$$ \mathbf{t} = \mathbf{Wx} + \boldsymbol\mu + \boldsymbol\epsilon, $$ where $\mathbf{x}$ is a latent variable. By introducing a new set of variables $z_{ni}$ "labelling which model is responsible for generating each data point $\mathbf{t}_n$", Bishop formulates the complete log likelihood as (equation 70):
$$\mathcal{L}_C = \sum_{n=1}^N\sum_{i=1}^Mz_{ni}\ln\{\pi_ip(\mathbf{t}_n, \mathbf{x}_{ni})\}.$$ I would like to understand how he derives this expression as he doesn't provide a solution himself.
How is this expression for the complete log likelihood found?
|
Data on the mean multiplicity of strange hadrons produced in minimum bias proton--proton and central nucleus--nucleus collisions at momenta between 2.8 and 400 GeV/c per nucleon have been compiled. The multiplicities for nucleon--nucleon interactions were constructed. The ratios of strange particle multiplicity to participant nucleon as well as to pion multiplicity are larger for central nucleus--nucleus collisions than for nucleon--nucleon interactions at all studied energies. The data at AGS energies suggest that the latter ratio saturates with increasing masses of the colliding nuclei. The strangeness to pion multiplicity ratio observed in nucleon--nucleon interactions increases with collision energy in the whole energy range studied. A qualitatively different behaviour is observed for central nucleus--nucleus collisions: the ratio rapidly increases when going from Dubna to AGS energies and changes little between AGS and SPS energies. This change in the behaviour can be related to the increase in the entropy production observed in central nucleus-nucleus collisions at the same energy range. The results are interpreted within a statistical approach. They are consistent with the hypothesis that the Quark Gluon Plasma is created at SPS energies, the critical collision energy being between AGS and SPS energies.
Elastic and inelastic 19.8 GeV/c proton-proton collisions in nuclear emulsion are examined using an external proton beam of the CERN Proton Synchrotron. Multiple scattering, blob density, range and angle measurements give the momentum spectra and angular distributions of secondary protons and pions. The partial cross-sections corresponding to inelastic interactions having two, four, six, eight, ten and twelve charged secondaries are found to be, respectively, (16.3±8.4) mb, (11.5 ± 6.0) mb, (4.3 ± 2.5) mb, (1.9 ± 1.3) mb, (0.5 ± 0.5) mb and (0.5±0.5)mb. The elastic cross-section is estimated to be (4.3±2.5) mb. The mean charged meson multiplicity for inelastic events is 3.7±0.5 and the average degree of inelasticity is 0.35±0.09. Strong forward and backward peaking is observed in the center-of-mass system for both secondary charged pions and protons. Distributions of energy, momentum and transverse momentum for identified charged secondaries are presented and compared with the results of work at other energies and with the results of a statistical theory of proton-proton collisions.
Double differential K+ cross sections have been measured in p+C collisions at 1.2, 1.5 and 2.5 GeV beam energy and in p+Pb collisions at 1.2 and 1.5 GeV. The K+ spectrum taken at 2.5 GeV can be reproduced quantitatively by a model calculation which takes into account first-chance proton-nucleon collisions and the internal momentum and energy distribution of nucleons according to the spectral function. At 1.2 and 1.5 GeV beam energy the K+ data significantly exceed the model predictions for first-chance collisions. When secondary processes are taken into account, the results of the calculations are in much better agreement with the data.
The differential and total cross sections for kaon pair production in the pp->ppK+K- reaction have been measured at three beam energies of 2.65, 2.70, and 2.83 GeV using the ANKE magnetic spectrometer at the COSY-Juelich accelerator. These near-threshold data are separated into pairs arising from the decay of the phi-meson and the remainder. For the non-phi selection, the ratio of the differential cross sections in terms of the K-p and K+p invariant masses is strongly peaked towards low masses. This effect can be described quantitatively by using a simple ansatz for the K-p final state interaction, where it is seen that the data are sensitive to the magnitude of an effective K-p scattering length. When allowance is made for a small number of phi events where the K- rescatters from the proton, the phi region is equally well described at all three energies. A very similar phenomenon is discovered in the ratio of the cross sections as functions of the K-pp and K+pp invariant masses and the identical final state interaction model is also very successful here. The world data on the energy dependence of the non-phi total cross section is also reproduced, except possibly for the results closest to threshold.
The production of eta mesons has been measured in the proton-proton interaction close to the reaction threshold using the COSY-11 internal facility at the cooler synchrotron COSY. Total cross sections were determined for eight different excess energies in the range from 0.5 MeV to 5.4 MeV. The energy dependence of the total cross section is well described by the available phase-space volume weighted by FSI factors for the proton-proton and proton-eta pairs.
Sigma+ hyperon production was measured at the COSY-11 spectrometer via the p p --> n K+ Sigma+ reaction at excess energies of Q = 13 MeV and Q = 60 MeV. These measurements continue systematic hyperon production studies via the p p --> p K+ Lambda/Sigma0 reactions where a strong decrease of the cross section ratio close-to-threshold was observed. In order to verify models developed for the description of the Lambda and Sigma0 production we have performed the measurement on the Sigma+ hyperon and found unexpectedly that the total cross section is by more than one order of magnitude larger than predicted by all anticipated models. After the reconstruction of the kaon and neutron four momenta, the Sigma+ is identified via the missing mass technique. Details of the method and the measurement will be given and discussed in view of theoretical models.
K+ meson production in pA (A = C, Cu, Au) collisions has been studied using the ANKE spectrometer at an internal target position of the COSY-Juelich accelerator. The complete momentum spectrum of kaons emitted at forward angles, theta < 12 degrees, has been measured for a beam energy of T(p)=1.0 GeV, far below the free NN threshold of 1.58 GeV. The spectrum does not follow a thermal distribution at low kaon momenta and the larger momenta reflect a high degree of collectivity in the target nucleus.
We report a new measurement of the pseudorapidity (eta) and transverse-energy (Et) dependence of the inclusive jet production cross section in pbar-p collisions at sqrt(s) = 1.8 TeV using 95 pb**-1 of data collected with the DZero detector at the Fermilab Tevatron. The differential cross section d^2sigma/dEt deta is presented up to |eta| = 3, significantly extending previous measurements. The results are in good overall agreement with next-to-leading order predictions from QCD and indicate a preference for certain parton distribution functions.
We present the first observation of exclusive $e^+e^-$ production in hadron-hadron collisions, using $p\bar{p}$ collision data at \mbox{$\sqrt{s}=1.96$ TeV} taken by the Run II Collider Detector at Fermilab, and corresponding to an integrated luminosity of \mbox{532 pb$^{-1}$}. We require the absence of any particle signatures in the detector except for an electron and a positron candidate, each with transverse energy {$E_T>5$ GeV} and pseudorapidity {$|\eta|<2$}. With these criteria, 16 events are observed compared to a background expectation of {$1.9\pm0.3$} events. These events are consistent in cross section and properties with the QED process \mbox{$p\bar{p} \to p + e^+e^- + \bar{p}$} through two-photon exchange. The measured cross section is \mbox{$1.6^{+0.5}_{-0.3}\mathrm{(stat)}\pm0.3\mathrm{(syst)}$ pb}. This agrees with the theoretical prediction of {$1.71 \pm 0.01$ pb}.
|
We all know that the density of the nucleus is very high.
Nuclei are made up of protons and neutrons, and while protons have the same charge, they are closely packed in a nucleus. How does the repulsion between protons not break apart the nucleus?
Since the gravitational force between two protons is negligible there must be another force holding the nucleus together. This is the strong nuclear force, which as the name suggests is extremely strong but it is also extremely short range and so its effects are only felt on the scale of nuclei and baryons. As you can see in the graph, if two protons approach each other they repel each other due to electrostatic repulsion but once they get within about $\mathrm{3~fm}$ of each other the strong nuclear force begins to become significant and at about $\mathrm{2~fm}$ it starts to outweigh the electrostatic repulsion and bind the two protons together.
It is also interesting to note that at even shorter ranges the strong nuclear force is repulsive and so there is an equilibrium position, indicated on the graph, where the strong nuclear attraction and the electrostatic repulsion are equal in magnitude and so the net force on the protons is zero.
Note that the graph shown is for two protons and so in nuclei containing more than two protons the numbers will be different but the principle is the same.
Protons and neutrons in a nucleus are constantly emitting and absorbing little particles. When one nucleon emits a little particle called a "meson" that another nucleon absorbs, a strong force between the two nucleons results. This is called (strangely enough) the strong nuclear force
1, and it is strong enough to counteract the powerful electrostatic repulsion of the protons in a nucleus. Note: The modern explanation is richer and more complex than this. Since the 70's these "mesons" have been understood to be made up of smaller particles called quarks and gluons, which are also the building blocks of nucleons. In fact the strong force doesn't just bind the nucleus together; it holds the quarks in protons and neutrons together, too. The contemporary explanation of the strong interaction is in the realm of quantum chromodynamics, and really beyond the scope of chemistry. You can read more about it here.
Production and destruction of the messenger mesons violates the law of conservation of mass and energy! However, if the messenger particle has a very short lifetime, and so exists only for a very short time within a very small space, the particle can exist within the limitations set by the uncertainty principle. Particles like this are called
virtual particles.
Nuclei are very small (on the order of femtometers in radius), so the range of the strong nuclear force must be very small. You can make a back-of-the-envelope estimate of the range as follows.
The uncertainty principle says that you can’t
exactly determine the position and momentum of a very small particle simultaneously:
$$\sigma_x \, \sigma_p\ge \frac{\hbar}{2}$$
where $\sigma_x$ and $\sigma_p$ are the uncertainties in the position and momentum of a particle, and $\hbar$ is the reduced Planck constant. Given that the mass of the meson that mediates the strong force is $m \approx 2.4\times 10^{−28}$ kg, and the uncertainty in the velocity can’t be any larger than the speed of light $c$, you can compute $m c$ as a bound on $\sigma_p$.
You can then estimate $\sigma_x$, which should be related to the maximum distance that a meson can exist from the nucleon that generated it without violating the uncertainty principle, or the universal speed limit:
$$\sigma_x \approx \frac{\hbar}{2 m c} = 0.7\ \text{fm}$$
That’s the right order of magnitude for the range of the strong nuclear force; it's also about the distance between nucleons in the nucleus, and about 6/10ths of the classical radius of a proton.
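The back-of-the-envelope estimate above is a one-liner to reproduce (constants as given in the answer; the meson mass is the rough pion-scale value quoted in the text):

```python
hbar = 1.054571817e-34    # reduced Planck constant, J s
m = 2.4e-28               # meson mass from the text, kg
c = 2.99792458e8          # speed of light, m/s

sigma_x = hbar / (2 * m * c)   # rough range estimate, metres
# sigma_x comes out around 7.3e-16 m, i.e. roughly 0.7 fm
```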
Coulomb's electrostatic repulsion between two protons is extremely large compared to the gravitational force of attraction between them. Therefore, as you understand correctly, if Coulomb's repulsive and gravitational attractive forces were the only forces operating inside the nucleus, it could not be stable. The stability of the nucleus has been attributed to the existence of a third type of force inside the nucleus, called the nuclear force.
Reference: Modern's abc of Physics by Satish K.Gupta, Part 2, Class 12.
|
We have seen that sometimes double integrals are simplified by doing them in polar coordinates; not surprisingly, triple integrals are sometimes simpler in cylindrical coordinates or spherical coordinates. To set up integrals in polar coordinates, we had to understand the shape and area of a typical small region into which the region of integration was divided. We need to do the same thing here, for three dimensional regions.
The cylindrical coordinate system is the simplest, since it is just the polar coordinate system plus a \(z\) coordinate. A typical small unit of volume is the shape shown below "fattened up'' in the \(z\) direction, so its volume is \(r\Delta r\Delta \theta\Delta z\), or in the limit, \(r\,dr\,d\theta\,dz\).
A polar coordinates "grid".
Example \(\PageIndex{1}\)
Find the volume under \(z=\sqrt{4-r^2}\) above the quarter circle inside \(x^2+y^2=4\) in the first quadrant.
Solution
We could of course do this with a double integral, but we'll use a triple integral:
$$\int_0^{\pi/2}\int_0^2\int_0^{\sqrt{4-r^2}} r\,dz\,dr\,d\theta=
\int_0^{\pi/2}\int_0^2 \sqrt{4-r^2}\; r\,dr\,d\theta= {4\pi\over3}.$$
Compare this to Example 15.2.1.
Example \(\PageIndex{2}\)
An object occupies the space inside both the cylinder \(x^2+y^2=1\) and the sphere \(x^2+y^2+z^2=4\), and has density \(x^2\) at \((x,y,z)\). Find the total mass.
Solution
We set this up in cylindrical coordinates, recalling that \(x=r\cos\theta\):
$$\eqalign{
\int_0^{2\pi}\int_0^1\int_{-\sqrt{4-r^2}}^{\sqrt{4-r^2}} r^3\cos^2(\theta)\,dz\,dr\,d\theta &=\int_0^{2\pi}\int_0^1 2\sqrt{4-r^2}\;r^3\cos^2(\theta)\,dr\,d\theta\cr &=\int_0^{2\pi} \left({128\over15}-{22\over5}\sqrt3\right)\cos^2(\theta)\,d\theta\cr &=\left({128\over15}-{22\over5}\sqrt3\right)\pi\cr }$$
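As a sanity check, the double integral left after performing the $z$ integration can be evaluated numerically; this quick numpy midpoint-rule sketch is my own addition, not part of the text:

```python
import numpy as np

# Midpoint-rule check of the remaining r-theta integral for the mass.
n = 800
r = (np.arange(n) + 0.5) / n                  # r in (0, 1)
th = (np.arange(n) + 0.5) * 2 * np.pi / n     # theta in (0, 2*pi)
R, T = np.meshgrid(r, th, indexing="ij")
f = 2 * np.sqrt(4 - R ** 2) * R ** 3 * np.cos(T) ** 2
mass = f.sum() * (1.0 / n) * (2 * np.pi / n)

exact = (128 / 15 - 22 * np.sqrt(3) / 5) * np.pi   # about 2.8661
```

The numerical value agrees with the closed form $\left(\frac{128}{15}-\frac{22}{5}\sqrt3\right)\pi$ to several decimal places.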
Spherical coordinates are somewhat more difficult to understand. The small volume we want will be defined by \(\Delta\rho\), \(\Delta\phi\), and \(\Delta\theta\), as pictured in Figure \(\PageIndex{1}\).
The small volume is nearly box shaped, with 4 flat sides and two sides formed from bits of concentric spheres. When \(\Delta\rho\), \(\Delta\phi\), and \(\Delta\theta\) are all very small, the volume of this little region will be nearly the volume we get by treating it as a box. One dimension of the box is simply \(\Delta\rho\), the change in distance from the origin. The other two dimensions are the lengths of small circular arcs, so they are \(r\Delta\alpha\) for some suitable \(r\) and \(\alpha\), just as in the polar coordinates case.
Figure \(\PageIndex{1}\): A small unit of volume for spherical coordinates (AP)
The easiest of these to understand is the arc corresponding to a change in \(\phi\), which is nearly identical to the derivation for polar coordinates, as shown in the left graph in Figure \(\PageIndex{2}\). In that graph we are looking "face on'' at the side of the box we are interested in, so the small angle pictured is precisely \(\Delta\phi\), the vertical axis really is the \(z\) axis, but the horizontal axis is
not a real axis---it is just some line in the \(x\)-\(y\) plane. Because the other arc is governed by \(\theta\), we need to imagine looking straight down the \(z\) axis, so that the apparent angle we see is \(\Delta\theta\). In this view, the axes really are the \(x\) and \(y\) axes. In this graph, the apparent distance from the origin is not \(\rho\) but \(\rho\sin\phi\), as indicated in the left graph.
Figure \(\PageIndex{2}\): Setting up integration in spherical coordinates.
The upshot is that the volume of the little box is approximately \(\Delta\rho(\rho\Delta\phi)(\rho\sin\phi\Delta\theta) =\rho^2\sin\phi\Delta\rho\Delta\phi\Delta\theta\), or in the limit \(\rho^2\sin\phi\,d\rho\,d\phi\,d\theta\).
Example \(\PageIndex{3}\)
Suppose the temperature at \((x,y,z)\) is \[T=\dfrac{1}{1+x^2+y^2+z^2}.\nonumber\] Find the average temperature in the unit sphere centered at the origin.
Solution
In two dimensions we add up the temperature at "each'' point and divide by the area; here we add up the temperatures and divide by the volume, \((4/3)\pi\):
$${3\over4\pi}\int_{-1}^1\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}
\int_{-\sqrt{1-x^2-y^2}}^{\sqrt{1-x^2-y^2}} {1\over1+x^2+y^2+z^2}\,dz\,dy\,dx \nonumber $$
This looks quite messy; since everything in the problem is closely related to a sphere, we'll convert to spherical coordinates.
$${3\over4\pi}\int_0^{2\pi}\int_0^\pi
\int_0^1 {1\over1+\rho^2}\,\rho^2\sin\phi\,d\rho\,d\phi\,d\theta ={3\over4\pi}(4\pi -\pi^2)=3-{3\pi\over4}. \nonumber $$
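The spherical-coordinates answer is easy to confirm numerically; the following midpoint-rule sketch (my own addition, using the fact that the $\theta$ integral just contributes $2\pi$) reproduces $3-\frac{3\pi}{4}\approx0.6438$:

```python
import numpy as np

# Midpoint-rule evaluation of the spherical-coordinates average temperature.
n = 200
rho = (np.arange(n) + 0.5) / n              # rho in (0, 1)
phi = (np.arange(n) + 0.5) * np.pi / n      # phi in (0, pi)
P, F = np.meshgrid(rho, phi, indexing="ij")
integrand = P ** 2 * np.sin(F) / (1 + P ** 2)
integral = integrand.sum() * (1.0 / n) * (np.pi / n) * (2 * np.pi)
avg = 3 / (4 * np.pi) * integral

exact = 3 - 3 * np.pi / 4                   # about 0.6438
```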
|
My friend and I were building paper cones made from circles in which we cut out sectors of the circle and joined the two sides together. We were wondering about the relationship between the angle of the sector of the circle we cut out and the inclined angle of the cone the circle eventually made. While doing the math, when cutting the circle in half, a 180 degree sector, we eventually figured out the slant angle was somewhere around 56.944 degrees using cosine and the radius and slant height of the cone. I happened to notice that number was also very close to 180/pi, or 57.296 degrees. Is there any relationship between these two? Or is it coincidence that they're close in number? Could anyone explain to me what the relationship between these two degrees are?
The length of a circular arc is directly proportional to its central angle, so when you cut out a sector of angle $\theta$ and glue the cut edges together, the circumference of the base of the cone is going to be smaller than that of the original circle by this proportional amount: $c=C-\theta R$. Since the circumference of a circle is $2\pi$ times its radius, we then have for the radius of the cone’s base $$r = \left(1-\frac\theta{2\pi}\right)R.$$
The slant height of the cone is the original circle’s radius $R$, so $$\sin \alpha=\frac r R=1-\frac\theta{2\pi},$$ where $\alpha$ is the half-angle at the cone’s vertex.
If you cut out half of the circle, you’ll have $\sin\alpha=\frac12$, or $\alpha=\frac\pi6$, which is $30$ degrees.
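The relation above fits in a two-line function; this is a sketch of my own (angles in radians, `theta_removed` being the cut-out sector), which confirms that the half-180-degree cut gives a 30-degree half-angle, i.e. a 60-degree vertex angle rather than $180/\pi$ degrees:

```python
import math

def half_angle(theta_removed):
    """Half-angle at the cone's vertex after removing a sector of
    theta_removed radians, from sin(alpha) = 1 - theta/(2*pi)."""
    return math.asin(1.0 - theta_removed / (2.0 * math.pi))

deg = math.degrees(half_angle(math.pi))  # remove a 180-degree sector
# deg is 30; removing nothing gives 90 degrees (a flat disc)
```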
The projection of the slant height $L$ onto the base is the cone radius $r$, so the semi-vertical angle of the cone is given by:
$$ \sin \alpha = \frac{r}{L} = \dfrac{\gamma}{2 \pi}$$
$$ \dfrac{\gamma}{\sin \alpha} = 2 \pi $$
is the relation you are asking for. Accordingly the fully developed angle $\gamma $ at center of sector is
$$ 2 \pi \sin \alpha $$
In your case the remaining sector is the straight angle $\pi$, so the ratio to the full circle $2\pi$ is $\dfrac12$ and the semi-vertex angle is $\sin^{-1}\dfrac12 = 30^{\circ}$.
If you repeat the experiment with a little more care on a larger/stiffer sheet, the double angle would be
exactly $60^{\circ}$, not $180/\pi$ degrees (i.e. not one radian).
|
Let $X$ be a smooth manifold, perhaps oriented if necessary. The frame bundle $\pi:P \to X$ carries a canonical trivialization of the pullback of the tangent bundle of $X$ and thus a canonical trivialization of each Stiefel-Whitney class of $TX$. I want to describe these geometrically.
The goal is to describe a $w_n$ structure as a homotopy invariant assignment of signs to framed $(n-1)$-folds such that if we twist the framing in a certain way we get a minus sign. Understanding the canonical structure is enough to understand what this twist should be.
Since there has been some confusion, let me describe the case $w_2$ in detail.
A spin structure is a cocycle on the frame bundle which assigns $-1$ to the nontrivial loop of the fiber $SO$.
Now, $\pi^*TX$ has a tautological framing over $P$, since points of $P$ are pairs $x\in X$ and a frame of $T_x X$. $\pi^*TX$ inherits a canonical spin structure via this framing. That is, given a curve on $P$ with a choice of framing of $\pi^*TX$ restricted to that curve, we can compare this framing to the tautological one which gives us a well-defined sign. This sign clearly depends only on the homotopy class of the framed curve on $P$ and we get a minus sign if we twist the framing by one unit (ie. we add a copy of the nontrivial loop in the fiber).
There is another way of describing spin structures: a spin structure is a 1-cocycle $\eta$ with $\mathbb{Z}/2$ coefficients twisted by $w_2$. We can make this concrete by picking a set-theoretic section $s:SO \to Spin$. This gives us a cocycle representative of $w_2$, and shifting $s$ applied to the transition maps of $P$ by $\eta$, which we think of as $Spin$-valued, gives us transition maps for a $Spin$ frame bundle. That is, to each curve $\gamma$ we get an SO element $T(\gamma)$ describing the transition function of the tangent bundle. Then $$ s(T(\gamma)) + \eta(\gamma) $$ is the transition function for the spin frame bundle.
This bundle is a double cover of the frame bundle, and so should be classified by a 1-cocycle reproducing the previous definition of a spin structure.
To make better sense of this, we should look again at the canonical spin structure on $\pi^*TX$.
The canonical spin structure on $\pi^*TX$ gives us a way of translating between these two descriptions. In one definition, the canonical spin structure defines a 1-cocycle on $P$ with $\mathbb{Z}/2$ coefficients twisted by $\pi^*w_2$. Call this $F$.
Let $\eta$ be a spin structure on $X$. Then $$ \alpha = \pi^*\eta - F $$ is an ordinary 1-cocycle on $P$. I claim that this is a spin structure according to the first definition as long as we can figure out what $F$ is. To show that, we just need to see that $\alpha$ assigns $-1$ to the nontrivial loop in the fiber. $\pi^*\eta$ vanishes on that loop because the loop projects to a point in $X$, so this is the same as $F$ applied to that loop. Thus, $F$ can be anything as long as it assigns $-1$ to this loop. Some more thought needs to go into a concrete definition of $F$. It's just a matter of working through the general discussion above (trivialization to spin structure) in reverse.
I want to similarly describe this canonical trivialization of $w_n$. This will be a $\mathbb{Z}/2$-valued $(n-1)$-cochain on $P$ pulled back from the tautological bundle over $BO$. Its differential is $w_n$.
I want a description that looks like: to every framed $(n-1)$-fold we assign a sign such that if we twist the framing in a certain way we get a minus sign.
For $n=3$, my conjecture is that to a framed tube, the twist looks like we take a framed ring (a slice of the framed tube) and rotate the ring around its center (think smoke rings).
Any help or references are much appreciated.
EDIT: For the more abstract, I offer the following diagram. The class $w_n$ is represented by a map $BO \to B^n\mathbb{Z}/2$. We also have the maps $X \to BO$ classifying the tangent bundle, $X \to B^n\mathbb{Z}/2$ the class $w_n(TX)$, the identity $X\to X$, the maps $\star \to BO$, $\star \to B^n\mathbb{Z}/2$, and finally $\star \to \star$. This gives a map between two diagrams, one of which has as pullback the frame bundle $P$, the other has as pullback the $B^{n-1}\mathbb{Z}/2$ bundle over $X$ sections of which are $(n-1)$-cocycles in the $w_n(TX)$-twisted cohomology of $X$. The induced map (over $X$) from $P$ to this bundle is the canonical trivialization I'm talking about.
|
How to Implement the Fourier Transformation from Computed Solutions
We previously learned how to calculate the Fourier transform of a rectangular aperture in a Fraunhofer diffraction model in the COMSOL Multiphysics® software. In that example, the aperture was given as an analytical function. The procedure is a bit different if the source data for the Fourier transformation is a computed solution. In this blog post, we will learn how to implement the Fourier transformation for computed solutions with an electromagnetic simulation of a Fresnel lens.
Fourier Transformation with Fourier Optics
Implementing the Fourier transformation in a simulation can be useful in Fourier optics, signal processing (for use in frequency pattern extraction), and noise reduction and filtering via image processing. In Fourier optics, the Fresnel approximation is one of the approximation methods used for calculating the field near the diffracting aperture. Suppose a diffracting aperture is located in the (x,y) plane at z=0. The diffracted electric field in the (u,v) plane at the distance z=f from the diffracting aperture is calculated (up to an overall constant phase factor) as

E(u,v,f) \propto {\rm exp}\{-i\pi (u^2+v^2)/(\lambda f)\} \iint E(x,y,0)\, {\rm exp}\{-i\pi (x^2+y^2)/(\lambda f)\}\, {\rm exp}\{i 2\pi (xu+yv)/(\lambda f)\}\, dx\, dy,

where \lambda is the wavelength and E(x,y,0), \ E(u,v,f) denote the electric field at the (x,y) plane and the (u,v) plane, respectively. (See Ref. 1 for more details.)
In this approximation formula, the diffracted field is calculated by Fourier transforming the incident field multiplied by the quadratic phase function {\rm exp}\{-i\pi (x^2+y^2)/(\lambda f)\}.
The sign convention of the phase function must follow the sign convention of the time dependence of the fields. In COMSOL Multiphysics, the time dependence of the electromagnetic fields is of the form {\rm exp}(+i\omega t). So, the sign of the quadratic phase function is negative.
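The transform can also be sketched numerically outside COMSOL. Below is a minimal 1D numpy illustration (my own values for wavelength, focal distance, and aperture, not taken from the model): the exit-plane field is multiplied by the quadratic phase factor and then Fourier transformed with an FFT.

```python
import numpy as np

# Minimal 1D sketch of the Fresnel approximation (illustrative values only):
# multiply the exit-plane field by the quadratic phase exp(-i*pi*x^2/(lambda*f))
# and Fourier transform it to obtain the focal-plane field.
wavelength = 500e-9          # 500 nm (assumed)
f = 0.1                      # 10 cm focal distance (assumed)
n = 1024
dx = 1e-6                    # 1 um sampling
x = (np.arange(n) - n // 2) * dx

E_exit = np.where(np.abs(x) < 50e-6, 1.0, 0.0)             # a simple slit aperture
quad_phase = np.exp(-1j * np.pi * x**2 / (wavelength * f))

# The FFT approximates the continuous Fourier integral; fftshift centers u = 0.
# (np.fft uses the opposite kernel sign, which only mirrors the pattern.)
E_focal = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(E_exit * quad_phase))) * dx
u = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * wavelength * f  # spatial freq -> u

intensity = np.abs(E_focal) ** 2
print(u[np.argmax(intensity)])   # central maximum of the symmetric slit sits at u = 0
```

For this nearly Fraunhofer geometry the result is the familiar sinc-squared slit pattern; the quadratic phase becomes important as the Fresnel number grows.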
Fresnel Lenses
Now, let’s take a look at an example of a Fresnel lens. A Fresnel lens is a regular plano-convex lens except for its curved surface, which is folded toward the flat side at every multiple of m \lambda/(n-1) along the lens height, where m is an integer and n is the refractive index of the lens material. This is called an m th-order Fresnel lens.
The shift of the surface by this particular height along the light propagation direction only changes the phase of the light by 2m \pi (roughly speaking and under the paraxial approximation). Because of this, the folded lens fundamentally reproduces the same wavefront in the far field and behaves like the original unfolded lens. The main difference is the diffraction effect. Regular lenses basically don’t show any diffraction (if there is no vignetting by a hard aperture), while Fresnel lenses always show small diffraction patterns around the main spot due to the surface discontinuities and internal reflections.
When a Fresnel lens is designed digitally, the lens surface is made up of discrete layers, giving it a staircase-like appearance. This is called a multilevel Fresnel lens. Due to the flat part of the steps, the diffraction pattern of a multilevel Fresnel lens typically includes a zeroth-order background in addition to the higher-order diffraction.
Why are we using a Fresnel lens as our example? The reason is similar to why lighthouses use Fresnel lenses. A Fresnel lens is folded to a height of m \lambda/(n-1), so it can be extremely thin and therefore lighter and less bulky, which is beneficial for lighthouse optics compared to a large, heavy, and thick lens of the conventional refractive type. Likewise, for our purposes, a Fresnel lens is easier to simulate in COMSOL Multiphysics and the add-on Wave Optics Module because the number of elements is manageable.
Modeling a Focusing Fresnel Lens in COMSOL Multiphysics®
The figure below depicts the optics layout that we are trying to simulate to demonstrate how we can implement the Fourier transformation, applied to a computed solution solved for by the Wave Optics, Frequency Domain interface.

Focusing 16-level Fresnel lens model.
This is a first-order Fresnel lens with surfaces that are digitized in 16 levels. A plane wave E_{\rm inc} is incident on the incidence plane. At the exit plane at z=0, the field is diffracted by the Fresnel lens to be E(x,y,0). This process can be easily modeled and simulated by the Wave Optics, Frequency Domain interface. Then, we calculate the field E(u,v,f) at the focal plane at z=f by applying the Fourier transformation in the Fresnel approximation, as described above.
The figures below are the result of our computation, with the electric field component in the domains (top) and on the boundary corresponding to the exit plane (bottom). Note that the geometry is not drawn to scale in the vertical axis. We can clearly see the positively curved wavefront from the center and from every air gap between the saw teeth. Note that the reflection from the lens surfaces leads to some slight interference in the domain field result and ripples in the boundary field result. This is because no antireflective coating is modeled here.
The computed electric field component in the Fresnel lens and surrounding air domains (vertical axis is not to scale). The computed electric field component at the exit plane.

Implementing the Fourier Transformation from a Computed Solution
Let’s move on to the Fourier transformation. In the previous example of an analytical function, we prepared two data sets: one for the source space and one for the Fourier space. The parameter names that were defined in the Settings window of the data set were the spatial coordinates (x,y) in the source plane and the spatial coordinates (u,v) in the image plane.
In today’s example, the source space is already created in the computed data set, Study 1/Solution 1 (sol1){dset1}, with the computed solutions. All we need to do is create a one-dimensional data set, Grid1D {grid1}, with parameters for the Fourier space; i.e., the spatial coordinate u in the focal plane. We then relate it to the source data set, as seen in the figure below. Then, we define an integration operator intop1 on the exit plane.
Settings for the data set for the transformation.
The intop1 operator defined on the exit plane (vertical axis is not to scale).
Finally, we define the Fourier transformation in a 1D plot, shown below. It’s important to specify the data set we previously created for the transformation and to let COMSOL Multiphysics know that u is the destination independent variable by using the dest operator.
Settings for the Fourier transformation in a 1D plot.
The end result is shown in the following plot. This is a typical image of the focused beam through a multilevel Fresnel lens in the focal plane (see Ref. 2). There is the main spot by the first-order diffraction in the center and a weaker background caused by the zeroth-order (nondiffracted) and higher-order diffractions.
Electric field norm plot of the focused beam through a 16-level Fresnel lens.

Concluding Remarks
In this blog post, we learned how to implement the Fourier transformation for computed solutions. This functionality is useful for long-distance propagation calculation in COMSOL Multiphysics and extends electromagnetic simulation to Fourier optics.
Next Steps
Download the model files for the Fresnel lens example by clicking the button below.
Read More About Simulating Wave Optics

Simulating Holographic Data Storage in COMSOL Multiphysics
How to Simulate a Holographic Page Data Storage System
How to Implement the Fourier Transformation in COMSOL Multiphysics

References

1. J.W. Goodman, Introduction to Fourier Optics, The McGraw-Hill Companies, Inc.
2. D.C. O’Shea, Diffractive Optics, SPIE Press.
|
Defining parameters
Level: \( N = 33 = 3 \cdot 11 \)
Weight: \( k = 2 \)
Nonzero newspaces: \( 4 \)
Newforms: \( 5 \)
Sturm bound: \( 160 \)
Trace bound: \( 2 \)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{2}(\Gamma_1(33))\).
                    Total   New   Old
Modular forms          60    39    21
Cusp forms             21    19     2
Eisenstein series      39    20    19

Decomposition of \(S_{2}^{\mathrm{new}}(\Gamma_1(33))\)
We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
Label     \(\chi\)                   Newforms              Dimension   \(\chi\) degree
33.2.a    \(\chi_{33}(1, \cdot)\)    33.2.a.a              1           1
33.2.d    \(\chi_{33}(32, \cdot)\)   33.2.d.a              2           1
33.2.e    \(\chi_{33}(4, \cdot)\)    33.2.e.a, 33.2.e.b    4, 4        4
33.2.f    \(\chi_{33}(2, \cdot)\)    33.2.f.a              8           4
|
M5: Multivariable Calculus - Material for the year 2019-2020
16 lectures
In these lectures, students will be introduced to multi-dimensional vector calculus. They will be shown how to evaluate volume, surface and line integrals in three dimensions and how they are related via the Divergence Theorem and Stokes' Theorem - these are in essence higher dimensional versions of the Fundamental Theorem of Calculus.
Students will be able to perform calculations involving div, grad and curl, including appreciating their meanings physically and proving important identities. They will further have a geometric appreciation of three-dimensional space sufficient to calculate standard and non-standard line, surface and volume integrals. In later integral theorems they will see deep relationships involving the differential operators.
Multiple integrals: Two dimensions. Informal definition and evaluation by repeated integration; example over a rectangle; properties. General domains. Change of variables. Examples. [2]
Volume integrals: Jacobians for cylindrical and spherical polars, examples. [1.5]
Recap on surface and line integrals. Flux integrals including solid angle. Work integrals and conservative fields. [2]
Scalar and vector fields. Vector differential operators: divergence and curl; physical interpretation. Calculation. Identities. [2.5]
Divergence theorem. Example. Consequences: Green's first and second theorems. $\int_V \nabla \phi \, dV = \int_{\partial V} \phi \, d\mathbf{S}$. Uniqueness of solutions of Poisson's equation. Derivation of heat equation. Divergence theorem in the plane. Informal proof for the plane. [4]
Stokes's theorem. Examples. Consequences. The existence of potential for a conservative force. [2]
Gauss' Flux Theorem. Examples. Equivalence with Poisson's equation. [2]
1) D. W. Jordan & P. Smith, Mathematical Techniques (Oxford University Press, 3rd Edition, 2003).
2) Erwin Kreyszig, Advanced Engineering Mathematics (Wiley, 8th Edition, 1999).
3) D. E. Bourne & P. C. Kendall, Vector Analysis and Cartesian Tensors (Stanley Thornes, 1992).
4) David Acheson, From Calculus to Chaos: An Introduction to Dynamics (Oxford University Press, 1997).
|
Per OP's wish, here's the math.SE answer I link to in my comment above.
Maybe it's worthwhile to talk through where the dual comes from on an example problem. This will take a while, but hopefully the dual won't seem so mysterious when we're done.
Suppose we have the following primal problem.
$$\begin{align*}\text{Maximize }\:\:\:\:\: 5x_1 - 6x_2& \\\text{subject to }\:\:\:\:\: 2x_1 - x_2& = 1\\ x_1 + 3x_2& \leq 9\\ x_1 & \geq 0.\end{align*}$$
Now, suppose we want to use the primal's constraints as a way to find an upper bound on the optimal value of the primal. If we multiply the first constraint by $9$, the second constraint by $1$, and add them together, we get $9(2x_1 - x_2) + 1(x_1 +3 x_2)$ for the left-hand side and $9(1) + 1(9)$ for the right-hand side. Since the first constraint is an equality and the second is an inequality, this implies $$19x_1 - 6x_2 \leq 18.$$But since $x_1 \geq 0$, it's also true that $5x_1 \leq 19x_1$, and so $$5x_1 - 6x_2 \leq 19x_1 - 6x_2 \leq 18.$$Therefore, $18$ is an upper-bound on the optimal value of the primal problem.
Surely we can do better than that, though. Instead of just guessing $9$ and $1$ as the multipliers, let's let them be variables. Thus we're looking for multipliers $y_1$ and $y_2$ to force $$5x_1 - 6x_2 \leq y_1(2x_1-x_2) + y_2(x_1 + 3x_2) \leq y_1(1) + y_2(9).$$
Now, in order for this pair of inequalities to hold, what has to be true about $y_1$ and $y_2$? Let's take the two inequalities one at a time.
The first inequality: $5x_1 - 6x_2 \leq y_1(2x_1-x_2) + y_2(x_1 + 3x_2)$
We have to track the coefficients of the $x_1$ and $x_2$ variables separately. First, we need the total $x_1$ coefficient on the right-hand side to be at least $5$. Getting exactly $5$ would be great, but since $x_1 \geq 0$, anything larger than $5$ would also satisfy the inequality for $x_1$. Mathematically speaking, this means that we need $2y_1 + y_2 \geq 5$.
On the other hand, to ensure the inequality for the $x_2$ variable we need the total $x_2$ coefficient on the right-hand side to be exactly $-6$. Since $x_2$ could be positive, we can't go lower than $-6$, and since $x_2$ could be negative, we can't go higher than $-6$ (as the negative value for $x_2$ would flip the direction of the inequality). So for the first inequality to work for the $x_2$ variable, we've got to have $-y_1 + 3y_2 = -6$.
The second inequality: $y_1(2x_1-x_2) + y_2(x_1 + 3x_2) \leq y_1(1) + y_2(9)$
Here we have to track the $y_1$ and $y_2$ variables separately. The $y_1$ variables come from the first constraint, which is an equality constraint. It doesn't matter if $y_1$ is positive or negative, the equality constraint still holds. Thus $y_1$ is unrestricted in sign. However, the $y_2$ variable comes from the second constraint, which is a less-than-or-equal to constraint. If we were to multiply the second constraint by a negative number that would flip its direction and change it to a greater-than-or-equal constraint. To keep with our goal of upper-bounding the primal objective, we can't let that happen. So the $y_2$ variable can't be negative. Thus we must have $y_2 \geq 0$.
Finally, we want to make the right-hand side of the second inequality as small as possible, as we want the tightest upper-bound possible on the primal objective. So we want to minimize $y_1 + 9y_2$.
Putting all of these restrictions on $y_1$ and $y_2$ together we find that the problem of using the primal's constraints to find the best upper-bound on the optimal primal objective entails solving the following linear program:
$$\begin{align*}\text{Minimize }\:\:\:\:\: y_1 + 9y_2& \\\text{subject to }\:\:\:\:\: 2y_1 + y_2& \geq 5 \\ -y_1 + 3y_2& = -6\\ y_2 & \geq 0.\end{align*}$$
And that's the dual.
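As a sanity check (not part of the original argument), both problems can be handed to an off-the-shelf LP solver; strong duality says the two optimal values coincide. A quick sketch with scipy — note that `linprog` minimizes, so the primal objective is negated and the $\geq$ constraint is flipped:

```python
from scipy.optimize import linprog

# Primal: max 5x1 - 6x2  s.t.  2x1 - x2 = 1,  x1 + 3x2 <= 9,  x1 >= 0, x2 free.
primal = linprog(c=[-5, 6],                       # minimize -(5x1 - 6x2)
                 A_eq=[[2, -1]], b_eq=[1],        # 2x1 - x2 = 1
                 A_ub=[[1, 3]], b_ub=[9],         # x1 + 3x2 <= 9
                 bounds=[(0, None), (None, None)])

# Dual: min y1 + 9y2  s.t.  2y1 + y2 >= 5,  -y1 + 3y2 = -6,  y1 free, y2 >= 0.
dual = linprog(c=[1, 9],
               A_ub=[[-2, -1]], b_ub=[-5],        # 2y1 + y2 >= 5, flipped for <=
               A_eq=[[-1, 3]], b_eq=[-6],         # -y1 + 3y2 = -6
               bounds=[(None, None), (0, None)])

print(-primal.fun, dual.fun)   # both optima are 6, as strong duality requires
```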
It's probably worth summarizing the implications of this argument for all possible forms of the primal and dual. The following table is taken from p. 214 of Introduction to Operations Research, 8th edition, by Hillier and Lieberman. They refer to this as the SOB method, where SOB stands for Sensible, Odd, or Bizarre, depending on how likely one would find that particular constraint or variable restriction in a maximization or minimization problem.
    Primal Problem (or Dual Problem)        Dual Problem (or Primal Problem)
    Maximization                            Minimization
    Sensible: <= constraint          <-->   nonnegative variable
    Odd:      =  constraint          <-->   unconstrained variable
    Bizarre:  >= constraint          <-->   nonpositive variable
    Sensible: nonnegative variable   <-->   >= constraint
    Odd:      unconstrained variable <-->   =  constraint
    Bizarre:  nonpositive variable   <-->   <= constraint
|
You can use ListConvolve to simulate a single diffusion time step and build a simulation out of that. I'll show a simple example: let's say we start with simple initial conditions like in your example
(initialconditions = Normal@SparseArray[{{3, 3} -> 1}, {5, 5}]) // MatrixForm
and a diffusion kernel
kernel = {
{1/120, 1/60, 1/120},
{1/60, 9/10, 1/60},
{1/120, 1/60, 1/120}
};
Then we can define a function that simulates a single time step as
Step = ListConvolve[kernel, #, 2, 0] &;
Here the 2 aligns the list with the center of our kernel to make sure the simulation doesn't drift. The 0 is for padding outside our grid; otherwise, we would get cyclic convolution, which we don't want in this case.
Now we can simulate multiple time steps via NestList:
solution = NestList[Step, initialconditions, 30];
and plot the solution as an animation:
ListAnimate[ListPlot3D[#, PlotRange -> {0, 1}, MeshFunctions -> {#3 &}] & /@ solution]
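For readers who prefer to check this outside Mathematica, here is a rough Python equivalent of one time step (my translation, not the original code), using scipy's 2D convolution with zero padding in place of ListConvolve[kernel, #, 2, 0]:

```python
import numpy as np
from scipy.signal import convolve2d

# Python analogue (sketch) of the single ListConvolve step: 'same'-mode
# convolution with zero-fill padding, using the 3x3 kernel from the answer.
kernel = np.array([[1/120, 1/60, 1/120],
                   [1/60,  9/10, 1/60],
                   [1/120, 1/60, 1/120]])

u0 = np.zeros((5, 5))
u0[2, 2] = 1.0                     # same point source as the initial conditions

step = lambda u: convolve2d(u, kernel, mode="same", boundary="fill", fillvalue=0)
u1 = step(u0)
print(u1.sum())   # ~1.0: the kernel sums to 1, so total "heat" is conserved
                  # as long as nothing has reached the zero-padded border
```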
Realistic thermal diffusion example
We can do a more realistic example to show how we could actually use this to simulate a real physical scenario.
We'll start with the heat equation
heateq = dudt - \[Alpha] laplaceu == 0
where u is the temperature, $\alpha$ is the thermal diffusivity, dudt is the change of temperature over time, and laplaceu is the curvature of the temperature, i.e., the total of the second derivatives of u with respect to our spatial dimensions x and y.
Let's discretize our heat equation in time by replacing our time derivative by a finite difference
heateq /. {dudt -> \[CapitalDelta]u/\[CapitalDelta]t}
Now, we can solve this for $\Delta u$ to know what our next u after one timestep should be
nextu = u + \[CapitalDelta]u /. First@Solve[%, \[CapitalDelta]u]
u + laplaceu $\alpha$ $\Delta$t
The next step is to discretize our heat equation in space, too. We do this by approximating our spatial derivatives by finite difference approximations, which requires the value of the immediate neighbours for every grid cell for which a 3x3 kernel is sufficient. The kernel can be constructed like this:
(diffusionkernel = nextu /. {u -> ( {
{0, 0, 0},
{0, 1, 0},
{0, 0, 0}
} ), laplaceu -> (1/\[CapitalDelta]x^2 ( {
{0, 0, 0},
{1, -2, 1},
{0, 0, 0}
} ) + 1/\[CapitalDelta]y^2 ( {
{0, 1, 0},
{0, -2, 0},
{0, 1, 0}
} ))}) // MatrixForm
and now we have a nice general diffusion kernel where we can plug in physical values for the width and height of our grid cells, the thermal diffusivity of our material and the amount of time one simulated time step represents.
For this example let's go with 1mm x 1mm grid cells and a time step of 1ms and Gold, which has a thermal diffusivity of $1.27\cdot10^{-4} m^2/s$
kernel = diffusionkernel /. {
\[Alpha] -> 1.27*10^-4(*thermal diffusivity of gold*),
\[CapitalDelta]x -> 1/1000(*1mm*),
\[CapitalDelta]y -> 1/1000(*1mm*),
\[CapitalDelta]t -> 1/1000(*1ms*)
}
We define our diffusion step as before
DiffusionStep = ListConvolve[kernel, #, 2, 0] &
and choose a grid dimension to represent 1cm x 1cm
n = 11;(* set the dimensions of our simulation grid *)
We need some initial conditions
(initialconditions = Array[20 &, {n, n}]) // MatrixForm
representing constant room temperature of 20 degrees. Also we need some interesting boundary conditions. Let's say we choose to heat the left half of the border of our gold bar to a constant 100 degrees and the right side we cool to have constant 0 degrees:
(bcmask = Array[Boole[#1 == 1 \[Or] #1 == n \[Or] #2 == 1 \[Or] #2 == n] &, {n, n}]) // MatrixForm
(bcvalues = Array[If[#2 <= n/2, 100, 0] &, {n, n}]) // MatrixForm
Here we encoded the boundary conditions as a binary mask, which specifies if the grid cell is a boundary cell and a matrix which contains the values the boundary cells should have.
We can now write a step which enforces our boundary condition:
EnforceBoundaryConditions = bcmask*bcvalues + (1 - bcmask) # &;
E.g. applied to our initial conditions it looks like this
EnforceBoundaryConditions[initialconditions] // MatrixForm
Now that we have everything together we can start our simulation of our partly heated/partly cooled infinite 1cm x 1cm gold bar!
solution = NestList[
Composition[EnforceBoundaryConditions, DiffusionStep],
initialconditions,
30
];
and visualize the result:
anim = ListPlot3D[#2,
PlotRange -> {0, 100}, MeshFunctions -> {#3 &},
AxesLabel -> {"x/mm", "y/mm", "T/\[Degree]C"},
PlotLabel -> "Temperature distribution after " <> ToString[#1] <> " ms",
DataRange -> {{0, n - 1}, {0, n - 1}}
] & @@@ Transpose[{Range[Length[#]] - 1, #} &@solution];
ListAnimate[anim]
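The same simulation translates almost line for line into numpy, if a comparison outside Mathematica is useful. This is my sketch: the constants mirror the ones above (gold, 1 mm cells, 1 ms steps), and the boundary-mask logic is slightly simplified.

```python
import numpy as np

# Explicit diffusion step u_next = u + alpha*dt*laplacian(u), with Dirichlet
# boundary values re-imposed after every step, mirroring the example above.
alpha, dx, dt = 1.27e-4, 1e-3, 1e-3   # gold, 1 mm cells, 1 ms steps
r = alpha * dt / dx**2                # 0.127, below the 0.25 stability limit
n = 11

u = np.full((n, n), 20.0)             # room temperature everywhere
bc_mask = np.zeros((n, n), bool)
bc_mask[0, :] = bc_mask[-1, :] = bc_mask[:, 0] = bc_mask[:, -1] = True
# left half of the border heated to 100, right half cooled to 0 (per column)
bc_values = np.where(np.arange(n) < n / 2, 100.0, 0.0) * np.ones((n, 1))

def step(u):
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                       - 4 * u[1:-1, 1:-1])
    return np.where(bc_mask, bc_values, u + r * lap)

for _ in range(30):
    u = step(u)
print(u.min(), u.max())   # interior stays between the 0 and 100 degree boundaries
```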
This was actually fun to work on — thanks to @J.M. for the suggestion!
|
We start with a simulated Poisson example where the \(\Theta_i\) are drawn from a chi-squared density with 10 degrees of freedom and the \(X_i|\Theta_i\) are Poisson with expectation \(\Theta_i:\)
\[ \Theta_i \sim \chi^2_{10} \mbox{ and } X_i|\Theta_i \sim \mbox{Poisson}(\Theta_i) \]
The \(\Theta_i\) for this setting, with N = 1000 observations, can be generated as follows.
set.seed(238923) ## for reproducibility
N <- 1000
Theta <- rchisq(N, df = 10)
Next, the \(X_i|\Theta_i\), for each of nSIM = 1000 simulations, can be generated as below.
nSIM <- 1000
data <- sapply(seq_len(nSIM), function(x) rpois(n = N, lambda = Theta))
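For non-R readers, the same setup in Python/numpy — a sketch with an arbitrary seed, not a reproduction of the R random stream:

```python
import numpy as np

# Simulation setup: Theta_i ~ chi^2 with 10 df, X_i | Theta_i ~ Poisson(Theta_i),
# for N = 1000 units and nSIM replications (seed chosen arbitrarily).
rng = np.random.default_rng(238923)
N, nSIM = 1000, 1000
Theta = rng.chisquare(df=10, size=N)
data = rng.poisson(lam=Theta, size=(nSIM, N))   # each row is one simulated sample
print(data.shape, Theta.mean())                 # the chi^2_10 mean is about 10
```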
We take the discrete set \(\mathcal{T}=(1, 2, \ldots, 32)\) as the \(\Theta\)-space and apply the deconv function in the package deconvolveR to estimate \(g(\theta).\)
library(deconvolveR)
tau <- seq(1, 32)
results <- apply(data, 2, function(x) deconv(tau = tau, X = x, ignoreZero = FALSE, c0 = 1))
The default setting for deconv uses the Poisson family and a natural cubic spline basis of degree 5 as \(Q.\) The regularization parameter for this example (c0) is set to 1. The ignoreZero parameter indicates that this dataset contains zero counts, i.e., the zeros have not been truncated. (In the Shakespeare example below, the counts are of words seen in the canon, and so there is a natural truncation at zero.) Some warnings are emitted by the nlm routine used for optimization, but they are mostly inconsequential.
Since deconv works on one sample at a time, the result above is a list of lists, from which various statistics can be extracted. Below, we construct a table of values for various values of \(\Theta\).
g <- sapply(results, function(x) x$stats[, "g"])
mean <- apply(g, 1, mean)
SE.g <- sapply(results, function(x) x$stats[, "SE.g"])
sd <- apply(SE.g, 1, mean)
Bias.g <- sapply(results, function(x) x$stats[, "Bias.g"])
bias <- apply(Bias.g, 1, mean)
gTheta <- pchisq(tau, df = 10) - pchisq(c(0, tau[-length(tau)]), df = 10)
gTheta <- gTheta / sum(gTheta)
simData <- data.frame(theta = tau, gTheta = gTheta, Mean = mean, StdDev = sd, Bias = bias, CoefVar = sd / mean)
table1 <- transform(simData, gTheta = 100 * gTheta, Mean = 100 * Mean, StdDev = 100 * StdDev, Bias = 100 * Bias)
The table below summarizes the results for some chosen values of \(\theta .\)
knitr::kable(table1[c(5, 10, 15, 20, 25), ], row.names=FALSE)
theta gTheta Mean StdDev Bias CoefVar 5 5.6191465 5.4416722 0.3635378 -0.1235865 0.0668063 10 9.1646990 9.5316279 0.4917673 0.2588187 0.0515932 15 4.0946148 3.3421426 0.3119862 -0.0683187 0.0933492 20 1.1014405 0.9810001 0.2246999 -0.1225092 0.2290519 25 0.2255788 0.1496337 0.0677919 0.0602422 0.4530522
Although the coefficient of variation of \(\hat{g}(\theta)\) is still large, the \(g(\theta)\) estimates are reasonable.
We can compare the empirical standard deviations and biases of \(g(\hat{\alpha})\) with the approximation given by the formulas in the paper.
library(ggplot2)
library(cowplot)
theme_set(theme_get() + theme(panel.grid.major = element_line(colour = "gray90", size = 0.2), panel.grid.minor = element_line(colour = "gray98", size = 0.5)))
p1 <- ggplot(data = as.data.frame(results[[1]]$stats)) + geom_line(mapping = aes(x = theta, y = SE.g), color = "black", linetype = "solid") + geom_line(mapping = aes(x = simData$theta, y = simData$StdDev), color = "red", linetype = "dashed") + labs(x = expression(theta), y = "Std. Dev")
p2 <- ggplot(data = as.data.frame(results[[1]]$stats)) + geom_line(mapping = aes(x = theta, y = Bias.g), color = "black", linetype = "solid") + geom_line(mapping = aes(x = simData$theta, y = simData$Bias), color = "red", linetype = "dashed") + labs(x = expression(theta), y = "Bias")
plot_grid(plotlist = list(p1, p2), ncol = 2)
The approximation is quite good for the standard deviations, but a little too small for the biases.
Here we are given the word counts for the entire Shakespeare canon in the data set bardWordCount. We assume the \(i\)th distinct word appeared \(X_i \sim \mbox{Poisson}(\Theta_i)\) times in the canon.
data(bardWordCount)
str(bardWordCount)
## num [1:100] 14376 4343 2292 1463 1043 ...
We take the support set \(\mathcal{T}\) for \(\Theta\) to be equally spaced on the log-scale and the sample space for \(\mathcal{X}\) to be \((1,2,\ldots,100).\)
lambda <- seq(-4, 4.5, .025)
tau <- exp(lambda)
Using a regularization parameter of c0 = 2, we can deconvolve the data to get \(\hat{g}.\)
result <- deconv(tau = tau, y = bardWordCount, n = 100, c0 = 2)
stats <- result$stats
The plot below shows the empirical Bayes deconvolution estimates for the Shakespeare word counts.
ggplot() + geom_line(mapping = aes(x = lambda, y = stats[, "g"])) + labs(x = expression(log(theta)), y = expression(g(theta)))
The quantity \(R(\alpha)\) in the paper (Efron, Biometrika 2015) can be extracted from the stats list; in this case, for a regularization parameter of c0 = 2, we can print its value:
print(result$S)
## [1] 0.005534954
The stats list contains other estimated quantities as well.
As noted in the paper citing this package, about 44 percent of the total mass of \(\hat{g}\) lies below \(\Theta = 1\), which is an underestimate. This can be corrected for by defining \[ \tilde{g} = c_1\hat{g} / (1 - e^{-\theta_j}), \] where \(c_1\) is the constant that normalizes \(\tilde{g}\).
When there is truncation at zero, as is the case here, the deconvolveR package now returns an additional column in stats[, "tg"], which contains this correction for thinning. (The default invocation of deconv assumes zero truncation for the Poisson family, argument ignoreZero = FALSE.)
d <- data.frame(lambda = lambda, g = stats[, "g"], tg = stats[, "tg"], SE.g = stats[, "SE.g"])
indices <- seq(1, length(lambda), 5)
ggplot(data = d) + geom_line(mapping = aes(x = lambda, y = g)) + geom_errorbar(data = d[indices, ], mapping = aes(x = lambda, ymin = g - SE.g, ymax = g + SE.g), width = .01, color = "blue") + labs(x = expression(log(theta)), y = expression(g(theta))) + ylim(0, 0.006) + geom_line(mapping = aes(x = lambda, y = tg), linetype = "dashed", color = "red")
|
I hope that the title isn't too provocative. ;-)
Bill Z. has brought my attention to a December 2012 nuclear physics paper that was updated 3 days ago,
The detailed calculations are concerned with the Hoyle state. What is it and what did the authors of the new paper conclude?
In 1954, Fred Hoyle noticed that we were lucky about a seemingly technical coincidence in nuclear physics that was apparently needed for us to exist. Note that in the baryonic (proton- and neutron-based i.e. visible) matter in the Universe around us, hydrogen and helium are the dominant elements.
It's no coincidence. It was mostly hydrogen (\(Z=1\)) and helium (\(Z=2\)) – and just some lithium (\(Z=3\)), aside from negligible trace amounts of heavier elements (\(Z\geq 4\)), that was directly produced during the Big Bang nucleosynthesis in the first three minutes after the Big Bang. One may reconstruct the temperature in the Universe during these early formative stages of our Cosmos and calculate, using the statistical methods, the ratios of the concentrations of these three light elements. The results seem to match the observations of hydrogen, helium, lithium rather impressively – and this agreement is one of the important pieces of evidence supporting the Big Bang paradigm.
Looking from a practical perspective, are the three lightest elements enough to get everything we need to be happy? Well, lithium may be helpful for some laptop and cell phone batteries and helium is useful at most for funny tricks to change your voice into the voice of the Smurfs (fine, it's also great as the gas in balloons, either for kids or adults, and as the coolant in NMR and the LHC). We could also describe helium as the main waste product of the thermonuclear reactions in the Sun and other stars if it weren't too disrespectful.
Hydrogen is useful for a huge fraction of compounds we need and love. But where is the rest? It's obvious that the three elements aren't enough to build life and the civilization as we know it. In particular, two other major heavier elements behind the miraculous project of life – carbon and oxygen – seem to be absent. Yes, these are the same two elements found in the gas that they call a pollution but we call it life.
Aside from hydrogen, carbon, and oxygen, the three major elements, nitrogen, phosphorus, and sulfur are three more elements that are paramount for life we know. Other elements such as silicon or calcium or fluorine (because I mentioned the dentists) may be helpful to shape our bodies at various moments and/or to create various intelligent gadgets but they're no longer a universal "must". Where do all these elements come from?
The heavier elements are abundantly enough produced by helium burning in stars that have gone red giants. Our carbon, oxygen, and other elements arose from the production in these red giant stars that once existed but they are no more. In 7.5 billion years or so, our beloved Sun will become a red giant, too. It will devour the Earth and other planets – that's the less dramatic part of the story – but it will also produce completely new carbon, oxygen, and other elements that may be incorporated into the bodies of a future extraterrestrial civilization.
Fine. How do the red giants produce carbon (\(Z=6\))?
Note that six is a multiple of two: we call these numbers "even". So it has the right number of protons to arise from several, namely three, nuclei of helium which seem abundant. Moreover, the ordinary carbon nuclei we need have 6 protons and 6 neutrons, the same number, so it seems appropriate to combine three helium-4 nuclei to create carbon-12:\[
\Large {}^4_2{\rm He} + {}^4_2{\rm He} + {}^4_2{\rm He} \rightarrow {}^{12}_6{\rm C}
\] It's somewhat unlikely for three helium-4 nuclei – which I will call the alpha-particles just like everyone else – to hit each other and produce the carbon nucleus directly. So the reaction actually proceeds in two steps, with an unstable level of beryllium-8 or \({}^8_4{\rm Be}\) in between. This beryllium-8 nucleus combines with another alpha-particle to get the desired carbon but this second step has too low a rate.
We wouldn't get enough carbon in this way (if it were just a generic fusion of these nuclei) and it's actually known that something special is going on. There is a resonance, a \(0^+\) state of carbon-12 known as the Hoyle state. Fred Hoyle actually predicted – using the apparent abundance of carbon as the only input – that it should be somewhere over there and indeed, the prediction was soon experimentally confirmed.
There exists a state of carbon-12 whose mass/energy is equal to the mass/energy of three free alpha-particles plus \(\varepsilon=397.47(18)\keV\); we say that the state is \(\varepsilon\) above the three-alpha threshold (a threshold, in general, is the minimum mass/energy of an unstable/composite object that allows the corresponding state to decay to particular products without violating the energy conservation law).
In the relevant region of the parameter space, the reaction rate for the carbon-12 production – via the Hoyle resonance – may be approximated by \[
r_{3\alpha} = \Gamma_\gamma (N_\alpha/k_B T)^3 \exp(-\varepsilon/k_B T).
\] You see that up to an overall normalization constant, this is equal to the third power of the number (density) of alpha-particles per unit volume (because three of them have to meet) and a Boltzmannian factor that exponentially decreases with energy. This \(\varepsilon\) shouldn't be too high because the exponential suppression could be severe. It shouldn't be too small, either, for other reasons. In the past, it was argued that a 15% deviation of \(\varepsilon\) from the known value could still allow enough carbon etc. for life.
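To get a feel for just how exponential this sensitivity is, here is a small numerical sketch of my own (not from the paper): at an assumed helium-burning core temperature of roughly \(10^8\,{\rm K}\), it evaluates the Boltzmann factor and asks what a 15% shift of \(\varepsilon\) does to the rate, everything else cancelling in the ratio.

```python
import math

K_B_KEV_PER_K = 8.617e-8   # Boltzmann constant in keV per kelvin
EPSILON_KEV = 379.47       # Hoyle state above the 3-alpha threshold (keV)
T_KELVIN = 1e8             # rough helium-burning core temperature (assumed)

kT = K_B_KEV_PER_K * T_KELVIN            # about 8.6 keV
boltzmann = math.exp(-EPSILON_KEV / kT)  # suppression factor in the rate

# Rate ratio if epsilon were 15% higher: only the exponential survives.
ratio = math.exp(-(1.15 * EPSILON_KEV) / kT) / boltzmann
print(f"eps/kT = {EPSILON_KEV / kT:.1f}")
print(f"15% higher eps suppresses the rate by a factor of {1/ratio:.0f}")
```

With \(\varepsilon/k_BT \approx 44\), a 15% upward shift of \(\varepsilon\) already cuts the rate by a factor of several hundred, which is why the position of the resonance matters so much.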
Now, what have they found about the dependence of this accident on the light quark mass?
They use a novel numerical method,
nuclear lattice simulations, to calculate the dependence. It's something like lattice QCD except that it seems to work with some composite pion fields and the low-energy emergent nuclear physics mess instead of the fundamental QCD fields. The light quark mass is translated to the mass of the pion \(M_\pi\) and they discuss the dependence of the energies of several energy levels on either the light quark mass or the pion mass which is almost the same dependence.
I don't want to bore you with all the details – you may read the original paper, it's just 4 pages long. Instead, let me repost a graph summarizing some partial results of their analysis:
On the \(x\)-axis, they depicted the relative change of the binding energy of the alpha-particle; on the \(y\)-axis, you see the corresponding (much larger) relative change of the three energies related to the helium-4, beryllium-8, and carbon-12 nuclei, namely of\[
\begin{aligned}
\Delta E_h &= E_{12}^* - E_8 - E_4\\
\Delta E_b &= E_8 - 2E_4\\
\varepsilon &= E_{12}^* - 3E_4 = \Delta E_h + \Delta E_b
\end{aligned}
\] Because of the final relationship for \(\varepsilon\), it's not surprising that the yellow curve is in between the other two. But what may be surprising is that these two curves – and therefore all three curves – have pretty much the same slope. What does it mean? It means that the several fine-tunings are actually not independent from each other.
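For concreteness, the identity \(\varepsilon = \Delta E_h + \Delta E_b\) can be checked against rough experimental numbers (quoted here from memory and rounded, so treat them as approximate): beryllium-8 sits about 92 keV above two alphas, and the Hoyle state about 288 keV above beryllium-8 plus an alpha.

```python
# Approximate experimental values in keV (rounded; illustrative only).
delta_E_b = 91.8    # Be-8 ground state above the 2-alpha threshold
delta_E_h = 287.6   # Hoyle state above the Be-8 + alpha threshold

epsilon = delta_E_h + delta_E_b  # Hoyle state above the 3-alpha threshold
print(f"epsilon = {epsilon:.1f} keV")
```

The sum lands close to the measured value of \(\varepsilon\), as the identity demands.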
You could think that for the nuclear factory to work and produce the elements, you may need several miracles – several anthropic conditions, in this case three – and therefore God has to be even greater than the size He would adopt if there were just one miracle. God's omnipotence seems like the third power of a generic god's power (or three times? It depends whether His omnipotence is quantified on the log scale).
However, the new paper shows that this ain't the case. The three coincidences aren't independent from each other. Pretty much because of mathematical identities, they're more or less equivalent to
one coincidence only. If one identity for the nuclear level energies holds, the other two will probably hold as well, with a highly acceptable accuracy. It means that one miracle is enough and life is much more likely than what you would expect if you thought that the three conditions were independent of each other.
Now, you could claim that we still need meta-God to explain the mathematical "metamiracle" that the three conditions are actually almost equivalent to each other. If you did so, you would cover these questions by lots of exciting religious fog. But the matter of fact is that they can actually explain this "metamiracle" – at least in a preliminary way – in terms of completely non-mysterious, irreligious arguments, too. The same slopes kind of follow from the alpha-cluster structure of beryllium-8 and carbon-12 nuclei. I won't present this derivation in its full glory but the alpha-particle-based compositeness of the two nuclei sort of rationally explains why the two slopes are almost the same.
Even though several levels and energy differences are involved, the authors de facto show that there is only one "miracle" we need for a sufficient production of the heavier elements behind life. Moreover, the tolerated error for \(\alpha_{\rm elmg}\) as well as \(m_q\) could be around 2 percent or so: the fine-tuning isn't extreme.
Although this very topic may make you "wish" that there is some evidence for the Intelligent Design and/or a stunningly convincing role for the anthropic principle, and the very fact that a paper about this metaphysical and mysterious question was written could manipulate you into a more spiritual thinking, I would say that the actual results of their analysis imply exactly the opposite conclusion. Different "miracles" aren't really independent from each other and they're not "terribly unlikely miracles", anyway. Good luck at the 1-in-50 level seems to be enough for the amount of carbon to be just fine. At most, you may need two such 1-in-50 fine-tunings – one for the fine-structure constant and one for the light quark mass – except that I think that only some combination of them will have a high enough impact on the essential processes needed for the elements of life to arise.
Now, you could still argue that 1-in-50 is a low chance. The probability \(p=0.02\) or so is pretty small, some of you could say, and this strengthens some arguments in favor of God, Intelligent Design, the anthropic principle, or something along these lines. Well, perhaps. I don't think it's a right way to think about this coincidence. Why?
First, \(p=0.02\) is equivalent to a "bump just a little bit larger than a 2-sigma bump". To make extraordinary claims about God or the anthropic principle – and one really doesn't know which of these (or other metaphysical) explanations "follows" from the "miracle" – and justify them by not-so-extraordinary evidence such as 2-sigma bumps seems to betray the lack of evidence. Extraordinary claims require extraordinary evidence and this ain't one.
Second, this not-so-extreme probability \(p=0.02\) is the \(p\)-value before the look-elsewhere effect of a particular type is included. What I want to say is that we're computing the probability that a particular system of nuclear furnaces will be able to produce a particular type of life (determined by its elements etc.). However, there could very well be other types of life – perhaps \({\mathcal O}(50)\) types of life – that may arise in the same parameter space which means that the probability that at least one of these types of life will be allowed for a "random" choice of the values of parameters may approach 100 percent.
These observations of coincidences that are needed for life are intriguing but we shouldn't get carried away, for two basic reasons. First, as argued above, the probabilities that we get a tolerable value of the parameters (values compatible with life) aren't extremely tiny and we should treat these "modestly suggestive" low probabilities just like any other 2-sigma bumps in physics. They're not enough to settle a question, they're not enough for a paradigm shift.
Second, it's pretty much guaranteed that if we calculate the odds that some "conditions constraining parameters that make the theory friendly to life as we know it" are obeyed, the answer will probably be Yes because our type of life does exist, after all. The proposition that "conditions apparently needed for this life to arise with a significant probability were satisfied" is pretty much tautologically true. These conditions simply aren't independent of some known empirical facts. We're just measuring the answer to the question "Does life exist?" using a different, perhaps more contrived, procedure. But the fact that many such questions have "Yes" answers isn't a miracle; it's pretty much tautologically guaranteed because these questions were cherry-picked for their equivalence to the existence of life (or some aspects of this existence).
If the probabilities arising in similar anthropic coincidences were much tinier, i.e. much more extreme than 2-sigma bumps, I could be impressed. The tiny cosmological constant could be an indication of this sort. However, we may only argue that the "probability that the cosmological constant is below \(10^{-120}m_{\rm Planck}^4\) is of order \(10^{-120}\)" if we adopt a uniform probability distribution for cosmological constants in the interval comparable to \((0,m_{\rm Planck}^4)\).
While some plausible models that make the uniform distribution look natural exist, they're not "inevitably true" and it's still easy to imagine that this uniform probability distribution is a completely naive, wrong expectation. If we replace it by another one – one that follows from a slightly sophisticated mechanism and one that gives tiny values of the cosmological constant with far higher probabilities – the "unavoidably impressive" miracle goes away once again. If you wanted to convince me that there is a miracle that harbors strong evidence in favor of the anthropic reasoning or God or anything like that, you would have to show me a coincidence that has a tiny probability according to the right calculation of probabilities (a calculation which takes all mathematically guaranteed correlations such as those above into account) and a nicely justifiable probability distribution for the parameters.
If you used a quasi-uniform one, you would have to convince me that it's reasonable to expect that the distribution is quasi-uniform for that situation. It would have to be so reasonable – almost inevitable – that, in fact, I would find your anthropic principle or God more likely as an explanation than the mundane possibility that there simply exists a better argument or "better theory" telling you that a non-uniform distribution is actually a much more sensible (and likely) one. Or a better theory that simply allows you to calculate the observed value. How strong evidence is needed to prefer God over the "better theory" depends on subjective prior probabilities but be sure that 2-sigma or even 3-sigma bumps are way too small for people like me to pick God or His best pal, the anthropic anti-God, instead of a "better, so far unknown, theory".
If you can't show me such a thing, I would keep on insisting that there doesn't exist any tangible evidence to believe the anthropic/religious paradigm and because these things aren't mathematically elegant or explaining any true pre-existing mysteries in physics, they don't really deserve to become a part of physics, at least at this moment.
|
Basically 2 strings, $a>b$, which go into the first box; the box does division and outputs $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box..
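The loop being described is just the Euclidean algorithm; a minimal sketch (variable names are mine, and note the recursion continues on the pair $(b, r)$):

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly divide, a = b*q + r, until r = 0."""
    while b != 0:
        q, r = divmod(a, b)   # the "division box": outputs quotient and remainder
        a, b = b, r           # feed (b, r) back into the box
    return a

print(gcd(252, 105))  # → 21
```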
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of row?
Let $M$ and $N$ be $\mathbb{Z}$-modules and let $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, as long as $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
|
Difference between revisions of "LaTeX:Symbols"
Latest revision as of 19:05, 24 June 2019
This article will provide a short list of commonly used LaTeX symbols.
Finding Other Symbols
Here are some external resources for finding less commonly used symbols:
Detexify is an app which allows you to draw the symbol you'd like and shows you the LaTeX code for it!
MathJax (what allows us to use LaTeX on the web) maintains a list of supported commands.
The Comprehensive LaTeX Symbol List.
Operators
Relations
Symbol Command Symbol Command Symbol Command \le \ge \neq \sim \ll \gg \doteq \simeq \subset \supset \approx \asymp \subseteq \supseteq \cong \smile \sqsubset \sqsupset \equiv \frown \sqsubseteq \sqsupseteq \propto \bowtie \in \ni \prec \succ \vdash \dashv \preceq \succeq \models \perp \parallel \mid \bumpeq
Negations of many of these relations can be formed by just putting \not before the symbol, or by slipping an n between the \ and the word. Here are a couple of examples, plus many other negations; it works for many of the others as well.
Symbol Command Symbol Command Symbol Command \nmid \nleq \ngeq \nsim \ncong \nparallel \not< \not> \not= or \neq \not\le \not\ge \not\sim \not\approx \not\cong \not\equiv \not\parallel \nless \ngtr \lneq \gneq \lnsim \lneqq \gneqq
To use other relations not listed here, such as =, >, and <, in LaTeX, you must use the symbols on your keyboard; they are not available as LaTeX commands.
Greek Letters
Symbol Command Symbol Command Symbol Command Symbol Command \alpha \beta \gamma \delta \epsilon \varepsilon \zeta \eta \theta \vartheta \iota \kappa \lambda \mu \nu \xi \pi \varpi \rho \varrho \sigma \varsigma \tau \upsilon \phi \varphi \chi \psi \omega
Symbol Command Symbol Command Symbol Command Symbol Command \Gamma \Delta \Theta \Lambda \Xi \Pi \Sigma \Upsilon \Phi \Psi \Omega
Arrows
Symbol Command Symbol Command \gets \to \leftarrow \Leftarrow \rightarrow \Rightarrow \leftrightarrow \Leftrightarrow \mapsto \hookleftarrow \leftharpoonup \leftharpoondown \rightleftharpoons \longleftarrow \Longleftarrow \longrightarrow \Longrightarrow \longleftrightarrow \Longleftrightarrow \longmapsto \hookrightarrow \rightharpoonup \rightharpoondown \leadsto \uparrow \Uparrow \downarrow \Downarrow \updownarrow \Updownarrow \nearrow \searrow \swarrow \nwarrow
(For those of you who hate typing long strings of letters, \iff and \implies can be used in place of \Longleftrightarrow and \Longrightarrow respectively.)
Dots
Symbol Command Symbol Command \cdot \vdots \dots \ddots \cdots \iddots
Accents
Symbol Command Symbol Command Symbol Command \hat{x} \check{x} \dot{x} \breve{x} \acute{x} \ddot{x} \grave{x} \tilde{x} \mathring{x} \bar{x} \vec{x}
When applying accents to i and j, you can use \imath and \jmath to keep the dots from interfering with the accents:
Symbol Command Symbol Command \vec{\jmath} \tilde{\imath}
\tilde and \hat have wide versions that allow you to accent an expression:
Symbol Command Symbol Command \widehat{7+x} \widetilde{abc}
Others
Command Symbols
Some symbols are used in commands so they need to be treated in a special way.
Symbol Command Symbol Command Symbol Command Symbol Command \textdollar or \$ \& \% \# \_ \{ \} \backslash
(Warning: Using a bare $ will result in math mode. This is a bug as far as we know. Depending on the version of LaTeX this is not always a problem.)
European Language Symbols
Symbol Command Symbol Command Symbol Command Symbol Command {\oe} {\ae} {\o} {\OE} {\AE} {\AA} {\O} {\l} {\ss} !` {\L} {\SS}
Bracketing Symbols
In mathematics, sometimes we need to enclose expressions in brackets or braces or parentheses. Some of these work just as you'd imagine in LaTeX; type ( and ) for parentheses, [ and ] for brackets, and | and | for absolute value. However, other symbols have special commands:
Symbol Command Symbol Command Symbol Command \{ \} \| \backslash \lfloor \rfloor \lceil \rceil \langle \rangle
You might notice that if you use any of these to typeset an expression that is vertically large, like
(\frac{a}{x} )^2
the parentheses don't come out the right size:
If we put \left and \right before the relevant parentheses, we get a prettier expression:
\left(\frac{a}{x} \right)^2
gives
And with a system of equations:
\left\{\begin{array}{l}x+y=3\\2x+y=5\end{array}\right.
Gives
See that there's a dot after
\right. You must put that dot or the code won't work.
In addition to the
\left and
\right commands, when doing floor or ceiling functions with fractions, using
\left\lceil\frac{x}{y}\right\rceil
and
\left\lfloor\frac{x}{y}\right\rfloor
give both
And, if you type this
\underbrace{a_0+a_1+a_2+\cdots+a_n}_{x}
Gives
Or
\overbrace{a_0+a_1+a_2+\cdots+a_n}^{x}
Gives
\left and \right can also be used to resize the following symbols:
Symbol Command Symbol Command Symbol Command \uparrow \downarrow \updownarrow \Uparrow \Downarrow \Updownarrow
Multi-Size Symbols
Some symbols render differently in inline math mode and in display mode. Display mode occurs when you use \[...\] or $$...$$, or environments like \begin{equation}...\end{equation}, \begin{align}...\end{align}. Read more in the commands section of the guide about how symbols which take arguments above and below the symbols, such as a summation symbol, behave in the two modes.
In each of the following, the two images show the symbol in display mode, then in inline mode.
Symbol Command Symbol Command Symbol Command \sum \int \oint \prod \coprod \bigcap \bigcup \bigsqcup \bigvee \bigwedge \bigodot \bigotimes \bigoplus \biguplus
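As a self-contained illustration of the inline versus display behavior described above (a minimal document of my own; any standard LaTeX distribution should compile it):

```latex
\documentclass{article}
\begin{document}
Inline, the limits sit beside the sum: $\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$.
In display mode they move above and below the symbol:
\[
  \sum_{k=1}^{n} k = \frac{n(n+1)}{2}
\]
\end{document}
```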
|
According to the BHK interpretation of intuitionistic logic we have that:
A proof of $\exists x \in A . \phi(x)$ consists of a pair $(a, p)$ where $a \in A$ and $p$ is a proof of $\phi(a)$. A proof of $\forall x \in A . \phi(x)$ is a method which takes as input $a \in A$ and outputs a proof of $\phi(a)$. A proof of $\psi \Rightarrow \xi$ is a method which takes proofs of $\psi$ to proofs of $\xi$.
Here "method" should be understood as an unspecified, pre-mathematical notion. It could be algorithm, or continuous map, or mental process, or Turing machine, etc.
The axiom of choice can be stated, for any sets $A$, $B$ and relation $\rho$ on $A \times B$, as:\begin{equation}(\forall x \in A . \exists y \in B . \rho(x,y)) \Rightarrow (\exists f \in B^A . \forall x \in A . \rho(x, f(x))).\tag{AC}\end{equation}This is equivalent to the usual formulation (exercise, or ask if you do not see why). Let us unravel what it means to have a proof of the above principle. First, a proof of$$\forall x \in A . \exists y \in B . \rho(x,y) \tag{1}$$is a method $C$ which takes as input $a \in A$ and outputs a pair $$C(a) = (C_1(a), C_2(a))$$ such that $C_1(a) \in B$ and $C_2(a)$ is a proof of $\rho(a, C_1(a))$. Second, a proof of$$\exists f \in B^A . \forall x \in A . \rho(x, f(x)) \tag{2}$$is a pair $(g, D)$ such that $g$ is a function from $A$ to $B$ and $D$ is a proof of $\forall x \in A . \rho(x, g(x))$.
Therefore a proof of (AC) above is a method $M$ which takes the method $C$ which proves (1) and outputs a pair $(f, D)$ which proves (2). Is there such an $M$? It looks like we can take $f = C_1$ and $D = C_2$, and voilà, the axiom of choice is proved constructively! Well, not quite. We were asked to provide a
function $f : A \to B$ but we provided a method $C_1$. Is there a difference? That depends on the exact meaning of "method" and "function". There are several possibilities, see below.
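If we naively model "methods" as ordinary callables, the proof term $M$ is literally just projection. A toy sketch in Python (my own illustration, with proofs represented as plain data):

```python
# Sketch of the BHK proof term for (AC), treating "methods" as Python callables.
# A proof of (1) is C: a |-> (C1(a), C2(a)); a proof of (2) is a pair (f, D).

def ac(C):
    """Turn a proof of 'forall x. exists y. rho(x,y)' into (choice function, proof)."""
    f = lambda a: C(a)[0]   # the choice function: first component of C's output
    D = lambda a: C(a)[1]   # the "proof" that rho(a, f(a)) holds
    return (f, D)

# Toy instance: A = B = naturals, rho(x, y) means "y > x".
# C proves the premise by exhibiting y = x + 1 together with a check of x + 1 > x.
C = lambda x: (x + 1, (x + 1) > x)

f, D = ac(C)
print(f(5), D(5))  # → 6 True
```

The whole subtlety discussed next is exactly whether the callable `f` extracted this way deserves to be called a *function*.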
The important thing is that now we can understand what Bishop meant by "a choice is implied by the very meaning of existence". If we ignore the difference between "method" and "function" then under the BHK interpretation choice holds because of the constructive meaning of $\exists$: to exist is to construct, and to construct a $y \in B$ depending on $x \in A$ is to give a method/function that constructs, and therefore
chooses, for each $x \in A$ a particular $y \in B$.
It remains to consider whether a "method" $C_1$ taking inputs in $A$ and giving outputs in $B$ is the same thing as a function $f : A \to B$. The answer depends on the exact formal setup that we use to express the BHK interpretation:
Martin-Löf type theory
In Martin-Löf type theory there is no difference between "method" and "function", and therefore choice is valid there (by the exact argument outlined above).
Bishop constructive mathematics
In Bishop constructive mathematics a set is given by an explanation of how its elements are constructed, and when two such elements are equal. For instance, a real number is constructed as a sequence of rational numbers satisfying the Cauchy condition, and two such sequences are considered equal when they coincide in the usual sense. This means, in particular, that two different constructions may represent the same element (both $n \mapsto 1/n$ and $n \mapsto 2^{-n}$ represent the real number "zero").
Now, importantly, we distinguish between
operations and functions. The former is a mapping from a set $A$ to a set $B$, and the latter a mapping which respects equality (we say that it is extensional). To see the difference, consider the operation from $\mathbb{R}$ to $\mathbb{Q}$ which computes from a given $x \in \mathbb{R}$ a rational $q \in \mathbb{Q}$ such that $x < q$: since $x$ is a Cauchy sequence, we may take $q = x_i + 42$ for a large enough $i$ (which can be determined explicitly once we make our definition of reals a bit more specific). The operation $x \mapsto q$ does not respect equality: by taking a different Cauchy sequence $x'$ which represents the same real, we get a rational upper bound $q'$ which is not equal to $q$. In fact, in Bishop constructive mathematics it is impossible to construct an extensional operation that computes rational upper bounds of reals.
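The failure of extensionality can be made concrete. In the toy model below (my own; the index 100 and the offset 42 are arbitrary choices), two different Cauchy sequences both represent the real number zero, yet the operation "read off a late term and add 42" returns different rationals:

```python
from fractions import Fraction

# Two Cauchy sequences representing the same real number, zero.
x  = lambda n: Fraction(1, n + 1)   # 1/(n+1) -> 0
xp = lambda n: Fraction(1, 2**n)    # 2^-n    -> 0

def upper_bound(seq):
    """An *operation* producing a rational upper bound: read a late term, add 42."""
    return seq(100) + 42

q, qp = upper_bound(x), upper_bound(xp)
print(q == qp)  # → False: the operation does not respect equality of reals
```

Both outputs are genuine rational upper bounds of zero, but since they differ, the operation is not extensional and hence not a function on the reals.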
In Bishop constructive mathematics
method is understood as operation, and function as extensional operation. Choice is then valid only in some instances, but not in general. In particular, if $A$ has the property that every element is canonically represented by a single construction, then every operation from $A$ to $B$ is automatically extensional, and choice from $A$ to $B$ is valid. An example is $A = \mathbb{N}$ because each natural number is represented by precisely one construction: $0$, $S(0)$, $S(S(0))$, ...
The moral of the story is that the devil hides important details in the passage from informal, pre-mathematical notions to their mathematically precise formulation.
|
Background: I am a theoretical computer scientist (PhD candidate) and have done graduate level courses in Algebra.
I want to understand the following theorem from the book "Symmetric Bilinear Forms" by J. Milnor and D. Husemoller.
Theorem (9.5, p. 46) For any dimension $n$ there exists a positive definite inner product space $X$ of type $1$ and rank $n$ with $\min_{x \in X \setminus \{0\}} x \cdot x \geq n$ (as $n \rightarrow \infty$).
In particular, the theorem implies the existence of a self-dual lattice with shortest vector of length $\geq \sqrt{n}$.
The proof of this theorem is a byproduct of Siegel's theorem, which can be seen as a bound on the "average" number of solutions to a quadratic equation. The equation of interest for me is $x \cdot x = k$, i.e., the number of vectors of a particular length $k$ in an inner product space $X$.
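For the standard lattice $\mathbb{Z}^n$ (the space $I_n$ itself), these counts can be brute-forced for small $n$ and $k$; for instance, $x \cdot x = 1$ in $\mathbb{Z}^2$ has the 4 solutions $(\pm 1, 0), (0, \pm 1)$. A quick sketch of my own, just to make the counting concrete:

```python
from itertools import product

def r(n: int, k: int) -> int:
    """Number of x in Z^n with x.x = k (brute force over a bounding box)."""
    m = int(k ** 0.5)
    return sum(1 for x in product(range(-m, m + 1), repeat=n)
               if sum(c * c for c in x) == k)

print(r(2, 1), r(2, 2), r(3, 1))  # → 4 4 6
```

Siegel's theorem controls a weighted average of such counts over all lattices in a genus, which is far deeper than this brute-force enumeration.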
The above mentioned book does not give a proof of Siegel's theorem. Additionally, the proof of Theorem 9.5 uses some results on $p$-adic integers and others on the genus of bilinear form spaces to show that the number of solutions of the equation $x \cdot x = k$, summed over $k \in \{1, \dots, n\}$, is $< 2$ when averaged over all distinct inner product spaces in the genus of $I_n$.
My aim is to understand this proof.
Which books should I start with to understand the proof? The Milnor book is too short and seems to be aimed at experts. Note that I am not familiar with the $p$-adic integers, so it is very difficult for me to make sense of the proof.
|
Regression (OLS) - overview
This page offers structured overviews of one or more selected methods.
Regression (OLS)
$z$ test for the difference between two proportions
Paired sample $t$ test
Sign test
Two sample $t$ test - equal variances assumed
Independent variable(s)
- Regression (OLS): one or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
- $z$ test for the difference between two proportions: one categorical with 2 independent groups
- Paired sample $t$ test: 2 paired groups
- Sign test: 2 paired groups
- Two sample $t$ test (equal variances assumed): one categorical with 2 independent groups

Dependent variable
- Regression (OLS): one quantitative of interval or ratio level
- $z$ test for the difference between two proportions: one categorical with 2 independent groups
- Paired sample $t$ test: one quantitative of interval or ratio level
- Sign test: one of ordinal level
- Two sample $t$ test (equal variances assumed): one quantitative of interval or ratio level

Null hypothesis
- Regression (OLS), $F$ test for the complete regression model: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$ (no independent variable has an effect)
- $z$ test for the difference between two proportions: $\pi_1 = \pi_2$. $\pi_1$ is the unknown proportion of "successes" in population 1; $\pi_2$ is the unknown proportion of "successes" in population 2
- Paired sample $t$ test: $\mu = \mu_0$. $\mu$ is the unknown population mean of the difference scores; $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0
- Sign test: the median of the population of difference scores is 0 (equivalently, positive and negative differences are equally likely)
- Two sample $t$ test (equal variances assumed): $\mu_1 = \mu_2$. $\mu_1$ is the unknown mean in population 1, $\mu_2$ is the unknown mean in population 2
Alternative hypothesis
- Regression (OLS), $F$ test for the complete regression model: not all population regression coefficients are 0
- $z$ test for the difference between two proportions: two sided: $\pi_1 \neq \pi_2$; right sided: $\pi_1 > \pi_2$; left sided: $\pi_1 < \pi_2$
- Paired sample $t$ test: two sided: $\mu \neq \mu_0$; right sided: $\mu > \mu_0$; left sided: $\mu < \mu_0$
- Two sample $t$ test (equal variances assumed): two sided: $\mu_1 \neq \mu_2$; right sided: $\mu_1 > \mu_2$; left sided: $\mu_1 < \mu_2$
Assumptions
- The sample is a simple random sample of all individuals in the population; for the paired designs, the sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another

Test statistic
- Regression (OLS), $F$ test for the complete regression model: $F = \dfrac{\mbox{sum of squares regression} / K}{\mbox{sum of squares error} / (N - K - 1)} = \dfrac{\mbox{mean square regression}}{\mbox{mean square error}}$
Note 2: if only one independent variable ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$
$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
$p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2
Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1$
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
$\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to H0, $s$ is the sample standard deviation of the difference scores, $N$ is the sample size (number of difference scores).
The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$
$W = $ number of difference scores that is larger than 0 $t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$
$\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s_p$ is the pooled standard deviation, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to H0.
The denominator $s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.
Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$
Sample standard deviation of the residuals $s$ n.a. n.a. n.a. Pooled standard deviation $\begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned} $ - - - $s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$ Sampling distribution of $F$ and of $t$ if H0 were true Sampling distribution of $z$ if H0 were true Sampling distribution of $t$ if H0 were true Sampling distribution of $W$ if H0 were true Sampling distribution of $t$ if H0 were true Sampling distribution of $F$: Approximately standard normal $t$ distribution with $N - 1$ degrees of freedom The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $p$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $p = 0.5$.
If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $np = n \times 0.5$ and standard deviation $\sqrt{np(1-p)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately a standard normal distribution if the null hypothesis were true.
$t$ distribution with $n_1 + n_2 - 2$ degrees of freedom Significant? Significant? Significant? Significant? Significant? $F$ test: Two sided: Two sided: If $n$ is small, the table for the binomial distribution should be used:
Two sided:
If $n$ is large, the table for standard normal probabilities can be used:
Two sided:
Two sided: $C\%$ confidence interval for $\beta_k$ and for $\mu_y$; $C\%$ prediction interval for $y_{new}$ Approximate $C\%$ confidence interval for $\pi_1 - \pi_2$ $C\%$ confidence interval for $\mu$ n.a. $C\%$ confidence interval for $\mu_1 - \mu_2$ Confidence interval for $\beta_k$: Regular (large sample): $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)
The confidence interval for $\mu$ can also be used as significance test.
- $(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$
where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)
The confidence interval for $\mu_1 - \mu_2$ can also be used as significance test.
Effect size n.a. Effect size n.a. Effect size Complete model: - Cohen's $d$:
Standardized difference between the sample mean of the difference scores and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0$
- Cohen's $d$:
Standardized difference between the mean in group $1$ and in group $2$: $$d = \frac{\bar{y}_1 - \bar{y}_2}{s_p}$$ Indicates how many standard deviations $s_p$ the two sample means are removed from each other
n.a. n.a. Visual representation n.a. Visual representation - - - ANOVA table n.a. n.a. n.a. n.a. - - - - n.a. Equivalent to Equivalent to Equivalent to Equivalent to - When testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels One sample $t$ test on the difference scores
Repeated measures ANOVA with one dichotomous within subjects factor
Two sided sign test is equivalent to One way ANOVA with an independent variable with 2 levels ($I$ = 2):
OLS regression with one categorical independent variable with 2 levels:
Example context Example context Example context Example context Example context Can mental health be predicted from fysical health, economic class, and gender? Is the proportion smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic. Is the average difference between the mental health scores before and after an intervention different from $\mu_0$ = 0? Do people tend to score higher on mental health after a mindfulness course? Is the average mental health score different between men and women? Assume that in the population, the standard deviation of mental health scores is equal amongst men and women. SPSS SPSS SPSS SPSS SPSS Analyze > Regression > Linear... SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Analyze > Descriptive Statistics > Crosstabs...
Analyze > Compare Means > Paired-Samples T Test... Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... Analyze > Compare Means > Independent-Samples T Test... Jamovi Jamovi Jamovi Jamovi Jamovi Regression > Linear Regression Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Frequencies > Independent Samples - $\chi^2$ test of association
T-Tests > Paired Samples T-Test Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to:
ANOVA > Repeated Measures ANOVA - Friedman
T-Tests > Independent Samples T-Test Practice questions Practice questions Practice questions Practice questions Practice questions
|
This nice problem was in the analysis section of Putnam and Beyond: prove
\begin{align*}
\lim_{n\to \infty} n^2 \int_0^{1/n} x^{x+1} \, dx = 1/2. \end{align*}
The solution is quite nice, and simply relies on the fact that $\lim_{x\to 0^+} x^x = 1$; hence for $n$ large enough, we can approximate the integral with $\int_0^{1/n} x\, dx$ instead.
There’s an easy generalization of this problem: \begin{align*} \lim_{n\to \infty} n^{k+1} \int_0^{1/n} x^{x+k} \, dx = 1/(k + 1). \end{align*}
Generalizing further, we don't even need the composite exponential: the proof only requires a function $f(x)$ with $\lim_{x\to 0^+} f(x) = 1$ and an integration bound approaching $0$.
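The claimed limits are easy to sanity-check numerically; here is a quick sketch using scipy's quadrature (the helper name `check` is mine):

```python
from scipy.integrate import quad

def check(n, k=1):
    """Evaluate n^(k+1) * integral_0^(1/n) x^(x+k) dx,
    which should approach 1/(k+1) as n grows."""
    val, _ = quad(lambda x: x ** (x + k), 0, 1 / n)
    return n ** (k + 1) * val

print(check(10_000))        # close to 1/2
print(check(10_000, k=2))   # close to 1/3
```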
|
Let $P_{n}$ be the product of the numbers in row $n$ of Pascal's Triangle. Then evaluate $$ \lim_{n\rightarrow \infty} \dfrac{P_{n-1}\cdot P_{n+1}}{P_{n}^{2}}$$
$P_{n} = \prod_{k=0}^{n}\binom{n}{k} $
$= \prod_{k=0}^{n} \dfrac{n!}{(n-k)!\cdot k!}$
$ = n!^{n+1} \prod_{k=0}^{n} \dfrac{1}{k!^{2}}$
$ \therefore P_{n+1} = (n+1)!^{n+2} \prod_{k=0}^{n+1} \dfrac{1}{k!^{2}}$
$ \Rightarrow \dfrac{P_{n+1}}{P_{n}}=\dfrac{(n+1)^{n}}{n!}, \quad \dfrac{P_{n}}{P_{n-1}}=\dfrac{n^{n-1}}{(n-1)!} $
Now the question asks for,
$\lim_{n\rightarrow \infty} \dfrac{P_{n-1}P_{n+1}}{P_{n}^{2}} $
So we have ,
$ \lim_{n\rightarrow \infty} \dfrac{P_{n-1}P_{n+1}}{P_{n}^{2}} = \lim_{n\rightarrow \infty} \dfrac{(n-1)!(n+1)^{n}}{n!\times n^{n-1}}$
$ = \lim_{n\rightarrow \infty} \dfrac{(n+1)^{n}}{n\times n^{n-1}}$
$ = \lim_{n\rightarrow \infty} \left ( \dfrac{n+1}{n} \right )^{n} $
$ = \lim_{n\rightarrow \infty} \left ( 1 + \frac{1}{n} \right )^{n}$
$ = e $
By the way, where did you get this question from?
The product of the terms in the $n$th row of Pascal's triangle is given by the product of binomial coefficients
$$ P_n = \prod\limits_{k=0}^n \binom{n}{k}$$
So your expression evaluates to
$$\lim_{n\rightarrow\infty} \prod_{k=0}^n \prod_{k'=0}^{n+1} \prod_{k''=0}^{n-1}\frac{(n-1)! (n+1)!((n-k)!)^2(k!)^2}{(n!)^2(n-k'-1)! (n-k''+1)!} = \lim_{n\rightarrow\infty}(n+1)\left(\frac{n+1}{n}\right )^n\frac{n!}{(n+1)!} = e$$
Does this make sense?
We have that $P_n = \prod_{i=0}^n \binom n i = \prod_{i=0}^n \frac {n!}{i! (n-i)!}$.
So $\frac { P_{n-1} P_{n+1} } {P_n^2} = (n+1) \prod_{i=1}^{n-1} \frac{(n-1)! (n+1)! i!^2 (n-i)!^2} {n!^2 i!^2 (n-i-1)! (n-i+1)!} = (n+1) \prod_{i=1}^{n-1} \frac{n+1}{n} \frac{n-i}{n-i+1} = \frac {(n+1)^n} {n^{n-1}} \frac {1} {n}$.
Thus, we get that $\frac { P_{n-1} P_{n+1} } {P_n^2} = (1+1/n)^n$, which tends to $e$ as $n \rightarrow \infty$.
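For what it's worth, the identity $P_{n-1}P_{n+1}/P_n^2 = (1+1/n)^n$ can be checked numerically. Since the products overflow quickly, the sketch below works with log-factorials via `math.lgamma`:

```python
import math

def log_P(n):
    """Log of the product of the entries in row n of Pascal's triangle."""
    return sum(math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               for k in range(n + 1))

n = 500
ratio = math.exp(log_P(n - 1) + log_P(n + 1) - 2 * log_P(n))
print(ratio, (1 + 1 / n) ** n, math.e)   # first two agree; both near e
```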
|
I'm reading Boyd's
Convex Optimization textbook. In particular, I'm currently focusing on Chapter 5 (Duality). There is a frequent recurrence of two examples. Minimum volume covering ellipsoid: \begin{align*} \text{minimize} & \quad \log \det X^{-1}\\ \text{subject to} & \quad a_i^T X a_i \leq 1 \\ \end{align*} Entropy maximization: \begin{align*} \text{minimize} & \quad \displaystyle\sum_{i=1}^n x_i \log x_i \\ \text{subject to} & \quad Ax \leq b \\ & \quad {\bf 1}^Tx=1 \end{align*}
I understand the interpretation of (1) as the minimum volume covering ellipsoid, but when would you ever want to solve this? E.g., if doing machine learning, you might want to do something like this to training data for the purpose of outlier detection, but such a model would surely be overfit, and it would seem better to incorporate a more graceful probabilistic decay from the boundary (you might consider setting the discovered ellipsoid to be equal to, say, the 95th percentile probability contour in a multivariate Gaussian model; but instead of doing this, it would seem wiser to just maximize a multivariate Gaussian likelihood directly). So when might one want to solve this problem?
As for (2), I can vaguely imagine situations in which one might want to find a maximum entropy probability distribution (as suggested by the cost function and second constraint) which satisfy some constraints, but what is a realistic example where one might want to impose the linear inequality constraint $Ax \leq b$?
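For concreteness, here is a toy numerical instance of problem (2), solved with scipy's general-purpose SLSQP solver rather than a dedicated conic solver. The linear constraint (the first outcome's probability is capped at 0.1) is entirely hypothetical, just to show the $Ax \leq b$ part binding:

```python
import numpy as np
from scipy.optimize import minimize

# Maximize entropy of a distribution x over 4 outcomes, subject to
# sum(x) = 1 and the hypothetical linear constraint x[0] <= 0.1.
A = np.array([[1.0, 0.0, 0.0, 0.0]])
b = np.array([0.1])

def neg_entropy(x):
    # minimizing sum x_i log x_i == maximizing entropy
    return np.sum(x * np.log(x))

cons = [
    {"type": "eq",   "fun": lambda x: np.sum(x) - 1.0},
    {"type": "ineq", "fun": lambda x: b - A @ x},   # Ax <= b
]
x0 = np.array([0.05, 0.4, 0.3, 0.25])              # feasible start
res = minimize(neg_entropy, x0, bounds=[(1e-9, 1.0)] * 4,
               constraints=cons, method="SLSQP")
print(res.x)   # expected near [0.1, 0.3, 0.3, 0.3]: cap active, rest uniform
```

Without the cap, the maximum entropy distribution is uniform (0.25 each); with it, the remaining mass spreads evenly over the unconstrained outcomes.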
|
I would like help with the conjecture that the function $f:\mathbb{R}_{\ge 0}^n\to\mathbb{R}_{\ge 0}^n $ with $f(x) = x \circ (Ax)$ where
∘ is the Hadamard product (equivalently, $f_i(x) = x_i \sum_j a_{ij}x_j$ )
and where A is a symmetric real matrix with elements $a_{ij} \ge 1$,
is invertible (locally would be enough, though I suspect globally on its domain, nonnegative components).
I did not get anywhere looking at the Jacobian or trying to show that f is one-to-one.
The function is such that for a non negative scalar k, $f(kx) = k^2 f(x)$, so it maps lines starting at the origin to lines starting at the origin (more precisely, rays in the nonnegative orthant with endpoint at the origin).
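A quick numerical probe (not a proof, of course): the Jacobian of $f$ has the closed form $J_{ij} = \delta_{ij}(Ax)_i + x_i a_{ij}$, i.e. $J = \operatorname{diag}(Ax) + \operatorname{diag}(x)A$. The sketch below verifies this formula against finite differences for a random instance with $a_{ij} \ge 1$ and nonnegative $x$, and prints the determinant at that sample point:

```python
import numpy as np

def f(x, A):
    return x * (A @ x)            # Hadamard product of x with A x

def jacobian(x, A):
    # d f_i / d x_j = delta_ij (Ax)_i + x_i * a_ij
    return np.diag(A @ x) + x[:, None] * A

rng = np.random.default_rng(0)
n = 5
B = rng.random((n, n))
A = 1.0 + (B + B.T) / 2           # symmetric, entries >= 1
x = rng.random(n)                 # nonnegative components

J = jacobian(x, A)
h = 1e-6
J_fd = np.empty((n, n))
for j in range(n):
    e = np.zeros(n); e[j] = h
    J_fd[:, j] = (f(x + e, A) - f(x - e, A)) / (2 * h)

print(np.max(np.abs(J - J_fd)))   # tiny: f is quadratic, so central
                                  # differences are exact up to rounding
print(np.linalg.det(J))           # determinant at this sample point
```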
|
Finding equilibrium points of a continuous-time model \(\frac{dx}{dt} = G(x)\) can be done in the same way as for a discrete-time model, i.e., by replacing all \(x\)’s with \(x_{eq}\)’s (again, note that these could be vectors). This actually makes the left hand side zero, because \(x_{eq}\) is no longer a dynamical variable but just a static constant. Therefore, things come down to just solving the following equation
\[0=G(x_{eq}) \]
with regard to \(x_{eq}\). For example, consider the following logistic growth model:
\[\frac{dx}{dt} =rx \left(1-\dfrac{x}{K} \right) \label{7.1}\]
Replacing all the \(x\)’s with \(x_{eq}\)’s, we obtain
\[0 =rx_{eq} \left(1-\dfrac{x_{eq}}{K} \right) \label{7.3}\]
Solving this for \(x_{eq}\), we obtain
\[x_{eq} =0, K \label{7.4}\]
It turns out that the result is the same as that of its discrete-time counterpart (see Eq. (5.1.6)).
Exercise \(\PageIndex{1}\)
Find the equilibrium points of the following model:
\[\frac{dx}{dt} =x^{2} -rx +1 \label{7.5}\]
Exercise \(\PageIndex{2}\): Simple Pendulum
Find the equilibrium points of the following model of a simple pendulum:
\[\frac{d^{2} \theta}{dt^{2}} = -\frac{g}{L} \sin{\theta}\]
Exercise \(\PageIndex{3}\): Susceptible-Infected-Recovered model
The following model is called a
Susceptible-Infected-Recovered (SIR) model, a mathematical model of epidemiological dynamics. \(S\) is the number of susceptible individuals, \(I\) is the number of infected ones, and \(R\) is the number of recovered ones. Find the equilibrium points of this model.
\[ \begin{align} \frac{dS}{dt} &= -aSI \label{7.7} \\[4pt] \frac{dI}{dt} &= aSI -bI \label{7.8} \\[4pt] \frac{dR}{dt} &=bI \label{7.9} \end{align} \]
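The algebra of setting $0 = G(x_{eq})$ can also be delegated to a computer algebra system; a sympy sketch for the logistic model and the two single-variable exercises looks like this:

```python
import sympy as sp

x = sp.Symbol('x')
r, K = sp.symbols('r K', positive=True)

# Logistic growth dx/dt = r x (1 - x/K): equilibria solve 0 = G(x_eq)
logistic_eq = sp.solve(r * x * (1 - x / K), x)
print(logistic_eq)                    # contains 0 and K

# Exercise 1: dx/dt = x^2 - r x + 1
exercise_eq = sp.solve(x**2 - r * x + 1, x)
print(exercise_eq)                    # two roots, real when r >= 2

# Exercise 2 (pendulum): equilibria need dtheta/dt = 0 and sin(theta) = 0
theta = sp.Symbol('theta')
print(sp.solveset(sp.sin(theta), theta, sp.S.Reals))   # multiples of pi
```

For the SIR model, setting the right-hand sides to zero forces $bI = 0$, so any state with $I_{eq} = 0$ (and $S$, $R$ arbitrary) is an equilibrium.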
|
One can solve Poisson’s problem $-\Delta u = f$ in $d$ dimensions with homogeneous Dirichlet boundary conditions using a mixed formulation as explained below:
Let $\sigma = \nabla u$, then for a sufficiently smooth function $\tau$, by Green’s theorem
\begin{align*} (\sigma, \tau) &= (\nabla u, \tau) \\ &= -(u, \textrm{div } \tau). \end{align*} Again, choosing $v$ a function sufficiently smooth, we have \begin{align*} f = -\textrm{div } \sigma \implies (f, v) = (-\textrm{div } \sigma, v). \end{align*} This gives the saddle-point problem: find $(\sigma, u) \in V \times M$ such that \begin{align*} (\sigma, \tau) + (u, \textrm{div } \tau) &= 0\\ (\textrm{div } \sigma, v) &= -(f, v) \end{align*} hold for all $(\tau, v) \in V \times M$. Note that we don’t have to take a derivative of $u$, hence it’s natural to try $M = L^2$, but what about the space $V$?
One very easy choice to guess is $V = [H^1(\Omega)]^d$, since then the divergence is certainly well defined, but unfortunately this doesn't work: the gradient of the solution to Poisson's problem can easily fail to be in $[H^1(\Omega)]^d$.
In order to illustrate this, consider $u =\left(r^{2/3}-r^{5/3}\right) \sin \left(\frac{2 \theta }{3}\right)$ on the domain of the unit circle with bottom left quarter taken out. It’s not hard to see that $u = 0$ on the boundary of the domain, and we can easily find the $f$ such that it satisfies Poisson’s equation. Now, we can either calculate the gradient exactly or argue as follows.
First, recall how to take a gradient in polar coordinates. Note that $\partial_r u \approx r^{-1/3}$ plus higher order terms, and likewise $\frac{1}{r}\partial_\theta u \approx r^{-1/3}$, so the components of $\sigma = \nabla u$ behave like $r^{-1/3}$ near the corner. Differentiating once more, the derivatives of these components behave like $r^{-4/3}$, so the $H^1$ seminorm of $\sigma$ involves integrating terms like $(r^{-4/3})^2 r = r^{-5/3}$ over $[0,1]$ (the extra $r$ comes from the change of variables to polar coordinates), and that integral diverges.
The above is an example of why the space $H(\textrm{div})$ is needed.
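The computations in this example are easy to verify symbolically. The sketch below applies the polar-coordinate Laplacian to $u$ to recover the right-hand side $f = -\Delta u$, and confirms that the divergent integral above is indeed infinite:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
u = (r**sp.Rational(2, 3) - r**sp.Rational(5, 3)) * sp.sin(sp.Rational(2, 3) * theta)

# Laplacian in polar coordinates: u_rr + u_r / r + u_tt / r^2
lap = sp.diff(u, r, 2) + sp.diff(u, r) / r + sp.diff(u, theta, 2) / r**2
f = sp.simplify(-lap)
print(f)   # (7/3) * sin(2*theta/3) / r^(1/3); the r^(2/3) part is harmonic

# The H^1 seminorm of sigma requires integrating terms ~ r^(-5/3):
print(sp.integrate(r**sp.Rational(-5, 3), (r, 0, 1)))   # diverges
```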
|
It is known that metric TSP can be approximated within $1.5$ and cannot be approximated better than $123\over 122$ in polynomial time. Is anything known about finding approximate solutions in exponential time (for example, less than $2^n$ steps with only polynomial space)? E.g. in what time and space can we find a tour whose distance is at most $1.1\times OPT$?
I've studied the problem, and below is a summary of the best known algorithms for TSP.
$n$ is the number of vertices, $M$ is the maximal edge weight. All bounds are given up to a polynomial factor of the input size ($poly(n, \log M)$). We denote Asymmetric TSP by ATSP.
1. Exact Algorithms for TSP 1.1. General ATSP
$M2^{n-\Omega(\sqrt{n/\log (Mn)})}$ time and exponential space (Björklund).
$2^{2n-t} n^{\log(n-t)}$ time and $2^t$ space for $t=n,n/2,n/4,\ldots$ (Koivisto, Parviainen).
$O^*(T^n)$ time and $O^*(S^n)$ space for any $\sqrt2<S<2$ with $TS<4$ (Koivisto, Parviainen).
$2^n\times M$ time and poly-space (Lokshtanov, Nederlof).
Even for Metric TSP, nothing better is known than the algorithms above. It is a big challenge to develop a $2^n$-time algorithm for TSP with polynomial space (see Open Problem 2.2.b, Woeginger).
1.2. Special Cases of TSP
$1.657^n\times M$ time and exponentially small probability of error (Björklund) for Undirected TSP.
$(2-\epsilon)^n$ and $poly$-space for TSP in graphs with bounded maximal degree and bounded integer weights, $\epsilon$ depends only on degree of graph (Björklund, Husfeldt, Kaski, Koivisto).
$1.251^n$ and $poly$-space for TSP in cubic graphs (Iwama, Nakashima).
$1.890^n$ and $poly$-space for TSP in graphs of degree $4$ (Eppstein).
$1.733^n$ and exponential space for TSP in graphs of degree $4$ (Gebauer).
$1.657^n$ time and $poly$-space for Undirected Hamiltonian Cycle (Björklund).
$(2-\epsilon)^n$ and exponential space for TSP in graphs with at most $d^n$ Hamiltonian cycles (for any constant $d$) (Björklund, Kaski, Koutis).
2. Approximation Algorithms for TSP 2.1. General TSP
Cannot be approximated within any polynomial time computable function unless P=NP (Sahni, Gonzalez).
2.2. Metric TSP
$3 \over 2$-approximation (Christofides).
Cannot be approximated with a ratio better than $123\over 122$ unless P=NP (Karpinski, Lampis, Schmied).
2.3. Graphic TSP
$7\over5$-approximation (Sebo, Vygen).
2.4. (1,2)-TSP
MAX-SNP hard (Papadimitriou, Yannakakis).
$8 \over 7$-approximation (Berman, Karpinski).
2.5. TSP in Metrics with Bounded Dimension
TSP is APX-hard in a $\log{n}$-dimensional Euclidean space (Trevisan).
PTAS for TSP in metrics with bounded doubling dimension (Bartal, Gottlieb, Krauthgamer).
2.6. ATSP with Directed Triangle Inequality
$O(1)$-approximation (Svensson, Tarnawski, Végh)
Cannot be approximated with a ratio better than $75\over 74$ unless P=NP (Karpinski, Lampis, Schmied).
2.7. TSP in Graphs with Forbidden Minors
Linear time PTAS (Klein) for TSP in Planar Graphs.
PTAS for minor-free graphs (Demaine, Hajiaghayi, Kawarabayashi).
$22\frac{1}{2}$-approximation for ATSP in planar graphs (Gharan, Saberi).
$O(\frac{\log g}{\log\log g})$-approximation for ATSP in genus-$g$ graphs (Erickson, Sidiropoulos).
2.8. MAX-TSP
$7\over9$-approximation for MAX-TSP (Paluch, Mucha, Madry).
$7\over8$-approximation for MAX-Metric-TSP (Kowalik, Mucha).
$3\over4$-approximation for MAX-ATSP (Paluch).
$35\over44$-approximation for MAX-Metric-ATSP (Kowalik, Mucha).
2.9. Exponential-Time Approximations
It is possible to compute $(1+\epsilon)$-approximation for MIN-Metric-TSP in time $2^{(1-\epsilon/2)n}$ with exponential space for any $\epsilon\le \frac{2}{5}$, or in time $4^{(1-\epsilon/2)n} n^{\log n}$ with polynomial space for any $\epsilon \leq \frac{2}{3}$ (Boria, Bourgeois, Escoffier, Paschos).
I would be grateful for any additions and suggestions.
A 1.1-approximation can be obtained in time (and space) $O^*(1.932^n)$ by adapting a "truncated" version of Held and Karp's exact $O^*(2^n)$ algorithm. Here $n$ is the number of locations. More in general, a $(1+\epsilon)$-approximation can be found in time $O^*(2^{(1-\epsilon/2)n})$ for all $\epsilon \le 2/5$. This is from:
Nicolas Boria, Nicolas Bourgeois, Bruno Escoffier, Vangelis Th. Paschos: Exponential approximation schemas for some graph problems. Available online.
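For reference, the exact Held-Karp dynamic program that the truncated scheme builds on can be sketched in a few lines (this is the plain $O^*(2^n)$ exact algorithm, not the truncated approximate variant):

```python
import math
from itertools import combinations

def held_karp(dist):
    """Exact TSP by Held-Karp dynamic programming:
    O(n^2 * 2^n) time and O(n * 2^n) space."""
    n = len(dist)
    # dp[(mask, j)]: cheapest path that starts at city 0, visits exactly
    # the cities in `mask` (which always contains 0), and ends at city j
    dp = {(1 | 1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(3, n + 1):
        for rest in combinations(range(1, n), size - 1):
            mask = 1 | sum(1 << j for j in rest)
            for j in rest:
                prev = mask ^ (1 << j)
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in rest if k != j)
    full = (1 << n) - 1
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# Sanity check: four corners of the unit square, optimal tour length 4
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
opt = held_karp(dist)
print(opt)
```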
A similar question can be asked for any problem where we have a lower bound $\alpha$ on the approximability and an upper bound $\beta$, with $\alpha < \beta$. I am assuming that the questioner is interested in sub-exponential time algorithms. This depends on the unknown "truth". Say the problem is NP-hard to approximate to within a factor $\gamma$, which is somewhere in the interval $[\alpha, \beta]$. What this means is that there is a reduction from SAT to the problem such that a better than $\gamma$-approximation would allow us to decide the answer to SAT. If we believe the exponential-time hypothesis for SAT, then the efficiency of the reduction gives a $\theta$ such that approximating below $\gamma$ is not possible in time less than $2^{n^{O(\theta)}}$. However, anything worse than $\gamma$ is possible in polynomial time. What this means is that we do not typically (at least in the constant factor range) see improvements in the approximation ratio even when given sub-exponential time. There are several problems where the best hardness result known is via an inefficient reduction from SAT, that is, the hardness result is under a weaker assumption such as NP not contained in quasi-polynomial time. In such cases one may get a better approximation in sub-exponential time. The only one I know of is the group Steiner tree problem. A recent famous result is that of Arora-Barak-Steurer on a sub-exponential-time algorithm for unique games: the conclusion we draw from this result is that if UGC is true, then the reduction from SAT to UGC has to be somewhat inefficient, that is, the size of the instance of UGC obtained from the SAT formula has to grow with the parameters in a certain fashion. Of course, this is predicated on the exponential-time hypothesis for SAT.
The best tsp for weighted bounded genus graphs is http://erikdemaine.org/papers/ContractionTSP_Combinatorica/.
|
If the spherical approximation is good enough, you should be able to convert the surface areas into radii. I mention that because in my work with light scattering, all the equations are usually written in terms of the radius of the scatterers. It also looks like all the Mie scattering tables are written in terms of the radius of the scatterers as well.
The best I can do in terms of actual formulas is that the scattering is proportional to $1/a^2$, times an intensity factor that you have to look up in a table. $a$ is the radius of the scatterer. (If it seems weird that scattering could go
down as radius increases, see below for an explanation.) Knowing that you're only looking at one angle eliminates the angular portion of the intensity factor, but it's a complicated function of the size of the scatterer due to resonance effects when $a/\lambda \approx 1$. I found two tables of Mie coefficients. The first only has values for $n=1.40$. The second has more indices of refraction, but might have access restrictions (it initially told me I had access thanks to my university library).
Section 10.4 of Jackson derives some equations for the scattering of electromagnetic radiation from spherical particles, beginning from Maxwell's equations. He eventually leaves off without discussing the full problem. That might be a useful starting point for the theory.
Wikipedia just pointed me to an English translation of the original paper by Mie, which I didn't know existed. I haven't yet had a chance to read it, so I don't know how useful it is.
Everyone seems to refer back to Kerker as the first textbook that contains a full treatment of the Mie problem, but it's difficult to find, and very expensive. I would consider it only if you can get it through your university's library. I have a copy of the Dover edition of van de Hulst, which focuses specifically on the light scattering problem. It appears that there is a Dover edition of Stratton, now, as well.
Why does the scattering intensity go down as the particle's radius increases? The first caveat is that the scattering isn't just proportional to $1/a^2$; the intensity factor is also a function of particle radius. I'm only marginally familiar with the general Mie problem, so I don't fully know all the complications that introduces.
The other issue, and one that I am comfortable with, can be explained with reference to the Rayleigh scattering problem. In the experiments I'm used to, we plot $1/I$ on the y-axis and $\sin^2(\theta/2)$ on the x-axis ($I$ is the scattering intensity, and $\theta$ is the scattering angle). Under a set of approximations, that gives a straight line. The y-intercept is proportional to 1 over the molecular weight of the particle. The slope is proportional to the square of the radius of the particle. So for a given molecular weight, increasing the particle's radius will increase the slope of that line. So for a given angle, you have to increase $1/I$, which means that $I$ must go down. I think the underlying explanation for that is that the density of the particle decreases, because you have increased the volume while keeping the same mass.
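The slope/intercept relationship described in the last paragraph can be illustrated with a toy calculation. The constants `c1` and `c2` below are arbitrary placeholders standing in for the instrument and optical constants, not physical values:

```python
import math

def inverse_intensity(theta_deg, M, radius, c1=1.0, c2=1e-4):
    """Toy straight-line model: 1/I = (c1 / M) + c2 * radius^2 * sin^2(theta/2).
    Intercept ~ 1/M, slope ~ radius^2, as described above.
    c1 and c2 are arbitrary illustrative constants."""
    s = math.sin(math.radians(theta_deg) / 2) ** 2
    return c1 / M + c2 * radius**2 * s

# Fixed molecular weight and angle: a larger radius steepens the line,
# so 1/I grows and the scattered intensity I drops.
I_small = 1 / inverse_intensity(90, M=1e5, radius=10)
I_large = 1 / inverse_intensity(90, M=1e5, radius=50)
print(I_small, I_large)
```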
|
Someone showed me a derivation for the area of a circle today. They took a circle of radius $r$ and inscribed a regular polygon in the circle. If you take an $n$-sided polygon, then its area is:
$$\frac{1}{2}r^2\left(\sin{\frac{2\pi}{n}}\right)n$$
If you let $n$ go to infinity, then you get $\pi r^2$ as your area.
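(The convergence itself is easy to observe numerically, for what it's worth:

```python
import math

def inscribed_polygon_area(n, r=1.0):
    # area of a regular n-gon inscribed in a circle of radius r
    return 0.5 * r**2 * math.sin(2 * math.pi / n) * n

for n in (6, 96, 10_000):
    print(n, inscribed_polygon_area(n))   # approaches pi for r = 1
```

Of course, this says nothing about the logical issue below.)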
However, you are using the limit of $\sin x/x$ as $x$ goes to $0$ to derive this. In order to derive that limit, you need to show that $\sin x<x<\tan x$, which is done using the unit circle and comparing the areas of two triangles and a sector. To find the area of the sector, you need to know the area of a circle. Almost appropriately, we've reached circular logic.
Is there any way around this?
|
$S_n$ acts on $\mathbb{C}^n$ by permutation, and there are two conventions which work (permute basis vectors or permute components), but permuting basis vectors ends up being a bit more natural. Here is a little background first. Given a group $G$ and two sets $A$ and $B$ which $G$ acts on, then the set of functions $A\to B$ is acted upon by $G$ using the action defined by $(gf)(a)=gf(g^{-1}a)$. The inverse makes it so $g(hf)=(gh)f$.
A vector is a function $v:[n]\to \mathbb{C}$, and viewing $[n]=\{1,2,\dots,n\}$ as a set with an $S_n$ action (in particular, the
defining action of $S_n$) and viewing $\mathbb{C}$ as the trivial representation, $S_n$ acts on vectors by $(\sigma v)_i=v_{\sigma^{-1}(i)}$. On standard basis vectors, one can verify that $\sigma e_i=e_{\sigma(i)}$:$$(\sigma e_i)_j=\delta_{i,\sigma^{-1}(j)}=\delta_{\sigma(i),j}=(e_{\sigma(i)})_j$$
Let $V$ be the span of the vectors you describe, so $V$ is the orthogonal complement of the vector $(1,1,\dots,1)$, or in other words the set of vectors whose components sum to $0$. Certainly, permuting the entries of a vector will not change whether the components sum to $0$, so $V$ is a subrepresentation of dimension $n-1$.
That $V$ is simple can be shown by showing that it is cyclic for any nonzero $v\in V$. Just give a process which can take an arbitrary vector and give one of the basis vectors through a sequence of permuting entries and linear combinations, then show that every basis vector can be reached from that basis vector.
Another way is to calculate the character $\chi$ of the representation and show that $(\chi,\chi)=1$. (Easiest by starting with the character of the $\mathbb{C}^n$ representation and subtracting off the trivial representation.)
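The character computation in the last paragraph can be checked by brute force for small $n$: the character of the permutation representation at $\sigma$ is its number of fixed points, so $V$ has character $\chi(\sigma) = \mathrm{fix}(\sigma) - 1$, and irreducibility amounts to $(\chi,\chi) = 1$. A sketch:

```python
from itertools import permutations

def fixed_points(perm):
    return sum(1 for i, p in enumerate(perm) if i == p)

n = 5
perms = list(permutations(range(n)))
order = len(perms)   # |S_n| = n!

# chi of the permutation representation C^n is the number of fixed points;
# the (n-1)-dimensional subrepresentation V has character chi - 1.
# Characters here are real, so (chi, chi) is the average of chi^2.
inner = sum((fixed_points(p) - 1) ** 2 for p in perms) / order
print(inner)   # 1, so V is irreducible
```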
|
I think this is really more of a probability theory question. The trick is: if $x \sim F(x)$ ($F(x)$ being the cumulative distribution function, or CDF for short), then the transformation $u = F(x)$ produces a random variable $u$ with the uniform distribution on the interval $[0,1]$. You can see the link below for a proof:
Show Y has a uniform distribution if Y=F(X) where F(x)=P[X $\le$ x] is continuous in x.
Also, to construct an arbitrary distribution $x \sim F(x)$ from a uniform random variable $u \sim U([0,1])$, it's enough to form the random variable $x = F^{-1}(u)$ (again, $F(x)$ is the CDF of $x$). This follows from the previous result by exchanging the roles of $u$ and $x$. Therefore, here you need to apply the transformations to get the desired function:
So we have $h(x) = 6x^5 \Rightarrow F_x(x) = x^6 , x\in [0,1]$.
Then $y = x^6$ has the uniform distribution, according to the lemmas above.
To get a random variable with density $H(X) = 1.8X+0.1$, i.e. with CDF $F_X(X) = 0.9X^2 + 0.1X$, you need to calculate the inverse of $F(X)$, which is $\frac{-0.1+\sqrt{0.01+3.6X}}{1.8}$ (solve the quadratic $0.9z^2 + 0.1z = X$ for $z$), and then apply it to the uniform random variable obtained in the previous step. Combining these two steps gives the transformation $t = \frac{-0.1+\sqrt{0.01+3.6x^6}}{1.8}$.
Therefore, if $x$ is distributed with density $6x^5$, then $t$, defined as $t = \frac{-0.1+\sqrt{0.01+3.6x^6}}{1.8}$, is distributed with density $1.8t+0.1$.
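A quick Monte Carlo check of the whole chain: sampling $x$ with CDF $x^6$ via $x = u^{1/6}$, then applying the quadratic-formula inverse of $F(t) = 0.9t^2 + 0.1t$, namely $t = (-0.1+\sqrt{0.01+3.6y})/1.8$, should produce samples whose empirical CDF matches $0.9t^2 + 0.1t$:

```python
import random

random.seed(42)
N = 200_000

samples = []
for _ in range(N):
    u = random.random()
    x = u ** (1 / 6)                              # density 6 x^5, CDF x^6
    t = (-0.1 + (0.01 + 3.6 * x**6) ** 0.5) / 1.8  # inverse of 0.9t^2 + 0.1t
    samples.append(t)

# Empirical CDF of t at 0.5 should match F(0.5) = 0.9*0.25 + 0.1*0.5 = 0.275
emp = sum(s <= 0.5 for s in samples) / N
print(emp)
```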
|
The lecturer taught this method in my Optimization and Control Theory Class and I wasn't quite there when he named it. Could you help me out?
He gave the following example of the method in class:
Example: Solve $$ \max \; [ f(x) = x_1 (30 - x_1) + x_2 (50 -2x_2) - 3x_1 - 5x_2 - 10x_3]$$ subject to the constraints: $$ x_1 + x_2 \le x_3 \quad \text{and} \quad x_3 \le 17.25 $$ Solution: Begin by converting the constraints to the form:
$$g_1(\bar x) = x_1 + x_2 - x_3 \le 0$$ $$g_2(\bar x) = x_3 - 17.25 \le 0$$ Then: $$ L(\bar x, \bar \lambda) = f(\bar x) \pm \left( \lambda_1 g_1(\bar x) + ... + \lambda_m g_m(\bar x) \right) $$
And then he proceeded as follows:
$$\begin{align} D_{\bar x} L: & \frac{\partial L}{\partial x_1} = 30 - 2x_1 - 3 - \lambda_1 = 0 \\ & \frac{\partial L}{\partial x_2} = 50 - 4x_2 - 5 - \lambda_1 = 0\\ & \frac{\partial L}{\partial x_3} = -10 + \lambda_1 - \lambda_2 = 0 \\ \end{align}$$
A system of equations is found and solved with further constraints: $\lambda_1 \ge 0$ and $\lambda_2 \ge 0$:
$$\lambda_1(x_1 + x_2 - x_3) = 0$$ $$\lambda_2 (x_3 - 17.25) = 0$$
(Through trial and error it is found that $\lambda_1 > 0 $ and $ \lambda_2 = 0$ is the best condition to solve this system)
Ultimately we get the solution: $$ x_1 = 8.5$$ $$ x_2 = 8.75$$ $$ x_3 = 17.25$$
And the problem is solved.
-- So I want to read more background on this method. I'd like to know what it's called. I notice the 'L' as the name of the function. Could it be Lagrange? Or something of the sort?
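(For anyone wanting to reproduce the worked example numerically: the same maximum can be found with scipy's constrained SLSQP solver, which should recover the stationary point $x_1 = 8.5$, $x_2 = 8.75$, $x_3 = 17.25$ obtained above.)

```python
import numpy as np
from scipy.optimize import minimize

def neg_f(x):
    # scipy minimizes, so negate the objective to maximize f
    x1, x2, x3 = x
    return -(x1 * (30 - x1) + x2 * (50 - 2 * x2) - 3 * x1 - 5 * x2 - 10 * x3)

cons = [
    {"type": "ineq", "fun": lambda x: x[2] - x[0] - x[1]},  # x1 + x2 <= x3
    {"type": "ineq", "fun": lambda x: 17.25 - x[2]},        # x3 <= 17.25
]
res = minimize(neg_f, x0=[1.0, 1.0, 3.0], constraints=cons, method="SLSQP")
print(res.x)   # expected near [8.5, 8.75, 17.25]
```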
|
Mathematical Software – GeoGebra
This is the Android version of GeoGebra. It is a lot more cumbersome to use than the desktop version, but you can get used to it. Although I will criticize it a bit below, it is a very nice piece of software. It can do far more than an average student would ever need. If you do not know the power of GeoGebra, have a look at its home page. My own program Euler Math Toolbox, or C.a.R. and others in the same category, may be more powerful in some areas and more versatile. But GeoGebra offers an all-in-one package with teacher support and a worldwide community. Over the years, they have completed the program and added missing features. They even included 3D constructions and augmented reality; for this, we previously had to use specialized programs like Archimedes 3D or Cabri 3D, both payware. Moreover, they added JavaScript support on webpages so that constructions can still be embedded after Java was killed in the browser. This free package is to be recommended.
Instead of writing a review, let me point out the shortcomings of CAS with regard to the learning process. No, I won’t be arguing that software hinders the process of acquiring mathematical skills and math has to be done with pencil and paper. Rather the contrary is true. I will be arguing for more software usage. But it needs to be used in an intelligent way. For that, we need intelligent teaching.
Have a look at the example in the image above. It was the first example I tried on my Android device. We are discussing the function
\(f(x) = e^{5x} \, \sin(x)\)
We want to learn its behavior. It is very difficult to get a really good impression on the Android device. You can try to zoom in and out. But without further information, you will not be able to grasp its structure. The software uses the nicest feature of touch screens, the pinch zoom. So you can zoom right into the interesting region as in the image above. Even then, it looks as if the function were zero left of -1.5.
If you zoom out further you see the following.
One can only understand this image if one knows what the two factors look like and has studied a damped oscillation before. So the plot does not really help without the mathematical background. But, on the other hand, if you have the background, the plot can be a huge help in asserting and confirming the knowledge.
Next, I tried to find the first local minimum on the negative axis. You can solve that numerically in the program by touching the graph in the minimum. The software will then display one of these black dots showing all special points of the plot, and you can read the coordinates below the plot. I think this is a very nice way of grasping math and something EMT cannot do that easily. Of course, you can do it on the command line.
>function f(x) &= exp(5*x)*sin(x)

                            5 x
                           E    sin(x)

>plot2d(f,-1,0.2);
>xm=solve(&diff(f(x),x),0.2)
-0.19739555985
>plot2d(xm,f(xm),>points,>add);

But let us talk about the CAS aspect of this solution. GeoGebra produces a very interesting solution to this problem.
\(\{ x = 2 \tan^{-1}(\sqrt{26}+5), \, x=2 \tan^{-1}(-\sqrt{26}+5) \}\)
There is a switch to evaluate this numerically. If it is pressed, four surprising values appear: -191.31°, -11.31°, 168.69°, 348.69° (rounded). The degrees can most likely be avoided by setting the program to radian mode. The values are correct.
>fzeros(&diff(f(x),x),-200°,360°); %->° [-191.31, -11.3099, 168.69, 348.69]
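The four values GeoGebra shows are just the principal solution of \(\tan(x)=-1/5\) shifted by multiples of 180°, which is easy to check numerically (a quick sketch, independent of either program):

```python
import math

# Extrema of f(x) = exp(5x)*sin(x) satisfy tan(x) = -1/5.
# Shift the principal solution by multiples of 180 degrees:
base = math.degrees(math.atan(-1 / 5))            # about -11.31 degrees
sols = [round(base + 180 * k, 2) for k in (-1, 0, 1, 2)]
print(sols)  # [-191.31, -11.31, 168.69, 348.69]
```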
What do we make of all this?
First of all, the computations, algebraic or numerical, do not make much sense without proper explanations and without the mathematical background. The zeros of the sine function and consequently of the function f simply are the multiples of 180°. Between each zero, the function has at least one extremum, alternatingly a minimal and maximal value. So much is easy to see with the naked eye. In fact, there is exactly one extremal point in each interval. This is harder to see, however. By computing the derivative, we get the extremal points as the solutions of
\(\tan(x) = -\frac{1}{5}\)
Every book about trigonometry contains a plot of the tangent function like the following.
I added the line y=-0.2. So you can easily see that the extremal points repeat in distances of 180°. Problem solved.
Why does GeoGebra produce such a complicated answer involving the square root of 26? That is a mathematical problem in itself and well above the capabilities of high school students. The reason is most likely the tangent half-angle substitution \(t=\tan(x/2)\), which turns \(5\sin(x)+\cos(x)=0\) into the quadratic \(t^2-10t-1=0\) with roots \(t=5\pm\sqrt{26}\), exactly the arguments of \(\tan^{-1}\) in the answer above.
And why does it only show four of the infinitely many solutions? I do not know.
We learn from all this that numerical or algebraic software or plots can be useful. But without mathematical background they are useless.
|
Ms Alexandra Kiňová (23) is expecting Czechia's first naturally born quintuplets (a package of 5 babies) on Sunday morning (tomorrow; update: they're out fine) which would mean that we match the achievement of the most fertile U.S. state – Utah – from the last week.

Cool anniversary: In late January, we celebrated the 30th anniversary of the announcement of the discovery of the W-boson. Today, we celebrate the 30th anniversary of the Z-boson. They were comparably important discoveries to the recent discovery of the God particle.

Sport: Viktoria Pilsen defeated Hradec, a much weaker team, 3-to-0 in the last round so we won the top soccer league for the 2nd time (after 2011). Because the Pilsner ice-hockey team has won the top league as well, Pilsen became the 2nd town in Czechia after Prague that collected both titles in the same year (correction: wrong, 3rd town, Ostrava did it in 1981).
The Daily Mail tells us that the pregnancy has been easy so far. Doctors were still talking about "twins" in January and "quadruplets" in April. The probability that a birth produces \(n\)-tuplets goes like \(1/90^{n-1}\) or so but the decrease slows down relative to this formula for really high multiplicities.
In physics, quintuplets are rare, too. By quintuplets, we mean five-dimensional irreducible representations of groups.
Correct me if I am wrong but I think that among the simple Lie groups, only \(SU(2)=SO(3)\), \(USp(4)=SO(5)\), and \(SU(5)\) have irreducible five-dimensional representations. Let's look at them because looking at all quintuplets in group theory and physics is a rather unusual direction of approach to a subset of wisdom contained across the structure of maths and physics.
First, \(SU(2)\). That's a three-dimensional group of \(2\times 2\) complex matrices \(M\) obeying \(MM^\dagger={\bf 1}\) and \(\det M=1\). The basic isomorphisms behind spinors imply that this group is the same as the group \(SO(3)\) of rotations of the three-dimensional space except that the matrices \(+M\) and \(-M\) have to be identified.
The irreducible representations of \(SU(2)\) are labeled by the spin \(j\) which must be either non-negative integer or positive half-integer (only the former may also be interpreted as proper representations of \(SO(3)\); the latter change their sign after a 360-degree rotation). Because the \(z\)-projection goes from \(m=-j\) to \(m=+j\) with the spacing equal to one, the representation is \((2j+1)\)-dimensional.
The \(j=0\) representation is the trivial singlet that doesn't transform at all; the \(j=1/2\) is the two-dimensional pseudoreal spinor; the \(j=1\) representation is equivalent to the usual 3-dimensional vector; the \(j=3/2\) representation is a gravitino-like four-dimensional "spinvector". And finally, the \(j=2\) representation is the traceless symmetric tensor. What do I mean by that?
Imagine that you consider the tensor product \(V\otimes W\) of two copies of the three-dimensional vector space \(V=W=\RR^3\). The tensor product is composed of objects \(T_{ij}\) where \(i,j\) are vector indices: it's composed of tensors. Clearly, such a tensor has \(3\times 3 = 9\) independent components. They can be split into several pieces:\[
{\bf 3}\otimes {\bf 3} = {\bf 5} \oplus {\bf 1}\oplus {\bf 3}
\] The identity \(3\times 3 = 5+1+3\) is the consistency check that verifies that the representations above have the right dimensions but the boldface identity above says more than just the arithmetic claim about the integers: the two sides are representations of whole groups and the identity says that they're transforming in equivalent ways under all elements of the group. Why is this decomposition right? Well, the tensor \(T_{ij}\) may be divided into the symmetric tensor part, which is 6-dimensional, and the antisymmetric tensor, which is 3-dimensional (it is equal to \(\epsilon_{ijk}v_k\) i.e. equivalent to some vector \(v_k\)).
However, the 6-dimensional symmetric tensor isn't an irreducible representation of \(SO(3)\). The trace \[
\sum_{i=1}^3 T_{ii}
\] is independent of the coordinate system i.e. invariant under rotations and may be separated from the 6-dimensional representation. The trace may be set to zero by removing it i.e. considering\[
T^\text{traceless part}_{ij} = T_{ij} - \frac 13 \delta_{ij} T_{kk}
\] and such a traceless tensor has 5 independent components; it is a quintuplet. The quadrupole moment tensor is one of the most famous applications of this 5-dimensional object. You could think it's just an accident that this number 5 is equal to the number of integers between \(m=-2\) and \(m=+2\); you could claim that the agreement is pure numerology, an agreement between the dimensions of two representations. But it is more than numerology: the representations are completely equivalent. The translation from the components \(T_{ij}\) of the (complexified) traceless tensor to the five complex amplitudes \(c_m\) for \(-2\leq m\leq 2\) is nothing else than a linear change of the basis. It has to be so because for every \(j\), the representation of \(SU(2)\) is unique.
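The decomposition \({\bf 3}\otimes {\bf 3} = {\bf 5} \oplus {\bf 1}\oplus {\bf 3}\) can be made concrete in a few lines (a sketch; numpy assumed available):

```python
import numpy as np

# Dimension count: a symmetric 3x3 tensor has n(n+1)/2 = 6 components;
# removing the trace leaves the 5-dimensional quintuplet.
n = 3
assert n * (n + 1) // 2 - 1 == 5

# Decompose a sample tensor into quintuplet + singlet + triplet pieces,
# i.e. 3x3 = 5 + 1 + 3:
T = np.arange(9.0).reshape(3, 3)
S = (T + T.T) / 2                         # symmetric part (6 components)
trace_part = np.trace(S) / n * np.eye(n)  # singlet (the trace)
S0 = S - trace_part                       # traceless symmetric: quintuplet
A = (T - T.T) / 2                         # antisymmetric part: triplet
assert abs(np.trace(S0)) < 1e-12
assert np.allclose(S0 + trace_part + A, T)
```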
Now, let's talk about \(SO(5)\). Clearly, this group of rotations of the 5-dimensional space has a 5-dimensional vector representation consisting of \(v_i\). But what some readers aren't aware of is that the group \(SO(5)\) may also be identified with the isomorphic \(\ZZ_2\) quotient of a spinor-based group, namely \(USp(4)\). What is this group? It's a unitary (U) symplectic (Sp) group of complex \(4\times 4\) matrices \(M\) that obey\[
MM^\dagger = M^\dagger M = 1, \quad M A M^T = A.
\] Both conditions have to be satisfied. The first condition is the well-known unitarity condition, effectively meaning that \(s_i^* s_i\) is kept invariant (it's the squared Pythagorean length of the vector computed with the absolute values). The other condition is equivalent to keeping the antisymmetric cross-like product of two vector-like objects \(s_i A_{ij} t_j\) invariant where \(A_{ij}\) are elements of the (non-singular) antisymmetric matrix \(A\) above. Note that in this invariant, there is no complex conjugation.
Simple linear redefinitions of the 4 complex components \(s_i\) may always translate your convention for \(A\) into mine, which is \[
A = \text{block-diag} \zav{ \pmatrix{0&+1\\-1&0}, \pmatrix{0&+1\\-1&0} }
\] You just arrange the right number of the "simplest nonzero antisymmetric matrices" along the (block) diagonal. The two conditions (unitary and symplectic) may be then seen to imply that \(M\) is composed of \(2\times 2\) blocks of this form\[
\pmatrix{ \alpha&+\beta\\ -\beta^*&\alpha^*},\quad \alpha,\beta\in\CC
\] and the addition+matrix-multiplication rules for such matrices are the same rules as the addition+multiplication rules for the quaternions \(\HHH\). So the group \(USp(2N)\) may also be called \(U(N,\HHH)\), the unitary group over quaternions. In particular, \(USp(4)=U(2,\HHH)\). Such a quaternionization is possible with all pseudoreal representations.
So the 4-dimensional complex (actually pseudoreal!) fundamental representation of \(USp(4)\) is complex-4-dimensional (but it is equivalent to its complex conjugate because it's pseudoreal!) and it may be viewed as a spinor of \(SO(5)\). It is no coincidence that \(4\) in \(USp(4)\) is a power of two. How do you get the five-dimensional \(j=1\) vector out of these four-dimensional spinors?
Note that for \(SO(3)\sim SU(2)\), we had\[
{\bf 2}\otimes{\bf 2} = {\bf 3}\oplus {\bf 1}.
\] The tensor product of two spinors produced a vector (triplet; also the symmetric part of the tensor with two spinor indices) and a singlet (the antisymmetric part of the tensor with two 2-valued indices). Similarly, here we have\[
{\bf 4}\otimes{\bf 4} = {\bf 5}\oplus {\bf 1}\oplus {\bf 10}.
\] The decomposition of \(4\times 4 = 16\) to \(6+10\) is the usual decomposition of a "tensor with two spinor indices" to the antisymmetric part and the symmetric part, respectively. The symmetric part may be identified as the antisymmetric tensor with two vector indices; note that \(5\times 4/(2\times 1) = 10\). And the antisymmetric part is actually reducible here. It's because the invariant for the symplectic groups is antisymmetric, \(a_{ij}\), rather than the symmetric \(\delta_{ij}\) we had for the orthogonal groups, so it's the antisymmetric part that decomposes into two irreducible pieces.
By tensor multiplying \({\bf 4}\) with copies of itself, we may obtain all representations of \(USp(4)\) and \(SO(5)\) by picking pieces of the decomposed tensor products. That's what we mean by saying that the representation \({\bf 4}\) is "fundamental". Whenever an even number of these \({\bf 4}\) factors appears in the tensor product, we obtain honest representations of \(SO(5)\) that are invariant under 360-degree rotations and all these representations may also be given a natural description in terms of tensors with vector indices.
Finally, the special unitary group \(SU(5)\) has an obvious 5-dimensional complex representation. It is a genuinely complex one, i.e. a representation inequivalent to its complex conjugate:\[
{\bf 5}\neq \overline{\bf 5}
\] This representation (and its complex conjugate, of course) is important in the simplest grand unified models in particle physics. One may say that \(SU(5)\) is an obvious extension of the QCD colorful group \(SU(3)\). We keep the first three colors (red, green, blue, so to say) and add two more colors that are interpreted as two lepton species from the same generation. The full collection of fifteen 2-component left-handed spinors per generation (they describe quarks and leptons; a Dirac spinor is composed of two 2-component spinors; the right-handed neutrino is not included among the fifteen) is interpreted as \[
{\bf 5}\oplus\overline{\bf 10},
the direct sum of the fundamental quintuplet of \(SU(5)\) we have already mentioned and the antisymmetric "tensor" with \(5\times 4/(2\times 1)\) components. Note that the counting of the components is the same as it was for the representation of \(SO(5)\) above. However, the 10-dimensional representation of \(SU(5)\) is a complex one, inequivalent to its complex conjugate (I won't explain why the bar appears in the decomposition above, it's a technicality). The list of 15 spinors may be extended to 16, \(10+5+1\), if we add one right-handed neutrino and this \({\bf 16}\) is then the spinor representation of \(SO(10)\), a somewhat larger group that is capable of being the grand unified group (it is no accident that 16 is a power of two: that's what spinors always do).
The number 5 may be thought of as the first "irregular" integer of a sort but it is still small and special enough and is therefore linked to many special things in maths and physics. In maths, five is special because the square root of five appears in the golden ratio; and a pentagram may be constructed by a pair of compasses and a ruler (these two facts are actually related). Quadrupole moments, moments of inertia, five-dimensional rotations, and grand unifications are among the physical topics in which 5-dimensional representations are used as "elementary building blocks".
I hope that Ms Kiňová's birth will be as smooth as her pregnancy.
|
difference_between_beta_coefficients_and_partial_correlation_coefficients [2019/07/12 23:21]
hkimscil
difference_between_beta_coefficients_and_partial_correlation_coefficients [2019/07/17 10:34] (current)
hkimscil
$$ {\LARGE \beta_{x_1}} $$

$$ \text{Beta:} \quad \beta_{x_1} = \frac{r_{yx_1} - r_{yx_2}r_{x_1x_2} }{1-r_{x_1x_2}^2}$$

$$ \text{Partial r:} \quad r_{yx_1.x_2} = \frac{r_{yx_1} - r_{yx_2}r_{x_1x_2} }{\sqrt{ (1-r_{yx_2}^2)(1-r_{x_1x_2}^2) }} $$
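The two formulas share the same numerator and differ only in the denominator, which can be checked numerically on simulated data (a sketch; the variable names y, x1, x2 and numpy are my assumptions):

```python
import numpy as np

# Simulate correlated predictors and a response.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100_000)
x2 = 0.5 * x1 + rng.normal(size=100_000)
y = x1 + 2 * x2 + rng.normal(size=100_000)

r = np.corrcoef([y, x1, x2])
r_yx1, r_yx2, r_x1x2 = r[0, 1], r[0, 2], r[1, 2]

num = r_yx1 - r_yx2 * r_x1x2                     # shared numerator
beta = num / (1 - r_x1x2**2)
partial_r = num / np.sqrt((1 - r_yx2**2) * (1 - r_x1x2**2))

# beta matches the standardized OLS slope of x1:
z = lambda v: (v - v.mean()) / v.std()
b = np.linalg.lstsq(np.column_stack([z(x1), z(x2)]), z(y), rcond=None)[0]
assert abs(b[0] - beta) < 1e-10
# ...and the partial correlation is beta rescaled by the denominators:
assert abs(partial_r - beta * np.sqrt((1 - r_x1x2**2) / (1 - r_yx2**2))) < 1e-12
```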
|
When working with LaTeX, it is recommended to start each sentence on a new line. The reasons can be found in Axel Brandenburg’s computing tips and this stack overflow page so I won’t repeat them here. However, as an emacs user, I always want emacs to do the formatting for me.
The existing solutions are summarized here. The most promising Emacs Lisp macro is provided by Chris Conway, which was cribbed from Luca de Alfaro. Their method does the job well. However, suppose I have a paragraph like the following (which is taken from this paper)
Once we obtain \citeauthor{Grad1949}'s coefficients, we can use them to compute the flux terms. \citeauthor{Grad1949}'s moment method is linear in the fiducial frame. The linearity naturally form a class of closure schemes. Since we fix the weight $w(\xi)$, the only freedoms in the closures are the energy scale $\theta$ and the fiducial reference frame corresponds to $U^\alpha$.
their fill-sentence macro has no effect because no line-break is placed between sentences.
So this morning I finally sat down and worked on the problem. I first needed to understand how the standard fill-paragraph macro works:
$ gunzip -c /opt/local/share/emacs/23.3/lisp/textmodes/fill.el.gz | less
Note that my GNU Emacs 23.3 was installed by MacPorts; your path may be different. Scanning through the code, I realized that all those fill-paragraph and fill-region macros go back to fill-region-as-paragraph (line 608 in the source). Hacking this function/macro may provide a good solution.
I copied the whole fill-region-as-paragraph function into my ~/.emacs and started playing around with it. The final product is now on my github repository. I highlight the most important changes here
...
;; FROM, and point, are now before the text to fill,
;; but after any fill prefix on the first line.
(fill-delete-newlines from to justify nosqueeze squeeze-after)
(if (not newline-after-sentence)
    (fill-one-line from to justify) ;; original inner loop
  ;; Insert a line break after each sentence
  (goto-char from)
  (while (< (point) to)
    (forward-sentence)
    (if (< (point) to) (fill-newline)))
  ;; This is the actual filling loop.
  (goto-char from)
  (let (sentbeg sentend)
    (while (< (point) to)
      (setq sentbeg (point))
      (end-of-line)
      (setq sentend (point))
      (fill-one-line sentbeg sentend justify) ;; original inner loop
      (forward-line)))))
...
From line 152 to 154, the macro inserts line-breaks after sentences. The loop from line 158 to 163 then fills the sentences line-by-line. You can also look at the diff for more details.
Well, I should warn you that this is my first experience with Emacs Lisp. The macros seem to run correctly on GNU Emacs 23.3.1 but they surely contain bugs. Use and test them at your own risk, but please feel free to leave a comment or bug report. I really hope this will become something useful for everybody. If you are ready to take the risk, you can append this hack to your ~/.emacs and override the original fill-region-as-paragraph macro:
$ curl https://raw.githubusercontent.com/chanchikwan/fill/master/hack.el >> ~/.emacs
Now, applying fill-paragraph (or simply M-q) in emacs results in
Once we obtain \citeauthor{Grad1949}'s coefficients, we can use them to compute the flux terms.
\citeauthor{Grad1949}'s moment method is linear in the fiducial frame.
The linearity naturally form a class of closure schemes.
Since we fix the weight $w(\xi)$, the only freedoms in the closures are the energy scale $\theta$ and the fiducial reference frame corresponds to $U^\alpha$.
which is exactly what I want.
|
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, otherwise inputs $b,r$ into the division box.
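The flowchart described here is Euclid's algorithm; a minimal sketch of the loop:

```python
def gcd(a, b):
    # Division box: a = b*q + r with r < b; repeat on (b, r) until r == 0.
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # prints 21
```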
There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ is a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
|
There is a concept of Rates of Growth in Calculus.

Definition :- f(x) <<< g(x) as x $\rightarrow$ $\infty$ means the growth rate of f(x) is very slow compared to g(x) when x $\rightarrow$ $\infty$, i.e. $\frac{f(x)}{g(x)} \rightarrow 0$ as $x\rightarrow \infty$
It is true when f and g are non-negative.
So, we can say that :-
1) If $\lim_{x\rightarrow \infty } \frac{f(x)}{g(x)} = \infty$ then $f(x) >>> g(x)$, i.e. the growth rate of f(x) is higher than that of g(x)
(or)
2) If $\lim_{x\rightarrow \infty } \frac{f(x)}{g(x)} = 0$ then $g(x) >>> f(x)$, i.e. the growth rate of g(x) is higher than that of f(x)
Based on this fact we can say that :-
Rates of Growth of the following functions is :-
$\ln x$ << $x^{p}$ << $e^{x}$ << $e^{x^{2}}$ for p>0

and Rates of Decay should be :-

$\frac{1}{\ln x}$ >> $\frac{1}{x^{p}}$ >> $e^{-x}$ >> $e^{-x^{2}}$ , p>0
----------------------------------------------------------------------------------
Now, the rate of growth of the running time of an algorithm is the same as the order of growth of its running time. It is already mentioned in Cormen.

According to Cormen,
f(n) is asymptotically larger than g(n) if f(n) = $\omega(g(n))$
(or)
if $\lim_{n\rightarrow \infty } \frac{f(n)}{g(n)} = \infty$ exists, then $f(n)$ becomes arbitrarily large compared to $g(n)$ as $n$ tends to infinity
----------------------------------------------------------------------------------
Now, in this question,

On comparing $f_2$ and $f_3$ :-

$\lim_{n \rightarrow \infty } \frac{n^{\frac{3}{2}}}{n\ln n} = \lim_{n \rightarrow \infty } \frac{n^{\frac{1}{2}}}{\ln n}$. Now, using the L'Hôpital rule, $\lim_{n \rightarrow \infty } \frac{n^{\frac{1}{2}}}{\ln n} = \infty$
I have taken the natural log above because it does not matter: we can convert one base to another and then solve, and it will give the same answer.
So, $f_2$ >>> $f_3$
Now, on comparing $f_1$ and $f_4$ :-

$\lim_{n \rightarrow \infty } \frac{2^{n}}{n^{\log n}}$
Since $2^{n} = (e^{\ln 2})^{n} = e^{n\ln 2}$ and $n^{\ln n} = (e^{\ln n})^{\ln n} = e^{(\ln n)^{2}}$, and the exponential is an increasing function, comparing $2^{n}$ and $n^{\ln n}$ is the same as comparing $n\ln 2$ and $(\ln n)^{2}$.
So, by using $L'H\hat{o}pital's \, Rule$,

$\lim_{n\rightarrow \infty } \frac{n\ln 2}{(\ln n)^{2}} = \infty$

So, $n\ln 2$ > $(\ln n)^{2}$ and $\lim_{n \rightarrow \infty } \frac{2^{n}}{n^{\log n}} = \infty$
So, now we can say $f_1$ >>> $f_4$
Now, these two comparisons of functions are enough to eliminate the options in this question.
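A quick numeric sanity check of both comparisons (the four functions are assumed from the question to be $f_1 = 2^n$, $f_2 = n^{3/2}$, $f_3 = n\log n$, $f_4 = n^{\log n}$); comparing logarithms avoids overflow:

```python
import math

n = 10**6
log_f1 = n * math.log(2)                          # log of 2^n
log_f2 = 1.5 * math.log(n)                        # log of n^(3/2)
log_f3 = math.log(n) + math.log(math.log(n))      # log of n*log(n)
log_f4 = math.log(n) ** 2                         # log of n^(log n)

# Larger logarithm at large n means faster growth.
assert log_f2 > log_f3   # f2 >>> f3
assert log_f1 > log_f4   # f1 >>> f4
```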
|
I was trying to derive the Bernoulli equation from the above equation for time independent flow.

If you are studying something time independent then you just let $\frac{\partial}{\partial t}$ be zero:
$$\frac{\partial}{\partial x_j} \left [ \frac{1}{2} \rho v^2 v_j + \rho h v_j + \rho \phi v_j \right ]=0$$
Next step is to get rid of $\rho$. Bernoulli's equation doesn't contain $\rho$, does it? We'll need mass balance for the stationary state:
$$\frac{\partial \rho v_j}{\partial x_j} = 0$$
which together with the first equation leads to:$$\rho v_j \frac{\partial}{\partial x_j} \left [ \frac{1}{2} v^2 + h + \phi \right ]=0$$
Now one should recall that Bernoulli's law is valid only along streamlines/pathlines, which coincide for a steady flow. If some quantity $A$ is constant along the vector field $\boldsymbol v$, then it satisfies the equation
$$v_j \frac{\partial A}{\partial x_j} = 0$$
Actually, $v_j \frac{\partial A}{\partial x_j}$ is just the derivative of $A$ along the vector field $\boldsymbol v$: if the derivative is zero then $A$ is constant along the vector field.
Thus we get Bernoulli's law:
$$\frac{1}{2} v^2 + h + \phi = \text{const}$$
along the streamline/pathline for the stationary case.
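A toy illustration of the last step (not the Bernoulli flow itself): for the rotational field $\boldsymbol v = (-y, x)$ the streamlines are circles, and $A = x^2+y^2$ satisfies $v_j \frac{\partial A}{\partial x_j} = (-y)(2x)+(x)(2y)=0$, so it is constant along each streamline:

```python
import math

def A(x, y):
    return x**2 + y**2

# Sample A along the streamline x = r*cos(t), y = r*sin(t):
r = 2.0
values = [A(r * math.cos(t), r * math.sin(t))
          for t in (k * 0.1 for k in range(63))]
assert max(values) - min(values) < 1e-12   # constant along the streamline
```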
|
In D. Tong's notes on string theory (pdf) section 4.1.1 he explains a trick for deriving the stress-energy tensor which arises from translations in the base manifold of the field theory (in this case the worldsheet). The problem is that I don't understand exactly how the procedure works. I need to look at some worked examples.
Can anyone share some references in which I can read about this in full detail, with perhaps some worked examples?
EDIT:Perhaps I should explain a little bit more where I'm standing.
Usually, to derive the energy momentum tensor we make a translation in the base manifold, say $x^\mu$ in the usual QFT notation, $$x^\mu\to x'^\mu=x^\mu+\epsilon^\mu$$ without changing the field in a direct way: $$\phi(x)\to \phi'(x')=\phi(x)$$ $$\Rightarrow \delta\phi(x)=-\epsilon^\mu\partial_\mu\phi(x)$$ So the variation of the action is $$\delta S=\int_R d^4 x \left[\frac{\partial \mathcal{L}}{\partial\phi}\delta\phi+\frac{\partial \mathcal{L}}{\partial\partial_\mu\phi}\delta\partial_\mu\phi\right]+\int_{\partial R}d\sigma_\mu \mathcal{L}\epsilon^\mu$$ where the second integral comes from the change of variables $x\to x'$. Thus after integrating by parts the first integral we get the Euler-Lagrange equations which give zero and we are left with $$\int_{\partial R}d\sigma_\mu \left[\mathcal{L}\epsilon^\mu-\frac{\partial\mathcal{L}}{\partial\partial_\mu \phi}\epsilon^\nu\partial_\nu \phi(x)\right]=\int_{\partial R}d\sigma_\mu J^\mu$$ where $J^\mu=\mathcal{L}\epsilon^\mu-\frac{\partial\mathcal{L}}{\partial\partial_\mu \phi}\epsilon^\nu\partial_\nu \phi(x)$ has to be conserved by imposing $\delta S=0$. From $J$ we extract the energy-stress tensor: $$\Theta^{\mu\nu}=\frac{\partial\mathcal{L}}{\partial\partial_\mu \phi}\partial^\nu \phi-\mathcal{L}\eta^{\mu\nu}$$ So, still in QFT notation, what Tong says is to promote $\epsilon$ to a function of $x$ so that the surface integral becomes (using Stokes' theorem): $$\delta S=\int_Rd^4x\, \partial_\mu J^\mu=\int_Rd^4x\, \partial_\mu( \Theta^{\mu\nu}\epsilon_\nu)=\int_Rd^4x\, [\partial_\mu( \Theta^{\mu\nu})\epsilon_\nu+ \Theta^{\mu\nu}\partial_\mu\epsilon_\nu]$$ but this is not quite the same as eq. 4.3.
|
Probabilistic bug hunting
Have you ever run into a bug that, no matter how careful you are trying to reproduce it, only happens sometimes? And then, you think you've got it, and finally solved it - and tested a couple of times without any manifestation. How do you know that you have tested enough? Are you sure you were not "lucky" in your tests?
In this article we will see how to answer those questions and the math behind it without going into too much detail. This is a pragmatic guide.
The Bug
The following program is supposed to generate two random 8-bit integers and print them on stdout:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
/* Returns -1 if error, other number if ok. */
int get_random_chars(char *r1, char*r2)
{
int f = open("/dev/urandom", O_RDONLY);
if (f < 0)
return -1;
if (read(f, r1, sizeof(*r1)) < 0)
return -1;
if (read(f, r2, sizeof(*r2)) < 0)
return -1;
close(f);
return *r1 & *r2;
}
int main(void)
{
char r1;
char r2;
int ret;
ret = get_random_chars(&r1, &r2);
if (ret < 0)
fprintf(stderr, "error");
else
printf("%d %d\n", r1, r2);
return ret < 0;
}
On my architecture (Linux on IA-32) it has a bug that makes it print "error" instead of the numbers sometimes.
The Model
Every time we run the program, the bug can either show up or not. It has a non-deterministic behaviour that requires statistical analysis.
We will model a single program run as a Bernoulli trial, with success defined as "seeing the bug", as that is the event we are interested in. We have the following parameters when using this model:

- \(n\): the number of tests made;
- \(k\): the number of times the bug was observed in the \(n\) tests;
- \(p\): the unknown (and, most of the time, unknowable) probability of seeing the bug.
As a Bernoulli trial, the number of errors \(k\) of running the program \(n\) times follows a binomial distribution \(k \sim B(n,p)\). We will use this model to estimate \(p\) and to confirm the hypothesis that the bug no longer exists, after fixing it in whichever way we can.
By using this model we are implicitly assuming that all our tests are performed independently and identically. In other words: if the bug happens more often in one environment, we either test always in that environment or never; if the bug gets more and more frequent the longer the computer is running, we reset the computer after each trial. If we don't do that, we are effectively estimating the value of \(p\) with trials from different experiments, while in truth each experiment has its own \(p\). We will find a single value anyway, but it has no meaning and can lead us to wrong conclusions.
Physical analogy
Another way of thinking about the model and the strategy is by creating aphysical analogy with a box that has an unknown number of green and red balls:
Bernoulli trial: taking a single ball out of the box and looking at its color - if it is red, we have observed the bug, otherwise we haven't. We then put the ball back in the box. \(n\): the total number of trials we have performed. \(k\): the total number of red balls seen. \(p\): the total number of red balls in the box divided by the total number of balls in the box.
Some things become clearer when we think about this analogy:
If we open the box and count the balls, we can know \(p\), in contrast with our original problem. Without opening the box, we can estimate \(p\) by repeating the trial. As \(n\) increases, our estimate for \(p\) improves. Mathematically: \[p = \lim_{n\to\infty}\frac{k}{n}\] Performing the trials in different conditions is like taking balls out of several different boxes. The results tell us nothing about any single box.
Estimating \(p\)
Before we try fixing anything, we have to know more about the bug, starting by the probability \(p\) of reproducing it. We can estimate this probability by dividing the number of times we see the bug \(k\) by the number of times we tested for it \(n\). Let's try that with our sample bug:
$ ./hasbug
67 -68
$ ./hasbug
79 -101
$ ./hasbug
error
We know from the source code that \(p=25\%\), but let's pretend that we don't, as will be the case with practically every non-deterministic bug. We tested 3 times, so \(k=1, n=3 \Rightarrow p \approx 33\%\), right? It would be better if we tested more, but how much more, and exactly what would be better?
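As a side note, the behaviour of the \(k/n\) estimate can be explored with a quick simulation (an illustrative Python sketch of my own, not part of the original article; the 25% chance mimics our sample bug):

```python
import random

def estimate_p(n, p=0.25, seed=42):
    """Run n simulated Bernoulli trials and return the observed ratio k/n."""
    rng = random.Random(seed)
    k = sum(rng.random() < p for _ in range(n))  # k = number of "bugs" seen
    return k / n

print(estimate_p(3))       # a very coarse estimate
print(estimate_p(100000))  # much closer to the real 0.25
```

As expected, the more trials we simulate, the closer the observed ratio gets to the real \(p\).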
\(p\) precision
Let's go back to our box analogy: imagine that there are 4 balls in the box, one red and three green. That means that \(p = 1/4\). What are the possible results when we test three times?
Red balls Green balls \(p\) estimate 0 3 0% 1 2 33% 2 1 66% 3 0 100%
The less we test, the lower our precision is. Roughly, the precision of \(p\) will be at most \(1/n\) - in this case, 33%. That's the step between the values we can find for \(p\), and also its minimal nonzero value.
Testing more improves the precision of our estimate.
\(p\) likelihood
Let's now approach the problem from another angle: if \(p = 1/4\), what are the odds of seeing one error in four tests? Let's name the 4 balls as 0-red, 1-green, 2-green and 3-green:
The table above has all the possible results for getting 4 balls out of the box. That's \(4^4=256\) rows, generated by this python script. The same script counts the number of red balls in each row, and outputs the following table:
k rows % 0 81 31.64% 1 108 42.19% 2 54 21.09% 3 12 4.69% 4 1 0.39%
That means that, for \(p=1/4\), we see 1 red ball and 3 green balls only 42% of the time when taking out 4 balls.
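The same counts can be reproduced by brute-force enumeration, in the spirit of the python script mentioned above (my own sketch, not the original script):

```python
from itertools import product
from collections import Counter

# Balls: one red, three green; we take a ball out 4 times (with replacement).
balls = ["red", "green", "green", "green"]
counts = Counter(draw.count("red") for draw in product(balls, repeat=4))

for k in sorted(counts):  # k = number of red balls seen in the 4 draws
    print(k, counts[k], f"{100 * counts[k] / 4**4:.2f}%")
```

Running it prints the same 81/108/54/12/1 row counts as the table above.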
What if \(p = 1/3\) - one red ball and two green balls? We would get the following table:
k rows % 0 16 19.75% 1 32 39.51% 2 24 29.63% 3 8 9.88% 4 1 1.23%
What about \(p = 1/2\)?
k rows % 0 1 6.25% 1 4 25.00% 2 6 37.50% 3 4 25.00% 4 1 6.25%
So, let's assume that you've seen the bug once in 4 trials. What is the value of \(p\)? You know that can happen 42% of the time if \(p=1/4\), but you also know it can happen 39% of the time if \(p=1/3\), and 25% of the time if \(p=1/2\). Which one is it?
The graph below shows the discrete likelihood, for each percentage value of \(p\), of getting 1 red and 3 green balls:
The fact is that, given the data, the estimate for \(p\) follows a beta distribution \(Beta(k+1, n-k+1) = Beta(2, 4)\) (1). The graph below shows the probability density of \(p\):
The R script used to generate the first plot is here, the one used for the second plot is here.
Increasing \(n\), narrowing down the interval
What happens when we test more? We obviously increase our precision, as it is at most \(1/n\), as we said before - there is no way to estimate that \(p=1/3\) when we only test twice. But there is also another effect: the distribution for \(p\) gets taller and narrower around the observed ratio \(k/n\):
Investigation framework
So, which value will we use for \(p\)?
The smaller the value of \(p\), the more we have to test to reach a given confidence in the bug solution. We must, then, choose the probability of error that we want to tolerate, and take the smallest value of \(p\) that we can. A usual value for the probability of error is 5% (2.5% on each side). That means that we take the value of \(p\) that leaves 2.5% of the area of the density curve out on the left side. Let's call this value \(p_{min}\). That way, if the observed \(k/n\) remains somewhat constant, \(p_{min}\) will rise, converging to the "real" \(p\) value. As \(p_{min}\) rises, the amount of testing we have to do after fixing the bug decreases.
By using this framework we have direct, visual and tangible incentives to test more. We can objectively measure the potential contribution of each test.
In order to calculate \(p_{min}\) with the mentioned properties, we have to solve the following equation:
\[\sum_{i=k}^{n}{n\choose{i}}p_{min}^i(1-p_{min})^{n-i}=\frac{\alpha}{2} \]
\(\alpha\) here is twice the error we want to tolerate: 5% for an error of 2.5%.
That's not a trivial equation to solve for \(p_{min}\). Fortunately, that's the formula for the confidence interval of the binomial distribution, and there are a lot of sites that can calculate it:
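If you prefer code to websites, \(p_{min}\) can also be computed by solving the equation numerically. This is my own standard-library sketch (the value it finds is the lower Clopper-Pearson confidence bound):

```python
from math import comb

def p_min(k, n, alpha=0.05):
    """Smallest p such that P(X >= k) reaches alpha/2 for X ~ B(n, p)."""
    if k == 0:
        return 0.0
    def upper_tail(p):  # P(X >= k) for a binomial with parameters n, p
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
    lo, hi = 0.0, 1.0
    for _ in range(100):  # bisection: upper_tail is increasing in p
        mid = (lo + hi) / 2
        if upper_tail(mid) < alpha / 2:
            lo = mid
        else:
            hi = mid
    return lo

print(p_min(1, 3))  # our 3 tests with 1 error give a very low p_min, ~0.008
```

Seeing the bug once in only 3 tests guarantees very little, which is exactly the incentive to test more.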
Is the bug fixed?
So, you have tested a lot and calculated \(p_{min}\). The next step is fixing the bug.
After fixing the bug, you will want to test again, in order to confirm that the bug is fixed. How much testing is enough testing?
Let's say that \(t\) is the number of times we test the bug after it is fixed. Then, if our fix is not effective and the bug still presents itself with a probability greater than the \(p_{min}\) that we calculated, the probability of not seeing the bug after \(t\) tests is:
\[\alpha = (1-p_{min})^t \]
Here, \(\alpha\) is also the probability of making a type I error, while \(1 - \alpha\) is the statistical significance of our tests.
We now have two options:
Arbitrarily determine a standard statistical significance and test enough times to assert it. Test as much as we can and report the achieved statistical significance.
Both options are valid. The first one is not always feasible, as the cost of each trial can be high in time and/or other kinds of resources.
The standard statistical significance in the industry is 5%; we recommend either that or less.
Formally, this is very similar to a statistical hypothesis test.
Back to the Bug Testing 20 times
This file has the results found after running our program 5000 times. We must never throw out data, but let's pretend that we have tested our program only 20 times. The observed \(k/n\) ratio and the calculated \(p_{min}\) evolved as shown in the following graph:
After those 20 tests, our \(p_{min}\) is about 12%.
Suppose that we fix the bug and test it again. The following graph shows the statistical significance corresponding to the number of tests we do:
In words: we have to test 24 times after fixing the bug to reach 95% statistical significance, and 35 to reach 99%.
Now, what happens if we test more before fixing the bug?
Testing 5000 times
Let's now use all the results and assume that we tested 5000 times before fixing the bug. The graph below shows \(k/n\) and \(p_{min}\):
After those 5000 tests, our \(p_{min}\) is about 23% - much closer to the real \(p\).
The following graph shows the statistical significance corresponding to the number of tests we do after fixing the bug:
We can see in that graph that after about 11 tests we reach 95%, and after about 16 we get to 99%. As we have tested more before fixing the bug, we found a higher \(p_{min}\), and that allowed us to test less after fixing the bug.
Optimal testing
We have seen that we decrease \(t\) as we increase \(n\), as that can potentially increase our lower estimate for \(p\). Of course, that value can decrease as we test, but that means that we "got lucky" in the first trials and we are getting to know the bug better - the estimate is approaching the real value in a non-deterministic way, after all.
But how much should we test before fixing the bug? What is an ideal value for \(n\)?
To define an optimal value for \(n\), we will minimize the sum \(n+t\). This objective gives us the benefit of minimizing the total amount of testing without compromising our guarantees. Minimizing the testing can be fundamental if each test costs significant time and/or resources.
The graph below shows us the evolution of the value of \(t\) and \(t+n\) using the data we generated for our bug:
We can see clearly that there are some low values of \(n\) and \(t\) that give us the guarantees we need. Those values are \(n = 15\) and \(t = 24\), which gives us \(t+n = 39\).
While you can use this technique to minimize the total number of tests performed (even more so when testing is expensive), testing more is always a good thing, as it always improves our guarantee, be it in \(n\) by providing us with a better \(p\) or in \(t\) by increasing the statistical significance of the conclusion that the bug is fixed. So, before fixing the bug, test until you see the bug at least once, and then at least the amount specified by this technique - but also test more if you can; there is no upper bound, especially after fixing the bug. You can then report a higher confidence in the solution.
Conclusions
When a programmer finds a bug that behaves in a non-deterministic way, he knows he should test enough to know more about the bug, and then even more after fixing it. In this article we have presented a framework that provides criteria to define numerically how much testing is "enough" and "even more." The same technique also provides a method to objectively measure the guarantee that the amount of testing performed provides, when it is not possible to test "enough."
We have also provided a real example (even though the bug itself is artificial)where the framework is applied.
As usual, the source code of this page (R scripts, etc) can be found and downloaded at https://github.com/lpenz/lpenz.github.io
|
Regression (OLS) - overview
This page offers structured overviews of one or more selected methods, shown side by side for comparison.
Regression (OLS)
$z$ test for the difference between two proportions
Paired sample $t$ test
Sign test
Spearman's rho
Independent variables Independent variable Independent variable Independent variable Independent variable One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables One categorical with 2 independent groups 2 paired groups 2 paired groups One of ordinal level Dependent variable Dependent variable Dependent variable Dependent variable Dependent variable One quantitative of interval or ratio level One categorical with 2 independent groups One quantitative of interval or ratio level One of ordinal level One of ordinal level Null hypothesis Null hypothesis Null hypothesis Null hypothesis Null hypothesis $F$ test for the complete regression model: $\pi_1 = \pi_2$
$\pi_1$ is the unknown proportion of "successes" in population 1; $\pi_2$ is the unknown proportion of "successes" in population 2
$\mu = \mu_0$
$\mu$ is the unknown population mean of the difference scores; $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0
$\rho_s = 0$
$\rho_s$ is the unknown Spearman correlation in the population.
In words:
there is no monotonic relationship between the two variables in the population
Alternative hypothesis Alternative hypothesis Alternative hypothesis Alternative hypothesis Alternative hypothesis $F$ test for the complete regression model: Two sided: $\pi_1 \neq \pi_2$
Right sided: $\pi_1 > \pi_2$
Left sided: $\pi_1 < \pi_2$
Two sided: $\mu \neq \mu_0$
Right sided: $\mu > \mu_0$
Left sided: $\mu < \mu_0$
Two sided: $\rho_s \neq 0$
Right sided: $\rho_s > 0$
Left sided: $\rho_s < 0$
Assumptions Assumptions Assumptions Assumptions Assumptions all individuals in the population. Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Note: this assumption is only important for the significance test, not for the correlation coefficient itself. The correlation coefficient itself just measures the strength of the monotonic relationship between two variables.
Test statistic Test statistic Test statistic Test statistic Test statistic $F$ test for the complete regression model:
Note 2: if only one independent variable ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$
$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
$p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2
Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1$
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
$\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to H0, $s$ is the sample standard deviation of the difference scores, $N$ is the sample size (number of difference scores).
The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$
$W = $ number of difference scores that are larger than 0. $t = \dfrac{r_s \times \sqrt{N - 2}}{\sqrt{1 - r_s^2}} $
where $r_s$ is the sample Spearman correlation and $N$ is the sample size. The sample Spearman correlation $r_s$ is equal to the Pearson correlation applied to the rank scores.
Sample standard deviation of the residuals $s$ n.a. n.a. n.a. n.a. $\begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned} $ - - - - Sampling distribution of $F$ and of $t$ if H0 were true Sampling distribution of $z$ if H0 were true Sampling distribution of $t$ if H0 were true Sampling distribution of $W$ if H0 were true Sampling distribution of $t$ if H0 were true Sampling distribution of $F$: Approximately standard normal $t$ distribution with $N - 1$ degrees of freedom The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $p$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $p = 0.5$.
If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $np = n \times 0.5$ and standard deviation $\sqrt{np(1-p)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately a standard normal distribution if the null hypothesis were true.
Approximately a $t$ distribution with $N - 2$ degrees of freedom Significant? Significant? Significant? Significant? Significant? $F$ test: Two sided: Two sided: If $n$ is small, the table for the binomial distribution should be used:
Two sided:
If $n$ is large, the table for standard normal probabilities can be used:
Two sided:
Two sided:
$C\%$ confidence interval for $\beta_k$ and for $\mu_y$; $C\%$ prediction interval for $y_{new}$ Approximate $C\%$ confidence interval for $\pi_1 - \pi_2$ $C\%$ confidence interval for $\mu$ n.a. n.a. Confidence interval for $\beta_k$: Regular (large sample): $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)
The confidence interval for $\mu$ can also be used as significance test.
- - Effect size n.a. Effect size n.a. n.a. Complete model: - Cohen's $d$:
Standardized difference between the sample mean of the difference scores and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0$
- - n.a. n.a. Visual representation n.a. n.a. - - - - ANOVA table n.a. n.a. n.a. n.a. - - - - n.a. Equivalent to Equivalent to Equivalent to n.a. - When testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels One sample $t$ test on the difference scores
Repeated measures ANOVA with one dichotomous within subjects factor
Two sided sign test is equivalent to - Example context Example context Example context Example context Example context Can mental health be predicted from physical health, economic class, and gender? Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic. Is the average difference between the mental health scores before and after an intervention different from $\mu_0$ = 0? Do people tend to score higher on mental health after a mindfulness course? Is there a monotonic relationship between physical health and mental health? SPSS SPSS SPSS SPSS SPSS Analyze > Regression > Linear... SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Analyze > Descriptive Statistics > Crosstabs...
Analyze > Compare Means > Paired-Samples T Test... Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... Analyze > Correlate > Bivariate... Jamovi Jamovi Jamovi Jamovi Jamovi Regression > Linear Regression Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Frequencies > Independent Samples - $\chi^2$ test of association
T-Tests > Paired Samples T-Test Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to:
ANOVA > Repeated Measures ANOVA - Friedman
Regression > Correlation Matrix Practice questions
|
Are there any analytical proofs for the 2nd law of thermodynamics?
Or is it based entirely on empirical evidence?
It's simple to "roughly prove" the second law in the context of statistical physics. The evolution $A\to B$ of macrostate $A$, containing $\exp(S_A)$ microstates, to macrostate $B$, containing $\exp(S_B)$ microstates, is easily shown by the formula for the probability "summing over final outcomes, averaging over initial states", to be $\exp(S_B-S_A)$ higher than the probability of the inverse process (with velocities reversed). Because $S_B-S_A$ is supposed to be macroscopic, such as $10^{26}$ for a kilogram of matter, the probability in the wrong direction is the exponential of minus this large difference and is zero for all practical purposes.
The more rigorous versions of this proof are always variations of the 1872 proof of the so-called H-theorem by Ludwig Boltzmann:
This proof may be adjusted to particular or general physical systems, both classical ones and quantum ones. Please ignore the invasive comments on Wikipedia about Loschmidt's paradoxes and similar stuff, which are based on a misunderstanding. The H-theorem is a proof that the thermodynamic arrow of time - the direction of time in which the entropy increases - is inevitably aligned with the logical arrow of time - the direction in which one is allowed to make assumptions (the past) in order to evolve or predict other phenomena (in the future).
Every Universe of our type has to have a globally well-defined logical arrow of time: it has to know that the future evolves directly (albeit probabilistically, with objectively calculable probabilities) from the past. So any universe has to distinguish the future and the past logically; it has to have a logical arrow of time, which is also imprinted on our asymmetric reasoning about the past and the future. Given these qualitative assumptions, which are totally vital for the usage of logic in any setup that works with a time coordinate, the H-theorem shows that a particular quantity can't be decreasing, at least not by macroscopic amounts, for a closed system.
It was first found empirically, and later derived from various more theoretical assumptions.
There is a proof in Section 7.2 of Chapter 7: Phenomenological Thermodynamics of Classical and Quantum Mechanics via Lie algebras, based on a few axioms for thermodynamics, and a proof in Chapter 9 that these laws follow from the standard assumptions in statistical mechanics.
The reversibility objections (Loschmidt's paradox) are unjustified since the Poincare recurrence theorem assumes that the system in question is bounded, which is (most likely) not the case for the real universe.
If we assume time evolution is unitary and hence reversible, and the total size of the phase space subject to constraints based upon the total energy and other conserved quantities is finite, then the only conclusion is Poincaré recurrences cycling ergodically through the entire phase space. Boltzmann fluctuations to states of lower entropy might occur with exponentially suppressed probabilities, but the entropy would increase both toward its past and future. This is so not the second law as Boltzmann's critics never tire of pointing out.
The H-theorem depends upon the stosszahlansatz assumption that separate events in the past are uncorrelated, but that is statistically exceedingly improbable assuming a uniform probability distribution.
If the total size of the phase space is infinite, Carroll and Chen proposed that in eternal inflation there can be some state with finite entropy with entropy increasing in both time directions.
To me, the most likely scenario is to drop the assumption of unitarity and replace that with time evolution using Kraus operators acting upon the density matrix.
The problem when you include gravity or other long-range forces is that thermodynamics becomes non-extensive. For instance, the energy of the union of two systems is not the sum of the energies of the individual systems.
To handle those cases, generalized entropies have been proposed. "Generalized" means that these formalisms allow for long-range forces and non-extensivity for certain parameters in the definition of entropy, but reduce to the classical extensive entropy for a particular value of the parameter. One such extended entropy is the Tsallis entropy. It depends on a parameter $q$, and for $q=1$ it reduces to the standard classical entropy.
It has been shown that this entropy works well in some gravitational systems, where it predicts the correct distribution of temperatures and densities, for instance in a polytropic model of a self-gravitating system. It has also been shown that this entropy satisfies the second law for any parameters $q$ in the classical case, and at least for $q\in(0,2]$ in the quantum case.
In the strict sense of the question: no. Physics is science based on empirical evidence. But this applies to all laws of physics. E.g. if by tomorrow you find and confirm experimental evidence which contradict current theories, you have to expand the theories (or invent new ones), and you gain insight in the domain of applicability of your old theory (which still stays valid in its domain).
Of course you might be able to derive/prove the second law from certain assumptions, but if you were to find an experiment where the second law doesn't hold, then you start to know the limitations of your assumptions.
There is actually a very simple derivation of the Second Law in classical thermodynamics, assuming only classical mechanics and the First Law. Here is a brief sketch -- whether this constitutes a "proof" depends largely on taste, the level of rigor desired, and how comfortable you are with thermo-style derivations.
The First Law of Thermodynamics is:
\begin{align} dU = dq + dw \end{align}
where the differentials refer to changes of the system. By convention we have defined a gain of energy or heat by the system as positive, work done on the system as positive, and work done by the system on the surroundings as negative.
Without loss of generality, we assume only pressure-volume work. The work done by the system is quantified by the amount of work done in the surroundings, and so the relevant pressure is the external pressure $P_{ext}$ in the surroundings that the system is pushing against. Then, the work done by the system is
\begin{align} dw = -P_{ext} dV \end{align}
If the system is expanding against the surroundings, $dV \ge 0$, and according to classical mechanics the internal pressure of the system must be greater than or equal to the external pressure of the surroundings, i.e.
\begin{align} P_{int} \ge P_{ext} \end{align}
For a reversible change, the internal and external pressures are equal ($P_{int} = P_{ext}$), and so the work done by the system in a reversible process is
\begin{align} dw_{rev} = -P_{int} dV \end{align}
Therefore,
\begin{align} P_{int} dV &\ge P_{ext} dV \\ -P_{int} dV &\le -P_{ext} dV \\ dw_{rev} &\le dw \end{align}
which means that the magnitude of work done by the system on the surroundings is maximal during a reversible process. Combining this result with the First Law gives:
\begin{align} dq_{rev} &\ge dq \end{align}
We now define the state function entropy $S$ classically as
\begin{align} dS = \frac{dq_{rev}}{T} \end{align}
From the previous inequality for reversible heat, we see that
\begin{align} dS = \frac{dq_{rev}}{T} \ge \frac{dq}{T} \end{align}
which is the generalized Clausius inequality. This is a complete mathematical statement of the Second Law of Thermodynamics. All consequences of the Second Law can be derived from it, including the proposition that heat always spontaneously flows from hot to cold.
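To illustrate that last proposition (a standard textbook argument, sketched here; it is not part of the original answer): let heat $\delta q$ leave a body at temperature $T_h$ and enter a body at $T_c$, the pair forming an isolated system. Treating each body as a reservoir that exchanges heat reversibly at its own temperature, the total entropy change is

```latex
\begin{align}
dS_{total} = -\frac{\delta q}{T_h} + \frac{\delta q}{T_c}
           = \delta q \left( \frac{1}{T_c} - \frac{1}{T_h} \right) \ge 0
\end{align}
```

Since $T_h > T_c$ makes the bracketed factor positive, the requirement $dS_{total} \ge 0$ forces $\delta q \ge 0$: heat flows spontaneously from hot to cold, never the reverse.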
The one missing part is that we did not establish that entropy $S$ is a state function, but this is easy and can be found in any introductory thermodynamics treatment (e.g. [1]).
|
If you assume that the $\lambda$-calculus is a good model of functional programming languages, then one may think: the $\lambda$-calculus has a seemingly simple notion of time-complexity: just count the number of $\beta$-reduction steps $(\lambda x.M)N \rightarrow M[N/x]$.
But is this a good complexity measure?
To answer this question, we should clarify what we mean by complexity measure in the first place. One good answer is given by the Slot and van Emde Boas thesis: any good complexity measure should have a polynomial relationship to the canonical notion of time-complexity defined using Turing machines. In other words, there should be a 'reasonable' encoding $tr(.)$ from $\lambda$-calculus terms to Turing machines, such that for some polynomial $p$, it is the case that for each term $M$ of size $|M|$: $M$ reduces to a value in $p(|M|)$ $\beta$-reduction steps exactly when $tr(M)$ reduces to a value in $p(|tr(M)|)$ steps of a Turing machine.
For a long time, it was unclear whether this could be achieved in the λ-calculus. The main problems are the following.
There are terms that produce normal forms (in a polynomial number of steps) that are of exponential size. Even writing down the normal forms takes exponential time. The chosen reduction strategy plays an important role. For example, there exists a family of terms which reduces in a polynomial number of parallel β-steps (in the sense of optimal λ-reduction), but whose complexity is non-elementary (meaning worse than exponential).
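The first problem can be made concrete with a toy size calculation (my own illustrative Python, not from the original post): a term that pairs its argument with itself, like $(\lambda x.\langle x,x\rangle)M$, copies $M$ twice per $\beta$-step, so a chain of $n$ such redexes yields a normal form of size exponential in $n$:

```python
def normal_form_size(n, arg_size=1):
    """Size of the result after n beta-steps that each duplicate the argument."""
    size = arg_size
    for _ in range(n):
        size = 2 * size + 1  # a pair node plus two copies of the argument
    return size

print(normal_form_size(10))  # 2047 = 2**11 - 1: linearly many steps, exponential size
```

So counting $\beta$-steps alone ignores the exponential cost of merely writing the result down, unless terms are represented with sharing.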
The paper "Beta Reduction is Invariant, Indeed" by B. Accattoli and U. Dal Lago clarifies the issue by showing a 'reasonable' encoding that preserves the complexity class P of polynomial time functions, assuming leftmost-outermost call-by-name reduction. The key insight is that the exponential blow-up can only happen for 'uninteresting' reasons, which can be defeated by proper sharing. In other words, the class P is the same whether you define it counting Turing machine steps or (leftmost-outermost) $\beta$-reductions.
I'm not sure what the situation is for other evaluation strategies. I'm not aware that a similar programme has been carried out for space complexity.
|
Image Dimensions Contents Describing the fields of the Canvas Properties Dialog
The user accesses the image dimensions in the Canvas Properties Dialog.
The 'Others' tab
Here some properties can simply be locked (such that they can't be changed) and linked (so that changes in one entry simultaneously change other entries as well).
The 'Image' tab
Obviously here the image dimensions can be set. There seem to be basically three groups of fields to edit:
The on-screen size(?): The fields Width and Height tell synfigstudio how many pixels the image shall cover at a zoom level of 100%.
The physical size: The physical width and height should tell how big the image is on some physical media. That could be when printing out images on paper, or maybe even on transparencies or film. Not all file formats can save this on exporting/rendering images.
The mysterious Image Area: Given as two points (upper-left and lower-right corner) which also define the image span (Pythagoras: \(\text{span}=\sqrt{\Delta x^2 + \Delta y^2}\)). The unit seems to be not pixels but units, which are at 60 pixels each.
If the ratio of the image size and image area dimensions is off, circles, for example, will appear as ellipses (see image). These settings seem to influence how large one Image Size pixel is being rendered. This might be useful when one has to deal with non-square output pixels.
Effects of the Image Area
Somehow the image area setting seems to be saved when copy&pasting between images, see also bug #2116947.
Possible intended effects of out-of-ratio image areas
As mentioned above, different ratios might be needed when the output needs to be specified in pixels, but those pixels are not squares. That might happen for several kinds of media, such as videos encoded in some PAL formats or for DVDs. For further reading, look at Wikipedia.
Still, it is probably consensus that the image, as shown on screen while editing, should look as close as possible to how it will look when viewed by the final audience. So, while specifying a different output resolution at rendering time may well be wanted, synfigstudio should (for the majority of monitors) show square pixels, i.e. circles should stay circles.
|
I am solving the following exam problem.
Problem: An iterative scheme is given by $$ x_{n+1}= \frac{1}{5}\left(16-\frac{12}{x_n} \right).$$ Such a scheme with suitable initial approximation $x_0$ will
(a) not converge (b) converge to $1.6$ (c) converge to $1.8$ (d) converge to $2$
My attempt. By defining $g(x) = \frac{1}{5}\left(16-\frac{12}{x}\right)$, I found that the fixed points of $g(x)$ are $2$ and $6/5$. Thus, the given iterative scheme will converge either to $2$ or to $6/5$.
Using the fixed point theorem, I need to find an interval $[a, b]$ and show that $g(x)\in [a, b]$, that $g$ is continuous, and that the derivative of $g(x)$ exists on $(a, b)$. Further, I have to check that $|g'(x)| \leq r$ for some $r <1$.
First confusion: how do I find such an interval $[a, b]$ satisfying these conditions? Second confusion: here we see that at the fixed point $2$, $|g'(2)| <1$. My confusion is: can we then conclude that the sequence $x_n$ will converge to $2$?
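A quick numerical experiment (my own sketch; the starting point $x_0 = 3$ is an arbitrary assumption, not part of the problem) suggests the iteration settles at $2$:

```python
# Fixed-point iteration x_{n+1} = (16 - 12/x_n)/5 with an assumed start x0 = 3.
def g(x):
    return (16 - 12 / x) / 5

x = 3.0
for _ in range(50):
    x = g(x)
print(x)  # ≈ 2.0
```

This is consistent with the derivative test: $g'(x) = \frac{12}{5x^2}$, so $|g'(2)| = 0.6 < 1$ (attracting) while $|g'(6/5)| = 5/3 > 1$ (repelling).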
Thank you for your help.
|
Considering their distance from their parent stars, might Oort cloud object such as comets be exchanged between passing stars (assuming that other stars have similar Oort clouds)?
You can exclude the 'considering the distance' piece - of course Oort Cloud objects could transfer between different gravitational fields.
However what is it you think will make this transfer? Without some sort of gravitational impetus why would one of these objects leave the solar system? And if you do manage to slingshot one out of the solar system at a high enough speed to exit the Sun's gravity, remember that most directions end up very far away from any other solar systems.
TL;DR: sure, but not very likely.
TL;DR In response to your comment that "note that [link to answer] supports the assertion that passing stars can influence Oort cloud object" I will talk about whether this could happen to comets in the Oort cloud that surrounds the solar system.
It can happen, but the stars passing by today don't come close enough to yank away a comet at once. However, many star passages could eventually do it. In this answer I attempt to present a way to think about this problem. Skip to the last paragraph to get directly to my answer to your question without the extra.
In my answer here I clearly state that many stars have their own Oort cloud and that if they pass by each other close enough the stars will exchange comets. This is a direct answer to your question. It is believed to happen a lot in young star clusters, but you have to realize that older stars are often separated from other stars by a great distance which prohibits this type of exchange.
Now I will discuss the influence of stars on the comets in the Oort cloud (the usual one, that surrounds the solar system). This is the topic of chapter 5.2,
Stellar Perturbations, in Julio Angel Fernández's book Comets. It is possible to approximate the influence of a passing star with some reasonable simplifications. I will try to retell Fernández's argument below.
Let's say that a comet is located at a heliocentric distance $r$. Since Oort cloud comets travel very slowly compared to stars, $0.1\ \rm{km\cdot s^{-1}}$ versus $30\ \rm{km\cdot s^{-1}}$, we can assume that the comet is at rest in the heliocentric frame. If we neglect any influence of the star when it is further than $10^5\ \rm{AU}$ from the closest approach to the Sun, we only have to be concerned about the time it takes for a star to travel $2\times 10^5\ \rm{AU}$ (imagine the star moving past the Sun), and during this time the comet has only travelled approximately $10^3\ \rm{AU}$. The star can be taken to travel in a straight line since it is only slightly perturbed by the Sun. This leads (as with everything else here, it comes from Fernández's text) to the integral
$$ \Delta v=\int_{-\infty}^\infty F\,\mathrm{d}t=-\frac{2GM}{VD} $$
where $\Delta v$ is the change of velocity of the comet, $G$ is the universal gravitational constant, $M$ is the mass of the star, $V$ is the velocity of the star and $D$ is the distance of closest approach between the star and the comet. However, we can't forget that the Sun is also influencing the comet. If the comet is much closer to the Sun than the star, the influence of the star can be neglected, and vice versa. Since in this question we are dealing with the case "is it possible", I will assume that the comet is far out in the Oort cloud. Under these conditions we get another expression (after taking the Sun into account), i.e.
$$ | \Delta v | \approx \frac{2GMr\cos\beta}{VD_\odot^2} $$
where $\beta$ is the angle between the vector from the Sun to the star's closest point of approach and the vector from the Sun to the comet, and $D_\odot$ is the distance between the Sun and the star at the closest point of approach.
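To get a feel for the magnitudes, here is a rough numerical sketch of the last formula. Every input value below (star mass, encounter speed, distances, geometry) is an illustrative assumption of mine, not from Fernández:

```python
import math

# Order-of-magnitude estimate of |dv| ≈ 2 G M r cos(beta) / (V D_sun^2).
# All numerical values are illustrative assumptions.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 2.0e30               # mass of the passing star, roughly 1 solar mass, kg
AU = 1.496e11            # astronomical unit, m
r = 5.0e4 * AU           # comet at 50,000 AU from the Sun
V = 3.0e4                # stellar encounter speed, m/s
D_sun = 1.0e5 * AU       # star's closest approach to the Sun
beta = 0.0               # most favourable geometry

dv = 2 * G * M * r * math.cos(beta) / (V * D_sun ** 2)
print(f"{dv:.3f} m/s")   # a few tenths of a m/s
```

Even in this favourable geometry the single-passage kick is only a few tenths of a metre per second, tiny next to the $\sim 0.1\ \rm{km\cdot s^{-1}}$ orbital speed quoted above, which is consistent with the conclusion below that one passage cannot strip a comet.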
All this math is somewhat superfluous in the current context. I wanted to show you that it is possible to reason analytically about these things. Your question is whether a comet can be yanked away from its orbit in the Oort cloud and be captured by a passing star. The last formula presented here shows that for stars that actually exist now (not to say that stars or other small bodies have never passed by close to the solar system or even gone through it) the change of velocity imparted on the comet by the star is far too small for this to happen. However, the change of velocity will accumulate over many star passages, and over a long time it will change the orbit of the comet in a meaningful way.

Long-period comets (LP comets) are comets that travel into the solar system on a very narrow elliptical orbit, so that the perihelion (closest approach to the Sun) is small but the aphelion (furthest point from the Sun) can be a great distance outside the Oort cloud. Long-period comets meet their end in different ways. Some pass too close to the Sun and melt, others collide with planets, especially the big gas planets, and some get catapulted out of the solar system by a close approach to, for example, Jupiter. It is possible, however, because long-period comets can have orbits that extend beyond the Oort cloud where they are less influenced by the Sun and more influenced by passing stars, that they might be yanked away and eventually join another star, although I still don't think it is likely. It would be possible, I think, to use the same kind of math to approximate the change of velocity that stars impart on LP comets to see if it is feasible, but I haven't done it.
|
Forgive me in advance, this may get overcomplicated. I am going to give you the facts as scaled down as I can but still sufficiently detailed. I think providing you with what we have and allowing you to infer from it is the best way to avoid misrepresenting the answer.
Here is the General Relativity equation that describes how gravity interacts with everything else:$$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi G_NT_{\mu\nu}$$On the left side is gravity; it is described by terms that have to do with how space curves, expands, and contracts. On the right side is matter, radiation, etc. Pretty much all forms of energy.
It was determined through observation that the universe as we know it is expanding and that the rate of expansion is accelerating. To account for the acceleration of the expansion, the best-fitting theory includes a constant term:$$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}(R-2\Lambda)=8\pi G_NT_{\mu\nu}$$Alternatively, one could choose to express that constant term on the right side of the equation. It makes no physical difference. Accordingly, you can interpret this term as a modification to how gravity affects spacetime (if added to the LHS) or as an additional energy term with a negative pressure (if added to the RHS).
Now, as per original GR, we have equations describing the expansion of the universe:$$\frac{\ddot a}{a}=-\frac{4\pi G_N}{3}\sum_i(\rho_i+3p_i)$$Here $a$ represents how much the universe has expanded, $\ddot a$ is the acceleration of the expansion, and $\rho_i$ and $p_i$ are the energy density and the pressure of the $i$th form of energy (the rest are constants). We put in matter, dark matter, and radiation, and we even treat the background curvature of the universe as a form of energy here. What we find is that when we treat dark energy as a form of energy and set its pressure equal to the negative of its energy density, our equations very closely match the observations, which is sufficient reason to favor this choice. However, we also find (via a separate equation) that the energy density of anything whose pressure equals the negative of its energy density remains constant for all time. This is unavoidable; it falls out of one of the best-fitting models so far.
Furthermore, the constant energy density has other effects. The energy density of matter or radiation decreases with time. For instance, and this should be intuitive, the energy density of matter decreases like $a^{-3}$; that is, it drops like the cube of the expansion of the universe. And why not? As the universe expands, the volume increases like the expansion cubed; energy density is energy over volume, so it decreases like expansion cubed. Radiation goes like $a^{-4}$, and other energies decrease at varying rates. A constant density means that after a long time, dark energy becomes the dominant term in the above equation. Effectively, after a long time:$$\frac{\ddot a}{a}=\frac{8\pi G_N}{3}\rho_{DE}$$This barrage of equations might mean nothing at all to you. That is fine. This is an answer to what dark energy is, why it has a constant energy density, and whether it has gravity.
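A toy illustration of that dominance (in assumed normalized units where every density starts at 1 and $a$ is the scale factor):

```python
# Toy illustration in assumed normalized units: matter dilutes as a^-3,
# radiation as a^-4, while the dark-energy density stays constant,
# so dark energy dominates at late times.
for a in [1, 10, 100]:
    rho_matter = a ** -3
    rho_radiation = a ** -4
    rho_dark_energy = 1.0
    print(a, rho_matter, rho_radiation, rho_dark_energy)
```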
As for whether or not dark energy can fall into a black hole, that is more complicated. From the point of view of modifying gravity, dark energy is not something that can fall into a black hole. However, from the point of view of being an additional energy term, one might think it must be able to fall into a black hole. Truthfully, I don't know that answer but I know it makes an insignificant difference. Dark energy is too weak to have an effect on the scale of black holes. Even the small gravity of the Sun is enough to negate the effect of dark energy throughout the solar system and probably a bit further. As you can see from the last equation, at late times the acceleration of the expansion is positive. Dark energy never brings the universe to a Big Crunch. For matter, radiation, etc, the acceleration terms at late times all are negative. Dark energy is preventing a Big Crunch.
In response to your last questions: this was never "proven". It was postulated decades ago and has been confirmed by many experiments, but none that prove it beyond a doubt. As for what caused the Big Bang: the Big Bang was not an event, it was a moment of time. It represents the point in time where the equation for $a$ (remember, that's the term that represents how much the universe has expanded) goes to zero. That's it. It has no more need for a cause than any moment of time after it.
|
By the mean value theorem it's easy to show that $|a_{n+1}-a_{n}| \leq \frac{5}{6}|a_{n}-a_{n-1}|$ for every n.
Next, I thought of saying $|a_{n+1}-a_{n}| \leq ... \leq (\frac{5}{6})^{n}|a_{1}| \to 0$ and somehow show that ** if $M_{n}$ is the closed interval whose end points are $a_{n}$ and $a_{n-1}$ then $a_{n+1} \in M_{n}$ which implies $M_{n+1} \subseteq M_{n}$ and then to finish with Cantor's intersection theorem that gives us convergence of $a_{n}$.
But I'm not even sure if ** is correct and I haven't even used the fact that $a_{0} = 0$.
EDIT: Following the tip and some more thought I've come up with the following:
For every $m\gt n$: $|a_{m}-a_{n}|=|a_{m}-a_{m-1}+a_{m-1}-\dots+a_{n+1}-a_{n}|\leq$
$\leq\sum_{k=n}^{m-1}|a_{k+1}-a_{k}|\leq|a_{1}|\sum_{k=n}^{m-1}\left(\frac{5}{6}\right)^{k}\le$ $\le|a_{1}|\sum_{k=n}^{\infty}\left(\frac{5}{6}\right)^{k}=|a_{1}|\frac{(\frac{5}{6})^{n}}{\frac{1}{6}}=6|a_{1}|\left(\frac{5}{6}\right)^{n} \to 0$, and from here it's easy to show that the sequence is Cauchy.
Please correct me if I made an error.
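Since the recursion itself isn't written out here, the tail estimate can be sanity-checked numerically with a hypothetical map that satisfies the same bound ($g(x) = \frac{5}{6}\cos x$, so $|g'(x)| \le \frac{5}{6}$, with $a_0 = 0$ — my example, not the original recursion):

```python
import math

# Hypothetical contraction with |g'(x)| <= 5/6 and a_0 = 0, used to
# illustrate the tail estimate |a_m - a_n| <= 6 |a_1| (5/6)^n.
def g(x):
    return (5 / 6) * math.cos(x)

a = [0.0]
for _ in range(60):
    a.append(g(a[-1]))

n, m = 10, 60
bound = 6 * abs(a[1]) * (5 / 6) ** n
print(abs(a[m] - a[n]) <= bound)  # True
```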
|
Suppose I have an Lagrangian $$\mathcal{L} = \frac{1}{2}g_{ab} \bar{\psi}^a \Gamma^k \partial_k \psi^b $$ and I want to show it's invariance under the infinitesimal Lorentz transformations $$\delta \psi^a = -\Lambda_{mn}x^m \partial^n \psi^a + \frac{1}{2} \Lambda_{mn} \Sigma^{mn}\psi^a ,$$ $$\delta \bar{\psi}^a = -\Lambda_{mn}x^m \partial^n \bar{\psi}^a - \frac{1}{2} \Lambda_{mn} \bar{\psi}^a \Sigma^{mn},$$ where $\Lambda_{mn}$ are the components of an infinitesimal Lorentz transformation and hence antisymmetric, and $\Sigma^{mn}$ are the generators of the spinor representation of the Lorentz group. I proceed the usual way and after some algebra get that $$\delta \mathcal{L} = -\frac{1}{2}g_{ab}\Lambda_{mn}[x^m \partial^n \bar{\psi}^a \Gamma^k \partial_k \psi^b + x^m \bar{\psi}^a \Gamma^k \partial_k(\partial^n \psi^b) + \bar{\psi}^a \Gamma^m \partial^n \psi^b ].$$ This needs to be written as a total derivative, but I can't seem to achieve this. For example if I try $$\partial^m (\Lambda_{mn} x^n \mathcal{L}),$$ I get the first two, but not the third term. Can anyone tell me how to proceed?
It was pointed out by @Peter Anderson in the comments that you forgot the transformation of the derivative, which in infinitesimal form should read$$\delta \partial_n = - g^{lm} \Lambda_{mn}\partial_l$$which comes from the Lorentz transformation$$\partial_n \to g^{lm}(L^{-1})_{mn} \partial_l$$(the metric is there to keep the indices consistent with OP's choice), which expands to$$ g^{lm}(L^{-1})_{mn} = \delta^l_n - g^{lm} \Lambda_{mn} + ... $$where I'm inferring, from your other transformation laws, that you are applying
active Lorentz transformations, i.e. under the symbolic perspective$$x \to L x$$with a field transforming as$$\phi(x) \to M \phi(L^{-1} x)$$with $M$ a representation of the Lorentz group.
If you use this, you will get a new term in the variation of your Lagrangian, $$ -\frac{1}{2} g_{ab} \Lambda_{mn} \overline \psi^a \Gamma^n g^{lm}\partial_l \psi^b, $$ and this new term together with the last term of your variation gives $$-\frac{1}{2} g_{ab} \Lambda_{mn}\left[ \overline \psi^a \Gamma^m \partial^n \psi^b+\overline \psi^a \Gamma^n \partial^m \psi^b \right],$$ a contraction of the antisymmetric tensor $\Lambda_{mn}$ with the symmetrized quantity $\Gamma^m \partial^n+\Gamma^n \partial^m$, and hence these two terms vanish. You are then left with the first two terms, which, as you already said, you can rewrite in total-derivative form.
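For completeness, the standard one-line argument behind that cancellation (not spelled out in the original answer): writing $S^{mn} = \overline\psi^a\left(\Gamma^m\partial^n + \Gamma^n\partial^m\right)\psi^b$,

```latex
\Lambda_{mn} S^{mn}
  = \Lambda_{nm} S^{nm}   % relabel the dummy indices m <-> n
  = -\Lambda_{mn} S^{nm}  % antisymmetry of Lambda
  = -\Lambda_{mn} S^{mn}  % symmetry of S
\quad\Longrightarrow\quad
\Lambda_{mn} S^{mn} = 0 .
```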
|
In Carroll's Appendix B, he says
You will often hear it proclaimed that GR is a "diffeomorphism invariant" theory. What this means is that, if the universe is represented by a manifold $M$ with metric $g_{\mu \nu}$ and matter fields $\psi$, and $\phi : M \to M$ is a diffeomorphism, then the sets $(M, g_{\mu \nu}, \psi)$ and $(M, \phi^* g_{\mu \nu}, \phi^* \psi)$ represent the same physical situation. ... This state of affairs forces us to be very careful; it is possible that two purportedly distinct configurations (of matter and metric) in GR are actually "the same," related by a diffeomorphism.
I completely agree that two pseudo-Riemannian manifolds $R' = (M', g')$ and $R = (M, g)$, where $M', M$ are smooth manifolds and $g', g$ are metric tensors, are physically equivalent iff there exists a diffeomorphism $\phi:M' \to M$ such that $g' = \phi^* g$ (and, if there are matter fields, $\psi' = \phi^* \psi$, but for simplicity I'll focus on the vacuum case). However, I think his use of the phrase "related by a diffeomorphism" to describe this relation is a bit misleading. In the standard mathematical usage, a "diffeomorphism" is an isomorphism between
smooth manifolds (the $M$ and $M'$) and doesn't "touch" the metrics at all. One can consider two Riemannian manifolds $(M, g)$ and $(M, g')$ which are diffeomorphic but have completely independent metric structures. For example, consider the flat unit disk in the $x$-$y$ plane and the upper unit hemisphere, both embedded into $\mathbb{R}^3$ and inheriting the usual Euclidean 3-D metric via the standard pullback mechanism. These Riemannian manifolds are diffeomorphic in the standard mathematical sense, but not "related by a diffeomorphism" in the sense that Carroll describes in the quotation above, because the metrics are not related by the relevant diffeomorphism pullback.
The relation between pseudo-Riemannian manifolds that Carroll describes, in which the metrics "agree" via the relevant diffeomorphism pullback, appears to be what mathematicians call an isometry, which is a very special case of a diffeomorphism. A (mathematicians') diffeomorphism is the natural notion of isomorphism between
smooth manifolds, but a (mathematicians') isometry is the natural notion of isomorphism between (pseudo-)Riemannian manifolds - not only the smooth structure but also the metric structure gets "carried over" appropriately. Question 1: Am I correct that, using standard mathematical terminology, the transformation that Carroll is describing is an "isometry" rather than a general "diffeomorphism"?
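For reference, the two standard mathematical notions being contrasted can be written side by side (my phrasing of the standard definitions, not Carroll's):

```latex
\phi : M \to M' \ \text{is a \emph{diffeomorphism}}
  \iff \phi \ \text{is a smooth bijection with a smooth inverse;} \\
\phi : (M, g) \to (M', g') \ \text{is an \emph{isometry}}
  \iff \phi \ \text{is a diffeomorphism and} \ \phi^{*} g' = g .
```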
Putting aside Carroll's particular choice of phrasing, I believe that isometry between pseudo-Riemannian manifolds (in the standard mathematical usage linked to above, which is not the same as the usual physicists' usage) is actually the correct notion of physical equivalence in GR, rather than general diffeomorphism. As discussed here, general diffeomorphisms do not map geodesics (which are physical and coordinate-independent) to geodesics - only isometries do. Moreover, in my disk and hemisphere example above, the former manifold is flat and the latter is curved, so on the latter surface initially parallel geodesics meet, triangle corners add up to more than $180^\circ$, etc. These non-isometric manifolds clearly correspond to distinct physical states, even though they are diffeomorphic.
Question 2: Am I correct that two Riemannian manifolds correspond to the same physical state iff they are isometric, not merely diffeomorphic (again, under the standard mathematical definitions of "diffeomorphism" and "isometry", not under Carroll's definitions)?
Those are my physics questions. If the answers to both questions #1 and #2 are "yes", then I have a closely related usage question. It seems to me that Carroll's usage of the word "diffeomorphism" is not a personal quirk or sloppy language, but is standard in the physics community. Many times, I've heard physicists say that diffeomorphic Riemannian manifolds are physically equivalent, or that GR is "diffeomorphism-invariant".
Question 3(a): When physicists talk about a "diffeomorphism" in the context of general relativity, are they usually using the word in the standard mathematical sense, or in Carroll's sense, which mathematicians would instead call an "isometry"?
If the answer is "in Carroll's sense", then that means that the mathematics and physics (or at least GR) communities use the word "diffeomorphism" in inequivalent ways. This wouldn't surprise me, except in that if so, I've never heard anyone mention that fact.
Question 3(b): Physicists often say that the theory of general relativity is "diffeomorphism-invariant". Am I correct that this is true under the physicists' usage, but under the mathematicians' usage, GR is not diffeomorphism-invariant but only isometry-invariant?
|
This section shows how to calculate the masses and moments of two- and three- dimensional objects in Cartesian \((x,y,z)\) coordinates.
Mass
We saw before that the double integral of the constant function 1 over a region measures the area of the region. If the region has uniform density 1, then the mass is the density times the area, which equals the area. What if the density is not constant? Suppose that the density is given by the continuous function
\[\text{Density} = \rho(x,y).\]
In this case we can cut the region into tiny rectangles on which the density is approximately constant. The mass of each rectangle is given by
\[\begin{align} \text{Mass} &= (\text{Density})(\text{Area}) \nonumber \\[4pt] &= (\rho (x,y)) (\Delta{x} \Delta{y}) \end{align}.\]
You probably know where this is going. If we add all the masses together and take the limit as the rectangle size goes to zero, we get a double integral.
Definition: Mass of a Two-Dimensional lamina
Let \(\rho(x,y)\) be the density of a lamina (flat sheet) \(R\) at the point \((x,y)\). Then the total mass of the lamina is the double integral
\[ \text{Mass}_{\text{lamina}} = \iint_R \rho (x,y)\, dy\,dx \label{lamina}\]
or written as an integral over the area \(A\):
\[\text{Mass}_{\text{lamina}} =\iint_{R} \,\rho\, dA\]
Example \(\PageIndex{1}\)
A rectangular metal sheet with \(2 < x < 5\) and \(0 < y < 3\) has density function
\[\rho(x,y) = x + y. \nonumber\]
Set up the double integral that gives the mass of the metal sheet.
Solution
We just have to evaluate the integral in Equation \ref{lamina}
\[ \int_2^5 \int _0^3 (x+y)\, dy\,dx.\nonumber\]
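As a numerical sanity check (a sketch, not part of the original text), a midpoint Riemann sum over the rectangle approximates the double integral; evaluating by hand gives \(45\):

```python
# Midpoint Riemann sum for the mass integral of rho(x, y) = x + y
# over the rectangle 2 < x < 5, 0 < y < 3.
def mass_riemann(n=400):
    dx = 3 / n  # x runs from 2 to 5
    dy = 3 / n  # y runs from 0 to 3
    total = 0.0
    for i in range(n):
        x = 2 + (i + 0.5) * dx
        for j in range(n):
            y = (j + 0.5) * dy
            total += (x + y) * dx * dy
    return total

print(round(mass_riemann(), 4))  # 45.0
```

Because the integrand is linear, the midpoint rule is exact here up to floating-point rounding.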
Extending this to three-dimensional solids requires redefining \(\rho (x,y,z)\) to be the density (mass per unit volume) of an object occupying a region \(D\) in space. The integral over \(D\) gives us the mass of the object. To see why, imagine partitioning the object into \(n\) mass elements; summing these mass elements and taking the limit gives the total mass.
\[\begin{align*} M &= \lim_{n\rightarrow\infty}\sum_{k=1}^n \Delta m_k \\[4pt] &=\lim_{n\rightarrow\infty}\sum_{k=1}^n \rho (x_k,y_k,z_k)\Delta V_k\\[4pt] &=\iiint_{D}\rho(x, y,z)\, dV \end{align*}.\]
The integral of \(\rho (x,y,z)\) gives us the mass of the object.
Definition: Mass of a Three-Dimensional Solid
Let \(\rho(x,y,z)\) be the density of a solid \(R\) at the point \((x,y,z)\). Then the total mass of the solid is the triple integral
\[ \text{Mass}_{\text{solid}} = \iiint_R \rho (x,y,z)\, dx\,dy\,dz \label{solid}\]
or written as an integral over the volume \(V\):
\[\text{Mass}_{\text{solid}}=\iiint_{R}\rho\, dV\]
Moments and Center of Mass
The moments about an axis are defined by the product of the mass times the distance from the axis.
\[M_x=(\text{Mass})(y)\]
\[M_y=(\text{Mass})(x) \]
If we have a region \(R\) with density function \(\rho (x,y)\), then we do the usual thing. We cut the region into small rectangles for which the density is constant and add up the moments of each of these rectangles. Then take the limit as the rectangle size approaches zero. This will give us the total moment.
Definition: Moments of Mass and Center of Mass
Suppose that \(\rho (x,y)\) is a continuous density function on a lamina \(R\). Then the moments of mass are
\[ M_x = \iint_R y\, \rho(x,y)\, dy\, dx\]
and
\[ M_y = \iint_R x\, \rho(x,y)\, dy \,dx\]
and if \(M\) is the mass of the lamina, then the
center of mass is
\[ (\bar{x},\bar{y})=\left ( \dfrac{M_y}{M},\dfrac{M_x}{M}\right ).\]
Example \(\PageIndex{2}\)
Set up the integrals that give the center of mass of the rectangle with vertices \((0,0)\), \((1,0)\), \((1,1)\), and \((0,1)\) and density function proportional to the square of the distance from the origin. Use a calculator or computer to evaluate these integrals.
Solution
The mass is given by
\[M = \int_0^1\int_0^1 k(x^2+y^2)\, dy\, dx = \dfrac{2k}{3}.\nonumber\]
The moments are given by the definition above:
\[ M_x = \int_0^1\int_0^1 k(x^2+y^2)\, y\, dy\, dx\nonumber\]
and
\[ M_y = \int_0^1\int_0^1 k(x^2+y^2)\, x\, dy\, dx.\nonumber\]
These evaluate to
\[M_x = \dfrac{5k}{12}\nonumber\]
and
\[M_y = \dfrac{5k}{12}.\nonumber\]
It should not be a surprise that the moments are equal since there is complete symmetry with respect to \(x\) and \(y\). Finally, we divide to get
\[(\bar{x},\bar{y}) = \left(\dfrac{5}{8},\dfrac{5}{8}\right).\nonumber\]
This tells us that the metal plate will balance perfectly if we place a pin at \((\frac{5}{8},\frac{5}{8})\).
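A quick numerical cross-check of this example (a sketch, taking the proportionality constant \(k = 1\), which cancels in the ratios anyway):

```python
# Mass and first moments of the unit square with density x^2 + y^2 (k = 1),
# approximated with a midpoint rule; the center of mass is (M_y/M, M_x/M).
def moments(n=400):
    d = 1 / n
    M = Mx = My = 0.0
    for i in range(n):
        x = (i + 0.5) * d
        for j in range(n):
            y = (j + 0.5) * d
            rho = x * x + y * y
            M += rho * d * d
            Mx += rho * y * d * d
            My += rho * x * d * d
    return M, Mx, My

M, Mx, My = moments()
print(My / M, Mx / M)  # both ≈ 0.625 = 5/8
```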
Moments of Inertia
We often call \(M_x\) and \(M_y\) the first moments. They have first powers of \(y\) and \(x\) in their definitions and help find the center of mass. We define the
moments of inertia (or second moments) by introducing squares of \(y\) and \(x\) in their definitions. The moments of inertia help us find the kinetic energy in rotational motion. Below is the definition.
Definition: Moments of Inertia
Suppose that \(\rho (x,y)\) is a continuous density function on a lamina \(R\). Then the
moments of inertia are
\[I_x = \iint_R \rho(x,y) y^2 \, dy\, dx\]
\[I_y = \iint_R \rho(x,y) x^2 \, dy\, dx.\]
Exercise \(\PageIndex{1}\)
Find the moments of inertia for the square metal plate in Example \(\PageIndex{2}\).
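One way to check an answer to this exercise numerically (a sketch with the assumed constant \(k = 1\); working the integrals by hand gives \(I_x = I_y = \frac{14}{45}k\)):

```python
# Midpoint-rule approximation of the moments of inertia for the unit square
# with density x^2 + y^2 (k = 1).
def inertia(n=400):
    d = 1 / n
    Ix = Iy = 0.0
    for i in range(n):
        x = (i + 0.5) * d
        for j in range(n):
            y = (j + 0.5) * d
            rho = x * x + y * y
            Ix += rho * y * y * d * d
            Iy += rho * x * x * d * d
    return Ix, Iy

Ix, Iy = inertia()
print(Ix, Iy)  # both ≈ 14/45 ≈ 0.3111
```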
First Moment
The first moment of a
3-D solid region \(D\) about a coordinate plane is defined as the triple integral over \(D\) of the distance from a point \((x,y,z)\) in \(D\) to the plane multiplied by the density of the solid at that point. First moments about the coordinate planes:
\[M(yz)=\iiint_{D}\delta\, x\, dV\]
\[M(xz)=\iiint_{D}\delta\, y\,dV\]
\[M(xy)=\iiint_{D}\delta\, z \,dV\]
The first moment of a 2-D plate about the \(y\)-axis is the double integral over the region \(R\) forming the plate of the distance from the axis multiplied by the density:
\[M(y)=\iint_{R}\delta\, x\; dA\]
\[M(x)=\iint_{R}\delta\, y\; dA\]
Center of Mass
The center of mass is located at \((\bar{x}, \bar{y}, \bar{z})\) and is found from the first moments:
\[\bar{x} =\dfrac{M(yz)}{M}, \qquad \bar{y} =\dfrac{M(xz)}{M}, \qquad \bar{z} =\dfrac{M(xy)}{M}\]
for a solid, and
\[\bar{x} =\dfrac{M(y)}{M}, \qquad \bar{y} =\dfrac{M(x)}{M}\]
for a plate.
Contributors Shengqiao Luo (UCD)
Integrated by Justin Marshall.
|
In this section we are going to cover the integration of a function along a curve through a 3-D scalar field. When you learned one-dimensional integrals, we integrated functions of \(y\) with respect to \(x\) and assumed that \(z\), the third dimension, does not change. If, however, the third dimension does change, the path is no longer confined to a plane and there is no way to integrate with respect to one variable. A line integral combines the two dimensions into \(s\), the accumulated arc length along the curve, and then integrates the function of \(x\) and \(y\) over the curve \(s\).
Definition of a Line Integral
By this time you should be used to the construction of an integral. We break a geometrical figure into tiny pieces, multiply the size of the piece by the function value on that piece and add up all the products. For one variable integration the geometrical figure is a line segment, for double integration the figure is a region, and for triple integration the figure is a solid.
The geometrical figure of the day will be a curve. If we have a function defined on a curve we can break up the curve into tiny line segments, multiply the length of the line segments by the function value on the segment and add up all the products. As always, we will take a limit as the length of the line segments approaches zero. This new quantity is called the
line integral and can be defined in two, three, or higher dimensions.
Suppose that a wire has density \(f(x,y,z)\) at the point \((x,y,z)\) on the wire. Then the line integral equals the total mass of the wire. Below is the definition in symbols.
Definition: Line Integrals
Let \(f\) be a function defined on a curve \(C\) of finite length. Then the
line integral of \(f\) along \(C\) is
\[\int_C \; f(x,y) ds= \lim_{n \rightarrow \infty} \sum_{i=1}^{n} f(x_i,y_i)\Delta s_i\]
(for two dimensions)
\[\int_C \; f(x,y,z) ds= \lim_{n \rightarrow \infty} \sum_{i=1}^{n} f(x_i,y_i,z_i)\Delta s_i\]
(for three dimensions)
A scalar field has a value associated to each point in space. Examples of scalar fields are height, temperature or pressure maps. In a two-dimensional field, the value at each point can be thought of as a height of a surface embedded in three dimensions. The line integral of a curve along this scalar field is equivalent to the area under a curve traced over the surface defined by the field.
| Requirement | Simple integrals | Line integrals |
|---|---|---|
| 1 | an equation of the function \(f(x)\), AKA \(y=\) | an equation of the function \(f(x,y)\), AKA \(z=\) |
| 2 | — | the equation of the path in parametric form \(( x(t),y(t) )\) |
| 3 | bounds in terms of \(x=a\) and \(x=b\) | bounds in terms of \(t=a\) and \(t=b\) |
The length of the line can be determined by the sum of its arc lengths
\[\lim_{n \to \infty }\sum_{i=1}^{n}\Delta s_i =\int _a^b ds=\int_a^b\sqrt {\left ( \dfrac{dx}{dt} \right )^2+\left ( \dfrac{dy}{dt} \right )^2}\,dt\]
Note that the arc length can also be determined using the components of the position vector \( \textbf{r}(t)=x(t)\hat{\textbf{i}}+y(t)\hat{\textbf{j}}+z(t)\hat{\textbf{k}} \):
\[ds= \left | \dfrac{d\textbf{r}}{dt} \right| dt=\sqrt {\left ( \dfrac{dx}{dt} \right )^2+\left ( \dfrac{dy}{dt} \right )^2+\left ( \dfrac{dz}{dt} \right )^2}\, dt\]
so a line integral is the sum of arc lengths multiplied by the function values at those points:
\[\lim_{n\rightarrow \infty}\sum_{i=1}^{n}f(c_i)\Delta s_i=\int_a^b f(x,y)ds=\int_a^b f(x(t),y(t))\sqrt {\left ( \dfrac{dx}{dt} \right )^2+\left ( \dfrac{dy}{dt} \right )^2}dt\]
where \(c_i\) are partitions from \(a\) to \(b\) spaced by \(ds_i\). Here is a visual representation of a line integral over a scalar field.
Figure \(\PageIndex{1}\): line integral over a scalar field. Image used with permission (Public Domain; Lucas V. Barbosa)
All these processes are represented step-by-step, directly linking the concept of the line integral over a scalar field to the familiar representation of an integral as the area under a curve. A breakdown of the steps:

1. The color-coded scalar field \(f\) and a curve \(C\) are shown. The curve \(C\) starts at \(a\) and ends at \(b\).
2. The field is rotated in 3D to illustrate how the scalar field describes a surface. The curve \(C\), in blue, is now shown along this surface. This shows how at each point in the curve, a scalar value (the height) can be associated.
3. The curve is projected onto the plane \(XY\) (in gray), giving us the red curve, which is exactly the curve \(C\) as seen from above in the beginning. This red curve is the curve over which the line integral is performed.
4. The distances from the projected curve (red) to the curve along the surface (blue) describe a "curtain" surface (in blue).
5. The graph is rotated to face the curve from a better angle.
6. The projected curve is rectified (made straight), and the same transformation follows on the blue curve along the surface. This shows how the line integral is applied to the arc length of the given curve.
7. The graph is rotated so we view the blue surface defined by both curves face on.
8. This final view illustrates the line integral as the familiar integral of a function, whose value is the "signed area" between the \(X\) axis (the red curve, now a straight line) and the blue curve (which gives the value of the scalar field at each point).

Thus, we conclude that the two integrals are the same, illustrating the concept of a line integral on a scalar field in an intuitive way.

Evaluating Line Integrals
This definition is not very useful by itself for finding exact line integrals. If data is provided, then we can use it as a guide for an approximate answer. Fortunately, there is an easier way to find the line integral when the curve is given parametrically or as a vector valued function. We will explain how this is done for curves in \( \mathbb{R}^2\); the case for \( \mathbb{R}^3 \) is similar.
Let
\[ \textbf{r}(t) = x(t) \hat{\textbf{i}} + y(t) \hat{\textbf{j}} \]
be a differentiable vector valued function. Then
\[ds = ||\textbf{r}'(t)||\; dt = \sqrt{(x'(t))^2+(y'(t))^2}\; dt. \]
We are now ready to state the theorem that shows us how to compute a line integral.
Theorem: Line Integrals of Vector Valued Functions
Let
\[\textbf{r}(t) = x(t) \hat{\textbf{i}} + y(t) \hat{\textbf{j}} \; \; \; \; a \leq t \leq b \]
be a differentiable vector valued function that defines a smooth curve \(C\). Then
\[\int_C \; f(x,y) \; ds= \int_a^b f(x(t),y(t)) \sqrt{(x'(t))^2+(y'(t))^2} \; dt \]
and for three dimensions, if
\[\textbf{r}(t)= x(t) \hat{\textbf{i}} + y(t) \hat{\textbf{j}} + z(t) \hat{\textbf{k}} \;\;\;\; a \leq t \leq b\]
then
\[\int_C \; f(x,y,z) \; ds= \int_a^b f(x(t),y(t),z(t))\ \sqrt{(x'(t))^2+(y'(t))^2+(z'(t))^2} \; dt . \]
Example \(\PageIndex{1}\)
Find the line integral
\[ \int_c (1+ x^2y) ds \nonumber \]
where \(C\) is the ellipse
\[r(t) = (2\cos \,t) \hat{\textbf{i}} + (3\sin\, t) \hat{\textbf{j}} \nonumber \]
for \( 0 \le t \le 2\pi\).
You may use a calculator or computer to evaluate the final integral.
Solution
We find
\[ds = \sqrt{(-2 \sin t)^2 + (3 \cos t)^2} \; dt = \sqrt{4 \sin^2 t + 9 \cos^2 t}\; dt . \nonumber\]
We have the integral
\[\int_0^{2\pi} \left(1+(2 \cos t)^2(3 \sin t )\right)\sqrt{4\sin^2 t + 9 \cos^2 t} \; dt. \nonumber \]
With the help of a machine, we get 15.87.
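This value is easy to check numerically. Below is a minimal sketch in plain Python (the helper name `line_integral_scalar` is ours, not from the text), applying the midpoint rule to the parametric formula from the theorem above:

```python
import math

def line_integral_scalar(f, r, rprime, a, b, n=10000):
    """Approximate the scalar line integral ∫_C f ds = ∫_a^b f(r(t)) |r'(t)| dt
    with a composite midpoint rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        x, y = r(t)
        dx, dy = rprime(t)
        total += f(x, y) * math.hypot(dx, dy) * h
    return total

# Example 1: f(x, y) = 1 + x^2 y over the ellipse r(t) = (2 cos t, 3 sin t).
value = line_integral_scalar(
    lambda x, y: 1 + x**2 * y,
    lambda t: (2 * math.cos(t), 3 * math.sin(t)),
    lambda t: (-2 * math.sin(t), 3 * math.cos(t)),
    0.0, 2 * math.pi,
)
print(round(value, 2))  # ≈ 15.87
```

Note that the \(x^2 y\) term integrates to zero over the full period, so the result is just the arc length (perimeter) of the ellipse.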
Work
The main application of line integrals is finding the work done on an object in a force field. If an object is moving along a curve through a force field \(F\), then we can calculate the total work done by the force field by cutting the curve up into tiny pieces. The work done \(W\) along each piece will be approximately equal to
\[dW = \vec{F} \cdot \vec{T}\, ds.\]
Now recall that
\[\vec{T} = \dfrac{ \vec{r}'(t) }{ \|\vec{r}'(t)\| }\]
and that
\[ds = \|\vec{r}'(t)\|\,dt.\]
Hence
\[dW = \vec{F} \cdot \vec{r}'(t) dt.\]
As usual, we add up all the small pieces of work and take the limit as the pieces get small to end up with an integral.
Definition: Work
Let \(F\) be a vector field and \(C\) be a curve defined by the vector valued function \(\textbf{r}\). Then the work done by \(F\) on an object moving along \(C\) is given by
\[\text{Work} = \int_C F \cdot dr = \int_a^b F(x(t),y(t), z(t)) \cdot \textbf{r}'(t) \; dt. \]
Example \(\PageIndex{2}\): Work
Find the work done by the vector field
\[\vec{F}(x,y,z) = x \hat{\textbf{i}} + 3xy \hat{\textbf{j}} - (x + z) \hat{\textbf{k}} \nonumber\]
on a particle moving along the line segment that goes from \((1,4,2)\) to \((0,5,1)\).
Solution
We first have to parameterize the curve. We have
\[\textbf{r}(t) = \langle1,4,2\rangle + [\langle0,5,1\rangle - \langle1,4,2\rangle ]t = \langle1-t,4+t, 2-t\rangle \nonumber\]
and
\[ \textbf{r}'(t) = -\hat{\textbf{i}} + \hat{\textbf{j}} - \hat{\textbf{k}}. \nonumber \]
Taking the dot product, we get
\[ \begin{align*} F \cdot \textbf{r}'(t) &= -x + 3xy + x + z \\ &= 3xy + z \\ &= 3(1-t)(4+t) + (2-t) \\ &= -3t^2 - 10t +14. \end{align*} \]
Now we just integrate
\[\int_0^1 (-3t^2 -10t +14)\; dt = \big[-t^3 - 5t^2 + 14t \big]_0^1 = 8. \nonumber \]
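The same work integral can be checked numerically. The sketch below uses our own helper name (`work_along_segment`, not from the text) and the fact that \(\vec{r}'(t)\) is constant along a line segment:

```python
def work_along_segment(F, p0, p1, n=20000):
    """Approximate W = ∫_C F · dr along the segment r(t) = p0 + t(p1 - p0),
    0 <= t <= 1, with the midpoint rule; r'(t) is the constant p1 - p0."""
    rp = [b - a for a, b in zip(p0, p1)]
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y, z = (a + t * d for a, d in zip(p0, rp))
        Fx, Fy, Fz = F(x, y, z)
        total += (Fx * rp[0] + Fy * rp[1] + Fz * rp[2]) * h
    return total

# Example 2: F = (x, 3xy, -(x+z)) along the segment from (1,4,2) to (0,5,1).
W = work_along_segment(lambda x, y, z: (x, 3 * x * y, -(x + z)),
                       (1, 4, 2), (0, 5, 1))
print(round(W, 6))  # ≈ 8.0
```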
Notice that the work done by a force field on an object moving along a curve depends on the direction in which the object traverses it. In fact, the opposite direction produces the negative of the work done in the original direction. This is clear from the fact that everything is the same except the order in which we write \(a\) and \(b\).
Line Integrals in Differential Form
We can rewrite \(\textbf{r}'(t) \; dt \) as
\[ \dfrac{d\textbf{r}}{dt} dt = \left(\dfrac{dx}{dt} \hat{\textbf{i}} +\dfrac{dy}{dt} \hat{\textbf{j}} +\dfrac{dz}{dt} \hat{\textbf{k}} \right) dt\]
\[= dx \hat{\textbf{i}} + dy \hat{\textbf{j}} + dz \hat{\textbf{k}}. \]
So that if
\[\vec{ F} = M \hat{\textbf{i}} + N \hat{\textbf{j}} + P \hat{\textbf{k}} \]
then
\[F \cdot \textbf{r}'(t) \; dt = M \; dx + N \; dy + P \; dz. \]
This is called the differential form of the line integral.
Example \(\PageIndex{3}\)
Find
\[ \int_C y\, dx + z\, dy \nonumber \]
where \(C\) is the part of the helix
\[ \textbf{r}(t) = \sin t \, \hat{\textbf{i}} + \cos t\, \hat{\textbf{j}} + t \, \hat{\textbf{k}} \nonumber \]
for \( 0 \leq t \leq 2\pi \).
Solution
We have
\[\textbf{r}'(t) = \cos t \hat{\textbf{i}} - \sin t \hat{\textbf{j}} + \hat{\textbf{k}} \nonumber \]
so that
\[y \; dx + z \; dy = (\cos^2 t - t\sin t )\, dt. \nonumber \]
This leads us to the integral
\[ \int_0^{2\pi} \left( \cos^2 t - t\, \sin t\right) \, dt \nonumber\]
With a little bit of effort (using integration by parts), we evaluate this integral to get \( 3\pi \).
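A quick numeric cross-check of the integration by parts (a sketch using a midpoint-rule sum, stdlib only):

```python
import math

# Check ∫_0^{2π} (cos²t − t sin t) dt = 3π numerically (midpoint rule).
n = 100000
h = 2 * math.pi / n
numeric = sum((math.cos(t) ** 2 - t * math.sin(t)) * h
              for t in ((i + 0.5) * h for i in range(n)))
exact = 3 * math.pi  # ∫cos²t dt = π over a full period; ∫t sin t dt = −2π by parts
print(numeric, exact)
```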
Example \(\PageIndex{4}\)
Integrate \( f(x,y,z)= -\sqrt{x^2+y^2} \) over the curve \( \textbf{r}(t)=(a\: \cos(t))\,\hat{\textbf{j}}+(a\, \sin(t))\,\hat{\textbf{k}} \) with \( 0\leq t \leq 2\pi \).
Solution
First we separate the equation for the curve into parametric equations
\[x=0\; \; \; y=a\: \cos (t)\; \; \; z=a\: \sin (t). \nonumber\]
Next we find \(ds\) (Note: if dealing with 3 variables we can take the arc length the same way as with two variables)
\[ds=\sqrt {\left ( \dfrac{dx}{dt} \right )^2+\left ( \dfrac{dy}{dt} \right )^2+\left ( \dfrac{dz}{dt} \right )^2}\,dt \nonumber \]
\[=\sqrt {\left ( 0 \right )^2+\left ( -a\: \sin(t) \right )^2+\left ( a\: \cos(t) \right )^2}\,dt \nonumber\]
\[ds=a\,dt. \nonumber\]
Then we substitute our parametric equations into \(f(x,y,z)\) to get the function into terms of \(t\)
\[f(x,y,z)=-\sqrt{x^2+y^2}\: \rightarrow\: -\sqrt{(0)^2+(a\: \sin (t))^2}\: =\: -\left | a\: \sin(t) \right | \nonumber \]
Note that from \( 0 \) to \( \pi \) this equals \( -a\: \sin(t) \), while from \( \pi \) to \( 2\pi \) it equals \( a\: \sin(t) \) (taking \( a>0 \)).
Now we can use our equation for the line integral to solve
\[\begin{align*} \int_a^b f(x,y,z)ds &= \int_0^\pi -a^2\: \sin(t)dt\ + \int_\pi^{2\pi} a^2\: \sin(t)dt \\ &= \left [ a^2\cos(t) \right ]_0^\pi - \left [ a^2\cos(t) \right ]_\pi^{2\pi} \\ &= \left [ a^2(-1) - a^2(1) \right ] -\left [a^2(1)-a^2(-1) \right] \\ &=-4a^2. \end{align*}\]
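The sign split at \(t=\pi\) is exactly what an absolute value encodes, which makes this easy to verify numerically (a sketch; the helper name is ours):

```python
import math

def curtain_integral(a, n=200000):
    """Midpoint approximation of ∫_C f ds = ∫_0^{2π} −|a sin t| · a dt,
    where ds = a dt; the absolute value encodes the sign split at t = π."""
    h = 2 * math.pi / n
    return sum(-abs(a * math.sin((i + 0.5) * h)) * a * h for i in range(n))

a = 3.0
val = curtain_integral(a)
print(val)  # ≈ −4a² = −36
```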
Example \(\PageIndex{5}\)
Integrate over the curve
\[f(x,y)=\dfrac{x^3}{y},\;\;\; \text{along the curve } y=\dfrac{x^2}{2}, \;\;\;0\leq x\leq 2. \nonumber\]
Solution
Since everything is already expressed in terms of \(x\), there is no need to introduce a parameter \(t\); we can use \(x\) itself as the parameter. The next step is to find \(ds\) in terms of \(x\).
\[x=x, \;\;\; y=\dfrac{x^2}{2} \nonumber\]
\[\dfrac{dx}{dx}=1 \; \; \; \dfrac{dy}{dx}=x \nonumber\]
\[ds=\sqrt {\left ( \dfrac{dx}{dx} \right )^2+\left ( \dfrac{dy}{dx} \right )^2}\,dx \nonumber\]
\[ds=\sqrt{1+x^2}\,dx \nonumber\]
Next we convert the function into a function of \(x\) by substituting in \(y\)
\[ f(x,y)=\dfrac{x^3}{y} \; \rightarrow \; f(x)=\dfrac{x^3}{\dfrac{x^2}{2}} \; \rightarrow \; f(x)= 2x. \nonumber\]
Now that we have all the individual parts, the next step is to put it into the equation
\[\int_0^2 2x(\sqrt{1+x^2})dx \nonumber\]
We can evaluate this using the substitution
\[u=x^2+1, \: \: \: du=2x\;dx \nonumber\]
\[\begin{align*} \int_{0^2+1}^{2^2+1} \sqrt{u}\, du &= \left [\dfrac{2}{3} u^{3/2} \right ]_1^5 \\ &=\dfrac{2}{3} (5\sqrt{5} - 1). \end{align*}\]
Example \(\PageIndex{6}\)
Find the area of one side of the "wall" standing orthogonally on the curve \(2x+3y =6\;,0\leq\;x\;\leq 6 \) and beneath the curve on the surface \(f(x,y) = 4+3x+2y.\)
Solution
First, convert \(2x+3y=6\) into parametric form:
\[\text{let}\; x=t \;\;\text{and}\;\; y=\dfrac{6-2x}{3} \:= 2-\dfrac{2t}{3}. \nonumber\]
Next, take the rate of change of the arc length (\(ds\)):
\[\dfrac{dx}{dt}=1 \;\;\;\dfrac{dy}{dt}=\dfrac{2}{3} \nonumber\]
\[ ds=\sqrt{\left (\dfrac{dx}{dt} \right )^2+\left (\dfrac{dy}{dt} \right )^2}dt=\sqrt{1^2+\left (\dfrac{2}{3} \right )^2}dt=\sqrt{13/9} \; dt=\dfrac{\sqrt{13}}{3}dt. \nonumber\]
Solve \(f(x,y)\) in terms of \(t\):
\[f(x,y)=4+3x+2y\;\;\; f(x(t),y(t))=4+3t+2\left(\dfrac{6-2t}{3}\right).\nonumber\]
Then plug all this information into the equation
\[\begin{align*} \int_a^b f(x(t),y(t))\sqrt {\left ( \dfrac{dx}{dt} \right )^2+ \left ( \dfrac{dy}{dt} \right )^2}\,dt &= \int_0^6 \left(4+3t+2\left (\dfrac{6-2t}{3}\right )\right ) \dfrac{\sqrt{13}}{3}\,dt \\ &= \dfrac{\sqrt{13}}{3}\int_0^6 4+3t+4-\dfrac{4}{3}t \; dt \\ &= \dfrac{\sqrt{13}}{3}\int_0^6 8+\dfrac{5}{3}t \; dt \\ &= \dfrac{\sqrt{13}}{3}\left [8t+\dfrac{5}{6}t^2\right]_0^6 \\ & =\dfrac{78\sqrt{13}}{3} \\ \text {Area}&=26\sqrt{13} . \end{align*} \]
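Since the integrand is linear in \(t\), a midpoint-rule sum reproduces the wall area essentially exactly (a quick sketch):

```python
import math

# Wall area: ∫_0^6 f(x(t), y(t)) |r'(t)| dt with x = t, y = 2 − 2t/3,
# and constant speed |r'(t)| = sqrt(1 + (2/3)²) = sqrt(13)/3.
n = 100000
h = 6.0 / n
speed = math.sqrt(1 + (2 / 3) ** 2)
area = sum((4 + 3 * t + 2 * (2 - 2 * t / 3)) * speed * h
           for t in ((i + 0.5) * h for i in range(n)))
print(area, 26 * math.sqrt(13))
```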
Contributors: Danny Nguyen (UCD), Michael Rea (UCD), Larry Green (Lake Tahoe Community College)
Integrated by Justin Marshall.
|
SolidsWW Flash Applet Sample Problem 2
Revision as of 10:15, 10 August 2011 Flash Applets embedded in WeBWorK questions solidsWW Example Sample Problem 2 with solidsWW.swf embedded
A standard WeBWorK PG file with an embedded applet has six sections:
1. A tagging and description section, that describes the problem for future users and authors,
2. An initialization section, that loads required macros for the problem,
3. A problem set-up section that sets variables specific to the problem,
4. An Applet link section that inserts the applet and configures it (this section is not present in WeBWorK problems without an embedded applet),
5. A text section, that gives the text that is shown to the student, and
6. An answer and solution section, that specifies how the answer(s) to the problem is (are) marked for correctness, and gives a solution that may be shown to the student after the problem set is complete.
The sample file attached to this page shows this; below the file is shown to the left, with a second column on its right that explains the different parts of the problem that are indicated above. A screenshot of the applet embedded in this WeBWorK problem is shown below:
There are other example problems using this applet:
- solidsWW Flash Applet Sample Problem 1
- solidsWW Flash Applet Sample Problem 3

And other problems using applets:
- Derivative Graph Matching Flash Applet Sample Problem
- USub Applet Sample Problem
- trigwidget Applet Sample Problem
- solidsWW Flash Applet Sample Problem 1
- GraphLimit Flash Applet Sample Problem 2

Other useful links:
- Flash Applets Tutorial
- Things to consider in developing WeBWorK problems with embedded Flash applets
PG problem file Explanation

##DESCRIPTION
## Solids of Revolution
##ENDDESCRIPTION
##KEYWORDS('Solids of Revolution')
## DBsubject('Calculus')
## DBchapter('Applications of Integration')
## DBsection('Solids of Revolution')
## Date('7/31/2011')
## Author('Barbara Margolius')
## Institution('Cleveland State University')
## TitleText1('')
## EditionText1('2011')
## AuthorText1('')
## Section1('')
## Problem1('')
##########################################
# This work is supported in part by the
# National Science Foundation
# under the grant DUE-0941388.
##########################################
This is the tagging and description section of the problem file.
The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code.
All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')).
DOCUMENT();
loadMacros(
"PGstandard.pl",
"AppletObjects.pl",
"MathObjects.pl",
);
This is the initialization section of the problem.
The
TEXT(beginproblem());
$showPartialCorrectAnswers = 1;
Context("Numeric");
$a = random(2,10,1);
$xy = 'x';
$func1 = "$a*sin(pi*x/8)";
$func2 = '2';
$xmax = Compute("8");
$shapeType = 'circle';
$correctAnswer = Compute("128*$a");
This is the problem set-up section of the problem.
The solidsWW.swf applet will accept a piecewise defined function either in terms of x or in terms of y. We set
#########################################
# How to use the solidWW applet.
# Purpose: The purpose of this applet
#   is to help with visualization of
#   solids
# Use of applet: The applet state
#   consists of the following fields:
#   xmax - the maximum x-value.
#   ymax is 6/5ths of xmax. the minima
#   are both zero.
#   captiontxt - the initial text in
#   the info box in the applet
#   shapeType - circle, ellipse,
#   poly, rectangle
#   piece: consisting of func and cut
#   this is a function defined piecewise.
#   func is a string for the function
#   and cut is the right endpoint
#   of the interval over which it is
#   defined
#   there can be any number of pieces
#
#########################################
# What does the applet do?
# The applet draws three graphs:
#   a solid in 3d that the student can
#   rotate with the mouse
#   the cross-section of the solid
#   (you'll probably want this to
#   be a circle)
#   the radius of the solid which
#   varies with the height
#########################################
This is the Applet link section of the problem.
Those portions of the code that begin the line with # are comments.
###################################
# Create link to applet
###################################
$appletName = "solidsWW";
$applet = FlashApplet(
codebase => findAppletCodebase("$appletName.swf"),
appletName => $appletName,
appletId => $appletName,
setStateAlias => 'setXML',
getStateAlias => 'getXML',
setConfigAlias => 'setConfig',
maxInitializationAttempts => 10,
#answerBoxAlias => 'answerBox',
height => '550',
width => '595',
bgcolor => '#e8e8e8',
debugMode => 0,
submitActionScript => ''
);
You must include the section that follows
###################################
# Configure applet
###################################
$applet->configuration(qq{<xml><plot>
<xy>$xy</xy>
<captiontxt>'Compute the volume of the figure shown.'</captiontxt>
<shape shapeType='$shapeType' sides='3' ratio='1.5'/>
<xmax>$xmax</xmax>
<theColor>0xff6699</theColor>
<profile>
<piece func='$func1' cut='8'/>
</profile>
</plot></xml>});
$applet->initialState(qq{<xml><plot>
<xy>$xy</xy>
<captiontxt>'Compute the volume of the figure shown.'</captiontxt>
<shape shapeType='$shapeType' sides='3' ratio='1.5'/>
<xmax>$xmax</xmax>
<theColor>0xff6699</theColor>
<profile>
<piece func='$func1' cut='8'/>
</profile>
</plot></xml>});
TEXT( MODES(TeX=>'object code',
HTML=>$applet->insertAll(
debug=>0,
includeAnswerBox=>0,
)));
The lines
The configuration of the applet is done in xml. The argument of the function is set to the value held in the variable
The code
Answer submission and checking is done within WeBWorK. The applet is intended to aid with visualization and is not used to evaluate the student submission.
TEXT(MODES(TeX=>"", HTML=><<'END_TEXT'));
<script>
if (navigator.appVersion.indexOf("MSIE") > 0) {
document.write("<div width='3in' align='center' style='background:yellow'>
You seem to be using Internet Explorer. <br/>It is recommended that another
browser be used to view this page.</div>");
}
</script>
END_TEXT
The text between the
BEGIN_TEXT
$BR
$BR
Find the volume of the solid of revolution formed by rotating the curve
\[y=$a\sin\left(\frac{\pi x}{8}\right)\]
for \(x=0\) to \(8\) about the \(y\)-axis.
\{ans_rule(35) \}
$BR
END_TEXT
Context()->normalStrings;
This is the text section of the problem.
######################################
#
# Answers
#
## answer evaluators
ANS( $correctAnswer->cmp() );
ENDDOCUMENT();
This is the answer and solution section of the problem.
The
License
The Flash applets developed under DUE-0941388 are protected under the following license: Creative Commons Attribution-NonCommercial 3.0 Unported License.
|
I am building a model which predicts angles as output. What are the different kinds of outputs that can be used to predict angles?
For example,
- output the angle in radians
  - the cyclic nature of the angles is not captured
  - the output might be outside $\left[-\pi, \pi \right)$
- output the sine and the cosine of the angle
  - the outputs might not satisfy $\sin^2 \theta + \cos^2 \theta = 1$
What are the pros and cons of different methods?
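For the sine/cosine option, one mitigating detail is worth noting: `atan2` recovers the angle from a (sin, cos) pair and ignores any common positive scale, so mild violations of $\sin^2\theta + \cos^2\theta = 1$ are tolerated at decoding time. A minimal sketch (function names are ours, purely illustrative):

```python
import math

def encode_angle(theta):
    """Represent an angle as (sin θ, cos θ): continuous across the wrap at ±π."""
    return math.sin(theta), math.cos(theta)

def decode_angle(s, c):
    """Recover an angle in (−π, π] from a (possibly unnormalized) prediction.
    atan2 only uses the direction of (c, s), so the pair is implicitly
    projected back onto the unit circle."""
    return math.atan2(s, c)

theta = 3.0
s, c = encode_angle(theta)
# Even if the raw model outputs drift off the unit circle by a common
# positive factor, the decoded angle is unchanged:
recovered = decode_angle(1.7 * s, 1.7 * c)
print(recovered)  # ≈ 3.0
```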
|
"La Madre Terra" by Pietro Cascella, made for ICRANet. You can see Einstein's equation, which he translated into the metaphor "marble = wood". My guess is that it symbolizes Einstein's idea that everything (matter, life, not just the earth) emerges from the perfect geometry of spacetime. I took this photo at the Marco Besso Foundation exhibition in Rome, during the XIV-th Marcel Grossman conference.
Wednesday, November 25, 2015 Tuesday, October 20, 2015
My paper Quantum Measurement and Initial Conditions, recently published in International Journal of Theoretical Physics:
The arXiv link. Abstract: Quantum measurement finds the observed system in a collapsed state, rather than in the state predicted by the Schrödinger equation. Yet there is a relatively widespread opinion that the wavefunction collapse can be explained by unitary evolution (for instance in the decoherence approach, if we take into account the environment). In this article a mathematical result is proven which severely restricts the initial conditions for which measurements have definite outcomes, if pure unitary evolution is assumed. This no-go theorem remains true even if we take the environment into account. The result does not forbid a unitary description of the measurement process; it only shows that such a description is possible only for very restricted initial conditions. The existence of such restrictions of the initial conditions can be understood in the four-dimensional block universe perspective, as a requirement of global self-consistency of the solutions of the Schrödinger equation.

Thursday, June 11, 2015
The results of this year's FQXi essay contest are out.
The theme for 2015 was "Trick or Truth: the Mysterious Connection Between Physics and Mathematics".
This is the list of the winning essays:
Sylvia Wenmackers • Marc Séguin • Matthew Saul Leifer • Cristinel Stoica • Tim Maudlin • Lee Smolin • Ken Wharton • Derek K Wise • Tommaso Bolognesi • Alexey Burov, Lev Burov • Sophia Magnusdottir • Noson S. Yanofsky • Nicolas Fillion • David Garfinkle • Christine Cordula Dantas • Philip Gibbs • Ian Durham • Anshu Gupta Mujumdar, Tejinder Singh • Sara Imari Walker
Friday, May 8, 2015
Here are the top 5 essays from the 40 finalists of this year's FQXi essay contest, based on the community ratings.
Unofficially, since FQXi hasn't yet announced which of the more than 200 essays are the 40 finalists, although the announcement had been expected since April 22. My essay is in fourth place.
The finalists will be judged by a jury, who will decide the awards until June 6, 2015.
Tuesday, April 21, 2015 Monday, March 16, 2015

The Monty Hall problem
The Monty Hall problem is inspired by an American television game show. There are three doors, and behind one of them, the host of the show, Monty, hides a car. Each of the other two doors hides a goat.
The contestant is asked to pick a door, so that if she finds the car, she wins the game (and the car). Since there are three doors, chances are $1/3$ that she picked the door behind which the car is hidden. But Monty doesn't open that door yet; instead, he opens one of the remaining doors, revealing a goat. He then asks the contestant either to keep her original choice, or to switch to the other unopened door. The problem is: what should the contestant do?
The first instinct of anybody may be to think that, since there are only two remaining doors, it doesn't matter whether you switch or not, because the chances are $1/2$ either way. However, Marilyn vos Savant explained that if the contestant switches doors, the chances are $2/3$, while if she doesn't switch, the chances are $1/3$. This is counterintuitive, and the legend says that not even Paul Erdős understood it. You can find some solutions of this puzzle on Wikipedia.
An equivalent puzzle
I will present another, simpler puzzle, and show that it is equivalent to the Monty Hall problem.
Consider again three doors, one hiding a car. The contestant is asked to pick either one of the three doors, or two of them. What is the best choice?
Obviously, the contestant should choose two doors rather than one: if she thinks the car is behind door number three, also choosing door number one will simply double her chances to win.
But how is this related to the Monty Hall problem? Well, it is, because when you play the Monty Hall game you can secretly pick two doors, and just tell Monty you picked the remaining one. When Monty asks if you want to switch, you switch to your two secretly chosen doors, and since one of them is already open, you take the remaining closed one. This means that choosing a door and switching is equivalent to choosing the other two doors.
So the Monty Hall problem is actually equivalent to having to choose one or two doors. Not switching is equivalent to choosing one door, and switching is equivalent to choosing two doors. So switching gives indeed probability $2/3$.
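The $2/3$ versus $1/3$ claim is also easy to confirm with a quick simulation (a sketch, seeded for reproducibility):

```python
import random

def monty_hall_trial(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a goat door that is not the contestant's pick.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
n = 100_000
wins_switch = sum(monty_hall_trial(True, rng) for _ in range(n)) / n
wins_stay = sum(monty_hall_trial(False, rng) for _ in range(n)) / n
print(wins_switch, wins_stay)  # ≈ 0.667 and ≈ 0.333
```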
Saturday, March 14, 2015
Bertrand Russell said that there are no round squares. But there are. Here are two solutions.
A circle-square
This is a square that is a circle:
To make it, first make a paper circle and a paper square, with equal perimeters:
Fold them a bit:
Then paste their edges together:
The common boundary forms a square that is a circle. It is a square, because in the blue surface it has right angles and equal straight edges. It is a circle, because in the red surface its points are at equal distance from a point. In fact, its points are at equal distance from the center even in space, because the red surface is ruled, and all the lines pass through the same point. So the common boundary is also a line on the surface of a sphere.
Round squares in non-Euclidean geometry
Consider for example the geometry on a sphere. On a sphere, polygons are made of the straightest lines on the sphere, which are arcs of the great circles. So, there are squares on a sphere.
Image from Wikipedia
So, is it a circle? Is it a square? It is a circle and a square!
The problem

In how many ways can you arrange $p$ coins in a sequence of $q$ towers? (It doesn't matter whether the coins can be flipped or rotated.)
For example, here is one way to arrange $12$ coins into a sequence of $5$ towers. The problem asks to count all these ways.
Motivation
I arrived at this problem by being inspired by my yesterday's post, A combinatorial problem with balls and boxes. The problem was to count the number of ways you can place $k$ balls in $n$ boxes. The answer is
$n-1+k$ choose $k$, which is $\displaystyle{\frac{(n-1+k)!}{(n-1)!k!}}$.
So I asked myself, since the result is of the form
"$p$ choose $q$", couldn't I modify the problem so that the result will be the sum over $q$, which is known to be $2^p$? But to do this, boxes and balls should be replaced with objects of the same nature, and playing the role of a box or a ball to be determined by the configuration.
I will tell you a solution by reducing to the problem with boxes and balls, and then a simpler, direct solution.
Solution based on the balls and boxes problem

Let's identify two distinct roles in a sequence of towers of coins. We color each coin that starts a tower blue, and the others red, as below.
We can now consider that the blue coins are boxes and the red coins are balls, and reduce to the previous problem. The number of possible ways to put $k$ balls in $n$ boxes is $n-1+k$ choose $k$, which is also $n-1+k$ choose $n-1$. In our case, the number of boxes equals the number of towers, so it is $n$, and the number of balls is $p-n$. So, the number of possible ways to arrange $p$ coins in $n$ towers is $n-1+(p-n)=p-1$ choose $n-1$. Since we can have any number of towers, from $n=1$ to $n=p$, we have to sum accordingly, and the total number is $\sum_{n=1}^p \binom{p-1}{n-1}=\sum_{q=0}^{p-1} \binom{p-1}{q}=2^{p-1}.$
This solution is based on the problem of balls in boxes, which inspired this problem in the first place. But since we got $2^{p-1}$, shouldn't there be a simpler, direct way to count all possible configurations?

Simpler solution

Rather than coloring the coins as previously, let's color the even towers red, and the odd towers blue.
We see now that any sequence of colors of the $p$ coins starting with blue corresponds to a way to arrange them in towers, and conversely. For example, the above arrangement corresponds to the sequence BBRRRRBBBRBB. The first coin has to be blue, but each of the other $p-1$ can be chosen in two ways. Hence, the number of all such sequences is $2^{p-1}$.
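The count can also be verified by brute force for small $p$: enumerating all compositions of $p$ (ordered sequences of positive tower heights) gives exactly $2^{p-1}$. A sketch:

```python
def compositions(p):
    """Yield all ordered sequences of positive integers summing to p
    (i.e., all ways to arrange p coins in a sequence of towers)."""
    if p == 0:
        yield ()
        return
    for first in range(1, p + 1):
        for rest in compositions(p - first):
            yield (first,) + rest

counts = [sum(1 for _ in compositions(p)) for p in range(1, 9)]
print(counts)  # [1, 2, 4, 8, 16, 32, 64, 128], i.e. 2^(p-1)
```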
Friday, March 13, 2015

The problem
Combinatorial problems can be simple to state, and difficult to solve. But this one has a surprisingly simple solution, if you reframe it a bit.
The problem is:
in how many ways can you place $k$ identical balls in $n$ distinct boxes? We assume that each box is large enough to hold all the balls, so we also have to count the cases with empty boxes. All the balls must be placed in boxes.
Yesterday, a friend and fellow physicist told me the problem; he needed to solve it in order to count some quantum states, but this is not relevant here. He had solved it before, but forgot how. He found an ingenious way to see what happens if we add a new box or a ball. This would lead to some recurrence formula, which involved summing both over the number of balls and the number of boxes. So he asked me to help him with these calculations. This is a problem of induction, which anyone should be able to solve in high school, but I considered that all these calculations were too tedious for me, especially since I wanted to have lunch. So I replied that I would rather find a direct way to the solution, by framing it differently.
Before reading the solution, I would like to ask you to solve it yourself.
The solution

We can reframe the problem like this. We can arrange the boxes one next to another, like the carts of a train. Then we get something like this:
Now we can invent a notation for each configuration: we denote every space between boxes with a square, and every ball with a circle. Here's what we get:
The sequence starts with a separator, because the first box is empty. Then there are four balls in the second box. There are two successive separators because the third box is empty. Then there's a box with two balls, and the last contains only one ball.
Each configuration thus corresponds to a sequence of $n-1$ squares and $k$ circles, and choosing where the $k$ circles go among the $n-1+k$ symbols gives the answer: $n-1+k$ choose $k$, which is $\displaystyle{\frac{(n-1+k)!}{(n-1)!k!}}$.
You may try to solve it by double induction, and at the end the result may look more complicated, unless you are able to apply some formulas to bring it in this simple form.
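The formula is easy to cross-check by brute force for small $n$ and $k$ (Python's `math.comb` computes the binomial coefficient):

```python
from itertools import product
from math import comb

def count_placements(n, k):
    """Brute-force count of ways to put k identical balls into n distinct
    boxes: enumerate all per-box counts and keep those summing to k."""
    return sum(1 for counts in product(range(k + 1), repeat=n)
               if sum(counts) == k)

for n in range(1, 5):
    for k in range(6):
        assert count_placements(n, k) == comb(n - 1 + k, k)
print("stars-and-bars formula confirmed for n < 5, k < 6")
```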
|
There are two kinds of vector multiplications-
Dot (Scalar) Product:
The dot product of two vectors gives a scalar; that means only a magnitude is left, no direction. Mathematically, it is equal to the product of the magnitudes of the two vectors times the cosine of the angle between them, i.e. $$\vec{v} \cdot \vec{u}=|\vec{v}||\vec{u}|\cos\theta$$
The geometric interpretation: The dot product of $\vec{a}$ with unit vector $\hat{u}$, denoted $\vec{a}\cdot\hat{u}$, is defined to be the projection of $\vec{a}$ in the direction of $\hat{u}$, or the amount that $\vec{a}$ is pointing in the same direction as unit vector $\hat{u}$. Let's assume for a moment that $\vec{a}$ and $\hat{u}$ are pointing in similar directions. Then, you can imagine $\vec{a}\cdot\hat{u}$ as the length of the shadow of $\vec{a}$ onto $\hat{u}$ if their tails were together and the sun was shining from a direction perpendicular to $\hat{u}$. By forming a right triangle with $\vec{a}$ and this shadow, you can use geometry to calculate that $$\vec{a}\cdot\hat{u}=|\vec{a}|\cos\theta$$
Cross (Vector) Product:
The cross product of two vectors gives a vector; that means the answer has both a magnitude and a direction. The magnitude of the resultant vector is given by the product of the magnitudes of the two vectors times the sine of the angle between them. The cross product is always perpendicular to both vectors, and has magnitude zero when the vectors are parallel and maximum magnitude when they are perpendicular. $$\vec{v} \times \vec{u}=|\vec{v}||\vec{u}|\sin\theta \,\hat{r}$$ where $\hat{r}$ is the unit vector in the direction of the resultant vector, whose orientation can be found using the right-hand thumb rule.
The geometrical interpretation: The magnitude of the cross product can be interpreted as the positive area of the parallelogram having $\vec{v}$ and $\vec{u}$ as sides.
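Both identities are straightforward to verify numerically; a small stdlib-only sketch:

```python
import math

def dot(v, u):
    return sum(a * b for a, b in zip(v, u))

def cross(v, u):
    return (v[1] * u[2] - v[2] * u[1],
            v[2] * u[0] - v[0] * u[2],
            v[0] * u[1] - v[1] * u[0])

def norm(v):
    return math.sqrt(dot(v, v))

v, u = (1.0, 2.0, 2.0), (3.0, 0.0, 4.0)
theta = math.acos(dot(v, u) / (norm(v) * norm(u)))  # angle between v and u

# |v||u| sin θ equals the magnitude of the cross product:
print(norm(cross(v, u)), norm(v) * norm(u) * math.sin(theta))
```

The last two perpendicularity checks below correspond to the statement that $\vec{v}\times\vec{u}$ is orthogonal to both factors.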
|
In some book about continuum mechanics I read that from the principle of virtual work follows the balance of rotational momentum when $\delta \boldsymbol{r} = \boldsymbol{\delta \varphi} \times \boldsymbol{r}, \; \boldsymbol{\delta \varphi} = \boldsymbol{\mathsf{const}}$ ($\boldsymbol{r}$ is the location vector, $\delta \boldsymbol{r}$ is its variation, $\boldsymbol{\delta \varphi}$ is not a variation, just denoted like one for some reason, being small enough for infinitesimal $\delta \boldsymbol{r}$). Then it is written, without any explanation, that $\boldsymbol{\nabla} \delta \boldsymbol{r} = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$. I know that $\boldsymbol{E}$ is the bivalent "metric unit identity" tensor (the one which is neutral to the dot product operation), and that $\boldsymbol{\nabla} \boldsymbol{r} = \boldsymbol{E}$. And that $\boldsymbol{a} \times \boldsymbol{E} = \boldsymbol{E} \times \boldsymbol{a} \:\: \forall\boldsymbol{a}$, no minus here. To get a minus, transposing is needed: $\left( \boldsymbol{E} \times \boldsymbol{\delta \varphi} \right)^{\mathsf{T}} \! = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$. Thus I can't get why $\boldsymbol{\nabla} \delta \boldsymbol{r} = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$ has a minus sign.
For constant $\boldsymbol{\delta \varphi}$, $\boldsymbol{\nabla} \boldsymbol{\delta \varphi} = {^2\boldsymbol{0}}$ (bivalent zero tensor). Isn’t it true that $\boldsymbol{\nabla} \! \left( \boldsymbol{\delta \varphi} \times \boldsymbol{r} \right) = \boldsymbol{\delta \varphi} \times \boldsymbol{\nabla} \boldsymbol{r} = \boldsymbol{\delta \varphi} \times \boldsymbol{E} = \boldsymbol{E} \times \boldsymbol{\delta \varphi}$? Searching for how to get gradient of cross product of two vectors gives gradient of dot product, divergence ($\boldsymbol{\nabla} \cdot$) of cross product, and many other relations. But no gradient of cross product $\boldsymbol{\nabla} \! \left( \boldsymbol{a} \times \boldsymbol{b} \right) = \ldots$ Is it impossible or unknown how to find it? At least for the case when first vector is constant.
update
As “gradient” I mean tensor product with “nabla” $\boldsymbol{\nabla}$: $\operatorname{^{+1}grad} \boldsymbol{A} \equiv \boldsymbol{\nabla} \! \boldsymbol{A}$, here $\boldsymbol{A}$ may be tensor of any valence (and I don’t use “$\otimes$” or any other symbol for tensor product). Nabla (differential Hamilton’s operator) is $\boldsymbol{\nabla} \equiv (\sum_i)\, \boldsymbol{r}^i \partial_i$, $\:(\sum_i)\, \boldsymbol{r}^i \boldsymbol{r}_i = \boldsymbol{E} \,\Leftrightarrow\, \boldsymbol{r}^i \cdot \boldsymbol{r}_j = \delta^{i}_{j}$ (Kronecker’s delta), $\,\boldsymbol{r}_i \equiv \partial_i \boldsymbol{r}$ (basis vectors), $\,\partial_i \equiv \frac{\partial}{\partial q^i}$, $\:\boldsymbol{r}(q^i)$ is location vector, and $q^i$ $(i = 1, 2, 3)$ are coordinates.
|
Given $A_\infty$-spaces $X$ and $Y$, Boardman and Vogt defined an $A_\infty$-map from $X$ to $Y$ to be a map $f: X \to Y$ of underlying based spaces and an $A_\infty$-structure on the reduced mapping cylinder $$ M_f = Y \cup_{f\times 0} X\wedge (I_+) $$ which extends the given structures on $X$ and $Y$. These do not form a category because composition is only defined up to "contractible choice." However, they do form a weak Kan complex (or $\infty$-category); the $k$-simplices are formed by taking iterated mapping cylinders of a $k$-fold composition and then taking the $A_\infty$-structures on that which restrict to $A_\infty$-structures on all faces.
So we obtain a space of $A_\infty$-maps from $X$ to $Y$.
(Note: an $A_\infty$-map is not assumed to strictly commute with the operad actions on $X$ and $Y$ (when it does, it's called an $A_\infty$-homomorphism.)
We can define similar notions of "map" and "homomorphism" for $A_\infty$-ring spectra.
On the other hand, EKMM defined a category of structured associative ring spectra which is enriched over spectra.
Question: If $R$ and $S$ are EKMM type associative ring spectra which are both fibrant and cofibrant, how does the EKMM version $\hom(R,S)$ relate to the Boardman-Vogt type notions?
More precisely, is it weakly equivalent to the spectrum of $A_\infty$-homomorphisms in the Boardman-Vogt sense, or to the spectrum of $A_\infty$-maps in the Boardman-Vogt sense?
|
Hi,
I'd like to know which property proves the following simple result.
Let p be a prime greater than 3. r is a quadratic residue of p if there exists a such that [tex]a^2 \equiv r \pmod{p}[/tex]. Since p is prime, there are [tex]\frac{p-1}{2}[/tex] different residues (not counting 0). Now, if you sum them all, you find: [tex]S_p=\sum_{i=1}^{\frac{p-1}{2}} r_i \equiv 0 \pmod{p}[/tex]. I cannot find any explanation in my maths books (and remember, I'm just an amateur). If p is not prime, this property is false in general. Often [tex]d \mid p \rightarrow d \mid S_p[/tex], and sometimes the property is true.
Examples.
p=7 S7=7
p=10 S10=2*10
p=22 S22=9*11
p=33 S33=22*11
p=35 S35=7*35
p=43 S43=10*43
p=47 S47=9*47
p=48 S48=340
The PARI/gp program I use is:
for(p=5,50, S=0; for(i=1,(p-1)/2, S=S+(i^2%p)); print(p," ",S," ",S/p))
which uses the property that [tex]a^2 \equiv (p-a)^2 \pmod{p}[/tex].
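For anyone without PARI/gp handy, here is an equivalent sketch in Python (the function name is mine, not from the original program):

```python
# S_p is the sum of i^2 mod p for i = 1 .. (p-1)/2.  For prime p this
# runs over each nonzero quadratic residue exactly once, since
# a^2 ≡ (p-a)^2 (mod p) and the squares of 1..(p-1)/2 are distinct mod p.
def residue_sum(p):
    return sum((i * i) % p for i in range(1, (p - 1) // 2 + 1))

for p in range(5, 51):
    s = residue_sum(p)
    print(p, s, "divisible by p" if s % p == 0 else "")
```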
Thanks,
Tony
|
Program Arcade Games With Python And Pygame
Searching is an important and very common operation that computers do all the time. Searches are used every time someone does a ctrl-f for “find”, when a user uses “type-to” to quickly select an item, or when a web server pulls information about a customer to present a customized web page with the customer's order.
There are a lot of ways to search for data. Google has based an entire
multi-billion dollar company on this fact. This chapter introduces the two
simplest methods for searching, the
linear search and
the binary search.
Before discussing how to search we need to learn how to read
data from a file. Reading in a data set from a file is
way more
fun than typing it in by hand each time.
Let's say we need to create a program that will allow us to quickly
find the name of a super-villain. To start with, our program needs a database
of super-villains.
To get this data set, download and save this file:
http://ProgramArcadeGames.com/chapters/16_searching/super_villains.txt
These are random names generated by the nine.frenchboys.net website, although last I checked they no longer have a super-villain generator.
Save this file and remember which directory you saved it to.
In the same directory as
super_villains.txt,
create, save, and run the following python program:
file = open("super_villains.txt")

for line in file:
    print(line)
There is only one new command in this code: open. Because it is a built-in function, like print, no import is needed. Full details on this function can be found in the Python documentation, but at this point the documentation for that command is so technical it might not even be worth looking at.
The above program has two problems with it, but it provides a simple
example of reading in a file.
Line 1 opens a file and gets it ready to be read. The name of the file
is in between the quotes. The new variable
file is an
object that represents the file being read. Line 3 shows how a
normal
for loop may be used to read through a file line by
line. Think of
file as a list of lines, and the new variable
line will be set to each of those lines as the program runs
through the loop.
Try running the program.
One of the problems with it is that the text is printed double-spaced. The reason for this is that each line pulled out of the file and stored in the variable line includes the carriage return as part of the string.
Remember the carriage return and line feed introduced back in Chapter 1?
The second problem is that the file is opened, but not closed. This problem isn't as obvious as the double-spacing issue, but it is important. The Windows operating system can only open so many files at once. A file can normally only be opened by one program at a time. Leaving a file open will limit what other programs can do with the file and take up system resources. It is necessary to close the file to let Windows know the program is no longer working with that file. In this case it is not too important because once any program is done running, Windows will automatically close any files left open. But since it is a bad habit to program like that, let's update the code:
file = open("super_villains.txt")

for line in file:
    line = line.strip()
    print(line)

file.close()
The listing above works better. It has two new additions. On line 4 is a call to the strip method built into every string. This method returns a new string without the trailing spaces and carriage returns of the original string. It does not alter the original string but instead creates a new one. By itself, this line of code would not work:
line.strip()
If the programmer wants the original variable to reference the new string, she must assign it to the new returned string as shown on line 4.
The second addition is on line 7. This closes the file so that the operating system doesn't have to go around later and clean up open files after the program ends.
It is useful to read in the contents of a file to an array so that the program can do processing on it later. This can easily be done in python with the following code:
# Read in a file from disk and put it in an array.
file = open("super_villains.txt")
name_list = []
for line in file:
    line = line.strip()
    name_list.append(line)
file.close()
This combines the new pattern of how to read a file, along with the previously learned pattern of how to create an empty array and append to it as new data comes in, which was shown back in Chapter 7. To verify the file was read into the array correctly a programmer could print the length of the array:
print( "There were",len(name_list),"names in the file.")
Or the programmer could print the entire contents of the array:
for name in name_list: print(name)
Go ahead and make sure you can read in the file before continuing on to the different searches.
If a program has a set of data in an array, how can it go
about finding where a specific element is? This can be done one of two
ways. The first method is to use a
linear search. This
starts at the first element, and keeps comparing elements until
it finds the desired element (or runs out of elements.)
# --- Linear search
key = "Morgiana the Shrew"

i = 0
while i < len(name_list) and name_list[i] != key:
    i += 1

if i < len(name_list):
    print("The name is at position", i)
else:
    print("The name was not in the list.")
The linear search is rather simple. Line 4 sets up an increment variable
that will keep track of exactly where in the list the program needs
to check next. The first element that needs to be checked is zero, so
i is set to zero.
The next line is a bit more complex. The computer needs to keep looping until one of two things happens: it finds the element, or it runs out of elements. The first comparison checks that the current position being checked is still less than the length of the list. If so, we can keep looping. The second comparison checks whether the current element in the name list is equal to the name we are searching for.
This check to see if the program has run out of elements
must occur first. Otherwise the program will check against a non-existent
element which will cause an error.
Line 6 simply moves to the next element if the conditions to keep searching are met in line 5.
At the end of the loop, the program checks to see if the end of the
list was reached on line 8. Remember, a list of n elements is numbered
0 to n-1. Therefore if
i is equal to the length of the
list, the end has been reached. If it is less, we found the element.
Variations on the linear search can be used to create several common algorithms. For example, say we had a list of aliens. We might want to check this group of aliens to see if one of the aliens is green. Or are all the aliens green? Which aliens are green?
To begin with, we'd need to define our alien:
class Alien:
    """ Class that defines an alien """

    def __init__(self, color, weight):
        """ Constructor. Set color and weight """
        self.color = color
        self.weight = weight
Then we'd need to create a function to check and see if it has the property that we are looking for. In this case, is it green? We'll assume the color is a text string, and we'll convert it to upper case to eliminate case-sensitivity.
def has_property(my_alien):
    """ Check to see if an item has a property.
    In this case, is the alien green? """
    if my_alien.color.upper() == "GREEN":
        return True
    else:
        return False
Is at least one alien green? We can check. The basic algorithm behind this check:
def check_if_one_item_has_property_v1(my_list):
    """ Return true if at least one item has a property. """
    i = 0
    while i < len(my_list) and not has_property(my_list[i]):
        i += 1
    if i < len(my_list):
        # Found an item with the property
        return True
    else:
        # There is no item with the property
        return False
This could also be done with a
for loop. In this case, the loop
will exit early by using a
return once the item has been found. The code is
shorter, but not every programmer would prefer it. Some programmers feel that
loops should not be prematurely ended with a
return or
break statement.
It all comes down to personal preference, or the personal preference of the person footing the bill.
def check_if_one_item_has_property_v2(my_list):
    """ Return true if at least one item has a property.
    Works the same as v1, but with less code. """
    for item in my_list:
        if has_property(item):
            return True
    return False
Are all aliens green? This code is very similar to the prior example. Spot the difference and see if you can figure out the reason behind the change.
def check_if_all_items_have_property(my_list):
    """ Return true if ALL items have a property. """
    for item in my_list:
        if not has_property(item):
            return False
    return True
What if you wanted a list of aliens that are green? This is a combination of our prior code, and the code to append items to a list that we learned about back in Chapter 7.
def get_matching_items(my_list):
    """ Build a brand new list that holds all the items
    that match our property. """
    matching_list = []
    for item in my_list:
        if has_property(item):
            matching_list.append(item)
    return matching_list
How would you run all these in a test? The code above can be combined with this code to run:
alien_list = []
alien_list.append(Alien("Green", 42))
alien_list.append(Alien("Red", 40))
alien_list.append(Alien("Blue", 41))
alien_list.append(Alien("Purple", 40))

result = check_if_one_item_has_property_v1(alien_list)
print("Result of test check_if_one_item_has_property_v1:", result)

result = check_if_one_item_has_property_v2(alien_list)
print("Result of test check_if_one_item_has_property_v2:", result)

result = check_if_all_items_have_property(alien_list)
print("Result of test check_if_all_items_have_property:", result)

result = get_matching_items(alien_list)
print("Number of items returned from test get_matching_items:", len(result))
For a full working example see:
programarcadegames.com/python_examples/show_file.php?file=property_check_examples.py
These common algorithms can be used as part of a solution to a larger problem, such as find all the addresses in a list of customers that aren't valid.
A faster way to search a list is possible with the
binary search.
The process of a binary search can be described by using the classic number
guessing game “guess a number between 1 and 100” as an example. To
make it easier to understand the process, let's modify the game to be
“guess a number between 1 and 128.” The number range is inclusive, meaning
both 1 and 128 are possibilities.
If a person were to use the linear search as a method to guess the secret number, the game would be rather long and boring.
Guess a number 1 to 128: 1
Too low.
Guess a number 1 to 128: 2
Too low.
Guess a number 1 to 128: 3
Too low.
....
Guess a number 1 to 128: 93
Too low.
Guess a number 1 to 128: 94
Correct!
Most people will use a binary search to find the number. Here is an example of playing the game using a binary search:
Guess a number 1 to 128: 64
Too low.
Guess a number 1 to 128: 96
Too high.
Guess a number 1 to 128: 80
Too low.
Guess a number 1 to 128: 88
Too low.
Guess a number 1 to 128: 92
Too low.
Guess a number 1 to 128: 94
Correct!
Each time through the rounds of the number guessing game, the guesser is able to eliminate one half of the problem space by getting a “high” or “low” as a result of the guess.
In a binary search, it is necessary to track an upper and a lower bound of the list that the answer can be in. The computer or number-guessing human picks the midpoint of those elements. Revisiting the example:
A lower bound of 1, an upper bound of 128, midpoint of $\dfrac{1+128}{2} = 64.5$.

Guess a number 1 to 128: 64
Too low.

A lower bound of 65, an upper bound of 128, midpoint of $\dfrac{65+128}{2} = 96.5$.

Guess a number 1 to 128: 96
Too high.

A lower bound of 65, an upper bound of 95, midpoint of $\dfrac{65+95}{2} = 80$.

Guess a number 1 to 128: 80
Too low.

A lower bound of 81, an upper bound of 95, midpoint of $\dfrac{81+95}{2} = 88$.

Guess a number 1 to 128: 88
Too low.

A lower bound of 89, an upper bound of 95, midpoint of $\dfrac{89+95}{2} = 92$.

Guess a number 1 to 128: 92
Too low.

A lower bound of 93, an upper bound of 95, midpoint of $\dfrac{93+95}{2} = 94$.

Guess a number 1 to 128: 94
Correct!
A binary search requires significantly fewer guesses. Worst case, it can guess a number between 1 and 128 in 7 guesses. One more guess raises the limit to 256. 9 guesses can get a number between 1 and 512. With just 32 guesses, a person can get a number between 1 and 4.2 billion.
To figure out how large a list a certain number of guesses can handle, the formula works out to $n=2^{g}$, where $n$ is the size of the list and $g$ is the number of guesses. For example:
$2^7=128$ (7 guesses can handle 128 different numbers)
$2^8=256$
$2^9=512$
$2^{32}=4,294,967,296$
If you have the problem size, you can figure out the number of guesses using the log function; specifically, log base 2. If you don't specify a base, most people will assume you mean the natural log with a base of $e \approx 2.71828$, which is not what we want. For example, using log base 2 to find how many guesses:

$\log_2 128 = 7$
$\log_2 65{,}536 = 16$
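A quick check of these numbers with Python's math module (the snippet is my own illustration, not part of the original chapter):

```python
import math

# Guesses needed for a list of a given size: log base 2, rounded up
# because a fractional guess still costs a whole guess.
for size in [128, 256, 512, 65536, 4294967296]:
    print(size, math.ceil(math.log2(size)))
```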
Enough math! Where is the code? The code to do a binary search is more complex than a linear search:
# --- Binary search
key = "Morgiana the Shrew"
lower_bound = 0
upper_bound = len(name_list) - 1
found = False

# Loop until we find the item, or our upper/lower bounds meet
while lower_bound <= upper_bound and not found:

    # Find the middle position
    middle_pos = (lower_bound + upper_bound) // 2

    # Figure out if we:
    # move up the lower bound, or
    # move down the upper bound, or
    # we found what we are looking for
    if name_list[middle_pos] < key:
        lower_bound = middle_pos + 1
    elif name_list[middle_pos] > key:
        upper_bound = middle_pos - 1
    else:
        found = True

if found:
    print("The name is at position", middle_pos)
else:
    print("The name was not in the list.")
Since lists start at element zero, line 3 sets the lower bound to zero. Line 4 sets the upper bound to the length of the list minus one. So for a list of 100 elements the lower bound will be 0 and the upper bound 99.
The Boolean variable on line 5 will be used to let the while loop know that the element has been found.
Line 8 checks to see if the element has been found or if we've run out of elements. If we've run out of elements, the lower bound will have moved past the upper bound.
Line 11 finds the middle position. It is possible to get a middle position of something like 64.5. It isn't possible to look up position 64.5. (Although J.K. Rowling was rather clever in coming up with Platform $9\frac{3}{4}$, that doesn't work here.)
The best way of handling this is to use the
// operator
first introduced way back in Chapter 5.
This is similar to the
/ operator, but will only return integer results.
For example,
11 // 2 would give 5 as an answer, rather than 5.5.
Starting at line 17 the program checks to see if the guess is high, low, or
correct. If the guess is low, the lower bound is moved up to just past the guess.
If the guess is too high, the upper bound is moved just below the guess. If the
answer has been found,
found is set to
True ending the search.
With a list of 100 elements, a person can reasonably guess that on average, with the linear search, a program will have to check 50 of them before finding the element. With the binary search, on average about seven guesses will be needed. In an advanced algorithms course you can find the exact formula. For this course, just assume average and worst cases are the same.
English version by Paul Vincent Craven
Spanish version by Antonio Rodríguez Verdugo
Russian version by Vladimir Slav
Turkish version by Güray Yildirim
Portuguese version by Armando Marques Sobrinho and Tati Carvalho
Dutch version by Frank Waegeman
Hungarian version by Nagy Attila
Finnish version by Jouko Järvenpää
French version by Franco Rossi
Korean version by Kim Zeung-Il
Chinese version by Kai Lin
|
The problem is: prove the existence of a function $f: \mathbb{R}\times\mathbb{R} \rightarrow \mathbb{N}$ such that $f(x,y)=f(y,z)\implies x=y=z$. I was thinking about $f(x,y) = \begin{cases} 1 & \text{if } x=y\\ \textit{something} & \text{if } x\neq y \end{cases}$ but I guess we would need a bijection from $\mathbb{N}$ to $\mathbb{R}$ to do that, which doesn't exist because $|\mathbb{N}| < |\mathbb{R}|$. EDIT: let's make it clear: if we take some $c \neq g$, $f(c,c)$ can't be equal to $f(g,g)$.
There seems to be a misunderstanding: the original condition
$$f(x, y)=f(y, z)\implies x=y=z$$
does not imply the new (added in an edit) condition
$$f(x, x)=f(y, y)\implies x=y.$$
To see this, consider the following counterexample: let $f: \{1, 2\}\times\{1, 2\}\rightarrow \{1, 2, 3\}$ be given by
$f(1, 1)=f(2, 2)=1$,
$f(1,2)=2$, and
$f(2, 1)=3$.
This $f$ satisfies the original condition, but not the new condition.
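One can check this counterexample mechanically; here is a small Python verification (the dictionary encoding of $f$ is mine):

```python
from itertools import product

# f on {1,2} x {1,2}, as in the counterexample above.
f = {(1, 1): 1, (2, 2): 1, (1, 2): 2, (2, 1): 3}

# Original condition: f(x,y) = f(y,z) implies x = y = z.
original_holds = all(
    f[(x, y)] != f[(y, z)] or x == y == z
    for x, y, z in product([1, 2], repeat=3)
)

# New condition: f(x,x) = f(y,y) implies x = y.
new_holds = all(
    f[(x, x)] != f[(y, y)] or x == y
    for x, y in product([1, 2], repeat=2)
)

print(original_holds, new_holds)  # True False
```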
It's easy to show that there are no functions $\mathbb{R}^2\rightarrow \mathbb{N}$ satisfying the new condition. Perhaps surprisingly, if we restrict attention to the original condition,
there are such functions! Moreover, we don't even need something silly like the axiom of choice to build them! Construction. Let $\mathcal{A}=\{A_r: r\in\mathbb{R}\}$ be a family of sets of natural numbers other than $0$ with the property that $$r\not=s\implies A_r\setminus A_s\not=\emptyset.$$
Now define $f(r, s)$ as follows:
If $r\not=s$, then $f(r, s)$ is the least element of $A_r\setminus A_s$.
If $r=s$, then $f(r, s)$ is $0$.
Verification. Now suppose $f(x, y)=f(y, z)$ and it is not the case that $x=y=z$.
First, let's see that the numbers $x, y, z$ must be distinct. Suppose $x=y$. Then $f(x, y)=0$. But since $y\not=z$ (by assumption that "$x=y=z$" fails) then $f(y, z)\in A_y\not\ni0$, so $f(y, z)\not=f(x, y)$. Similarly, we can't have $y=z$.
So $x, y, z$ are distinct. But then $f(x, y)\in A_x\setminus A_y$, and $f(y, z)\in A_y\setminus A_z$. In particular, we have
$f(x, y)\not\in A_y$, but
$f(y, z)\in A_y$.
Oops!

Caveat. Of course I'm skipping a crucial step here: showing that such a family $\mathcal{A}$ actually exists. Since there was some disagreement over whether such families actually exist, let me give a concrete one here:
Fix your favorite bijection $F$ between $\mathbb{N}$ and the set of finite binary strings $2^{<\omega}$; separately, fix your favorite bijection $G$ between $\mathbb{R}$ and the set of infinite binary strings $2^\omega$. Now we view each real $r$ as an infinite binary sequence, and let $A_r$ be the set of natural numbers standing for initial segments of that sequence; more formally, we let $$A_r=\{F^{-1}(\sigma): \sigma\prec G(r)\}.$$
The resulting family is in fact an
almost disjoint family: the intersection $A_r\cap A_s$ for $r\not=s$ is always finite, but every $A_r$ is infinite!
|
Suppose a quantum system (non-interacting) at finite temperature ($\beta^{-1}$). I want to know how to compute the transition probability between two degrees of freedom ($u$ and $v$) at two different times.
The system starts ($t=0$) in a mixed state, described by $$ \hat \rho = \sum_l e^{-\lambda_l \beta} |\psi_l\rangle\langle \psi_l|/Z. $$ I projected the mixed state onto $|u\rangle$, applying the projector $$ \hat P_u = |u\rangle\langle u|. $$ Therefore at $t=0$, I have $$ \hat P_u \hat \rho \hat P_u. $$ Because I want the transition probability in the future, I used the evolution operator $$ \hat U(t_f) = \sum_n e^{-i\lambda_n t_f} |\psi_n\rangle\langle \psi_n| $$ to evolve the mixed state: $$ \hat U(t_f)^\dagger\hat P_u \hat \rho \hat P_u \hat U(t_f). $$ Then I projected the last operator onto $|v\rangle$,
$$ \hat P_v \hat U(t_f)^\dagger\hat P_u \hat \rho \hat P_u \hat U(t_f)\hat P_v $$
Computing the trace of the above operator,
$$ \mathrm{Tr}[\hat P_v \hat U(t_f)^\dagger\hat P_u \hat \rho \hat P_u \hat U(t_f)\hat P_v] , $$
I get
$$ \left(\sum\limits_l \frac{e^{-\beta \lambda_l}\langle \psi_l|u\rangle\langle u|\psi_l\rangle}{Z}\right) \left(\sum\limits_m e^{-i\lambda_m t}\langle \psi_m|v\rangle\langle u|\psi_m\rangle\right) \left(\sum\limits_n e^{i\lambda_n t}\langle \psi_n|u\rangle\langle v|\psi_n\rangle\right) $$
Did I make any mistakes?
I expected some mixing between the time and the temperature in the last equation, but if everything is right, there is no mixing in this case.
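As a numerical sanity check, the trace and the final triple-sum expression can be compared on a two-level toy system. All the numbers below (eigenbasis, eigenvalues, states, $\beta$, $t$) are made up for illustration and are not part of the original derivation:

```python
import cmath
import math

# Orthonormal eigenbasis of a toy two-level Hamiltonian, eigenvalues lam,
# inverse temperature beta, evolution time t -- all illustrative values.
theta = 0.7
psi = [(math.cos(theta), math.sin(theta)),
       (-math.sin(theta), math.cos(theta))]
lam = [0.3, 1.1]
beta, t = 2.0, 0.9

u = (1 / math.sqrt(2), 1j / math.sqrt(2))
v = (0.6, 0.8)

def inner(a, b):
    """<a|b> for vectors with complex entries."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

Z = sum(math.exp(-beta * l) for l in lam)

# Direct evaluation: Tr[P_v U^† P_u ρ P_u U P_v] = <u|ρ|u> |<u|U(t)|v>|².
rho_uu = sum(math.exp(-beta * lam[k]) * abs(inner(u, psi[k]))**2
             for k in range(2)) / Z
uUv = sum(cmath.exp(-1j * lam[k] * t) * inner(u, psi[k]) * inner(psi[k], v)
          for k in range(2))
direct = rho_uu * abs(uUv)**2

# The triple-sum expression from the question (with t_f = t).
s1 = sum(math.exp(-beta * lam[l]) * inner(psi[l], u) * inner(u, psi[l])
         for l in range(2)) / Z
s2 = sum(cmath.exp(-1j * lam[m] * t) * inner(psi[m], v) * inner(u, psi[m])
         for m in range(2))
s3 = sum(cmath.exp(1j * lam[n] * t) * inner(psi[n], u) * inner(v, psi[n])
         for n in range(2))
formula = s1 * s2 * s3

print(abs(direct - formula))  # ~0: the two agree
```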
|
So the equation for the pressure within a non-rotating, spherical gas cloud of radius R and uniform density $\rho$ ( if it is in hydrostatic equilibrium) is :
$$P(r)=P_c-\tfrac{2\pi}{3}G\rho^2r^2$$
starting from the hydrostatic equation in spherical co-ordinates
$$\tfrac{dP}{dr}=-\tfrac{GM(r)}{r^2}\rho(r)$$ we find that as $M(r)=\tfrac{4\pi \rho r^3}{3}$, P(r) is:
$P(r)=\int_0^r\tfrac{-4G\pi \rho^2 r}{3}dr=\tfrac{-2G\pi \rho^2 r^2}{3}$
But obviously this doesn't have the $P_c$ term we wanted. So my questions are the following:

i) Should I have used the indefinite integral $P(r)=\int\tfrac{-4G\pi \rho^2 r}{3}dr$ instead and treated $P_c$ as a constant of integration?

ii) If this is the correct method, then what is the explanation for not using limits in our integral?
|
\[ y=f(x)=e^{-5x^2} , 0\leq x \leq 1\]
Figure 15.1-0
\[ \int_a^b f(x)\;dx = \lim_{n\rightarrow\infty}\sum_{i=1}^n f(x_i)\, \Delta x_i \]
A fundamental method to calculate the area: the base of the function f(x) is equally divided into n pieces of width \( \Delta x \). Then \( S_i=f(x_i)\, \Delta x_i \) is the area of the rectangle at the location \(x_i\), with height \(f(x_i)\) and width \( \Delta x_i\). By summing all these rectangular pieces together, we can roughly estimate the area under the function \( f(x) \) on its domain. The sum \( \sum_{i=1}^n f(x_i)\, \Delta x_i \) represents this process.
However, \( \sum_{i=1}^n f(x_i)\, \Delta x_i \) can only help us to estimate the value, which means errors still exist. In this case, limits help us to fix the problem.
figure 15.1-1
As mentioned, the area was divided into n strips. As \( n \rightarrow \infty \) and \(\Delta x \rightarrow 0\), each strip \( f(x_i)\, \Delta x \) approaches a line whose length equals the height \( f(x_i) \). Eventually, through infinite division and accumulation, the error is reduced to zero and the sum of \( f(x)\, \Delta x\) equals the area under the curve.
\[ \int_a^b f(x)\;dx = \lim_{n\rightarrow\infty}\sum_{i=1}^n f(x_i)\, \Delta x_i \]
Thus, we can conclude that the integral is a process of accumulation: it accumulates an infinite number of strips over a certain domain to calculate the area. Similarly, the double integral is also a process of accumulation. It accumulates an infinite number of small 3D strips to calculate the volume of 3D objects.
\[ V=\int \!\! \int_R f(x,y)\, dA= \lim_{n\rightarrow\infty}\sum_{i=1}^n f(x_i,y_i)\,\Delta A_i \]
\(R\) is the domain of the function (the area that you want to integrate over)
Explanation: as \(n \rightarrow \infty\), the number of strips goes to infinity and \(\Delta A \rightarrow 0\); the error of calculation goes to 0 and the accumulation of these infinitely many strips eventually equals the volume of the object.
Theoretical discussion with descriptive elaboration
Theorem: Fubini's Theorem (First Form)
If \( f(x,y) \) is continuous throughout the rectangular region R: \(a\leq x \leq b, c\leq y \leq d,\)
then
\[\int \!\! \int_R f(x,y)\, dA=\int_c^d \int_a^b f(x,y)\, dx\, dy= \int_a^b \int_c^d f(x,y)\, dy\, dx.\]
Fubini's Theorem is usually used to calculate the volume of three-dimensional bodies.
Figure 15.1-2
\[ V_i= f(x_i,y_i) \Delta A_i= f(x_i,y_i)\Delta x \Delta y \]
In Figure 15.1-2, \( f(x_i,y_i) \) is the height of the cuboid and \( \Delta A \) is its base. \( V_i \) means that at each location there is a corresponding cuboid whose height is close to the average height of the graph over the area \( \Delta A_i \).
Figure 15.1-3
At the specific \( \Delta y_i \), the cuboids with different \(\Delta x_i \) are lined up to form a layer.
Figure 15.1-4
\[ V= \sum_{i=1}^n f(x_i,y_i) \, \Delta A_i=\sum_{i=1}^n f(x_i,y_i)\Delta x_i \Delta y_i \]
As all the layers are combined together, we get a body that approximates the one in the next graph, but the error is still very large.
Figure 15.1-5
\[ V=\lim_{n\rightarrow \infty } \sum_{i=1}^n f(x_i,y_i) \, \Delta A_i \]
The limit helps to solve this problem. As n goes to infinity, \( \Delta A_i \) becomes smaller and eventually turns into a dot, \( f(x_i, y_i) \Delta A_i \) becomes a line, and the error of the volume decreases to zero. Thus, the accumulation of all these lines equals the volume.
Example 1
Now we can calculate the volume below the surface \( z=27-x^2-\frac{1}{2}y^2 \) and above the \(xy\)-plane, over the domain \( 0\leq x \leq 3 \) and \( 0 \leq y \leq 6 \).
\[\begin{align} & \int_0^6 \int_0^3 27-x^2-\frac{1}{2} y^2\, dx\, dy \\ & =\int_0^6 \Big[(27-\frac{1}{2}y^2)x-\frac{1}{3}x^3 \Big]\Big|_0^3\, dy \\ & =\int_0^6 \Big[(27-\frac{1}{2}y^2)\times3-\frac{1}{3}\times 3^3\Big]\, dy \\ & =\int_0^6 72-\frac{3}{2}y^2 \, dy \\ & =\Big[72y-\frac{1}{2}y^3\Big]\Big|_0^6 \\ & = (72\times 6-\frac{1}{2}\times6^3)-(0-0) \\ & = 324 \end{align}\]
Figure: (left) from step 1 to step 3 (right) from step 3 to step 6
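Example 1 can also be checked numerically with a midpoint Riemann sum, echoing the limit definition above (a standalone sketch; the grid size n is arbitrary):

```python
# Midpoint Riemann sum for f(x, y) = 27 - x^2 - y^2/2 over the
# rectangle 0 <= x <= 3, 0 <= y <= 6; should approach the exact value 324.
n = 200
dx, dy = 3 / n, 6 / n

total = 0.0
for i in range(n):
    for j in range(n):
        x = (i + 0.5) * dx  # midpoint of cell i in x
        y = (j + 0.5) * dy  # midpoint of cell j in y
        total += (27 - x**2 - 0.5 * y**2) * dx * dy

print(total)  # close to 324
```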
Example 2
Another way to calculate the volume of the graph:
\[\begin{align} &\int_0^3 \int_0^6 27-x^2-\frac{1}{2} y^2dy dx \\ & = \int_0^3 (27-x^2)y-\frac{1}{6}y^3 \Big|_0^6 \ dx \\ & =\int_0^3 [(27-x^2)\times 6 -\frac{1}{6}\times6^3]-[0-0]\ dx \\ & =\int_0^3 126-6x^2\ dx \\ & = [126x-2x^3]\Big|_0^3 \\ & = (126\times3-2\times 3^3 )-(0-0) \\ & = 324. \end{align}\]
Example 3
Find the volume that is bounded above by the surface \(z=f(x,y)=x^2+y^2\) and below by the rectangle R: \(0\leq x \leq 2, 0\leq y \leq 3 \).
\[\begin{align} & \int_0^2 \int_0^3 x^2+y^2 dy dx \\ & =\int_0^2 x^2y+\frac{1}{3}y^3 \Big|_0^3 dx \\ & =\int_0^2 3x^2+9 dx \\ & =x^3+9x\Big|_0^2 \\ & =(8+18)-0 \\ & =26 \end{align}\]
Contributors
Integrated by Justin Marshall.
|
Recall that the chain rule states that
\[ (f(g(x)))' = f'(g(x))g'(x). \]
Integrating both sides we get:
\[ \int \left[f(g(x))\right]'\,dx = \int f'(g(x))\,g'(x)\,dx \]
or
\[ \int f'\left( g(x) \right) \, g' (x) \, dx = f\left(g(x)\right) + C \]
Example 1
Calculate
\[ \int \dfrac{2x}{(x^2+1)^2}\, dx = \int 2x\left( x^2+1\right)^{-2} \, dx. \]
Solution
Let
\[ u = x^2 +1 \]
then
\[ \dfrac{du}{dx} = 2x \]
and
\[ du = 2x \,dx.\]
We substitute:
\[ \int u^{-2} du = -u^{-1} + C = -(x^2 +1)^{-1} + C. \]
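A quick numeric check of this result (stdlib only; the step size and test points are arbitrary choices of mine): the derivative of \(F(x) = -(x^2+1)^{-1}\) should match the integrand \(f(x)=2x/(x^2+1)^2\).

```python
# Central-difference check that d/dx [-(x^2 + 1)^(-1)] = 2x/(x^2 + 1)^2.
def F(x):
    return -1.0 / (x**2 + 1)

def f(x):
    return 2 * x / (x**2 + 1)**2

h = 1e-6
for x0 in (-2.0, -0.5, 0.0, 1.0, 3.0):
    numeric = (F(x0 + h) - F(x0 - h)) / (2 * h)
    assert abs(numeric - f(x0)) < 1e-6
print("OK")
```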
Steps:
1. Find the function-derivative pair (\(f\) and \(f'\)).
2. Let \(u = f(x)\).
3. Find \(du/dx\) and adjust for constants.
4. Substitute.
5. Integrate.
6. Resubstitute.
We will try many more examples including those such as
\[ \int x\, \sin(x^2)\, dx, \]
\[ \int x\, \sqrt{x - 2}\, dx. \]
Contributors Larry Green (Lake Tahoe Community College)
Integrated by Justin Marshall.
|
The focus is $(2,3)$, and a point on the directrix is $(-3,2)$. The parabola touches the $x$-axis. Find the vertex.
I would be thankful if someone could help me with this problem.
Here is a geometric way; I leave it to you to translate it to algebra.
You are given the focus $F$, a point $D$ on the directrix and a tangent $a$. For a parabola we have: the mirror image of the focus across any tangent lies on the directrix.

So mirroring $F$ across $a$ results in a point $F'$, and the line $DF'$ is the directrix.
Another useful parabola feature is: the vertex lies halfway between the focus and the directrix, on the axis of symmetry.

Therefore consider a straight line through $F$ that intersects the directrix orthogonally in a point $P$. The midpoint between $P$ and $F$ is the sought vertex $V$.
With the given coordinates, $P$ happens to land on the tangent $a$, but that is mere coincidence.
After some inspection and algebraic manipulation we find that the directrix is $x+y+1=0$, and the axis of symmetry is $x-y+1=0$, both meeting conveniently at $(-1,0)$. The midpoint between this and the focus $F(2,3)$ is the vertex $\color{red}{V(0.5,1.5)}$. (For a more rigorous analysis, see section titled "In Greater Detail" below.)
The required parabola is thus tilted at $45^\circ$ and can be described in several forms as shown below:
$$(x-2)^2+(y-3)^2=\frac {(x+y+1)^2}2$$ or $$(x-y)^2-10x-14y+25=0$$ or $$(x-y-5)^2=24y$$ The $x-$axis ($y=0$) is tangential to the parabola at $(5,0)$.
Note that $x=-1$ is also tangential to the parabola (at $(-1,6)$), and, together with the $y=0$ (the $x-$axis) form a set of perpendicular tangents, which intersect at a point on the directrix, this being a property of the parabola. In this case the point of intersection is $(-1,0)$, which as was ascertained above, lies on the directrix.
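These forms can be verified numerically; the sketch below samples points from the third form and checks them against the focus–directrix definition (the helper name is mine):

```python
import math

def on_parabola(x, y, tol=1e-9):
    """Squared distance to the focus (2, 3) equals squared distance
    to the directrix x + y + 1 = 0."""
    d_focus_sq = (x - 2)**2 + (y - 3)**2
    d_line_sq = (x + y + 1)**2 / 2
    return abs(d_focus_sq - d_line_sq) < tol

# Points of (x - y - 5)^2 = 24y: for a chosen y, x = y + 5 ± sqrt(24y).
for y in (0.0, 1.0, 6.0):
    for sgn in (1, -1):
        x = y + 5 + sgn * math.sqrt(24 * y)
        assert on_parabola(x, y)

# y = 0 gives the double root x = 5: the x-axis touches at (5, 0).
assert on_parabola(5.0, 0.0)
print("OK")
```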
In Greater Detail
Let focus be $F=(2,3)$ and point on directrix be $D=(-3,2)$ (values as given).
Let the point $R$ be the foot of the perpendicular from $F$ to the directrix. As the directrix is perpendicular to the axis of symmetry, $\angle DRF=90^\circ$. It follows that the locus of $R$ is a circle with diameter $DF$, i.e. $(x+\frac 12)^2+(y-\frac 52)^2=\frac {13}2$. At $y=0$, $x=0,-1$, i.e. the circle crosses the $x$-axis at $(-1,0)$ and $(0,0)$. $DR$ is part of the directrix.
It is given that the $x$-axis is a tangent to the required parabola. We also know that for any parabola, two perpendicular tangents cross at a point on the directrix. Hence the perpendicular tangent to the $x$-axis must be a line parallel to the $y$-axis, i.e. the intersection point (which lies on the directrix) must also lie on the $x$-axis.
Given the above we conclude that $R$ lies on the $x-$axis, and can be either $R_1(-1,0)$ or $R_2(0,0)$.
Let point $P(x,y)$ be a general point on the required parabola. By the basic definition of a parabola, $FP=PG$ where $G$ is the foot of the perpendicular from $P$ to the directrix. $FP^2=PG^2$ gives the equation of the parabola.
$\hspace{3cm}$
Hence we conclude that $R=R_1(-1,0)$ and the required parabola is $$(x-2)^2+(y-3)^2=\frac {(x+y+1)^2}2$$ with the $x$-axis tangential to it at $(5,0)$. The vertex of the parabola is the midpoint of $RF$, i.e. $\color{red}{V\big(\frac 12, \frac 32\big)}$.
HINT:
The equation of the directrix can be written as $$\dfrac{y-2}{x+3}=m$$
As the eccentricity is $1,$
the distance of any point $P(h,k)$ on the parabola from the focus = the distance from the directrix.
Now as $y=0$ is a tangent of the parabola, put $y=0$ in the relation derived above to form a quadratic equation in $x$ whose roots are the abscissas of the points of intersection.
For tangency, both roots must coincide.
|
I am working on a linear analysis problem where we have boiled down the problem to finding a continuous function $f:\mathbb{R} \to \mathbb{R}$ that is bounded, but has infinite derivative at zero. So far, we have conjured up the example $$f_n(x) = \frac{2}{\pi}\arctan(nx)$$ This sequence of functions will have infinite derivative at $0$ when $n\to \infty$, and is bounded by $1$. I believe this will work for the sake of our problem, but I would like to find a function that doesn't depend on $n$. I can picture what this should look like, but I can't come up with an example function. Any ideas? All appreciated.
$f(x)=\arctan(\sqrt[3]{x})$, for example.
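As a quick numerical sanity check (a sketch, using Python's standard `math` module), the difference quotients of $f(x)=\arctan(\sqrt[3]{x})$ at $0$ grow without bound even though $f$ stays bounded by $\pi/2$:

```python
import math

def f(x):
    # arctan of the real cube root; copysign keeps the cube root defined for x < 0
    cbrt = math.copysign(abs(x) ** (1.0 / 3.0), x)
    return math.atan(cbrt)

# difference quotient (f(h) - f(0)) / h behaves like h^(-2/3) -> infinity as h -> 0
for h in [1e-1, 1e-3, 1e-6, 1e-9]:
    print(h, f(h) / h)
```

The quotient grows like $h^{-2/3}$, matching the fact that $f'(0)$ is infinite.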
A quarter of a unit circle (no, the
other quarter) up and down:$$ f(x) = \begin{cases} 0 , & x < -1, \\ 1-\sqrt{1-(x+1)^2}, & -1 \leq x < 0, \\ 1-\sqrt{1-(x-1)^2}, & 0 \leq x < 1, \\ 0 , & 1 \leq x \end{cases} \text{.}$$
$g(x)=\operatorname{arccot}(x^{1/2})$ is another one.
My first thought was something like
$$f(x) = x\sin\left(\frac1x\right)$$ $$f'(x) = \sin\left(\frac1x\right) - \frac1x\cos\left(\frac1x\right)$$
This is bounded between $1$ and $-1$ (note that $\lim_{x\to\infty}f(x)=1$), and its derivative has an infinite oscillatory discontinuity.
Also, $$\lim_{x\to 0}f(x)=0$$ so $f$ itself has a removable discontinuity; it can be made continuous by defining $f(0)=0$.
|
Let $X$ be a smooth projective geometrically connected curve over $\mathbf{Q}$ of genus at least two. Fix an algebraic closure $\overline{\mathbf{Q}}$ of $\mathbf{Q}$ and let $G_{\mathbf{Q}}$ be the absolute Galois group of $\mathbf{Q}$. Moreover, fix a rational base point $x$ in $X(\mathbf{Q})$.
Let $\sigma:G_{\mathbf{Q}}\to \pi_1(X)$ be a section of the exact sequence of groups $$ 1\to \pi_1(X_{\overline{\mathbf{Q}}})\to \pi_1(X) \to G_{\mathbf{Q}}\to 1.$$
Let $K\subset \overline{\mathbf{Q}}$ be a finite field extension of $\mathbf{Q}$. (I don't want to assume $K$ to be Galois, but please do if this helps.)
Let $G_K$ be the absolute Galois group of $K$. Then $G_K$ is an open subgroup of $G_{\mathbf{Q}}$. Note that $\pi_1(X_K)$ (with the same base point $x$) injects into $\pi_1(X)$.
Question. Does there exist a section $\sigma^\prime:G_{\mathbf{Q}}\to \pi_1(X)$ which is a $\pi_1(X_{\overline{\mathbf{Q}}})$-conjugate of $\sigma$ such that the image of $\sigma^\prime|_{G_K}$ lies in $\pi_1(X_K)$? Motivation. If $a\in X(\mathbf{Q})$, then $a\in X(K)$. Thus, if $\sigma$ is a section associated to $a$, then the answer to the above question is positive. My question is really about sections that a priori do not come from a rational point. Note. I always use the base point $x$ to define the fundamental group and we can replace $\mathbf{Q}$ by any number field.
|
Suppose $f: A \to B$ and $g: B \to A$ are injections of rings (commutative with identity). Must $A$ and $B$ be isomorphic as rings?
According to this question, this answer should be "no", but can someone give an example?
Thanks!
Hey Damien, I think the following should work: $\mathbb{C}$ and $\mathbb{C}(x)$. There is only one algebraically closed field of each uncountable cardinality in characteristic 0, and since the algebraic closure of $\mathbb{C}(x)$ has cardinality the continuum, it should be isomorphic to $\mathbb{C}$. Probably this assumes the axiom of choice, though.
Here is another counterexample for fields: If $K=\overline{\mathbb{Q}(x_1,x_2,...)}$, then there are monomorphisms $K(x_0) \to K \to K(x_0)$, but no isomorphism since $K(x_0)$ is not algebraically closed.
This is not even true for fields. Let $E_1$ and $E_2$ be isogenous but not isomorphic elliptic curves over $k=\mathbb{Q}$ or $k=\mathbb{F}_p$ for some prime $p$. Then the isogeny $E_1\to E_2$ and its dual isogeny $E_2\to E_1$ induce field injections $k(E_2)\to k(E_1)$ and $k(E_1)\to k(E_2)$. But $k(E_1)$ and $k(E_2)$ are not isomorphic; a putative isomorphism must extend the identity on $k$, and it would induce an isomorphism between the elliptic curves $E_1$ and $E_2$.
There are many pairs of as-nice-as-possible compact topological spaces $X,Y$ with continuous surjections $X \to Y$ and $Y \to X$ but no homeomorphism. For example, let $X$ be a closed interval and $Y$ a circle. Then you get injections on algebras of functions: $\mathcal C(X) \hookrightarrow \mathcal C(Y)$ and $\mathcal C(Y) \hookrightarrow \mathcal C(X)$. For sufficiently nice spaces, Gelfand-Naimark, for example, says that the functor $\mathcal C$ that takes a space to its $*$-algebra of continuous $\mathbb C$-valued functions is a full and faithful contravariant functor to commutative algebras, and in particular a complete invariant, so in particular the two rings are not isomorphic.
Edit: There are complaints in the comments, and I didn't think very carefully before writing down all this. This has something to do with the fact that I tend to conflate the words "algebra" and "ring".
So let me switch meanings, and denote by $\mathcal C(X)$ the continuous $\mathbb R$-valued functions on $X$. Suppose that $X$ is Hausdorff and compact (and if that's not good enough, let's just go all the way to being a manifold with corners, where then everything absolutely works). Since $\mathbb R$ has no ring endomorphisms other than the identity, the points of $X$ are precisely the same as ring homomorphisms $\mathcal C(X) \to \mathbb R$. Actually, this is true for $X$ not compact provided it is regular and not too large: it suffices for there to be a function $f \in \mathcal C(X)$ so that every level set is finite. Anyway, then any ring map $\mathcal C(X) \to \mathcal C(Y)$ automatically induces a set map $Y \to X$. But also the closed sets are precisely the vanishing sets of functions, i.e. a subset $S\subseteq \operatorname{Hom}(\mathcal C(X),\mathbb R) = X$ is closed iff there is $f\in \mathcal C(X)$ so that $s\in S$ iff $s(f) = 0$. Anyway, the point is, pick a closed subset of $X$, pick a function $f$ determining it, look at the image of $f$ under the map, and its vanishing set in $Y$ is precisely the preimage of the closed subset under the map. So every ring homomorphism determines a continuous map. Since a continuous map is determined pointwise, we have the full-and-faithful functor that I wanted.
Note that for manifolds (with corners if you want) you can play the same game with $\mathcal C$ meaning "
smooth real-valued functions".
Sam Lichtenstein poses the dual question in comments:
What's a counterexample to "dual Schroeder-Bernstein" for rings? (That is, same question but with surjections rather than injections.) Is there one with A,B finite type over a field?
That is,
Do there exist finite type $k$-algebras $A, B$ not isomorphic to each other, and surjections $A\to B, B\to A$? (*)
He gives an example in the non-Noetherian case; I claim the "dual Schroeder-Bernstein theorem" is
true if $A$ and $B$ are Noetherian. And in general, if two Noetherian schemes $X, Y$ admit maps $i: X\to Y, j: Y\to X$ exhibiting each as a closed subscheme of the other, then $i, j$ are isomorphisms. So the answer to (*) is "no".
I'll prove the more general claim. Assume to the contrary that one of $i,j$ is not an isomorphism. Then $j\circ i: X\to X$ exhibits $X$ as a proper closed subscheme of itself, say $X_1$. But then $X_1$ is isomorphic to some
proper closed subscheme of itself, say $X_2$; continuing in this manner, we may construct a sequence $X_n$ where each $X_i$ is a proper closed subscheme of $X_{i-1}$. Let $\mathcal{I}_n$ be the ideal sheaf of $X_n$ in $\mathcal{O}_X$. By Noetherianness, the ascending chain $\mathcal{I}_1\subset \mathcal{I}_2\subset \mathcal{I}_3\subset\cdots$ must stabilize, which contradicts the claim that each $X_i\subset X_{i-1}$ is a proper inclusion. Here's a more formal write-up of the affine case.
This provides an example of a "surjunctive" category in the sense of John Goodnick's answer to this question.
|
Say we want to compute the Coleman-Weinberg potential at 2 loops.
The general strategy as we know is to expand the field $\phi$ around some background classical field $\phi \rightarrow \phi_b + \phi$, and do a path integral over the quantum part of the field, $\phi$.
We can retrieve the effective action by doing a path integral, something like eq.42 in this reference.
There are 2 ways to do this at 1 loop, we can either evaluate a functional determinant or do the classic Coleman-Weinberg thing where we sum up all diagrams we get by inserting any number of background fields $\phi_b^2$ into the loop integral. This is eq. (56) of that same reference again.
My question is, why do we not need to do this resummation over background field insertions at 2 loops? For example, in this (quite standard) reference, as well as in chapter 11 in Peskin and Schroeder, the authors seem to claim that the 2 loop contribution to the path integral are simply the "rising sun" and "figure 8" vacuum diagrams, and no summing over classical field insertions is even mentioned.
What am I missing?
EDIT:
To give some more details: in perturbation theory, each diagram contributing to the path integral is a spatial integral of some functional derivative acting on the free-field path integral with a source; the loop diagram with $n$ insertions of the external field $\phi_b$ is the term: $$\left( \phi_b^2 \int dx \left( \frac {\delta}{\delta J(x)}\right)^2 \right)^n Z_0[J]$$
The 2 loop figure 8 is
$$\int dx \left( \frac {\delta}{\delta J(x)}\right)^4 Z_0[J]$$
The 2 loop diagrams that it seems like the papers cited above are excluding are contributions like
$$\left( \phi_b^2 \int dx \left( \frac {\delta}{\delta J(x)}\right)^2 \right)^n\int dx \left( \frac {\delta}{\delta J(x)}\right)^4 Z_0[J]$$
It seems to me that these terms will indeed arise in the exponential expansion of the interacting lagrangian, so it seems that a resummation over $n$, as in the 1 loop case, is still necessary. Where is my error?
|
In sequent calculus LK (see Gaisi Takeuti, Proof Theory (2nd ed., 1987)) we have a "standard" derivation of Double Negation in the form $\rightarrow \lnot \lnot A \supset A$.
We have to start from an axiom:
$$\frac{A \rightarrow A}{\rightarrow \lnot A, A}$$ by $\lnot$-right;
then :
$$\frac{\rightarrow \lnot A, A}{\lnot \lnot A \rightarrow A}$$ by $\lnot$-left;
finally :
$$\frac{\lnot \lnot A \rightarrow A}{\rightarrow \lnot \lnot A \supset A}$$ by $\supset$-right.
The proof is not intuitionistically admissible, due to the violation (in the first step) of the restriction [see Takeuti, page 28] that: "a sequent in LJ is of the form $\Gamma \rightarrow \Delta$, where $\Delta$ consists of at most one formula".
In a previous post we have a proof (assuming it is correct) of the "derived rule" :
$$\frac {\Gamma, \lnot \lnot A \vdash \Delta } {\Gamma, A \vdash \Delta }$$
that looks like a form of
Double Negation.
If we impose the restriction that $\Delta$ must consist of at most one formula, we have (assuming that the proof is correct) that the rule is admissible in LJ. What is the "meaning" of this derived rule compared to the previous formulation?
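For concreteness, here is why the derived rule is admissible in LJ (assuming cut): the sequent $A \rightarrow \lnot \lnot A$ is intuitionistically provable,

$$\frac{\dfrac{A \rightarrow A}{A, \lnot A \rightarrow}\;(\lnot\text{-left})}{A \rightarrow \lnot \lnot A}\;(\lnot\text{-right})$$

and every sequent in this derivation has at most one formula on the right. Cutting it against a proof of $\Gamma, \lnot \lnot A \rightarrow \Delta$ (with $\Delta$ at most one formula) yields $\Gamma, A \rightarrow \Delta$. So the derived rule only uses the LJ-valid direction $A \supset \lnot \lnot A$, whereas the original derivation needed the LJ-invalid direction $\lnot \lnot A \supset A$.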
|
(From the comment above) The problem seems coNP-hard; a simple reduction is from 3CNF-UNSAT (which is coNP-complete): given a 3CNF formula $\varphi = C_1 \land ... \land C_m$, extend it by adding a new clause with 4 new variables:
$$\varphi' = (y_1 \lor y_2 \lor y_3 \lor y_4) \land C_1 \land ... \land C_m$$
$\varphi'$ has an equivalent 3CNF formula defined on the same variables if and only if the original formula $\varphi$ is unsatisfiable.
($\Leftarrow$) the 3CNF formula $(y_1 \lor y_2 \lor y_3) \land (y_1 \lor y_2 \lor y_4) \land C_1 \land ... \land C_m$ is equivalent to $\varphi'$
($\Rightarrow$) suppose that $\varphi'$ has an equivalent 3CNF formula $\varphi''$ and that $\varphi$ is satisfiable. Pick a satisfying assignment $X = \langle \dot{x}_1,...,\dot{x}_n \rangle$ of $\varphi$, and simplify both $\varphi'$ and $\varphi''$ by replacing the variables $x_i$ with the corresponding truth values $\dot{x}_i$. We get $\varphi'_X$, which is satisfiable if and only if $\varphi''_X$ is satisfiable (both contain only the variables $y_i$). Clearly $\varphi'_X = (y_1 \lor y_2 \lor y_3 \lor y_4)$. Every clause of $\varphi''_X$ contains at most three variables, so we can pick one of them, e.g. $(y_1 \lor \lnot y_2 \lor y_3)$, and use it to build a satisfying assignment for $\varphi'$: $\langle y_1=false, y_2=true, y_3=false, y_4=true, \dot{x}_1,...,\dot{x}_n \rangle$, which is not a satisfying assignment for $\varphi''$, leading to a contradiction.
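Both directions of the reduction can be sanity-checked by brute force on tiny instances (a sketch; the DIMACS-style clause encoding and the particular variable numbering are choices of this snippet, not part of the argument above):

```python
from itertools import product

def eval_cnf(clauses, assignment):
    # a clause is a list of signed ints: +i means x_i, -i means NOT x_i
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def equivalent(f, g, nvars):
    # brute force over all 2^nvars assignments
    for bits in product([False, True], repeat=nvars):
        a = dict(enumerate(bits, start=1))
        if eval_cnf(f, a) != eval_cnf(g, a):
            return False
    return True

# variable 1 is x; variables 2..5 are y1..y4
phi_unsat = [[1, 1, 1], [-1, -1, -1]]   # x AND (NOT x), padded to width 3
phi_sat = [[1, 1, 1]]                   # just x

def phi_prime(phi):       # phi' = (y1 v y2 v y3 v y4) AND phi
    return [[2, 3, 4, 5]] + phi

def candidate_3cnf(phi):  # (y1 v y2 v y3) AND (y1 v y2 v y4) AND phi
    return [[2, 3, 4], [2, 3, 5]] + phi

print(equivalent(phi_prime(phi_unsat), candidate_3cnf(phi_unsat), 5))  # True
print(equivalent(phi_prime(phi_sat), candidate_3cnf(phi_sat), 5))      # False
```

When $\varphi$ is unsatisfiable both formulas are identically false, hence equivalent; when $\varphi$ is satisfiable, the assignment $y_1=y_2=y_3=false$, $y_4=true$ separates them, exactly as in the argument above.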
|
How can I calculate the following limit:
$$ \lim_{n\to \infty} \frac{2\cdot 4 \cdots (2n)}{1\cdot 3 \cdot 5 \cdots (2n-1)} $$ without using the root test or the ratio test for convergence?
I have tried finding upper and lower bounds on this expression, but it gives me nothing since I can't find bounds that are "close" enough to one another. I have also tried using the facts that $2\cdot 4 \cdots (2n)=2^n n!$ and $1\cdot 3 \cdot 5 \cdots (2n-1) = \frac{(2n)!}{2^n n!}$, but it also gives me nothing.
Will someone please help me?
Thanks in advance
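Numerically the quotient grows like $\sqrt{\pi n}$ (a sketch; the asymptotic comes from Wallis' product / Stirling and is used here only as a heuristic check that the limit is $+\infty$):

```python
import math

def ratio(n):
    # (2*4*...*2n) / (1*3*...*(2n-1)), accumulated factor by factor
    r = 1.0
    for k in range(1, n + 1):
        r *= (2 * k) / (2 * k - 1)
    return r

for n in [10, 100, 1000, 10000]:
    print(n, ratio(n), math.sqrt(math.pi * n))
```

The two columns track each other closely, so the quotient diverges to $+\infty$.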
|
LaTeX:Symbols
This article will provide a short list of commonly used LaTeX symbols.
Finding Other Symbols
Here are some external resources for finding less commonly used symbols:
Detexify is an app which allows you to draw the symbol you'd like and shows you the code for it! MathJax (what allows us to use LaTeX on the web) maintains a list of supported commands. The Comprehensive LaTeX Symbol List. Relations
Symbol Command Symbol Command Symbol Command \le \ge \neq \sim \ll \gg \doteq \simeq \subset \supset \approx \asymp \subseteq \supseteq \cong \smile \sqsubset \sqsupset \equiv \frown \sqsubseteq \sqsupseteq \propto \bowtie \in \ni \prec \succ \vdash \dashv \preceq \succeq \models \perp \parallel \mid \bumpeq
Negations of many of these relations can be formed by just putting \not before the symbol, or by slipping an n between the \ and the word. Here are a couple of examples, plus many other negations; the same trick works for many of the others as well.
Symbol Command Symbol Command Symbol Command \nmid \nleq \ngeq \nsim \ncong \nparallel \not< \not> \not= or \neq \not\le \not\ge \not\sim \not\approx \not\cong \not\equiv \not\parallel \nless \ngtr \lneq \gneq \lnsim \lneqq \gneqq
To use other relations not listed here, such as =, >, and <, you can simply type the symbols on your keyboard; there are no special LaTeX commands for them.
Greek Letters
Symbol Command Symbol Command Symbol Command Symbol Command \alpha \beta \gamma \delta \epsilon \varepsilon \zeta \eta \theta \vartheta \iota \kappa \lambda \mu \nu \xi \pi \varpi \rho \varrho \sigma \varsigma \tau \upsilon \phi \varphi \chi \psi \omega
Symbol Command Symbol Command Symbol Command Symbol Command \Gamma \Delta \Theta \Lambda \Xi \Pi \Sigma \Upsilon \Phi \Psi \Omega Arrows
Symbol Command Symbol Command \gets \to \leftarrow \Leftarrow \rightarrow \Rightarrow \leftrightarrow \Leftrightarrow \mapsto \hookleftarrow \leftharpoonup \leftharpoondown \rightleftharpoons \longleftarrow \Longleftarrow \longrightarrow \Longrightarrow \longleftrightarrow \Longleftrightarrow \longmapsto \hookrightarrow \rightharpoonup \rightharpoondown \leadsto \uparrow \Uparrow \downarrow \Downarrow \updownarrow \Updownarrow \nearrow \searrow \swarrow \nwarrow
(For those of you who hate typing long strings of letters, \iff and \implies can be used in place of \Longleftrightarrow and \Longrightarrow respectively.)
Dots
Symbol Command Symbol Command \cdot \vdots \dots \ddots \cdots \iddots Accents
Symbol Command Symbol Command Symbol Command \hat{x} \check{x} \dot{x} \breve{x} \acute{x} \ddot{x} \grave{x} \tilde{x} \mathring{x} \bar{x} \vec{x}
When applying accents to i and j, you can use \imath and \jmath to keep the dots from interfering with the accents:
Symbol Command Symbol Command \vec{\jmath} \tilde{\imath}
\tilde and \hat have wide versions that allow you to accent an expression:
Symbol Command Symbol Command \widehat{7+x} \widetilde{abc} Others Command Symbols
Some symbols are used in commands so they need to be treated in a special way.
Symbol Command Symbol Command Symbol Command Symbol Command \textdollar or $ \& \% \# \_ \{ \} \backslash
(Warning: Using $ for the dollar sign will result in entering math mode. This is a bug as far as we know. Depending on the version of LaTeX this is not always a problem.)
European Language Symbols
Symbol Command Symbol Command Symbol Command Symbol Command {\oe} {\ae} {\o} {\OE} {\AE} {\AA} {\O} {\l} {\ss} !` {\L} {\SS} Bracketing Symbols
In mathematics, sometimes we need to enclose expressions in brackets or braces or parentheses. Some of these work just as you'd imagine in LaTeX; type ( and ) for parentheses, [ and ] for brackets, and | and | for absolute value. However, other symbols have special commands:
Symbol Command Symbol Command Symbol Command \{ \} \| \backslash \lfloor \rfloor \lceil \rceil \langle \rangle
You might notice that if you use any of these to typeset an expression that is vertically large, like
(\frac{a}{x} )^2
the parentheses don't come out the right size:
If we put \left and \right before the relevant parentheses, we get a prettier expression:
\left(\frac{a}{x} \right)^2
gives
And with system of equations:
\left\{\begin{array}{l}x+y=3\\2x+y=5\end{array}\right.
Gives
See that there's a dot after
\right. You must put that dot or the code won't work.
In addition to the
\left and
\right commands, when doing floor or ceiling functions with fractions, using
\left\lceil\frac{x}{y}\right\rceil
and
\left\lfloor\frac{x}{y}\right\rfloor
give $\left\lceil\frac{x}{y}\right\rceil$ and $\left\lfloor\frac{x}{y}\right\rfloor$ respectively.
And, if you type this
\underbrace{a_0+a_1+a_2+\cdots+a_n}_{x}
Gives
Or
\overbrace{a_0+a_1+a_2+\cdots+a_n}^{x}
Gives
\left and \right can also be used to resize the following symbols:
Symbol Command Symbol Command Symbol Command \uparrow \downarrow \updownarrow \Uparrow \Downarrow \Updownarrow Multi-Size Symbols
Some symbols render differently in inline math mode and in display mode. Display mode occurs when you use \[...\] or $$...$$, or environments like \begin{equation}...\end{equation}, \begin{align}...\end{align}. Read more in the commands section of the guide about how symbols which take arguments above and below the symbols, such as a summation symbol, behave in the two modes.
In each of the following, the two images show the symbol in display mode, then in inline mode.
Symbol Command Symbol Command Symbol Command \sum \int \oint \prod \coprod \bigcap \bigcup \bigsqcup \bigvee \bigwedge \bigodot \bigotimes \bigoplus \biguplus
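Putting several of these together, a minimal complete document shows the inline/display difference for multi-size symbols (this is a generic example, not taken from the list above):

\documentclass{article}
\begin{document}
Inline: $\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$ and $\int_0^1 x^2\,dx = \frac{1}{3}$.

Display:
\[ \sum_{k=1}^{n} k = \frac{n(n+1)}{2} \qquad \int_0^1 x^2\,dx = \frac{1}{3} \]
\end{document}

In the display version the limits sit above and below \sum, and the integral sign is drawn larger.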
|
HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN); Bouvard, Aymeric (CERN); Charitonidis, Nikolaos (CERN); Kadi, Yacine (CERN) / HiRadMat experiments and facility support teams. The ever-expanding requirements of high-power targets and accelerator equipment have highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019 - 4 p. - Published in: 10.18429/JACoW-IPAC2019-THPRB085. In: 10th International Particle Accelerator Conference, Melbourne, Australia, 19-24 May 2019, pp. THPRB085.
Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN); Booth, Alexander (U. Sussex (main); Fermilab); Charitonidis, Nikolaos (CERN); Chatzidaki, Panagiota (Natl. Tech. U., Athens; Kirchhoff Inst. Phys.; CERN); Karyotakis, Yannis (Annecy, LAPP); Nowak, Elzbieta (CERN; AGH-UST, Cracow); Ortega Ruiz, Inaki (CERN); Sala, Paola (INFN, Milan; CERN). For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated "H2-VLE" and "H4-VLE", have been constructed and successfully commissioned. [...] 2019 - 4 p. - Published in: 10.18429/JACoW-IPAC2019-THPGW064. In: 10th International Particle Accelerator Conference, Melbourne, Australia, 19-24 May 2019, pp. THPGW064.
The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN; Illinois U., Urbana (main)); Bernhard, Johannes (CERN); Brugger, Markus (CERN); Charitonidis, Nikolaos (CERN); Cholak, Serhii (Taras Shevchenko U.); D'Alessandro, Gian Luigi (Royal Holloway, U. of London); Gatignon, Laurent (CERN); Gerbershagen, Alexander (CERN); Montbarbon, Eva (CERN); Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN's accelerator complex up to 2040 and its scientific infrastructure through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019 - 4 p. - Published in: 10.18429/JACoW-IPAC2019-THPGW063. In: 10th International Particle Accelerator Conference, Melbourne, Australia, 19-24 May 2019, pp. THPGW063.
The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN); Banerjee, Dipanwita (CERN); Bernhard, Johannes (CERN); Brugger, Markus (CERN); Charitonidis, Nikolaos (CERN); D'Alessandro, Gian Luigi (CERN); Doble, Niels (CERN); Gatignon, Laurent (CERN); Gerbershagen, Alexander (CERN); Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward. The goal of the experiment is to measure ${\rm{BR}}(K_L \rightarrow \pi^0\nu\bar{\nu})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm{BR}}(K^+ \rightarrow \pi^+\nu\bar{\nu})$ of NA62. [...] 2019 - 4 p. - Published in: 10.18429/JACoW-IPAC2019-THPGW061. In: 10th International Particle Accelerator Conference, Melbourne, Australia, 19-24 May 2019, pp. THPGW061.
Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe; CERN); Bastian, Yan (CERN); Bernhard, Axel (KIT, Karlsruhe); Bonura, Marco (U. Geneva (main)); Bordini, Bernardo (CERN); Bortot, Lorenzo (CERN); Favre, Mathieu (CERN); Lindstrom, Bjorn (CERN); Mentink, Matthijs (CERN); Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN's LHC can be impacted by the circulating beam in specific failure cases. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019 - 4 p. - Published in: 10.18429/JACoW-IPAC2019-THPTS066. In: 10th International Particle Accelerator Conference, Melbourne, Australia, 19-24 May 2019, pp. THPTS066.
Shashlik calorimeters with embedded SiPMs for longitudinal segmentation / Berra, A (INFN, Milan Bicocca; Insubria U., Varese); Brizzolari, C (INFN, Milan Bicocca; Insubria U., Varese); Cecchini, S (INFN, Bologna); Chignoli, F (INFN, Milan Bicocca; Milan Bicocca U.); Cindolo, F (INFN, Bologna); Collazuol, G (INFN, Padua); Delogu, C (INFN, Milan Bicocca; Milan Bicocca U.); Gola, A (Fond. Bruno Kessler, Trento; TIFPA-INFN, Trento); Jollet, C (Strasbourg, IPHC); Longhin, A (INFN, Padua) et al. Effective longitudinal segmentation of shashlik calorimeters can be achieved taking advantage of the compactness and reliability of silicon photomultipliers. These photosensors can be embedded in the bulk of the calorimeter and are employed to design very compact shashlik modules that sample electromagnetic and hadronic showers every few radiation lengths. [...] 2017 - 6 p. - Published in: IEEE Trans. Nucl. Sci. 64 (2017) 1056-1061.
Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE); Tiberio, A (INFN, Florence; U. Florence (main)); Adriani, O (INFN, Florence; U. Florence (main)); Berti, E (INFN, Florence; U. Florence (main)); Bonechi, L (INFN, Florence); Bongi, M (INFN, Florence; U. Florence (main)); Caccia, Z (INFN, Catania); D'Alessandro, R (INFN, Florence; U. Florence (main)); Del Prete, M (INFN, Florence; U. Florence (main)); Detti, S (INFN, Florence) et al. The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017 - 22 p. - Published in: JINST 12 (2017) P03023.
Baby MIND: A magnetised spectrometer for the WAGASCI experiment / Hallsjö, Sven-Patrik (Glasgow U.) / Baby MIND. The WAGASCI experiment being built at the J-PARC neutrino beam line will measure the ratio of cross sections from neutrinos interacting with water and scintillator targets, in order to constrain neutrino cross sections, essential for the T2K neutrino oscillation measurements. A prototype Magnetised Iron Neutrino Detector (MIND), called Baby MIND, has been constructed at CERN and will act as a magnetic spectrometer behind the main WAGASCI target. [...] SISSA, 2018 - 7 p. - Published in: PoS NuFact2017 (2018) 078. In: 19th International Workshop on Neutrinos from Accelerators, Uppsala, Sweden, 25-30 Sep 2017, pp. 078.
ENUBET: High Precision Neutrino Flux Measurements in Conventional Neutrino Beams / Pupilli, Fabio (INFN, Padua); Ballerini, G (Insubria U., Como; INFN, Milan Bicocca); Berra, A (Insubria U., Como; INFN, Milan Bicocca); Boanta, R (INFN, Milan Bicocca; Milan Bicocca U.); Bonesini, M (INFN, Milan Bicocca); Brizzolari, C (Insubria U., Como; INFN, Milan Bicocca); Brunetti, G (INFN, Padua); Calviani, M (CERN); Carturan, S (INFN, Legnaro); Catanesi, M G (INFN, Bari) et al. The ENUBET project aims at demonstrating that the systematics in neutrino fluxes from conventional beams can be reduced to 1% by monitoring positrons from K$_{e3}$ decays in an instrumented decay tunnel, thus allowing a precise measurement of the $\nu_e$ (and $\overline{\nu}_e$) cross section. This contribution will report the results achieved in the first year of activities. [...] SISSA, 2018 - 8 p. - Published in: PoS NuFact2017 (2018) 087. In: 19th International Workshop on Neutrinos from Accelerators, Uppsala, Sweden, 25-30 Sep 2017, pp. 087.
|
Answer:
Given: radius of base \[r=3.5\text{ cm},\] total height of the toy \[=15.5\text{ cm}.\] Height of the cone: \[h=15.5-3.5=12\,\text{cm}.\] Slant height: \[l=\sqrt{{{h}^{2}}+{{r}^{2}}}=\sqrt{{{12}^{2}}+{{3.5}^{2}}}=\sqrt{144+12.25}=\sqrt{156.25}=12.5\,\,\text{cm}.\] Total surface area of the toy = CSA of cone + CSA of hemisphere: \[\pi rl+2\pi {{r}^{2}}=\pi r\,[l+2r]=\frac{22}{7}\times 3.5\,[12.5+2\times 3.5]=11\times 19.5=214.5\,\,c{{m}^{2}}.\]
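The arithmetic can be checked in a few lines (a sketch; it uses $\pi \approx \frac{22}{7}$ exactly as the solution does):

```python
import math

r, total_height = 3.5, 15.5
h = total_height - r                 # height of the cone
l = math.sqrt(h ** 2 + r ** 2)       # slant height
tsa = (22 / 7) * r * (l + 2 * r)     # CSA of cone + CSA of hemisphere, with pi ~ 22/7

print(l, tsa)  # 12.5 and approximately 214.5
```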
|
We now come to the first of three important theorems that extend the Fundamental Theorem of Calculus to higher dimensions. (The Fundamental Theorem of Line Integrals has already done this in one way, but in that case we were still dealing with an essentially one-dimensional integral.) They all share with the Fundamental Theorem the following rather vague description:
To compute a certain sort of integral over a region, we may do a computation on the boundary of the region that involves one fewer integrations.
Note that this does indeed describe the Fundamental Theorem of Calculus and the Fundamental Theorem of Line Integrals: to compute a single integral over an interval, we do a computation on the boundary (the endpoints) that involves one fewer integrations, namely, no integrations at all.
Green's Theorem

If the vector field \({\bf F}=\langle P,Q\rangle\) and the region \(D\) are sufficiently nice, and if \(C\) is the boundary of \(D\) (\(C\) is a closed curve), then
$$\iint\limits_{D} {\partial Q\over\partial x}-{\partial P\over\partial y} \,dA = \int_C P\,dx +Q\,dy ,$$
provided the integration on the right is done counter-clockwise around \(C\).
To indicate that an integral \(\int_C\) is being done over a closed curve in the counter-clockwise direction, we usually write \(\oint _C\). We also use the notation \(\partial D\) to mean the boundary of \(D\) oriented in the counterclockwise direction. With this notation, \(\oint_C=\int_{\partial D}\).
We already know one case, not particularly interesting, in which this theorem is true: If \(\bf F\) is conservative, we know that the integral \(\oint_C {\bf F}\cdot d{\bf r}=0\), because any integral of a conservative vector field around a closed curve is zero. We also know in this case that \(\partial P/\partial y=\partial Q/\partial x\), so the double integral in the theorem is simply the integral of the zero function, namely, 0. So in the case that \(\bf F\) is conservative, the theorem says simply that \(0=0\).
Example \(\PageIndex{1}\)
We illustrate the theorem by computing both sides of
\[\int_{\partial D} x^4 \, dx + xy \, dy= \iint\limits_{D} y - 0 \, dA,\]
where \(D\) is the triangular region with corners \((0,0)\), \((1,0)\), \((0,1)\).
Starting with the double integral:
$$\iint\limits_{D} y-0\,dA=\int_0^1\int_0^{1-x} y\,dy\,dx= \int_0^1 {(1-x)^2\over2}\,dx=\left.-{(1-x)^3\over6}\right|_0^1={1\over6}.$$
There is no single formula to describe the boundary of \(D\), so to compute the left side directly we need to compute three separate integrals corresponding to the three sides of the triangle, and each of these integrals we break into two integrals, the "\(dx\)'' part and the "\(dy\)'' part. The three sides are described by \(y=0\), \(y=1-x\), and \(x=0\). The integrals are then
$$\eqalign{ \int_{\partial D}\!\!\! x^4\,dx + xy\,dy&= \int_0^1 x^4\,dx+\int_0^0 0\,dy+\int_1^0 x^4\,dx+\int_0^1 (1-y)y\,dy+ \int_0^0 0\,dx+\int_1^0 0\,dy\cr &={1\over5}+0-{1\over5}+{1\over6}+0+0={1\over6}.\cr}
$$
Alternately, we could describe the three sides in vector form as \(\langle t,0\rangle\), \(\langle 1-t,t\rangle\), and \(\langle 0,1-t\rangle\). Note that in each case, as \(t\) ranges from 0 to 1, we follow the corresponding side in the correct direction. Now
$$\eqalign{ \int_{\partial D} x^4\,dx + xy\,dy&= \int_0^1 t^4 + t\cdot 0\,dt + \int_0^1 -(1-t)^4 + (1-t)t\,dt +\int_0^1 0 + 0\,dt\cr &=\int_0^1 t^4\,dt + \int_0^1 -(1-t)^4 + (1-t)t\,dt ={1\over6}.\cr }$$
In this case, none of the integrations are difficult, but the second approach is somewhat tedious because of the necessity to set up three different integrals. In different circumstances, either of the integrals, the single or the double, might be easier to compute. Sometimes it is worthwhile to turn a single integral into the corresponding double integral, sometimes exactly the opposite approach is best.
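Both sides of the example above can also be checked numerically. The following sketch (not part of the text) approximates each integral with midpoint sums; the grid size N is an arbitrary choice for the check:

```python
# Numeric check of both sides of Green's theorem for the triangle example.
N = 2000

# Right side: double integral of y over the triangle 0 <= x <= 1, 0 <= y <= 1-x.
# The inner integral of y dy from 0 to 1-x is (1-x)^2/2, summed by midpoints.
right = 0.0
for i in range(N):
    x = (i + 0.5) / N
    right += (1 - x) ** 2 / 2 / N

# Left side: line integral of x^4 dx + xy dy over the three parameterized sides.
left = 0.0
for i in range(N):
    t = (i + 0.5) / N
    left += (t ** 4) / N                        # side <t, 0>: x^4 dx
    left += (-(1 - t) ** 4 + (1 - t) * t) / N   # side <1-t, t>
    # side <0, 1-t> contributes 0

print(round(right, 4), round(left, 4))  # both approximately 1/6 = 0.1667
```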
Here is a clever use of Green's Theorem: We know that areas can be computed using double integrals, namely,
$$\iint\limits_{D} 1\,dA$$
computes the area of region \(D\). If we can find \(P\) and \(Q\) so that \(\partial Q/\partial x-\partial P/\partial y=1\), then the area is also
$$\int_{\partial D} P\,dx+Q\,dy.$$
It is quite easy to do this: \(P=0,Q=x\) works, as do \(P=-y, Q=0\) and \(P=-y/2,Q=x/2\).
Example \(\PageIndex{2}\)
An ellipse centered at the origin, with its two principal axes aligned with the \(x\) and \(y\) axes, is given by
$${x^2\over a^2}+{y^2\over b^2}=1.$$
We find the area of the interior of the ellipse via Green's theorem. To do this we need a vector equation for the boundary; one such equation is \(\langle a\cos t,b\sin t\rangle\), as
\(t\) ranges from 0 to \(2\pi\). We can easily verify this by substitution:
$${x^2\over a^2}+{y^2\over b^2}={a^2\cos^2 t\over a^2}+{b^2\sin^2t\over b^2}= \cos^2t+\sin^2t=1.$$
Let's consider the three possibilities for \(P\) and \(Q\) above: Using 0 and \(x\) gives
$$\oint_C 0\,dx+x\,dy=\int_0^{2\pi} a\cos(t)b\cos(t)\,dt= \int_0^{2\pi} ab\cos^2(t)\,dt.$$
Using \(-y\) and 0 gives
$$\oint_C -y\,dx+0\,dy=\int_0^{2\pi} -b\sin(t)(-a\sin(t))\,dt= \int_0^{2\pi} ab\sin^2(t)\,dt.$$
Finally, using \(-y/2\) and \(x/2\) gives
$$\eqalign{\oint_C -{y\over2}\,dx+{x\over2}\,dy&= \int_0^{2\pi} -{b\sin(t)\over2}(-a\sin(t))\,dt +{a\cos(t)\over2}(b\cos(t))\,dt\cr &=\int_0^{2\pi} {ab\sin^2t\over2}+{ab\cos^2t\over2}\,dt=\int_0^{2\pi} {ab\over2}\,dt=\pi ab.\cr}$$
The first two integrals are not particularly difficult, but the third is very easy, though the choice of \(P\) and \(Q\) seems more complicated.
Figure 16.4.1. A "standard" ellipse, \({x^2\over a^2}+{y^2\over b^2}=1\).
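As a sanity check, the three line integrals above can be evaluated numerically for sample axis lengths; the values of \(a\) and \(b\) below are arbitrary choices for the check, not from the text:

```python
import math

# Sample axis lengths; any positive a, b would do.
a, b = 3.0, 2.0
N = 10_000
dt = 2 * math.pi / N

# The three choices of (P, Q) with Q_x - P_y = 1, as integrands P x' + Q y'.
choices = [
    lambda x, y, dxdt, dydt: x * dydt,                          # P=0,    Q=x
    lambda x, y, dxdt, dydt: -y * dxdt,                         # P=-y,   Q=0
    lambda x, y, dxdt, dydt: (-y / 2) * dxdt + (x / 2) * dydt,  # P=-y/2, Q=x/2
]

areas = []
for integrand in choices:
    total = 0.0
    for i in range(N):
        t = (i + 0.5) * dt
        x, y = a * math.cos(t), b * math.sin(t)
        dxdt, dydt = -a * math.sin(t), b * math.cos(t)
        total += integrand(x, y, dxdt, dydt) * dt
    areas.append(total)

print([round(A, 4) for A in areas])            # each approximately pi*a*b
print(round(math.pi * a * b, 4))               # 18.8496
```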
Proof of Green's Theorem
We cannot here prove Green's Theorem in general, but we can do a special case. We seek to prove that
$$\oint_C P\,dx +Q\,dy = \iint\limits_{D} {\partial Q\over\partial x}-{\partial P\over\partial y} \,dA.$$
It is sufficient to show that
$$\oint_C P\,dx=\iint\limits_{D}-{\partial P\over\partial y} \,dA\qquad\hbox{and}
\qquad\oint_C Q\,dy=\iint\limits_{D} {\partial Q\over\partial x}\,dA,$$
which we can do if we can compute the double integral in both possible ways, that is, using \(dA=dy\,dx\) and \(dA=dx\,dy\).
For the first equation, we start with
$$\iint\limits_{D}{\partial P\over\partial y}\,dA= \int_a^b\int_{g_1(x)}^{g_2(x)} {\partial P\over \partial y}\,dy\,dx= \int_a^b P(x,g_2(x))-P(x,g_1(x))\,dx.$$
Here we have simply used the ordinary Fundamental Theorem of Calculus, since for the inner integral we are integrating a derivative with respect to \(y\): an antiderivative of \(\partial P/\partial y\) with respect to \(y\) is simply \(P(x,y)\), and then we substitute \(g_1\) and \(g_2\) for \(y\) and subtract.
Now we need to manipulate \(\oint_C P\,dx\). The boundary of region \(D\) consists of 4 parts, given by the equations \(y=g_1(x)\), \(x=b\), \(y=g_2(x)\), and \(x=a\). On the portions \(x=b\) and \(x=a\), \(dx=0\,dt\), so the corresponding integrals are zero. For the other two portions, we use the parametric forms \(x=t\), \(y=g_1(t)\), \(a\le t\le b\), and \(x=t\), \(y=g_2(t)\), letting \(t\) range from \(b\) to \(a\), since we are integrating counter-clockwise around the boundary. The resulting integrals give us
$$\eqalign{
\oint_C P\,dx = \int_a^b P(t,g_1(t))\,dt+\int_b^a P(t,g_2(t))\,dt &=\int_a^b P(t,g_1(t))\,dt-\int_a^b P(t,g_2(t))\,dt\cr &=\int_a^b P(t,g_1(t))-P(t,g_2(t))\,dt\cr }$$
which is the result of the double integral times \(-1\), as desired.
The equation involving \(Q\) is essentially the same, and left as an exercise.
\( \square \)
|
Given arbitrary sets $A$ and $B$, the notation $A^{B}$ is mostly clear from context to mean $A^{B} = \{f \mid f : B \rightarrow A\}$.
However, when these sets are ordinal or cardinals, especially $\omega$, the notation is not consistent even among subfields of logic.
For example $2^\omega$ in one sense can denote ordinal exponentiation. Hence $2^\omega = \lim_{n < \omega} 2^n = \omega$.
However, you can also consider $2^{\aleph_0}$. By using the cardinal $\aleph_0$, some people may consider it clear that $2^{\aleph_0}$ denotes the cardinal of the set $\{f : \aleph_0 = \omega_0 = \omega \rightarrow 2\}$.
In the above paragraph, $2^{\aleph_0}$ is a cardinal (in ZFC, a special ordinal). However, in descriptive set theory, you may want to consider not the cardinal but Cantor space (or Baire space), i.e. the set of functions from $\omega \rightarrow 2$. When you want the set of functions, as opposed to the ordinal, is there a notation for that?
In recursion theory, I have found that $2^\omega$ or $\omega^\omega$ most frequently refers to Cantor or Baire space, and not the ordinal or cardinal. In Moschovakis's book, he uses ${}^\omega 2$ to denote Cantor space.
Does anyone know of any established custom to distinguish between ordinal exponentiation, the cardinality of the set of functions between ordinals, and the actual set of functions between ordinals? I was thinking perhaps the left and right exponent positions, like ${}^\omega 2$ and $2^\omega$, could be used as the distinction, but from references in recursion theory and Moschovakis's book, it seems that this is not the case.
Thanks for any help you can provide.
|
Can we upgrade the Reflection axiom schema in Ackermann to the following:
Modified Reflection axiom schema: if $\psi(y)$ is a formula that doesn't use the symbol $V$, in which only the symbols $y,x_1,\ldots,x_n$ occur free, and in which $x$ is not free, then all closures of
$$x_1,\ldots,x_n \in V \wedge \forall y (\psi(y) \to y \subset V) \to \exists x \in V \, \forall y (y \in x \leftrightarrow \psi(y))$$
are axioms.
In the traditional exposition of Ackermann set theory, the output of $\psi(y)$ is restricted to elements of $V$, while here this is relaxed to allow subsets of $V$.
With the above formulation, there is no need for a second completeness axiom for $V$, since it would be redundant. We only need the axiom of heredity, that is, the first completeness axiom for $V$, which only amounts to saying that the world $V$ of all sets is transitive, a very natural statement!
Question: Is there a clear problem with the above scheme?
I tend to think that the above schema is a theorem schema of Ackermann set theory.
|
I have a (I hope) simple question! If I had a linear regression,
$Y_t = \alpha + \beta X_t + \epsilon_t$
with $\epsilon_t \sim N(0,\sigma^2)$
and I assume a Cauchy prior for $\sigma$, is it possible to get a conditional conjugate posterior, that I could embed in a Gibbs sampler without having to rely on a MH step? I am aware of the papers from Gelman and Polson, but I do not think they help here... Actually what I am trying to do is a bit more complicated (put Cauchy priors on the variances of latent states in a state space model), but if I know the posterior for the linear regression model, I could adapt it easily.
Thanks!
EDIT:
Ok, so just in case, yes, I meant the half Cauchy prior. I understand what Gelman does in the paper, but I am not sure I can apply the same parameter expansion to a model like this one, which is a simple state space model:
$Y_t = \alpha + \beta X_t + \epsilon_t$
$X_t = X_{t-1} + \eta_t$
I would like to put the half Cauchy prior on the variance of $\eta_t$. Ideally, I would pass a Kalman filter/smoother to get the distribution $p(X^T|Y^T,\beta ,\alpha ,\sigma^\epsilon , \sigma^\eta ) $ . Then, to build the Gibbs sampler, I would like to obtain the distribution $p(\sigma^\eta|X^T,Y^T,\beta ,\alpha ,\sigma^\epsilon ) $ . If I put an inverted gamma prior, given a draw from $p(X^T|...)$, the distribution is also inverted gamma as usual. But with a half Cauchy, I have no clue, to be honest. So, any help is more than welcome!
|
Program Arcade Games With Python And Pygame
Searching is an important and very common operation that computers do all the time. Searches are used every time someone does a ctrl-f for “find”, when a user uses “type-to” to quickly select an item, or when a web server pulls information about a customer to present a customized web page with the customer's order.
There are a lot of ways to search for data. Google has based an entire
multi-billion dollar company on this fact. This chapter introduces the two
simplest methods for searching, the
linear search and
the binary search.
Before discussing how to search we need to learn how to read
data from a file. Reading in a data set from a file is
way more
fun than typing it in by hand each time.
Let's say we need to create a program that will allow us to quickly
find the name of a super-villain. To start with, our program needs a database
of super-villains.
To download this data set, download and save this file:
http://ProgramArcadeGames.com/chapters/16_searching/super_villains.txt These are random names generated by the nine.frenchboys.net website, although last I checked they no longer have a super-villain generator.
Save this file and remember which directory you saved it to.
In the same directory as
super_villains.txt,
create, save, and run the following python program:
file = open("super_villains.txt")

for line in file:
    print(line)
There is only one new command in this code: open. Because it is a built-in function, nothing needs to be imported to use it. Full details on this function can be found in the Python documentation, but at this point the documentation for that command is so technical it might not even be worth looking at.
The above program has two problems with it, but it provides a simple
example of reading in a file.
Line 1 opens a file and gets it ready to be read. The name of the file
is in between the quotes. The new variable
file is an
object that represents the file being read. Line 3 shows how a
normal
for loop may be used to read through a file line by
line. Think of
file as a list of lines, and the new variable
line will be set to each of those lines as the program runs
through the loop.
Try running the program.
One of the problems with it is that the text is printed
double-spaced. The reason for this is that each line pulled out of
the file and stored in the variable
line includes the
carriage return as part of the string.
Remember the carriage return and line feed introduced back in Chapter 1? The carriage return stored at the end of each line, combined with the newline that print adds, is what causes the double-spacing.
The second problem is that the file is opened, but not closed. This problem isn't as obvious as the double-spacing issue, but it is important. The Windows operating system can only open so many files at once. A file can normally only be opened by one program at a time. Leaving a file open will limit what other programs can do with the file and take up system resources. It is necessary to close the file to let Windows know the program is no longer working with that file. In this case it is not too important because once any program is done running, Windows will automatically close any files left open. But since it is a bad habit to program like that, let's update the code:
file = open("super_villains.txt")

for line in file:
    line = line.strip()
    print(line)

file.close()
The listing above works better. It has two new additions. On line 4 is a call to the strip method built into the String class. This function returns a new string without the trailing spaces and carriage returns of the original string. The method does not alter the original string but instead creates a new one. This line of code would not work:
line.strip()
If the programmer wants the original variable to reference the new string, she must assign it to the new returned string as shown on line 4.
The second addition is on line 7. This closes the file so that the operating system doesn't have to go around later and clean up open files after the program ends.
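As an aside, modern Python code usually opens files in a with block, which closes the file automatically when the block ends, even if an error occurs. This sketch uses a stand-in file name (so it runs anywhere, without the book's data file):

```python
# Create a tiny stand-in data file so this example runs on its own.
with open("demo_villains.txt", "w") as file:
    file.write("Morgiana the Shrew\nProfessor Grim\n")

# The "with" block closes the file automatically, so no file.close() is needed.
with open("demo_villains.txt") as file:
    name_list = [line.strip() for line in file]

print(name_list)  # ['Morgiana the Shrew', 'Professor Grim']
```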
It is useful to read in the contents of a file to an array so that the program can do processing on it later. This can easily be done in python with the following code:
# Read in a file from disk and put it in an array.
file = open("super_villains.txt")
name_list = []

for line in file:
    line = line.strip()
    name_list.append(line)

file.close()
This combines the new pattern of how to read a file, along with the previously learned pattern of how to create an empty array and append to it as new data comes in, which was shown back in Chapter 7. To verify the file was read into the array correctly a programmer could print the length of the array:
print("There were", len(name_list), "names in the file.")
Or the programmer could print the entire contents of the array:
for name in name_list:
    print(name)
Go ahead and make sure you can read in the file before continuing on to the different searches.
If a program has a set of data in an array, how can it go
about finding where a specific element is? This can be done one of two
ways. The first method is to use a
linear search. This
starts at the first element, and keeps comparing elements until
it finds the desired element (or runs out of elements.)
# --- Linear search
key = "Morgiana the Shrew"

i = 0
while i < len(name_list) and name_list[i] != key:
    i += 1

if i < len(name_list):
    print("The name is at position", i)
else:
    print("The name was not in the list.")
The linear search is rather simple. Line 4 sets up an increment variable
that will keep track of exactly where in the list the program needs
to check next. The first element that needs to be checked is zero, so
i is set to zero.
The next line is a bit more complex. The computer needs to keep looping until one of two things happens. It finds the element, or it runs out of elements. The first comparison sees if the current element we are checking is less than the length of the list. If so, we can keep looping. The second comparison sees if the current element in the name list is equal to the name we are searching for.
This check to see if the program has run out of elements
must occur first. Otherwise the program will check against a non-existent
element which will cause an error.
Line 6 simply moves to the next element if the conditions to keep searching are met in line 5.
At the end of the loop, the program checks to see if the end of the
list was reached on line 8. Remember, a list of n elements is numbered
0 to n-1. Therefore if
i is equal to the length of the
list, the end has been reached. If it is less, we found the element.
Variations on the linear search can be used to create several common algorithms. For example, say we had a list of aliens. We might want to check this group of aliens to see if one of the aliens is green. Or are all the aliens green? Which aliens are green?
To begin with, we'd need to define our alien:
class Alien:
    """ Class that defines an alien """

    def __init__(self, color, weight):
        """ Constructor. Set color and weight. """
        self.color = color
        self.weight = weight
Then we'd need to create a function to check and see if it has the property that we are looking for. In this case, is it green? We'll assume the color is a text string, and we'll convert it to upper case to eliminate case-sensitivity.
def has_property(my_alien):
    """ Check to see if an item has a property.
        In this case, is the alien green? """
    if my_alien.color.upper() == "GREEN":
        return True
    else:
        return False
Is at least one alien green? We can check. The basic algorithm behind this check:
def check_if_one_item_has_property_v1(my_list):
    """ Return true if at least one item has a property. """
    i = 0
    while i < len(my_list) and not has_property(my_list[i]):
        i += 1
    if i < len(my_list):
        # Found an item with the property
        return True
    else:
        # There is no item with the property
        return False
This could also be done with a
for loop. In this case, the loop
will exit early by using a
return once the item has been found. The code is
shorter, but not every programmer would prefer it. Some programmers feel that
loops should not be prematurely ended with a
return or
break statement.
It all goes to personal preference, or the personal preference of the person that is
footing the bill.
def check_if_one_item_has_property_v2(my_list):
    """ Return true if at least one item has a property.
        Works the same as v1, but less code. """
    for item in my_list:
        if has_property(item):
            return True
    return False
Are all aliens green? This code is very similar to the prior example. Spot the difference and see if you can figure out the reason behind the change.
def check_if_all_items_have_property(my_list):
    """ Return true if ALL items have a property. """
    for item in my_list:
        if not has_property(item):
            return False
    return True
What if you wanted a list of aliens that are green? This is a combination of our prior code, and the code to append items to a list that we learned about back in Chapter 7.
def get_matching_items(my_list):
    """ Build a brand new list that holds all the items
        that match our property. """
    matching_list = []
    for item in my_list:
        if has_property(item):
            matching_list.append(item)
    return matching_list
How would you run all these in a test? The code above can be combined with this code to run:
alien_list = []
alien_list.append(Alien("Green", 42))
alien_list.append(Alien("Red", 40))
alien_list.append(Alien("Blue", 41))
alien_list.append(Alien("Purple", 40))

result = check_if_one_item_has_property_v1(alien_list)
print("Result of test check_if_one_item_has_property_v1:", result)

result = check_if_one_item_has_property_v2(alien_list)
print("Result of test check_if_one_item_has_property_v2:", result)

result = check_if_all_items_have_property(alien_list)
print("Result of test check_if_all_items_have_property:", result)

result = get_matching_items(alien_list)
print("Number of items returned from test get_matching_items:", len(result))
For a full working example see:
programarcadegames.com/python_examples/show_file.php?file=property_check_examples.py
These common algorithms can be used as part of a solution to a larger problem, such as find all the addresses in a list of customers that aren't valid.
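As an aside, Python's built-in any() and all() functions and list comprehensions express these three patterns very compactly; the loops above show what such built-ins do internally. Here is a self-contained sketch, reusing simplified versions of the chapter's Alien class and has_property function:

```python
class Alien:
    """ Simplified version of the chapter's Alien class. """
    def __init__(self, color, weight):
        self.color = color
        self.weight = weight

def has_property(my_alien):
    """ Is the alien green? (Same check as in the chapter.) """
    return my_alien.color.upper() == "GREEN"

alien_list = [Alien("Green", 42), Alien("Red", 40), Alien("Blue", 41)]

# any() stops at the first match, like the early-return loop in v2.
print(any(has_property(a) for a in alien_list))    # True

# all() stops at the first failure.
print(all(has_property(a) for a in alien_list))    # False

# A list comprehension builds the matching list in one line.
green_aliens = [a for a in alien_list if has_property(a)]
print(len(green_aliens))                           # 1
```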
A faster way to search a list is possible with the
binary search.
The process of a binary search can be described by using the classic number
guessing game “guess a number between 1 and 100” as an example. To
make it easier to understand the process, let's modify the game to be
“guess a number between 1 and 128.” The number range is inclusive, meaning
both 1 and 128 are possibilities.
If a person were to use the linear search as a method to guess the secret number, the game would be rather long and boring.
Guess a number 1 to 128: 1
Too low.
Guess a number 1 to 128: 2
Too low.
Guess a number 1 to 128: 3
Too low.
....
Guess a number 1 to 128: 93
Too low.
Guess a number 1 to 128: 94
Correct!
Most people will use a binary search to find the number. Here is an example of playing the game using a binary search:
Guess a number 1 to 128: 64
Too low.
Guess a number 1 to 128: 96
Too high.
Guess a number 1 to 128: 80
Too low.
Guess a number 1 to 128: 88
Too low.
Guess a number 1 to 128: 92
Too low.
Guess a number 1 to 128: 94
Correct!
Each time through the rounds of the number guessing game, the guesser is able to eliminate one half of the problem space by getting a “high” or “low” as a result of the guess.
In a binary search, it is necessary to track an upper and a lower bound of the list that the answer can be in. The computer or number-guessing human picks the midpoint of those elements. Revisiting the example:
A lower bound of 1, upper bound of 128, mid point of $\dfrac{1+128}{2} = 64.5$.
Guess a number 1 to 128: 64 Too low.
A lower bound of 65, upper bound of 128, mid point of $\dfrac{65+128}{2} = 96.5$.
Guess a number 1 to 128: 96 Too high.
A lower bound of 65, upper bound of 95, mid point of $\dfrac{65+95}{2} = 80$.
Guess a number 1 to 128: 80 Too low.
A lower bound of 81, upper bound of 95, mid point of $\dfrac{81+95}{2} = 88$.
Guess a number 1 to 128: 88 Too low.
A lower bound of 89, upper bound of 95, mid point of $\dfrac{89+95}{2} = 92$.
Guess a number 1 to 128: 92 Too low.
A lower bound of 93, upper bound of 95, mid point of $\dfrac{93+95}{2} = 94$.
Guess a number 1 to 128: 94 Correct!
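The guessing strategy above can be simulated directly. This short sketch (not from the book) counts the guesses needed for a given secret number, always guessing the floored midpoint:

```python
def guess_count(secret, low=1, high=128):
    """ Count how many binary-search guesses it takes to find secret. """
    guesses = 0
    while True:
        guesses += 1
        mid = (low + high) // 2   # floored midpoint, e.g. 64 for (1+128)/2 = 64.5
        if mid < secret:
            low = mid + 1
        elif mid > secret:
            high = mid - 1
        else:
            return guesses

print(guess_count(94))   # 6 guesses, matching the game transcript above
```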
A binary search requires significantly fewer guesses. Worst case, it can guess a number between 1 and 128 in 7 guesses. One more guess raises the limit to 256. 9 guesses can get a number between 1 and 512. With just 32 guesses, a person can get a number between 1 and 4.2 billion.
To figure out how large the list can be given a certain number of guesses, the formula works out to $n=2^{g}$, where $n$ is the size of the list and $g$ is the number of guesses. For example:
$2^7=128$ (7 guesses can handle 128 different numbers)
$2^8=256$
$2^9=512$
$2^{32}=4,294,967,296$
If you have the problem size, we can figure out the number of guesses using the log function, specifically log base 2. (If you don't specify a base, most people will assume you mean the natural log with a base of $e \approx 2.71828$, which is not what we want.) For example, using log base 2 to find how many guesses: $\log_2 128 = 7$ and $\log_2 65{,}536 = 16$.
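Python's math module can do this calculation directly; math.log2 combined with math.ceil gives the guess count from the formula for any list size:

```python
import math

# Guesses needed for a list of size n, per the formula: log base 2, rounded up.
for n in (128, 256, 512, 65_536, 4_294_967_296):
    print(n, "->", math.ceil(math.log2(n)), "guesses")
```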
Enough math! Where is the code? The code to do a binary search is more complex than a linear search:
# --- Binary search
key = "Morgiana the Shrew"
lower_bound = 0
upper_bound = len(name_list)-1
found = False

# Loop until we find the item, or our upper/lower bounds meet
while lower_bound <= upper_bound and not found:

    # Find the middle position
    middle_pos = (lower_bound + upper_bound) // 2

    # Figure out if we:
    # move up the lower bound, or
    # move down the upper bound, or
    # we found what we are looking for
    if name_list[middle_pos] < key:
        lower_bound = middle_pos + 1
    elif name_list[middle_pos] > key:
        upper_bound = middle_pos - 1
    else:
        found = True

if found:
    print("The name is at position", middle_pos)
else:
    print("The name was not in the list.")
Since lists start at element zero, line 3 sets the lower bound to zero. Line 4 sets the upper bound to the length of the list minus one. So for a list of 100 elements the lower bound will be 0 and the upper bound 99.
The Boolean variable on line 5 will be used to let the while loop know that the element has been found.
Line 8 checks to see if the element has been found or if we've run out of elements. If we've run out of elements, the lower bound will end up greater than the upper bound.
Line 11 finds the middle position. It is possible to get a middle position of something like 64.5. It isn't possible to look up position 64.5. (Although J.K. Rowling was clever enough in coming up with Platform $9\frac{3}{4}$, that doesn't work here.)
The best way of handling this is to use the
// operator
first introduced way back in Chapter 5.
This is similar to the
/ operator, but will only return integer results.
For example,
11 // 2 would give 5 as an answer, rather than 5.5.
Starting at line 17 the program checks to see if the guess is high, low, or
correct. If the guess is low, the lower bound is moved up to just past the guess.
If the guess is too high, the upper bound is moved just below the guess. If the
answer has been found,
found is set to
True ending the search.
With a list of 100 elements, a person can reasonably guess that on average with the linear search, a program will have to check 50 of them before finding the element. With the binary search, on average you'll still need to do about seven guesses. In an advanced algorithms course you can find the exact formula. For this course, just assume average and worst cases are the same.
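For real programs, Python's standard library already provides a tested binary search in the bisect module. bisect_left returns the position where the key would appear in the sorted list, so membership can be checked like this (a sketch, not the book's code):

```python
import bisect

# The list must be sorted for binary search to work.
name_list = sorted(["Alice", "Bob", "Morgiana the Shrew", "Zed"])
key = "Morgiana the Shrew"

# bisect_left returns where key would be inserted to keep the list sorted.
pos = bisect.bisect_left(name_list, key)
if pos < len(name_list) and name_list[pos] == key:
    print("The name is at position", pos)
else:
    print("The name was not in the list.")
```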
English version by Paul Vincent Craven
Spanish version by Antonio Rodríguez Verdugo
Russian version by Vladimir Slav
Turkish version by Güray Yildirim
Portuguese version by Armando Marques Sobrinho and Tati Carvalho
Dutch version by Frank Waegeman
Hungarian version by Nagy Attila
Finnish version by Jouko Järvenpää
French version by Franco Rossi
Korean version by Kim Zeung-Il
Chinese version by Kai Lin
|
num2vec: Numerical Embeddings from Deep RNNs

Introduction
Encoding numerical inputs for neural networks is difficult because the representation space is very large and there is no easy way to embed numbers into a smaller space without losing information. Some of the ways to currently handle this are:

Scale inputs from minimum and maximum values to [-1, 1]
One hot for each number
One hot for different bins (e.g. [0-0], [1-2], [3-7], [8-19], [20, infty))
In small integer number ranges, these methods can work well, but they don’t scale well for wider ranges. In the input scaling approach, precision is lost making it difficult to distinguish between two numbers close in value. For the binning methods, information about the mathematical properties of the numbers such as adjacency and scaling is lost.
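For concreteness, here is a tiny sketch (not from the post) of the first and third encodings; it shows how scaling collapses nearby values over wide ranges and how binning destroys adjacency:

```python
def minmax_scale(x, lo, hi):
    """ Map [lo, hi] onto [-1, 1]; over wide ranges, nearby values collapse. """
    return 2 * (x - lo) / (hi - lo) - 1

def one_hot_bin(x, edges=(0, 1, 3, 8, 20)):
    """ One-hot over the bins [0-0], [1-2], [3-7], [8-19], [20, infty). """
    idx = sum(x >= e for e in edges) - 1
    vec = [0] * len(edges)
    vec[idx] = 1
    return vec

# Two numbers only 1 apart become nearly identical after scaling a wide range...
print(minmax_scale(500_000, 0, 1_000_000))   # 0.0
print(minmax_scale(500_001, 0, 1_000_000))   # a hair above 0.0

# ...and binning loses adjacency entirely: 7 and 8 land in different bins.
print(one_hot_bin(7), one_hot_bin(8))        # [0, 0, 1, 0, 0] [0, 0, 0, 1, 0]
```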
The desiderata for our embeddings of numbers to vectors are as follows:

able to handle numbers of arbitrary length
captures mathematical relationships between numbers (addition, multiplication, etc.)
able to model sequences of numbers

In this blog post, we will explore a novel approach for embedding numbers as vectors that satisfies these desiderata.
Approach
My approach for this problem is inspired by word2vec, but unlike words, which follow the distributional hypothesis, numbers follow the rules of arithmetic. Instead of finding a “corpus” of numbers to train on, we can generate random arithmetic sequences and have our network “learn” the rules of arithmetic from the generated sequences and, as a side effect, be able to encode numbers as vectors and sequences as vectors.
Problem Statement
Given a sequence of length n integers \(x_1, x_2 \ldots x_n\), predict the next number in the sequence \(x_{n+1}\).
Architecture
The architecture of the system consists of three parts: the encoder, the decoder and the nested RNN.
The encoder is an RNN that takes a number represented as a sequence of digits and encodes it into a vector that represents an embedded number.
The nested RNN takes the embedded numbers and previous state to output another embedded vector that represents the next number.
The decoder then takes the embedded number and unravels it through the decoder RNN to output the digits of the next predicted number.
Formally:
Let \(X\) represent a sequence of natural numbers where \(X_{i,j}\) represents the j-th digit of the i-th number of the sequence. We also append an <eos> “digit” to the end of each number to signal the end of the number. For the sequence X = 123, 456, 789, we have \(X_{1,2} = 2, X_{3,3} = 9, X_{3,4} = <eos>\).
Let \(l_i\) be the number of digits in the i-th number of the sequence (including the <eos> digit). Let \(E\) be an embedding matrix for each digit.
Let \(\vec{u}_i\) be an embedding of the i-th number in a sequence. It is computed as the final state of the encoder. Let \(\vec{v}_i\) be an embedding of the predicted (i+1)-th number in a sequence. It is computed from the output of the nested RNN and used as the initial state of the decoder.
Let \(R^e, R^d, R^n\) be the functions that gives the next state for the encoder, decoder and nested RNN respectively. Let \(O^d, O^n\) be the functions that gives the output of the current state for the decoder and nested RNN respectively.
Let \(\vec{s}^e_{i,j}\) be the state vector for \(R^e\) for the j-th timestep of the i-th number of the sequence. Let \(\vec{s}^d_{i,j}\) be the state vector for \(R^d\) for the j-th timestep of the i-th number of the sequence. Let \(\vec{s}^n_i\) represent the state vector of \(R^n\) for the i-th timestep.
Let \(z_{i,j}\) be the output of \(R^d\) at the j-th timestep of the i-th number of the sequence.
Let \(\hat{y}_{i,j}\) represent the distribution of digits for the prediction of the j-th digit of the (i+1)-th number of the sequence.\(\displaystyle{\begin{eqnarray}\vec{s}^e_{i,j} &=& R^e(E[X_{i,j}], \vec{s}^e_{i, j-1})\\\vec{u}_i &=& \vec{s}^e_{i,l_i}\\ \vec{s}^n_i &=& R^n(\vec{u}_i, \vec{s}^n_{i-1})\\\vec{v_i} &=& O^n(\vec{s}^n_i)\\ \vec{s}^d_{i, j} &=& R^d(\vec{z}_{i,j-1}, \vec{s}^d_{i, j-1})\\ \vec{z}_{i,j} &=& O^d(\vec{s}^d_{i,j})\\ \hat{y}_{i,j} &=& \text{softmax}(\text{MLP}(\vec{z}_{i,j}))\\ p(X_{i+1,j}=k \mid X_1, \ldots, X_i, X_{i+1, 1}, \ldots, X_{i+1, j-1}) &=& \hat{y}_{i,j}[k]\end{eqnarray}}\)
We use a cross-entropy loss function where \(y_{i,j}[t]\) represents the correct digit class for \(y_{i,j}\):\(\displaystyle{\begin{eqnarray}L(y, \hat{y}) &=& \sum_i \sum_j -\log \hat{y}_{i,j}[t]\end{eqnarray} }\)
Since I also find it difficult to intuitively understand what these sets of equations mean, here is a clearer diagram of the nested network:
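To make the wiring concrete, here is a minimal numpy sketch of the encoder and nested RNN using plain Elman cells. The dimensions, random initialization, and identity output function are illustrative assumptions, not the post's actual TensorFlow model:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 16   # digit-embedding and hidden sizes (assumed values)

def make_cell(in_dim, hid_dim):
    """ Parameters for a simple Elman-style RNN cell. """
    return {"W": rng.normal(0, 0.1, (hid_dim, in_dim)),
            "U": rng.normal(0, 0.1, (hid_dim, hid_dim)),
            "b": np.zeros(hid_dim)}

def step(cell, x, h):
    return np.tanh(cell["W"] @ x + cell["U"] @ h + cell["b"])

E = rng.normal(0, 0.1, (11, D))   # one embedding row per digit 0-9, plus <eos> = 10
enc, nested = make_cell(D, H), make_cell(H, H)

def encode(digits):
    """ u_i: final encoder state after reading one number's digits plus <eos>. """
    h = np.zeros(H)
    for d in digits + [10]:
        h = step(enc, E[d], h)
    return h

# The nested RNN consumes one number-embedding u_i per timestep.
s = np.zeros(H)
for number in ([1, 2, 3], [4, 5, 6], [7, 8, 9]):
    s = step(nested, encode(number), s)

v = s   # stands in for v_i = O^n(s^n_i); a decoder RNN would unroll digits from it
print(v.shape)   # (16,)
```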
Training
The whole network is trained end-to-end by generating random mathematical sequences and predicting the next number in each sequence. The generated sequences contain addition, subtraction, multiplication, division and exponents, and also include repeating series of numbers.
After 10,000 epochs of 500 sequences each, the network converges and is reasonably able to predict the next number in a sequence. On my MacBook Pro with an Nvidia GT750M, the network, implemented in TensorFlow, took 1h to train.
Results
Taking a look at some sample sequences, we can see that the network is reasonably able to predict the next number.
Seq       [43424, 43425, 43426, 43427]
Predicted [43423, 43426, 43427, 43428]

Seq       [3, 4, 3, 4, 3, 4, 3, 4, 3, 4]
Predicted [9, 5, 4, 3, 4, 3, 4, 3, 4, 3]

Seq       [2, 4, 8, 16, 32, 64, 128]
Predicted [4, 8, 16, 32, 64, 128, 256]

Seq       [10, 9, 8, 7, 6, 5, 4, 3]
Predicted [20, 10, 10, 60, 4, 4, 3, 2]
With the trained model, we can compute embeddings of individual numbers and visualize the embeddings with the t-sne algorithm.
We can see an interesting pattern when we plot the first 100 numbers (color coded by last digit). Another interesting pattern is that, within clusters, the numbers also rotate clockwise or counterclockwise.
We can also trace the path of the embeddings sequentially, we can see that there is some structure to the positioning of the numbers.
If we look at the visualizations of the embeddings for numbers 1-1000, we can see that the clusters still exist for the last digit (each color corresponds to numbers with the same last digit).
We can also see the same structural lines for the sequential path for numbers 1 to 1000:
The inner linear pattern is formed from the number 1-99 and the outer linear pattern is formed from the numbers 100-1000.
We can also look at the embeddings of each sequence by taking the vector \(\vec{s}^n_k\) after feeding in k=8 numbers of a sequence into the model. We can visualize the sequence embeddings with t-sne using 300 sequences:
From the visualization, we can see that similar sequences are clustered together. For example, repeating patterns, quadratic sequences, linear sequences and large number sequences are grouped together. We can see that the network is able to extract some high level structure for different types of sequences.
Using this, we can see that if we encounter a sequence we can’t determine a pattern for, we can find the nearest sequence embedding to approximate the pattern type.
Code: Github
The model is written in Python using TensorFlow 1.1. The code is not very well written because I was forced to use an outdated version of TF, with underdeveloped RNN support, for OS X GPU compatibility reasons. The code is a proof of concept and came from stitching together outdated tutorials.
Further improvements:

bidirectional RNN
stack more layers
attention mechanism
beam search
negative numbers
teacher forcing
|