Is there a series of functions $\sum (-1)^n u_n(x)$ that converges by the Leibniz rule (alternating series test) in some interval but is not
uniformly convergent?
In particular, to apply the Leibniz rule the terms must be
positive ($u_n(x) \geq 0$) and decreasing ($u_{n+1}(x)\leq u_n(x)$), with $\lim_{n \to \infty} u_n(x)=0$. Edit: I refer to the following definition: a series of functions is uniformly convergent on a subset $S$ if
$$\lim_{N \to \infty} \sup_{x \in S} \left|\sum_{n \geq 0} u_n(x) -\sum_{n=0}^{N} u_n(x)\right| =0$$
|
Definitions
correlation coefficient $= r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2\sum_{i=1}^{n}(y_i - \bar{y})^2}}$
My Question
What is the motivation for this formula? It's supposed to measure linear relationships on bivariate data, but I don't understand why it would do that as defined. For example, Riemann integrals are said to measure area under a curve, and that makes sense because $\sum f(x_i)\Delta x$ is adding areas of rectangles under the curve $f(x)$, approximating its area more and more as we take more samples. Does such an intuition exist for the correlation coefficient? What is it? My background in statistics is nothing but a bit of discrete probability. I know histograms, data plots, mean, median, range, variance, standard deviation, box plots and scatter plots at this point (from reading the first week's material in an introductory statistics class).
My Research
All of the "Questions that may already have your answer" seemed to either be asking about what the formula said mathematically or asked questions that were more advanced than my knowledge.
|
In the previous section, we discussed the relationship between the bulk mass of a substance and the number of atoms or molecules it contains (moles). Given the chemical formula of the substance, we were able to determine the amount of the substance (moles) from its mass, and vice versa. But what if the chemical formula of a substance is unknown? In this section, we will explore how to apply these very same principles in order to derive the chemical formulas of unknown substances from experimental mass measurements.
Derivation of Molecular Formulas
Recall that empirical formulas are symbols representing the
relative numbers of a compound’s elements. Determining the absolute numbers of atoms that compose a single molecule of a covalent compound requires knowledge of both its empirical formula and its molecular mass or molar mass. These quantities may be determined experimentally by various measurement techniques. Molecular mass, for example, is often derived from the mass spectrum of the compound (see discussion of this technique in the previous chapter on atoms and molecules). Molar mass can be measured by a number of experimental methods, many of which will be introduced in later chapters of this text.
Molecular formulas are derived by comparing the compound’s molecular or molar mass to its empirical formula mass. As the name suggests, an empirical formula mass is the sum of the average atomic masses of all the atoms represented in an empirical formula. If we know the molecular (or molar) mass of the substance, we can divide this by the empirical formula mass in order to identify the number of empirical formula units per molecule, which we designate as $n$:
\[\mathrm{\dfrac{molecular\: or\: molar\: mass\left(amu\: or\:\dfrac{g}{mol}\right)}{empirical\: formula\: mass\left(amu\: or\:\dfrac{g}{mol}\right)}= \mathit n\: formula\: units/molecule}\]
The molecular formula is then obtained by multiplying each subscript in the empirical formula by $n$, as shown by the generic empirical formula $\mathrm{A_xB_y}$:
\[\mathrm{(A_xB_y)_n=A_{nx}B_{ny}}\]
For example, consider a covalent compound whose empirical formula is determined to be \(\ce{CH2O}\). The empirical formula mass for this compound is approximately 30 amu (the sum of 12 amu for one C atom, 2 amu for two H atoms, and 16 amu for one O atom). If the compound’s molecular mass is determined to be 180 amu, this indicates that molecules of this compound contain six times the number of atoms represented in the empirical formula:
\[\mathrm{\dfrac{180\:amu/molecule}{30\:\dfrac{amu}{formula\: unit}}=6\:formula\: units/molecule}\]
Molecules of this compound are then represented by molecular formulas whose subscripts are six times greater than those in the empirical formula:
\[\ce{(CH2O)6}=\ce{C6H12O6}\]
Note that this same approach may be used when the molar mass (g/mol) instead of the molecular mass (amu) is used. In this case, we are merely considering one mole of empirical formula units and molecules, as opposed to single units and molecules.
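For readers who like to check this arithmetic with a short script, here is a minimal sketch of the formula-unit calculation (the helper names and rounded atomic masses are our own, not part of the text):

```python
# A minimal sketch of n = (molecular or molar mass) / (empirical formula mass),
# followed by scaling the empirical-formula subscripts.  Atomic masses are approximate.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def empirical_formula_mass(formula):
    """formula: element -> subscript, e.g. {"C": 1, "H": 2, "O": 1} for CH2O."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

def molecular_formula(empirical, molar_mass):
    n = round(molar_mass / empirical_formula_mass(empirical))
    return {el: count * n for el, count in empirical.items()}

# CH2O with a molecular mass of 180 amu gives C6H12O6, as in the example above.
print(molecular_formula({"C": 1, "H": 2, "O": 1}, 180.0))
```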
Example \(\PageIndex{5}\): Determination of the Molecular Formula for Nicotine
Nicotine, an alkaloid in the nightshade family of plants that is mainly responsible for the addictive nature of cigarettes, contains 74.02% C, 8.710% H, and 17.27% N. If 40.57 g of nicotine contains 0.2500 mol nicotine, what is the molecular formula?
Solution
Determining the molecular formula from the provided data will require comparison of the compound’s empirical formula mass to its molar mass. As the first step, use the percent composition to derive the compound’s empirical formula. A convenient 100-g sample of nicotine yields the following molar amounts of its elements:
\[\begin{alignat}{2}
&\mathrm{(74.02\:g\: C)\left(\dfrac{1\:mol\: C}{12.01\:g\: C}\right)}&&= \:\mathrm{6.163\:mol\: C}\\ &\mathrm{(8.710\:g\: H)\left(\dfrac{1\:mol\: H}{1.01\:g\: H}\right)}&&= \:\mathrm{8.624\:mol\: H}\\ &\mathrm{(17.27\:g\: N)\left(\dfrac{1\:mol\: N}{14.01\:g\: N}\right)}&&= \:\mathrm{1.233\:mol\: N} \end{alignat}\]
Next, we calculate the molar ratios of these elements by dividing each molar amount by the smallest (1.233 mol N): this gives approximately 5.00 for C, 6.99 for H, and 1.00 for N.
The C-to-N and H-to-N molar ratios are adequately close to whole numbers, and so the empirical formula is \(\ce{C5H7N}\). The empirical formula mass for this compound is therefore 81.13 amu/formula unit, or 81.13 g/mol formula unit.
We calculate the molar mass for nicotine from the given mass and molar amount of compound:
\[\mathrm{\dfrac{40.57\:g\: nicotine}{0.2500\:mol\: nicotine}=\dfrac{162.3\:g}{mol}} \nonumber \]
Comparing the molar mass and empirical formula mass indicates that each nicotine molecule contains two formula units:
\[\mathrm{\dfrac{162.3\:g/mol}{81.13\:\dfrac{g}{formula\: unit}}=2\:formula\: units/molecule} \nonumber \]
Thus, we can derive the molecular formula for nicotine from the empirical formula by multiplying each subscript by two:
\[\ce{(C5H7N)2}=\ce{C10H14N2} \nonumber \]
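The same percent-composition workflow can be scripted; the sketch below (our own helper, with approximate atomic masses) reproduces the nicotine numbers:

```python
# Percent composition -> moles in a 100-g sample -> subscripts from the smallest ratio.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def empirical_formula(percent_by_mass):
    moles = {el: pct / ATOMIC_MASS[el] for el, pct in percent_by_mass.items()}
    smallest = min(moles.values())
    # Rounding suffices here; in general a small multiplier may be needed to clear
    # fractional ratios such as 1.25 or 1.5.
    return {el: round(m / smallest) for el, m in moles.items()}

print(empirical_formula({"C": 74.02, "H": 8.710, "N": 17.27}))  # -> C5H7N
print(40.57 / 0.2500)  # molar mass of nicotine, about 162.3 g/mol
```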
Exercise \(\PageIndex{5}\)
What is the molecular formula of a compound with a percent composition of 49.47% C, 5.201% H, 28.84% N, and 16.48% O, and a molecular mass of 194.2 amu?
Answer
\(\ce{C8H10N4O2}\)
Summary
The chemical identity of a substance is defined by the types and relative numbers of atoms composing its fundamental entities (molecules in the case of covalent compounds, ions in the case of ionic compounds). A compound’s percent composition provides the mass percentage of each element in the compound, and it is often experimentally determined and used to derive the compound’s empirical formula. The empirical formula mass of a covalent compound may be compared to the compound’s molecular or molar mass to derive a molecular formula.
Combustion Analysis
When a compound containing carbon and hydrogen is subject to combustion with oxygen in a special combustion apparatus, all the carbon is converted to \(\ce{CO2}\) and the hydrogen to \(\ce{H2O}\) (Figure \(\PageIndex{2}\)). The amount of carbon produced can be determined by measuring the amount of \(\ce{CO2}\) produced. This is trapped by the sodium hydroxide, and thus we can monitor the mass of \(\ce{CO2}\) produced by determining the increase in mass of the \(\ce{CO2}\) trap. Likewise, we can determine the amount of H produced by the amount of \(\ce{H2O}\) trapped by the magnesium perchlorate.
Figure \(\PageIndex{2}\): Combustion analysis apparatus
One of the most common ways to determine the elemental composition of an unknown hydrocarbon is an analytical procedure called combustion analysis. A small, carefully weighed sample of an unknown compound that may contain carbon, hydrogen, nitrogen, and/or sulfur is burned in an oxygen atmosphere (other elements, such as metals, can be determined by other methods), and the quantities of the resulting gaseous products (\(\ce{CO2}\), \(\ce{H2O}\), \(\ce{N2}\), and \(\ce{SO2}\), respectively) are determined by one of several possible methods. One procedure used in combustion analysis is outlined schematically in Figure \(\PageIndex{3}\), and a typical combustion analysis is illustrated in Examples \(\PageIndex{3}\) and \(\PageIndex{4}\).
Example \(\PageIndex{3}\): Combustion of Isopropyl Alcohol
What is the empirical formula for isopropyl alcohol (which contains only C, H and O) if the combustion of a 0.255-gram isopropyl alcohol sample produces 0.561 grams of \(\ce{CO2}\) and 0.306 grams of \(\ce{H2O}\)?
Solution
From this information, quantify the amounts of C and H in the sample.
\[ (0.561\; \cancel{g\; CO_2}) \left( \dfrac{1 \;mol\; CO_2}{44.0\; \cancel{g\;CO_2}}\right)=0.0128\; mol \; CO_2 \]
Since one mole of \(\ce{CO2}\) is made up of one mole of C and two moles of O, if we have 0.0128 moles of \(\ce{CO2}\) in our sample, then we know we have 0.0128 moles of C in the sample. How many grams of C is this?
\[ (0.0128 \; \cancel{mol\; C}) \left( \dfrac{12.011\; g \; C}{1\; \cancel{mol\;C}}\right)=0.154\; g \; C \]
How about the hydrogen?
\[ (0.306 \; \cancel{g\; H_2O}) \left( \dfrac{1\; mol \; H_2O}{18.0\; \cancel{g \;H_2O}}\right)=0.017\; mol \; H_2O \]
Since one mole of \(\ce{H2O}\) is made up of two moles of hydrogen and one mole of oxygen, if we have 0.017 moles of \(\ce{H2O}\), then we have 2 × (0.017) = 0.034 moles of hydrogen. Since hydrogen is about 1 gram/mole, we must have 0.034 grams of hydrogen in our original sample.
When we add our carbon and hydrogen together we get:
0.154 grams (C) + 0.034 grams (H) =
0.188 grams
But we know we combusted
0.255 grams of isopropyl alcohol. The 'missing' mass must be from the oxygen atoms in the isopropyl alcohol:
0.255 grams - 0.188 grams = 0.067 grams oxygen
This much oxygen is how many moles?
\[ (0.067 \; \cancel{g\; O}) \left( \dfrac{1\; mol \; O}{15.994\; \cancel{g \;O}}\right)=0.0042\; mol \; O \]
Overall therefore, we have:
0.0128 moles Carbon 0.0340 moles Hydrogen 0.0042 moles Oxygen
Divide by the smallest molar amount to normalize:
C = 3.05 atoms H = 8.1 atoms O = 1 atom
Within experimental error, the most likely empirical formula for isopropyl alcohol would be \(C_3H_8O\)
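The bookkeeping in this example is easy to automate; the following rough sketch (a hypothetical helper, not part of the text) carries out the same mass balance:

```python
# CO2 and H2O masses -> moles of C and H -> the "missing" mass assigned to oxygen.
M_CO2, M_H2O, M_C, M_H, M_O = 44.01, 18.02, 12.011, 1.008, 15.999

def cho_from_combustion(sample_g, co2_g, h2o_g):
    mol_C = co2_g / M_CO2                          # 1 mol C per mol CO2
    mol_H = 2 * (h2o_g / M_H2O)                    # 2 mol H per mol H2O
    mol_O = (sample_g - mol_C * M_C - mol_H * M_H) / M_O
    smallest = min(mol_C, mol_H, mol_O)
    return {el: mol / smallest for el, mol in
            {"C": mol_C, "H": mol_H, "O": mol_O}.items()}

# Isopropyl alcohol data from the example: roughly C3H8O within rounding error.
print(cho_from_combustion(0.255, 0.561, 0.306))
```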
Example \(\PageIndex{4}\): Combustion of Naphthalene
Naphthalene, the active ingredient in one variety of mothballs, is an organic compound that contains carbon and hydrogen only. Complete combustion of a 20.10 mg sample of naphthalene in oxygen yielded 69.00 mg of \(\ce{CO2}\) and 11.30 mg of \(\ce{H2O}\). Determine the empirical formula of naphthalene.
Given: mass of sample and mass of combustion products
Asked for: empirical formula
Strategy: Use the masses and molar masses of the combustion products, \(\ce{CO2}\) and \(\ce{H2O}\), to calculate the masses of carbon and hydrogen present in the original sample of naphthalene. Use those masses and the molar masses of the elements to calculate the empirical formula of naphthalene.
Solution:
A Upon combustion, 1 mol of \(\ce{CO2}\) is produced for each mole of carbon atoms in the original sample. Similarly, 1 mol of \(\ce{H2O}\) is produced for every 2 mol of hydrogen atoms present in the sample. The masses of carbon and hydrogen in the original sample can be calculated from these ratios, the masses of \(\ce{CO2}\) and \(\ce{H2O}\), and their molar masses. Because the units of molar mass are grams per mole, we must first convert the masses from milligrams to grams:
\[ mass \, of \, C = 69.00 \, mg \, CO_2 \times {1 \, g \over 1000 \, mg } \times {1 \, mol \, CO_2 \over 44.010 \, g \, CO_2} \times {1 \, mol C \over 1 \, mol \, CO_2 } \times {12.011 \,g \over 1 \, mol \, C} \]
\[ = 1.883 \times 10^{-2} \, g \, C \]
\[ mass \, of \, H = 11.30 \, mg \, H_2O \times {1 \, g \over 1000 \, mg } \times {1 \, mol \, H_2O \over 18.015 \, g \, H_2O} \times {2 \, mol H \over 1 \, mol \, H_2O } \times {1.0079 \,g \over 1 \, mol \, H} \]
\[ = 1.264 \times 10^{-3} \, g \, H \]
B To obtain the relative numbers of atoms of both elements present, we need to calculate the number of moles of each and divide by the number of moles of the element present in the smallest amount:
\[ moles \, C = 1.883 \times 10^{-2} \,g \, C \times {1 \, mol \, C \over 12.011 \, g \, C} = 1.568 \times 10^{-3} \, mol C \]
\[ moles \, H = 1.264 \times 10^{-3} \,g \, H \times {1 \, mol \, H \over 1.0079 \, g \, H} = 1.254 \times 10^{-3} \, mol H \]
Dividing each number by the number of moles of the element present in the smaller amount gives
\[H: {1.254\times 10^{−3} \over 1.254 \times 10^{−3}} = 1.000 \, \, \, C: {1.568 \times 10^{−3} \over 1.254 \times 10^{−3}}= 1.250\]
Thus naphthalene contains a 1.25:1 ratio of moles of carbon to moles of hydrogen: \(\mathrm{C_{1.25}H_{1.0}}\). Because the ratios of the elements in the empirical formula must be expressed as small whole numbers, multiply both subscripts by 4, which gives \(\ce{C5H4}\) as the empirical formula of naphthalene. In fact, the molecular formula of naphthalene is \(\ce{C10H8}\), which is consistent with our results.
Exercise \(\PageIndex{4}\)
Xylene, an organic compound that is a major component of many gasoline blends, contains carbon and hydrogen only. Complete combustion of a 17.12 mg sample of xylene in oxygen yielded 56.77 mg of \(\ce{CO2}\) and 14.53 mg of \(\ce{H2O}\). Determine the empirical formula of xylene. The empirical formula of benzene is CH (its molecular formula is \(\ce{C6H6}\)). If 10.00 mg of benzene is subjected to combustion analysis, what mass of \(\ce{CO2}\) and \(\ce{H2O}\) will be produced?
Answer a
The empirical formula is \(\ce{C4H5}\). (The molecular formula of xylene is actually \(\ce{C8H10}\).)
Answer b
33.81 mg of \(\ce{CO2}\); 6.92 mg of \(\ce{H2O}\)
|
We say that A≤LRB if every B-random set is A-random with respect to Martin–Löf randomness. We study this relation and its interactions with Turing reducibility, classes, hyperimmunity and other recursion theoretic notions.
We say that A ≤LR B if every B-random number is A-random. Intuitively this means that if oracle A can identify some patterns on some real γ, then oracle B can identify those patterns as well. In other words, B is at least as good as A for this purpose. We study the structure of the LR degrees globally and locally (i.e., restricted to the computably enumerable degrees) and their relationship with the Turing degrees. Among other results we show that whenever α is not GL₂ the LR degree of α bounds $2^{\aleph _{0}}$ degrees (so that, in particular, there exist LR degrees with uncountably many predecessors) and we give sample results which demonstrate how various techniques from the theory of the c.e. degrees can be used to prove results about the c.e. LR degrees.
We prove a number of results in effective randomness, using methods in which Π⁰₁ classes play an essential role. The results proved include the fact that every PA Turing degree is the join of two random Turing degrees, and the existence of a minimal pair of LR degrees below the LR degree of the halting problem.
The strong weak truth table (sw) reducibility was suggested by Downey, Hirschfeldt, and LaForte as a measure of relative randomness, alternative to the Solovay reducibility. It also occurs naturally in proofs in classical computability theory as well as in the recent work of Soare, Nabutovsky, and Weinberger on applications of computability to differential geometry. We study the sw-degrees of c.e. reals and construct a c.e. real which has no random c.e. real (i.e., Ω number) sw-above it.
A set $B\subseteq\mathbb{N}$ is called low for Martin-Löf random if every Martin-Löf random set is also Martin-Löf random relative to B. We show that a $\Delta^0_2$ set B is low for Martin-Löf random if and only if the class of oracles which compress less efficiently than B, namely, the class $\mathcal{C}^B=\{A\ |\ \forall n\ K^B(n)\leq^+ K^A(n)\}$ is countable (where K denotes the prefix-free complexity and $\leq^+$ denotes inequality modulo a constant). It follows that $\Delta^0_2$ is the largest arithmetical class with this property and if $\mathcal{C}^B$ is uncountable, it contains a perfect $\Pi^0_1$ set of reals. The proof introduces a new method for constructing nontrivial reals below a $\Delta^0_2$ set which is not low for Martin-Löf random.
Given two infinite binary sequences A,B we say that B can compress at least as well as A if the prefix-free Kolmogorov complexity relative to B of any binary string is at most as much as the prefix-free Kolmogorov complexity relative to A, modulo a constant. This relation, introduced in Nies [14] and denoted by A≤LKB, is a measure of relative compressing power of oracles, in the same way that Turing reducibility is a measure of relative information. The equivalence classes induced by ≤LK are called LK degrees and there is a least degree containing the oracles which can only compress as much as a computable oracle, also called the ‘low for K’ sets. A well-known result from Nies [14] states that these coincide with the K-trivial sets, which are the ones whose initial segments have minimal prefix-free Kolmogorov complexity. We show that with respect to ≤LK, given any non-trivial sets X,Y there is a computably enumerable set A which is not K-trivial and it is below X,Y. This shows that the local structures of and Turing degrees are not elementarily equivalent to the corresponding local structures in the LK degrees. It also shows that there is no pair of sets computable from the halting problem which forms a minimal pair in the LK degrees; this is sharp in terms of the jump, as it is known that there are sets computable from which form a minimal pair in the LK degrees. We also show that the structure of LK degrees below the LK degree of the halting problem is not elementarily equivalent to the or structures of LK degrees. The proofs introduce a new technique of permitting below a set that is not K-trivial, which is likely to have wider applications.
The K-trivial sets form an ideal in the Turing degrees, which is generated by its computably enumerable members and has an exact pair below the degree of the halting problem. The question of whether it has an exact pair in the c.e. degrees was first raised in [22, Question 4.2] and later in [25, Problem 5.5.8]. We give a negative answer to this question. In fact, we show the following stronger statement in the c.e. degrees. There exists a K-trivial degree d such that for all degrees a, b which are not K-trivial and a > d, b > d there exists a degree v which is not K-trivial and a > v, b > v. This work sheds light on the question of the definability of the K-trivial degrees in the c.e. degrees.
We show that there exists a real α such that, for all reals β, if α is linear reducible to β then β≤Tα. In fact, every random real satisfies this quasi-maximality property. As a corollary we may conclude that there exists no ℓ-complete Δ2 real. Upon realizing that quasi-maximality does not characterize the random reals–there exist reals which are not random but which are of quasi-maximal ℓ-degree–it is then natural to ask whether maximality could provide such a characterization. Such hopes, however, are in vain since no real is of maximal ℓ-degree.
We study ideals in the computably enumerable Turing degrees, and their upper bounds. Every proper ideal in the c.e. Turing degrees has an incomplete upper bound. It follows that there is no prime ideal in the c.e. Turing degrees. This answers a question of Calhoun [2]. Every proper ideal in the c.e. Turing degrees has a low2 upper bound. Furthermore, the partial order of ideals under inclusion is dense.
We survey recent advances on the interface between computability theory and algorithmic randomness, with special attention on measures of relative complexity. We focus on (weak) reducibilities that measure (a) the initial segment complexity of reals and (b) the power of reals to compress strings, when they are used as oracles. The results are put into context and several connections are made with various central issues in modern algorithmic randomness and computability.
We study inversions of the jump operator on ${\mathrm{\Pi }}_{1}^{0}$ classes, combined with certain basis theorems. These jump inversions have implications for the study of the jump operator on the random degrees—for various notions of randomness. For example, we characterize the jumps of the weakly 2-random sets which are not 2-random, and the jumps of the weakly 1-random relative to 0′ sets which are not 2-random. Both of the classes coincide with the degrees above 0′ which are not 0′-dominated. A further application is the complete solution of [24, Problem 3.6.9]: one direction of van Lambalgen's theorem holds for weak 2-randomness, while the other fails. Finally we discuss various techniques for coding information into incomplete randoms. Using these techniques we give a negative answer to [24, Problem 8.2.14]: not all weakly 2-random sets are array computable. In fact, given any oracle X, there is a weakly 2-random which is not array computable relative to X. This contrasts with the fact that all 2-random sets are array computable.
A new approach for a uniform classification of the computably approximable real numbers is introduced. This is an important class of reals, consisting of the limits of computable sequences of rationals, and it coincides with the 0'-computable reals. Unlike some of the existing approaches, this applies uniformly to all reals in this class: to each computably approximable real x we assign a degree structure, the structure of all possible ways available to approximate x. So the main criterion for such classification is the variety of the effective ways we have to approximate a real number. We exhibit extreme cases of such approximation structures and prove a number of related results.
We study Δ2 reals x in terms of how they can be approximated symmetrically by a computable sequence of rationals. We deal with a natural notion of ‘approximation representation’ and study how these are related computationally for a fixed x. This is a continuation of earlier work; it aims at a classification of Δ2 reals based on approximation and it turns out to be quite different than the existing ones (based on information content etc.).
We investigate notions of randomness in the space ${{\mathcal C}(2^{\mathbb N})}$ of continuous functions on ${2^{\mathbb N}}$. A probability measure is given and a version of the Martin-Löf test for randomness is defined. Random ${\Delta^0_2}$ continuous functions exist, but no computable function can be random and no random function can map a computable real to a computable real. The image of a random continuous function is always a perfect set and hence uncountable. For any ${y \in 2^{\mathbb N}}$, there exists a random continuous function F with y in the image of F. Thus the image of a random continuous function need not be a random closed set. The set of zeroes of a random continuous function is always a random closed set.
We study the approximation properties of computably enumerable reals. We deal with a natural notion of approximation representation and study their wtt-degrees. Also, we show that a single representation may correspond to a quite diverse variety of reals.
Let h : ℕ → ℚ be a computable function. A real number x is called h-monotonically computable if there is a computable sequence of rational numbers which converges to x h-monotonically in the sense that h(n)|x – xn| ≥ |x – xm| for all n and m > n. In this paper we investigate classes h-MC of h-mc real numbers for different computable functions h. Especially, for computable functions h : ℕ → ℚ, we show that the class h-MC coincides with the classes of computable and semi-computable real numbers if and only if Σi∈ℕ) = ∞ and the sum Σi∈ℕ) is a computable real number, respectively. On the other hand, if h ≥ 1 and h converges to 1, then h-MC = SC no matter how fast h converges to 1. Furthermore, for any constant c > 1, if h is increasing and converges to c, then h-MC = c-MC. Finally, if h is monotone and unbounded, then h-MC contains all ω-mc real numbers which are g-mc for some computable function g.
We extend the hierarchy defined in [5] to cover all hyperarithmetical reals. An intuitive idea is used for the definition, but a characterization of the related classes is obtained. A hierarchy theorem and two fixed point theorems are presented.
We show that in the c.e. weak truth table degrees if b < c then there is an a which contains no hypersimple set and b < a < c. We also show that for every w < c in the c.e. wtt degrees such that w is hypersimple, there is a hypersimple a such that w < a < c. On the other hand, we know that there are intervals which contain no hypersimple set.
|
Perimeter is the distance around a two dimensional shape, or the measurement of the distance around something; the length of the boundary.
A perimeter is a path that surrounds a two-dimensional shape. The word comes from the Greek peri (around) and meter (measure). The term may be used either for the path or its length; it can be thought of as the length of the outline of a shape. The perimeter of a circle or ellipse is called its circumference.
Calculating the perimeter has considerable practical applications. The perimeter can be used to calculate the length of fence required to surround a yard or garden. The perimeter of a wheel (its circumference) describes how far it will roll in one revolution. Similarly, the amount of string wound around a spool is related to the spool's perimeter.
Formulae
circle: $2 \pi r = \pi d$, where $r$ is the radius of the circle and $d$ is the diameter.
triangle: $a + b + c$, where $a$, $b$ and $c$ are the lengths of the sides of the triangle.
square/rhombus: $4a$, where $a$ is the side length.
rectangle: $2(l+w)$, where $l$ is the length and $w$ is the width.
equilateral polygon: $n \times a$, where $n$ is the number of sides and $a$ is the length of one of the sides.
regular polygon: $2nb \sin\left(\frac{\pi}{n}\right)$, where $n$ is the number of sides and $b$ is the distance between the center of the polygon and one of its vertices.
general polygon: $a_1 + a_2 + a_3 + \cdots + a_n = \sum_{i=1}^n a_i$, where $a_{i}$ is the length of the $i$-th (1st, 2nd, 3rd ... $n$th) side of an $n$-sided polygon.
The perimeter is the distance around a shape. Perimeters for more general shapes can be calculated as a path integral $\int_0^L \mathrm{d}s$, where $L$ is the length of the path and $\mathrm{d}s$ is an infinitesimal line element. Both of these must be replaced with other algebraic forms in order to be solved; an advanced notion of perimeter, which includes hypersurfaces bounding volumes in n-dimensional Euclidean spaces, can be found in the theory of Caccioppoli sets.
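As a quick illustration of the formulae above, here is a small sketch (the function names are ours, not from the article):

```python
import math

def circle(r):             return 2 * math.pi * r
def rectangle(l, w):       return 2 * (l + w)
def equilateral(n, a):     return n * a                     # n sides of length a
def regular_polygon(n, b):                                  # b = centre-to-vertex distance
    return 2 * n * b * math.sin(math.pi / n)
def general_polygon(sides):                                 # sum of the side lengths
    return sum(sides)

print(circle(1))                    # circumference of the unit circle, 2*pi
print(regular_polygon(6, 1))        # regular hexagon of radius 1 -> perimeter 6
print(general_polygon([3, 4, 5]))   # 3-4-5 triangle -> 12
```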
Polygons
Perimeter of a rectangle.
Polygons are fundamental to determining perimeters, not only because they are the simplest shapes but also because the perimeters of many shapes are calculated by approximating them with sequences of polygons tending to these shapes. The first mathematician known to have used this kind of reasoning is Archimedes, who approximated the perimeter of a circle by surrounding it with regular polygons.
The perimeter of a polygon equals the sum of the lengths of its edges. In particular, the perimeter of a rectangle whose width is $w$ and length is $\ell$ is equal to $2w + 2\ell$.
An equilateral polygon is a polygon which has all sides of the same length (for example, a rhombus is a 4-sided equilateral polygon). To calculate the perimeter of an equilateral polygon, one must multiply the common length of the sides by the number of sides.
A regular polygon may be defined by the number of its sides and by its radius, that is to say, the constant distance between its centre and each of its vertices. One can calculate the length of its sides using trigonometry. If $R$ is a regular polygon's radius and $n$ is the number of its sides, then its perimeter is $2nR \sin\left(\frac{180^{\circ}}{n}\right)$.
A splitter of a triangle is a cevian (a segment from a vertex to the opposite side) that divides the perimeter into two equal lengths, this common length being called the semiperimeter of the triangle. A cleaver is a segment from the midpoint of a side of a triangle to the opposite side such that the perimeter is divided into two equal lengths.
Circumference of a circle
If the diameter of a circle is 1, its circumference equals π.
The perimeter of a circle, often called the circumference, is proportional to its diameter and its radius. That is to say, there exists a constant number pi, π (the Greek p, for perimeter), such that if $P$ is the circle's perimeter and $D$ its diameter then $P = \pi\cdot D$.
In terms of the radius $r$ of the circle, this formula becomes $P = 2\pi\cdot r$.
To calculate a circle's perimeter, knowledge of its radius or diameter and of the number π is sufficient. The problem is that π is not rational (it cannot be expressed as the quotient of two integers), nor is it algebraic (it is not a root of a polynomial equation with rational coefficients). So, obtaining an accurate approximation of π is important for the calculation. The search for the digits of π is relevant to many fields, such as mathematical analysis, algorithmics and computer science.
Perception of perimeter
The more one cuts this shape, the smaller the area and the greater the perimeter. The convex hull remains the same.
The perimeter and the area are the two main measures of geometric figures. Confusing them is frequent, as is believing that the greater one of them is, the greater the other must be. Indeed, an enlargement (or a reduction) of a shape makes its area grow (or decrease) as well as its perimeter. For example, if a field is drawn on a 1/10,000 scale map, the actual field perimeter can be calculated by multiplying the drawing perimeter by 10,000. The real area is 10,000² times the area of the shape on the map.
Nevertheless, there is no relation between the area and the perimeter of an ordinary shape. For example, the perimeter of a rectangle of width 0.001 and length 1000 is slightly above 2000, while the perimeter of a rectangle of width 0.5 and length 2 is 5. Both areas are equal to 1.
Proclus (5th century) reported that Greek peasants "fairly" parted fields relying on their perimeters.[1] But a field's production is proportional to its area, not to its perimeter: many naive peasants may have got fields with long perimeters but low areas (thus, low crops).
If one removes a piece from a figure, its area decreases but its perimeter may not. In the case of very irregular shapes, some people may confuse perimeter with convex hull. The convex hull of a figure may be visualized as the shape formed by a rubber band stretched around it. On the animated picture on the left, all the figures have the same convex hull: the big, first hexagon.
Isoperimetry
The isoperimetric problem is to determine a figure with the largest area, amongst those having a given perimeter. The solution is intuitive: it is the circle. In particular, that is why drops of fat on a broth surface are circular.
This problem may seem simple, but its mathematical proof requires sophisticated theorems. The isoperimetric problem is sometimes simplified: find the quadrilateral, or the triangle, or another particular figure, with the largest area amongst those having a given perimeter. The solution to the quadrilateral isoperimetric problem is the square, and the solution to the triangle problem is the equilateral triangle. In general, the polygon with $n$ sides having the largest area and a given perimeter is the regular polygon, which is closer to being a circle than is an irregular polygon.
References
[1] Heath, T. (1981). A History of Greek Mathematics, Vol. 2. Dover Publications. p. 206.
|
It is given that $2\cos \dfrac{2\pi}{n} = x+\dfrac{1}{x}$ for $n \in \mathbb N$. If $x^{n} + \dfrac{1}{x^{n}} = an^{b}$, where $a$ and $b$ are real numbers, find $a+b$.
|
Check out section 2.3.2 of this paper by Chapelle and Zien. They have a nice heuristic to select a good search range for $\sigma$ of the RBF kernel and $C$ for the SVM. I quote
To determine good values of the remaining free parameters (eg, by CV),
it is important to search on the right scale. We therefore fix default
values for $C$ and $\sigma$ that have the right order of magnitude. In
a $c$-class problem we use the $1/c$ quantile of the pairwise
distances $D^\rho_{ij}$ of all data-points as a default for $\sigma$.
The default for $C$ is the inverse of the empirical variance $s^2$ in
feature space, which can be calculated by $s^2 = \frac{1}{n} \sum_i K_{ii} - \frac{1}{n^2}\sum_{i,j} K_{ij}$
from a $n\times n$ kernel
matrix $K$.
Afterwards, they use multiples (e.g. $2^k$ for $k\in \{-2,...,2\}$) of the default value as search range in a grid-search using cross-validation. That always worked very well for me.
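A rough sketch of that heuristic (my own implementation, not the authors' code, and assuming the RBF kernel $k(x,y)=\exp(-\|x-y\|^2/(2\sigma^2))$ so that $\gamma = 1/(2\sigma^2)$):

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.metrics.pairwise import rbf_kernel

def default_sigma_and_C(X, n_classes):
    d = pdist(X)                                   # pairwise Euclidean distances
    sigma = np.quantile(d, 1.0 / n_classes)        # 1/c quantile as the default sigma
    K = rbf_kernel(X, gamma=1.0 / (2 * sigma**2))
    s2 = K.trace() / len(X) - K.mean()             # (1/n) sum K_ii - (1/n^2) sum K_ij
    return sigma, 1.0 / s2

# Grid of multiples 2^k around the defaults, as described above.
X = np.random.rand(100, 5)
sigma0, C0 = default_sigma_and_C(X, n_classes=2)
param_grid = {"C": [C0 * 2.0**k for k in range(-2, 3)],
              "gamma": [1.0 / (2 * (sigma0 * 2.0**k)**2) for k in range(-2, 3)]}
print(param_grid)   # feed this to e.g. sklearn's GridSearchCV with an SVC
```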
Of course, as @ciri said, normalizing the data etc. is always a good idea.
|
They perhaps want you to make the calculations
relatively, which means that even if you assume, by looking at the first graph, that the mass corresponding to a molecular chain length of $1$ (taken relatively and not as per scale) is $x$ units, and thus the mass of $10$ such chains (again, taken relatively and not as per scale) adds up to $10x$ units, and you finally calculate $M_n \approx 68.2273x$, it isn't helping us to get a meaningful answer.
A thing that should be noted is that even if you calculate it by following my example, when you finally calculate $\displaystyle \frac{M_w}{M_n}$, which is (perhaps) what the questioner wants you to calculate, it won't make a difference, because this result is independent of $x$.
It appears that the questioner hasn't explicitly stated the mass of a polymer chain or that of the monomer units. They haven't even stated that all the polymer chains are to be considered identical in this respect; it is highly probable that they want you to assume these things so as not to make things overly complicated.
|
Taylor series
A Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function’s derivatives at a single point
$$f(x) \approx \sum\limits_{n=0}^{\infty}{\frac{f^{(n)}(a)}{n!}(x-a)^n}$$
where $f^{(n)}(a)$ denotes the $n^{th}$ derivative of $f$ evaluated at the point $a$.
And here’s a very intuitive example:
The exponential function $e^x$ (in blue), and the sum of the first $n + 1$ terms of its Taylor series at $0$ (in red).
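A minimal numerical sketch of this picture (my own, not from the original post), showing how the partial sums of the Taylor series of $e^x$ at $0$ approach the true value:

```python
import math

def taylor_exp(x, n_terms):
    # sum_{k=0}^{n_terms-1} x^k / k!
    return sum(x**k / math.factorial(k) for k in range(n_terms))

for n in (1, 2, 4, 8):
    print(n, taylor_exp(1.0, n), "vs", math.e)
```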
Newton’s method Overview
In calculus, Newton’s method is an iterative method for finding the roots of a differentiable function $f$.
In machine learning, we apply Newton’s method to the derivative $f’$ of the cost function $f$.
One-dimension version
In the one-dimensional problem, Newton’s method attempts to construct a sequence ${x_n}$ from an initial guess $x_0$ that converges towards some value $\hat{x}$ satisfying $f'(\hat{x})=0$. In other words, we are trying to find a stationary point of $f$.
Consider the second order Taylor expansion $f_T(x)$ of $f$ around $x_n$ is
$$f_T(x)=f_T(x_n+\Delta x) \approx f(x_n) + f'(x_n) \Delta x + \frac{1}{2}f''(x_n) \Delta x^2$$
So now, we are trying to find a $\Delta x$ that sets the derivative of this last expression with respect to $\Delta x$ equal to zero, which means
$$\frac{\rm{d}}{\rm{d} \Delta x}\left(f(x_n) + f'(x_n) \Delta x + \frac{1}{2}f''(x_n) \Delta x^2\right) = f'(x_n)+f''(x_n)\Delta x = 0$$
Apparently, $\Delta x = -\frac{f'(x_n)}{f''(x_n)}$ is the solution. So it can be hoped that $x_{n+1} = x_n+\Delta x = x_n - \frac{f'(x_n)}{f''(x_n)}$ will be closer to a stationary point $\hat{x}$.
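Here is a small sketch of this one-dimensional update (my own illustration; the quadratic test function is made up):

```python
# Iterate x_{n+1} = x_n - f'(x_n) / f''(x_n) until the step is tiny.
def newton_1d(f_prime, f_double_prime, x0, iters=50, tol=1e-10):
    x = x0
    for _ in range(iters):
        step = f_prime(x) / f_double_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = (x - 3)^2 + 1, so f'(x) = 2(x - 3) and f''(x) = 2.
print(newton_1d(lambda x: 2 * (x - 3), lambda x: 2.0, x0=0.0))  # -> 3.0
```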
High dimensional version
Still consider the second order Taylor expansion $f_T(x)$ of $f$ around $x_n$ is
$$f_T(x)=f_T(x_n+\Delta x) \approx f(x_n) + \Delta x^{\mathsf{T}} \nabla f(x_n) + \frac{1}{2} \Delta x^{\mathsf{T}} {\rm H} f(x_n) \Delta x$$
where ${\rm H} f(x)$ denotes the Hessian matrix of $f(x)$ and $\nabla f(x)$ denotes the gradient. (See more about Taylor expansion at https://en.wikipedia.org/wiki/Taylor_series#Taylor_series_in_several_variables)
So $\Delta x = -[{\rm H}f(x_n)]^{-1}\nabla f(x_n)$ should be a good choice.
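A sketch of the corresponding multivariate step (my own example; in practice one solves the linear system rather than inverting the Hessian explicitly):

```python
import numpy as np

def newton_step(grad, hess, x):
    # dx = -H^{-1} grad f, computed by solving H dx = -grad f.
    return x - np.linalg.solve(hess(x), grad(x))

# Example: f(x, y) = x^2 + 3y^2, gradient (2x, 6y), Hessian diag(2, 6).
grad = lambda v: np.array([2 * v[0], 6 * v[1]])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 6.0]])

x = np.array([4.0, -2.0])
for _ in range(3):
    x = newton_step(grad, hess, x)
print(x)  # reaches the stationary point (0, 0) in a single step for this quadratic
```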
Limitation
As is known to us all, the time complexity of computing $A^{-1}$ for an $n \times n$ matrix $A$ is $O(n^3)$. So when the data set has too many dimensions, the algorithm will be quite slow.
Advantage
The reason steepest descent goes wrong is that the gradient for one weight gets messed up by simultaneous changes to all the other weights.
The Hessian matrix determines the sizes of these interactions, so that Newton’s method minimizes these interactions as much as possible.
References
|
I need help to solve each inequality
6. 3| x - 3 | < 12
7. 5| 2x + 8 | + 4 < 9
8. -2| 8x - 4 | + 1 < -15
6. \(3| x - 3 | < 12\)
First, divide both sides by \(3\) to get \(| x - 3 | < 4\). Now we have that \(x - 3 < 4\) and \( x - 3 > -4\). By solving each one, we get \(x<7\) and \(x>-1\). This gives us \((-1, 7)\).
7. \( 5| 2x + 8 | + 4 < 9\)
First, subtract \(4\) from both sides to get \(5\left|2x+8\right|<5\). Then, we can divide by \(5\) on both sides to get \(\left|2x+8\right|<1\). Now, we have \(-1<2x+8<1\). By subtracting \(8\) on both sides then dividing by \(2\), we get \(x>-\frac{9}{2}\) and \(x<-\frac{7}{2}\) so we have \((-\frac{9}{2},-\frac{7}{2})\).
8. \(-2| 8x - 4 | + 1 < -15\)
Subtract \(1\) from both sides to get \(-2\left|8x-4\right|<-16\). Then, divide by \(-2\) on both sides (remembering to flip the inequality) to get \(\left|8x-4\right|>8\). Now, we can get that \(8x-4>8\) or \(8x-4<-8\). By simplifying both inequalities, we get \(\:\left(-\infty \:,\:-\frac{1}{2}\right)\cup \left(\frac{3}{2},\:\infty \:\right)\).
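If you want to double-check answers like these, here is a quick sketch using sympy (my addition, not part of the original solution):

```python
from sympy import symbols, Abs, solve

x = symbols('x', real=True)
print(solve(3 * Abs(x - 3) < 12, x))          # -1 < x < 7
print(solve(5 * Abs(2*x + 8) + 4 < 9, x))     # -9/2 < x < -7/2
print(solve(-2 * Abs(8*x - 4) + 1 < -15, x))  # x < -1/2  or  x > 3/2
```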
- Daisy
|
Now we return to systems of distinct representatives.
A system of distinct representatives corresponds to a set of edges in the corresponding bipartite graph that share no endpoints; such a collection of edges (in any graph, not just a bipartite graph) is called a matching. In figure 4.5.1, a matching is shown in red. This is a largest possible matching, since it contains edges incident with all four of the top vertices, and it thus corresponds to a complete sdr.
Any bipartite graph can be interpreted as a set system: we simply label all the vertices in one part with "set names'' $A_1$, $A_2$, etc., and the other part is labeled with "element names'', and the sets are defined in the obvious way: $A_i$ is the neighborhood of the vertex labeled "$A_i$''. Thus, we know one way to compute the size of a maximum matching, namely, we interpret the bipartite graph as a set system and compute the size of a maximum sdr; this is the size of a maximum matching.
We will see another way to do this working directly with the graph. There are two advantages to this: it will turn out to be more efficient, and as a by-product it will actually find a maximum matching.
Given a bipartite graph, it is easy to find a maximal matching, that is, one that cannot be made larger simply by adding an edge: just choose edges that do not share endpoints until this is no longer possible. See figure 4.5.2 for an example.
An obvious approach is then to attempt to make the matching larger. There is a simple way to do this, if it works: we look for an alternating chain, defined as follows.
Definition 4.5.1 Suppose $M$ is a matching, and suppose that $v_1,w_1,v_2,w_2,\ldots,v_k,w_k$ is a sequence of vertices such that no edge in $M$ is incident with $v_1$ or $w_k$, and moreover for all $1\le i\le k$, $v_i$ and $w_i$ are joined by an edge not in $M$, and for all $1\le i\le k-1$, $w_i$ and $v_{i+1}$ are joined by an edge in $M$. Then the sequence of vertices together with the edges joining them in order is an alternating chain.
Suppose now that we remove from $M$ all the edges that are in the alternating chain and also in $M$, forming $M'$, and add to $M'$ all of the edges in the alternating chain not in $M$, forming $M''$. It is not hard to show that $M''$ is a matching, and it contains one more edge than $M$. See figure 4.5.4.
Remarkably, if there is no alternating chain, then the matching $M$ is a maximum matching.
Theorem 4.5.2 Suppose that $M$ is a matching in a bipartite graph $G$, and there is no alternating chain. Then $M$ is a maximum matching.
Proof. We prove the contrapositive: Suppose that $M$ is not a maximum matching. Then there is a larger matching, $N$. Create a new graph $G'$ by eliminating all edges that are in both $M$ and $N$, and also all edges that are in neither. We are left with just those edges in $M$ or $N$ but not both.
In this new graph no vertex is incident with more than two edges, since if $v$ were incident with three edges, at least two of them would belong to $M$ or two would belong to $N$, but that can't be true since $M$ and $N$ are matchings. This means that $G'$ is composed of disjoint paths and cycles. Since $N$ is larger than $M$, $G'$ contains more edges from $N$ than from $M$, and therefore one of the paths starts and ends with an edge from $N$, and along the path the edges alternate between edges in $N$ and edges in $M$. In the original graph $G$ with matching $M$, this path forms an alternating chain. The "alternating'' part is clear; we need to see that the first and last vertices in the path are not incident with any edge in $M$.
Suppose that the first two vertices are $v_1$ and $v_2$. Then $v_1$ and $v_2$ are joined by an edge of $N$. Suppose that $v_1$ is adjacent to a vertex $w$ and that the edge between $v_1$ and $w$ is in $M$. This edge cannot be in $N$, for then there would be two edges of $N$ incident at $v_1$. But then this edge is in $G'$, since it is in $M$ but not $N$, and therefore the path in $G'$ does not start with the edge in $N$ joining $v_1$ and $v_2$. This contradiction shows that no edge of $M$ is incident at $v_1$. The proof that the last vertex in the path is likewise not incident with an edge of $M$ is essentially identical.
Now to find a maximum matching, we repeatedly look for alternating chains; when we cannot find one, we know we have a maximum matching. What we need now is an efficient algorithm for finding the alternating chain.
The key, in a sense, is to look for all possible alternating chains simultaneously. Suppose we have a bipartite graph with vertex partition $\{v_1,v_2,\ldots,v_n\}$ and $\{w_1,w_2,\ldots,w_m\}$ and a matching $M$. The algorithm labels vertices in such a way that if it succeeds, the alternating chain is indicated by the labels. Here are the steps:
0. Label with `(S,0)' all vertices $v_i$ that are not incident with an edge in $M$. Set variable step to 0.
Now repeat the next two steps until no vertex acquires a new label:
1. Increase step by 1. For each newly labeled vertex $v_i$, label with $(i,step)$ any unlabeled neighbor $w_j$ of $v_i$ that is connected to $v_i$ by an edge that is not in $M$.
2. Increase step by 1. For each newly labeled vertex $w_i$, label with $(i,step)$ any unlabeled neighbor $v_j$ of $w_i$ that is connected to $w_i$ by an edge in $M$.
Here "newly labeled'' means labeled at the previous step. When labeling vertices in step 1 or 2, no vertex is given more than one label. For example, in step 1, it may be that $w_k$ is a neighbor of the newly labeled vertices $v_i$ and $v_j$. One of $v_i$ and $v_j$, say $v_i$, will be considered first, and will cause $w_k$ to be labeled; when $v_j$ is considered, $w_k$ is no longer unlabeled.
At the conclusion of the algorithm, if there is a labeled vertex $w_i$ that is not incident with any edge of $M$, then there is an alternating chain, and we say the algorithm succeeds. If there is no such $w_i$, then there is no alternating chain, and we say the algorithm fails. The first of these claims is easy to see: Suppose vertex $w_i$ is labeled $(k_1,s)$. It became labeled due to vertex $v_{k_1}$ labeled $(k_2,s-1)$ that is connected by an edge not in $M$ to $w_i$. In turn, $v_{k_1}$ is connected by an edge in $M$ to vertex $w_{k_2}$ labeled $(k_3,s-2)$. Continuing in this way, we discover an alternating chain ending at some vertex $v_j$ labeled $(S,0)$: since the second coordinate step decreases by 1 at each vertex along the chain, we cannot repeat vertices, and must eventually get to a vertex with $step=0$. If we apply the algorithm to the graph in figure 4.5.2, we get the labeling shown in figure 4.5.5, which then identifies the alternating chain $w_2, v_3, w_1, v_1$. Note that as soon as a vertex $w_i$ that is incident with no edge of $M$ is labeled, we may stop, as there must be an alternating chain starting at $w_i$; we need not continue the algorithm until no more labeling is possible. In the example in figure 4.5.5, we could stop after step 3, when $w_2$ becomes labeled. Also, the step component of the labels is not really needed; it was included to make it easier to understand that if the algorithm succeeds, there really is an alternating chain.
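The search for an alternating chain can be phrased compactly in code. The sketch below is a standard augmenting-path routine in the spirit of the labeling algorithm above (the dictionary-based graph representation and the small example are ours, not from the text); each successful search enlarges the matching by one edge, exactly as in the discussion of $M''$:

```python
def max_bipartite_matching(graph):
    """graph: dict mapping each vertex v_i to an iterable of its neighbors w_j."""
    match_w = {}                      # w -> the v currently matched to it

    def try_augment(v, visited):
        # Look for an alternating chain starting at the unmatched vertex v.
        for w in graph[v]:
            if w in visited:
                continue
            visited.add(w)
            # Either w is free, or the vertex matched to w can be re-routed elsewhere;
            # in both cases we have found an alternating chain and we flip it.
            if w not in match_w or try_augment(match_w[w], visited):
                match_w[w] = v
                return True
        return False

    for v in graph:
        try_augment(v, set())
    return {v: w for w, v in match_w.items()}

# A small made-up instance (not figure 4.5.2): the maximum matching has size 3.
g = {"v1": ["w1", "w2"], "v2": ["w1"], "v3": ["w1", "w3"], "v4": ["w3"]}
print(max_bipartite_matching(g))
```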
To see that when the algorithm fails there is no alternating chain, we introduce a new concept.
Definition 4.5.3 A vertex cover in a graph is a set of vertices $S$ such that every edge in the graph has at least one endpoint in $S$.
There is always a vertex cover of a graph, namely, the set of all the vertices of the graph. What is clearly more interesting is a smallest vertex cover, which is related to a maximum matching.
Theorem 4.5.4 If $M$ is a matching in a graph and $S$ is a vertex cover, then $|M|\le |S|$.
Proof. Suppose $\hat M$ is a matching of maximum size and $\hat S$ is a vertex cover of minimum size. Since each edge of $\hat M$ has an endpoint in $\hat S$, if $|\hat M|> |\hat S|$ then some vertex in $\hat S$ is incident with two edges of $\hat M$, a contradiction. Hence $|M|\le |\hat M|\le |\hat S|\le |S|$.
Suppose that we have a matching $M$ and vertex cover $S$ for a graph, and that $|M|=|S|$. Then the theorem implies that $M$ is a maximum matching and $S$ is a minimum vertex cover. To show that when the algorithm fails there is no alternating chain, it is sufficient to show that there is a vertex cover that is the same size as $M$. Note that the proof of this theorem relies on the "official'' version of the algorithm, that is, the algorithm continues until no new vertices are labeled.
Theorem 4.5.5 Suppose the algorithm fails on the bipartite graph $G$ with matching $M$. Let $U$ be the set of labeled $w_i$, $L$ the set of unlabeled $v_i$, and $S=L\cup U$. Then $S$ is a vertex cover and $|M|=|S|$.
Proof. If $S$ is not a cover, there is an edge $\{v_i,w_j\}$ with neither $v_i$ nor $w_j$ in $S$, so $v_i$ is labeled and $w_j$ is not. If the edge is not in $M$, then the algorithm would have labeled $w_j$ at the step after $v_i$ became labeled, so the edge must be in $M$. Now $v_i$ cannot be labeled $(S,0)$, so $v_i$ became labeled because it is connected to some labeled $w_k$ by an edge of $M$. But now the two edges $\{v_i,w_j\}$ and $\{v_i,w_k\}$ are in $M$, a contradiction. So $S$ is a vertex cover.
We know that $|M|\le |S|$, so it suffices to show $|S|\le |M|$, which we can do by finding an injection from $S$ to $M$. Suppose that $w_i\in S$, so $w_i$ is labeled. Since the algorithm failed, $w_i$ is incident with an edge $e$ of $M$; let $f(w_i)=e$. If $v_i\in S$, $v_i$ is unlabeled; if $v_i$ were not incident with any edge of $M$, then $v_i$ would be labeled $(S,0)$, so $v_i$ is incident with an edge $e$ of $M$; let $f(v_i)=e$. Since $G$ is bipartite, it is not possible that $f(w_i)=f(w_j)$ or $f(v_i)=f(v_j)$. If $f(w_i)=f(v_j)$, then $w_i$ and $v_j$ are joined by an edge of $M$, and the algorithm would have labeled $v_j$. Hence, $f$ is an injection.
We have now proved this theorem:
Theorem 4.5.6 (König) In a bipartite graph, the size of a maximum matching is equal to the size of a minimum vertex cover.
It is clear that the size of a maximum sdr is the same as the size of a maximum matching in the associated bipartite graph $G$. It is not too difficult to see directly that the size of a minimum vertex cover in $G$ is the minimum value of $f(n,i_1,i_2,\ldots,i_k)=n-k+|\bigcup_{j=1}^k A_{i_j}|$. Thus, if the size of a maximum matching is equal to the size of a minimum cover, then the size of a maximum sdr is equal to the minimum value of $n-k+|\bigcup_{j=1}^k A_{i_j}|$, and conversely. More concisely, theorem 4.5.6 is true if and only if theorem 4.2.1 is true.
More generally, in the schematic of figure 4.5.6, if any three of the relationships are known to be true, so is the fourth. In fact, we have proved all but the bottom equality, so we know it is true as well.
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Given a modular form $f$ of weight $k$ for a congruence subgroup $\Gamma$, and a modular function $t$ with $t(i\infty)=0$, we can form a function $F$ such that $F(t(z))=f(z)$ (at least locally), and we know that this $F$ must now satisfy a linear ordinary differential equation $$P_{k+1}(T)F^{(k+1)} + P_{k}(T)F^{(k)} + ... + P_{0}(T)F = 0$$
Where $F^{(i)}$ is the i-th derivative, and the $P_i$ are algebraic functions of $T$, and are rational functions of $T$ if $t$ is a Hauptmodul for $X(\Gamma)$.
My question is the following:
given a modular form $f$, what are necessary and sufficient conditions for the existence of a modular function $t$ as above such that the $P_i(T)$ are rational functions?
For example, the easiest sufficient condition is that $X(\Gamma)$ has genus 0, by letting $t$ be a Hauptmodul. But, this is not necessary, as the next condition will show.
Another sufficient condition is that $f$ is a rational weight 2 eigenform. I can show this using Shimura's construction* of an associated elliptic curve, and a computation of a logarithm for the formal group in some coordinates (*any choice in the isogeny class will work).
Trying to generalise, I have thought of the following: if $f$ is associated to a motive $h^i(V)$ of a variety $V$, with a pro-representable Artin-Mazur formal group $\Phi^i(V)$ of dimension 1, then we can construct a formal group law à la Stienstra, and get a logarithm using the coefficients of powers of a certain polynomial. This makes the logarithm satisfy a differential equation with rational functions as coefficients. Since the dimension is 1, the isomorphism back to "modular coordinates" will be a single modular function $t$, and this answers the question positively.
This was the original motivation for the question - a positive answer is weaker, but maybe suggests the existence of associated varieties to rational eigenforms.
Putting non-eigenforms aside, since I'm not interested as much in them, we are left with non-rational eigenforms. We can try to perform the same Stienstra construction, but this time we get that the Galois orbit of $f$ is associated to a "formal group law" of a motive with dimension greater than one. This will make for an interesting recurrence for the vector of the Galois orbit, but not necessarily for each form individually, as the isomorphism of formal group laws (between Stienstra's and those with the modular forms as logarithm) might scramble them together. Maybe not, and that might settle the question. I realise this last paragraph might be difficult to understand, for the wording is clumsy, and the mathematical notions are even worse. If you're really interested in this, I'd be happy to elaborate.
|
When dealing with phonons and specific heat of solids, it seems the really important quantity to obtain is the density of states $N(\omega)$. When we have it, we can find the internal energy as
$$U(T)=\int N(\omega) \dfrac{\hbar \omega}{e^{\hbar\omega/k_BT}-1}d\omega,$$
and having it we can also find the specific heat
$$C(T)=\dfrac{\partial U}{\partial T}.$$
Now, the way to find $N(\omega)$ is usually this: if the real lattice has primitive cell with volume $V$, the volume of the primitive cell of the $k$-space is $(2\pi)^3/V$. This means that, since there's just one point of the Bravais lattice per primitive cell, there are $V/(2\pi)^3$ points of the $k$-lattice per unit volume.
This in turns leads to integrals of the form
$$N(\omega)d\omega=\dfrac{V}{(2\pi)^3}\int d\mathbf{k},$$
where the integral is taken over the region between $\omega $ and $\omega+d\omega$, or equivalently
$$N(\omega)=\dfrac{V}{(2\pi)^3}\int\dfrac{dS}{|\nabla_\mathbf{k}\omega|},$$
where the integral is taken over the surface $\omega$.
This all is fine, given one dispersion relation $\omega(\mathbf{k})$ we can find $N(\omega)$ using those integrals, and with $N(\omega)$ we can find $U(T)$ and hence $C(T)$.
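As a concrete (and purely illustrative) sketch of this pipeline, the snippet below assumes a simple Debye-like density of states $N(\omega)\propto\omega^2$ with an arbitrary cutoff, and evaluates $U(T)$ and $C(T)=\partial U/\partial T$ numerically; none of these numbers come from the question itself:

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23   # SI units
omega_D = 1.0e13                           # assumed cutoff frequency, for illustration only

omega = np.linspace(1e10, omega_D, 2000)
domega = omega[1] - omega[0]
N = omega**2                               # Debye-like N(omega), unnormalized

def U(T):
    bose = 1.0 / (np.exp(hbar * omega / (kB * T)) - 1.0)
    return np.sum(N * hbar * omega * bose) * domega

def C(T, dT=1e-3):
    return (U(T + dT) - U(T - dT)) / (2 * dT)   # numerical dU/dT

for T in (10, 50, 100, 300):
    print(T, C(T))
```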
On the other hand, what if the basis of the Bravais lattice has more than one atom? For instance, a 2 atom basis?
This is quite common, but I'm failing to see how it affects this derivation. The naive guess would be that $N(\omega)$ would be multiplied by $2$, but this is just a guess. So, how does the number of atoms in the basis affect this reasoning, and hence the thermodynamic properties of a crystal, like the specific heat?
|
ECE662: Statistical Pattern Recognition and Decision Making Processes
Spring 2008, Prof. Boutin
Collectively created by the students in the class
Lecture 28 Lecture notes
The Last Class
Welcome to the last class of the semester! The kiwi is going to be a great resource.
Partial Differential Equations (Continued)
[Note: I do not think the difference between $ \partial $ and $ d $ is very important -- feel free to fix it if you disagree. I'm also a little sloppy with the vector symbols]
We may discretize this using a finite difference equation, where $ u(t,x) $ is the initial guess. But we can also look at this a different way.
We fix t, and let $ \frac{\Delta t}{\Delta x^{2}}=\frac{1}{3} $. Then:
What does this mean? Let's consider an example. This is the same as convolution:
[Figure: [1/3,1/3,1/3]*[(x-delx),*(x),(x+delx)]u(t,.) ]
At each successive time step, the convolution iterates on the results of the previous convolution until the solution stabilizes. (That is, until we reach the steady state solution, where $ \frac{du}{dt}=0\Longrightarrow u_{xx}=0 $)
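A small sketch of this smoothing step (my own illustration of the three-point average; the noisy test signal is made up):

```python
import numpy as np

def heat_smooth(u, steps):
    # With dt/dx^2 = 1/3 the explicit update is a moving average with kernel [1/3, 1/3, 1/3].
    kernel = np.array([1/3, 1/3, 1/3])
    for _ in range(steps):
        u = np.convolve(u, kernel, mode="same")   # boundary handling is crude here
    return u

noisy = np.sin(np.linspace(0, np.pi, 50)) + 0.2 * np.random.randn(50)
print(heat_smooth(noisy, steps=10))   # a progressively smoother profile
```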
We can use heat flow to smooth out parametric curves, e.g. in $ \mathbb{R}^{2} $, consider the curve
We shall commonly drop the vector notation in the rest of this discussion.
[Figure: Arbitrary curve , looping from $ \tau=0 $ to $ \tau=10,000 $.]
And consider the family
satisfying the 2D heat equation
This gives us two 1-D heat equations
The solution to these equations is If we cannot solve these analytically, we can again discretize the solution. Note that $ \Delta\tau\neq\Delta x $. $ \Delta\tau $ is the spacing between the parameters, and is a constant. However, $ \Delta x $ is the spacing of the points in the x direction, and it is certainly not constant around the entire curve! And a little note on notation: [Figure showing the notation]
At this point in the class, we reviewed how to solve differential equations iteratively.
How does this apply to valley-finding?
Use the estimated density $ p(x) $ to define the energy of the curve in the feature space. The simplest formulation is
It's funny that rivers solve PDEs. They find the minimal energy route through the valleys. This is exactly what we want, because the valleys separate the mountains, which are our clusters.
Look for a curve that minimizes E. For example, in 2D feature space, look for a curve in $ \mathbb{R}^{2} $, say $ c(\tau)=(x(\tau),y(\tau)) $, such that
Gradient descent from $ \frac{dc}{dt}=E' $.
Numerically, we need to discretize the left hand side, but not the right hand side. [Figure of density points and approximated density landscape.]
[Start with curve around both mountains, this is the initial guess for the boundaries of the feature space]
[Note that the curve is discretized, each point is assigned to a different value of $ \tau $]
We can represent the points on the curve in a matrix
Then we can iterate using the equation above to find the boundaries which minimize the energy, and thus give the best separation of the clusters. Issues with this approach: We must fix the number of clusters. We must assume the curve has some degree of smoothness.
[Figure of this phenomenon]
The points tend to collapse on each other or move away from each other.
[Figure of this phenomenon] How can we overcome these difficulties? With (as the Spanish developers might call it) "De Le-vel set." (pronounced with the long e sound which doesn't really exist in English so much.)
The Level Set Approach
This method was developed by Osher and Sethian.
Assume we are evolving a curve in $ \mathbb{R}^{2} $. Instead of doing this, we evolve a surface in $ \mathbb{R}^{3} $ for which the "zero level set,"
that is, the intersection with the $z=0$ plane, is the curve we wish to evolve.
[Figure of an evolving curve]
[Figure of evolving surface instead of curve]
Although evolving such a surface can be challenging, it is definitely worth it, because our curve may undergo topology changes. How can this happen? Even though the surface evolves without topology changes, the zero-set (our curve) can change topologies and have singularities.
[Figure t=0: of the one-hump animal, yielding a single curve]
[Figure t=1: The top flattens, the curve turns into a peanut]
[Figure t=2: The valley dips so we get the Chameau]
[Figure t=3: The Chameau sinks deeper in the water so that only two humps appear. The curve splits into two]
We evolve the surface until it converges, and then we can look at the resulting zero-set and count the clusters. We do not need to know them a priori.
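Here is a toy Python sketch of that idea; the surface below is my own construction (two bumps sinking below $z=0$ as $t$ grows, mimicking the pictures above), not the energy-based surface from class, and the cluster count is just the number of connected components of the region where $\phi<0$.

```python
import numpy as np
from collections import deque

def phi(x, y, t):
    # Two Gaussian bumps "sinking into the water" as t increases.
    return t - np.exp(-((x - 1.5)**2 + y**2)) - np.exp(-((x + 1.5)**2 + y**2))

def count_regions(mask):
    """Count 4-connected components of True cells with a simple flood fill."""
    mask = mask.copy()
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:
                count += 1
                queue = deque([(i, j)])
                mask[i, j] = False
                while queue:
                    a, b = queue.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < mask.shape[0] and 0 <= nb < mask.shape[1] and mask[na, nb]:
                            mask[na, nb] = False
                            queue.append((na, nb))
    return count

xs, ys = np.meshgrid(np.linspace(-4, 4, 200), np.linspace(-3, 3, 150))
for t in (0.15, 0.8):
    inside = phi(xs, ys, t) < 0
    print(f"t = {t}: {count_regions(inside)} cluster(s)")
# Expected: one region for small t, two once the valley between the bumps rises above zero.
```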
As a note for those who are not taking this class: We use French here because that is the mother language of our professor (Mimi), and because we were having difficulty finding the English words for these at first! (And by "we", we do mean the entire class.)
What PDE describes this surface evolution? Suppose
where $ \overrightarrow{N}(t,\tau) $ is normal to the curve. This is a very general equation because any curve evolution can be expressed in this fashion. Then we can consider an evolving surface $ \phi(x,y,t):\mathbb{R}^{2}\times\mathbb{R}\rightarrow\mathbb{R} $ whose zero-level set at $t$, $ \left\{ (x,y)\mid\phi(x,y,t)=0\right\} $, is the curve we are evolving. The Euler form of the level set is
Conclusion
And that's it for the semester! Have fun!
|
Nearly $\theta$-supercompact cardinals
The near $\theta$-supercompactness hierarchy of cardinals was introduced by Jason Schanker in [1] and [2]. The hierarchy stratifies the $\theta$-supercompactness hierarchy in the sense that every $\theta$-supercompact cardinal is nearly $\theta$-supercompact, and every nearly $2^{\theta^{{<}\kappa}}$-supercompact cardinal $\kappa$ is $\theta$-supercompact. However, these cardinals can be very different. For example, relative to the existence of a supercompact cardinal $\kappa$ with an inaccessible cardinal $\theta$ above it, we can force to destroy $\kappa$'s measurability while still retaining its near $\theta$-supercompactness and the weak inaccessibility of $\theta$. Yet, if $\theta^{{<}\kappa} = \theta$ and $\kappa$ is $\theta$-supercompact, we can also force to preserve $\kappa$'s $\theta$-supercompactness while destroying any potential near $\theta^+$-supercompactness without collapsing cardinals below $\theta^{++}$. Assuming that $\theta^{{<}\kappa} = \theta$, nearly $\theta$-supercompact cardinals $\kappa$ exhibit a hybrid of weak compactness and supercompactness in that the witnessing embeddings are between $\text{ZFC}^-$ ($\text{ZFC}$ minus the powerset axiom) models of size $\theta$ but are generated by "partially normal" fine filters on $\mathcal{P}_{\kappa}(\theta$). Weakly compact cardinals $\kappa$ are nearly $\kappa$-supercompact.
Contents Formal definition
A cardinal $\kappa$ is
nearly $\theta$-supercompact if and only if for every $A\subseteq\theta$, there exists a transitive $M \vDash ZFC^{-}$ closed under ${<}\kappa$ sequences with $A, \kappa, \theta \in M$, a transitive $N$, and an elementary embedding $j: M \rightarrow N$ with critical point $\kappa$ such that $j(\kappa) > \theta$ and $j''\theta \in N$. A cardinal is nearly supercompact if it is nearly $\theta$-supercompact for all $\theta$. Characterizations of near $\theta$-supercompactness
If $\theta^{{<}\kappa} = \theta$, then the following are equivalent characterizations for the near $\theta$-supercompactness of $\kappa$:
Embedding: For every ${<}\kappa$-closed transitive set $M$ of size $\theta$ with $\theta \in M$, there exists a transitive $N$ and an elementary embedding $j: M \rightarrow N$ with critical point $\kappa$ such that $j(\kappa) > \theta$ and $j''\theta \in N$.
Normal Embedding
Normal ZFC Embedding
Normal Fine Filter: For every family of subsets $\mathcal{A} \subset \mathcal{P}_\kappa(\theta)$ of size at most $\theta$ and every collection $\mathcal{F}$ of at most $\theta$ many functions from $\mathcal{P}_{\kappa}(\theta)$ into $\theta$, there exists a $\kappa$-complete fine filter $F$ on $\mathcal{P}_{\kappa}(\theta)$, which is $\mathcal{F}$-normal in the sense that for every $f \in \mathcal{F}$ that's regressive on some set in $F$, there exists $\alpha_f < \theta$ for which $\{\sigma \in \mathcal{P}_{\kappa}(\theta)\mid f(\sigma) = \alpha_f\} \in F$.
Hauser Embedding
Nearly strongly compact
References
Schanker, Jason A. Partial near supercompactness. Ann. Pure Appl. Logic, 2012. (In press.)
Schanker, Jason A. Weakly measurable cardinals and partial near supercompactness. Ph.D. thesis, CUNY Graduate Center, 2011.
|
Learning Objectives
Make sure you thoroughly understand the following essential ideas:
When arbitrary quantities of the different components of a chemical reaction system are combined, the overall system composition will not likely correspond to the equilibrium composition. As a result, a net change in composition ("a shift to the right or left") will tend to take place until the equilibrium state is attained. The status of the reaction system in regard to its equilibrium state is characterized by the value of the equilibrium expression, whose formulation is defined by the coefficients in the balanced reaction equation; it may be expressed in terms of concentrations or, in the case of gaseous components, as partial pressures. The various terms in the equilibrium expression can have any arbitrary value (including zero); the value of the equilibrium expression itself is called the reaction quotient Q. If the concentration or pressure terms in the equilibrium expression correspond to the equilibrium state of the system, then Q has the special value K, which we call the equilibrium constant. The ratio Q/K (whether it is 1, >1 or <1) thus serves as an index of how far the system is from its equilibrium composition, and its value indicates the direction in which the net reaction must proceed in order to reach its equilibrium state. When Q = K, the equilibrium state has been reached, and no further net change in composition will take place as long as the system remains undisturbed.
Consider a simple reaction such as the gas-phase synthesis of hydrogen iodide from its elements: \[H_2 + I_2 \rightarrow 2 HI\] Suppose you combine arbitrary quantities of \(H_2\), \(I_2\) and \(HI\). Will the reaction create more HI, or will some of the HI be consumed as the system moves toward its equilibrium state? The concept of the
reaction quotient, which is the focus of this short lesson, makes it easy to predict what will happen. What is the Equilibrium Quotient?
In the previous section we defined the
equilibrium expression for the reaction
In the general case in which the concentrations can have any arbitrary values (including zero), this expression is called the reaction quotient (the term
equilibrium quotient is also commonly used.) and its value is denoted by \(Q\) (or \(Q_c\) or \(Q_p\) if we wish to emphasize that the terms represent molar concentrations or partial pressures.) If the terms correspond to equilibrium concentrations, then the above expression is called the equilibrium constant and its value is denoted by \(K\) (or \(K_c\) or \(K_p\)).
\(K\)is thus the special value that \(Q\) has when the reaction is at equilibrium
The value of
Q in relation to K serves as an index of how the composition of the reaction system compares to that of the equilibrium state, and thus it indicates the direction in which any net reaction must proceed. For example, if we combine the two reactants A and B at concentrations of 1 mol L –1 each, the value of Q will be 0÷1=0. The only possible change is the conversion of some of these reactants into products. If instead our mixture consists only of the two products C and D, Q will be indeterminately large (1÷0) and the only possible change will be in the reverse direction.
It is easy to see (by simple application of the Le Chatelier principle) that the ratio of
Q/K immediately tells us whether, and in which direction, a net reaction will occur as the system moves toward its equilibrium state. A schematic view of this relationship is shown below:
It is very important that you be able to work out these relations for yourself, not by memorizing them, but from the definitions of \(Q\) and \(K\).
Condition and status of the system:
Q > K: Product concentration too high for equilibrium; net reaction proceeds to the left.
Q = K: System is at equilibrium; no net change will occur.
Q < K: Product concentration too low for equilibrium; net reaction proceeds to the right.
Example \(\PageIndex{1}\)
The equilibrium constant for the oxidation of sulfur dioxide is
$K_p$ = 0.14 at 900 K.
\[\ce{2 SO_2(g) + O_2(g) \rightleftharpoons 2 SO_3(g)} \nonumber\]
If a reaction vessel is filled with SO$_3$ at a partial pressure of 0.10 atm and with O$_2$ and SO$_2$ each at a partial pressure of 0.20 atm, what can you conclude about whether, and in which direction, any net change in composition will take place? Solution:
The value of the equilibrium quotient
Q for the initial conditions is
\[ Q= \dfrac{p_{SO_3}^2}{p_{O_2}p_{SO_2}^2} = \dfrac{(0.10\; atm)^2}{(0.20 \;atm) (0.20 \; atm)^2} = 1.25\; atm^{-1} \nonumber\]
Since
Q > K, the reaction is not at equilibrium, so a net change will occur in a direction that decreases Q. This can only occur if some of the SO$_3$ is converted back into reactants. In other words, the reaction will "shift to the left".
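For readers who like to check such comparisons numerically, here is a small Python sketch of the reasoning in this example (the function name is my own; the numbers are those given above):

```python
# Compare Q with K for 2 SO2(g) + O2(g) <=> 2 SO3(g) at the stated initial pressures.
def reaction_quotient(p_SO3, p_SO2, p_O2):
    return p_SO3**2 / (p_SO2**2 * p_O2)

K = 0.14                                    # Kp at 900 K, from the example
Q = reaction_quotient(p_SO3=0.10, p_SO2=0.20, p_O2=0.20)
print(f"Q = {Q:.2f} atm^-1")                # 1.25

if Q > K:
    print("Q > K: net reaction proceeds to the left (SO3 is consumed).")
elif Q < K:
    print("Q < K: net reaction proceeds to the right (SO3 is formed).")
else:
    print("Q = K: the system is already at equilibrium.")
```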
The formal definitions of
Q and K are quite simple, but they are of limited usefulness unless you are able to relate them to real chemical situations. The following diagrams illustrate the relation between Q and K from various standpoints. Take some time to study each one carefully, making sure that you are able to relate the description to the illustration.
Example \(\PageIndex{2}\): Dissociation of dinitrogen tetroxide
For the reaction
\[N_2O_{4(g)} \rightleftharpoons 2 NO_{2(g)} \nonumber\]
$K_c$ = 0.0059 at 298 K.
This equilibrium condition is represented by the red curve that passes through all points on the graph that satisfy the requirement that
\[Q = \dfrac{[NO_2]^2}{ [N_2O_4]} = 0.0059 \nonumber\]
There are of course an infinite number of possible
Q's of this system within the concentration boundaries shown on the plot. Only those points that fall on the red line correspond to equilibrium states of this system (those for which \(Q = K_c\)). The line itself is a plot of [NO$_2$] that we obtain by rearranging the equilibrium expression
\[[NO_2] = \sqrt{[N_2O_4]K_c} \nonumber\]
If the system is initially in a non-equilibrium state, its composition will tend to change in a direction that moves it to one that is on the line. Two such non-equilibrium states are shown. One of the marked states has \(Q > K\), so we would expect a net reaction that reduces
Q by converting some of the NO$_2$ into N$_2$O$_4$; in other words, the equilibrium "shifts to the left". Similarly, in the other marked state, Q < K, indicating that the forward reaction will occur.
The blue arrows in the above diagram indicate the successive values that Q assumes as the reaction moves closer to equilibrium. The slope of the line reflects the stoichiometry of the equation. In this case, one mole of reactant yields two moles of products, so the slopes have an absolute value of 2:1.
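Without the original figure, a short Python sketch can stand in for it: the equilibrium "red line" is $[NO_2] = \sqrt{K_c\,[N_2O_4]}$, and any other state is classified by comparing $Q$ with $K_c$; the two test states below are made-up numbers, not the states labelled in the diagram.

```python
import numpy as np

Kc = 0.0059   # for N2O4 <=> 2 NO2 at 298 K, as given above

def equilibrium_NO2(conc_N2O4):
    return np.sqrt(Kc * conc_N2O4)

def classify(conc_NO2, conc_N2O4):
    Q = conc_NO2**2 / conc_N2O4
    if np.isclose(Q, Kc):
        return "at equilibrium"
    return "shifts left (NO2 -> N2O4)" if Q > Kc else "shifts right (N2O4 -> 2 NO2)"

print(equilibrium_NO2(0.10))       # equilibrium [NO2] for [N2O4] = 0.10 M, about 0.024 M
print(classify(0.050, 0.10))       # Q = 0.025 > Kc, so the mixture shifts left
print(classify(0.010, 0.10))       # Q = 0.001 < Kc, so the mixture shifts right
```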
Example \(\PageIndex{3}\): Phase-change equilibrium
One of the simplest equilibria we can write is that between a solid and its vapor. In this case, the equilibrium constant is just the vapor pressure of the solid. Thus for the process
\[I_{2(s)} \rightleftharpoons I_{2(g)} \nonumber\]
all possible equilibrium states of the system lie on the horizontal red line, and the equilibrium vapor pressure is independent of the quantity of solid present (as long as there is at least enough to supply the relatively tiny quantity of vapor).
So adding various amounts of the solid to an empty closed vessel (the two states shown) causes a gradual buildup of iodine vapor. Because the equilibrium pressure of the vapor is so small, the amount of solid consumed in the process is negligible, so the arrows go straight up and all lead to the same equilibrium vapor pressure.
Example \(\PageIndex{4}\): Heterogeneous chemical reaction
The decomposition of ammonium chloride is a common example of a heterogeneous (two-phase) equilibrium. Solid ammonium chloride has a substantial vapor pressure even at room temperature:
\[NH_4Cl_{(s)} \rightleftharpoons NH_{3(g)} + HCl_{(g)}\]
One arrow traces the states the system passes through when solid NH$_4$Cl is placed in a closed container. Another arrow represents the addition of ammonia to the equilibrium mixture; the system responds by following the path back to a new equilibrium state which, as the Le Chatelier principle predicts, contains a smaller quantity of ammonia than was added. The unit slopes of the paths reflect the 1:1 stoichiometry of the gaseous products of the reaction.
|
When it comes to buffer solutions, one of the most common equations is the Henderson-Hasselbalch approximation. An important point that must be made about this equation is that it is useful only if stoichiometric or initial concentrations can be substituted into the equation for equilibrium concentrations.
Origin of the Henderson-Hasselbalch Equation
Where the Henderson-Hasselbalch approximation comes from
\[HA + H_2O \rightleftharpoons H_3O^+ + A^- \label{1}\]
where,
\(A^-\) is the conjugate base \(HA\) is the weak acid
We know that \(K_a\) is equal to the products over the reactants and that, by definition, H$_2$O is essentially a pure liquid, so its term is taken to be equal to one.
\[K_a = \dfrac{[H_3O^+][A^-]}{[HA]} \label{2}\]
Take the \(-\log\) of both sides:
\[-\log \; K_a = -\log\left(\dfrac{[H_3O^+][A^-]}{[HA]}\right) \label{3}\]
\[-\log \; K_a = -\log[H_3O^+] \; -\log\dfrac{[A^-]}{[HA]} \label{4}\]
Using the following two relationships:
\[-\log[K_a] = pK_a \label{5}\]
\[-\log[H_3O^+] = pH \label{6}\]
We can simplify the above equation:
\[pK_a = pH - \log\dfrac{[A^-]}{[HA]} \label{7}\]
If we add \(\log\dfrac{[A^-]}{[HA]}\) to both sides, we get the
Henderson-Hasselbalch approximation :
\[pH = pK_a + \log\dfrac{[A^-]}{[HA]} \label{8}\]
This approximation is only
valid when: The ratio of conjugate base to acid, \([A^-]/[HA]\), falls between the values of 0.1 and 10. The molarities of the buffer components exceed the value of the \(K_a\) by a factor of at least 100.
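For a quick numerical sense of when the approximation holds, here is a minimal Python sketch comparing it with the exact weak-acid equilibrium; the acid ($pK_a = 4.76$, roughly acetic acid) and the concentrations are arbitrary choices, not values from this page.

```python
import numpy as np

pKa = 4.76
Ka = 10**(-pKa)

def pH_henderson_hasselbalch(c_HA, c_A):
    return pKa + np.log10(c_A / c_HA)

def pH_exact(c_HA, c_A):
    # Solve Ka = x*(c_A + x)/(c_HA - x) for x = [H3O+], neglecting water autoionization.
    # Rearranged: x^2 + (c_A + Ka)*x - Ka*c_HA = 0.
    b, c = c_A + Ka, -Ka * c_HA
    x = (-b + np.sqrt(b**2 - 4 * c)) / 2
    return -np.log10(x)

for c_HA, c_A in [(0.10, 0.10), (0.10, 0.010), (1e-4, 1e-5)]:
    print(c_HA, c_A,
          round(pH_henderson_hasselbalch(c_HA, c_A), 3),
          round(pH_exact(c_HA, c_A), 3))
# The two agree for concentrated buffers with ratios near 1, and drift apart once the
# concentrations get close to Ka, which is what the validity conditions above guard against.
```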
Example \(\PageIndex{1}\)
Suppose we needed to make a buffer solution with a pH of 2.11. In the first case, we would try to find a weak acid with a \(pK_a\) value of 2.11. However, at the same time the molarities of the acid and its salt must be equal to one another. This will cause the two molarities to cancel, leaving \(\log \dfrac{[A^-]}{[HA]}\) equal to \(\log(1)\), which is zero.
\[pH = pK_a + \log\dfrac{[A^-]}{[HA]} = 2.11 + \log(1) = 2.11\]
This is a very unlikely scenario, however, and you won't often find yourself with Case #1
Example \(\PageIndex{2}\)
What mass of \(NaC_7H_5O_2\) must be dissolved in 0.200 L of 0.30 M \(HC_7H_5O_2\) to produce a solution with pH = 4.78? (Assume the solution volume is constant at 0.200 L.) SOLUTION
\(HC_7H_5O_2 + H_2O \rightleftharpoons H_3O^+ + C_7H_5O_2^-\)
\(K_a =6.3 \times 10^{-5}\)
\(K_a = \dfrac{[H_3O^+][C_7H_5O_2^-]}{[HC_7H_5O_2]} = 6.3 \times 10^{-5}\)
\([H_3O^+] = 10^{-pH} = 10^{-4.78} = 16.6 \times 10^{-6}\;M\), \([HC_7H_5O_2] = 0.30\;M\), and \([C_7H_5O_2^-]\) is the unknown:
\[[C_7H_5O_2^-] = K_a \times \dfrac{[HC_7H_5O_2]}{[H_3O^+]}\]
\[[C_7H_5O_2^-] = 6.3 \times 10^{-5} \times \dfrac{0.30}{16.6 \times 10^{-6}} = 1.14 \; M\]
Mass \(= 0.200\;L \times \dfrac{1.14\;mol\;C_7H_5O_2^-}{1\;L} \times \dfrac{1\;mol\;NaC_7H_5O_2}{1\;mol\;C_7H_5O_2^-} \times \dfrac{144\;g\;NaC_7H_5O_2}{1\;mol\;NaC_7H_5O_2} = 32.832\;g\;NaC_7H_5O_2\)
References: Petrucci, et al. General Chemistry: Principles & Modern Applications. 9th ed. Upper Saddle River, New Jersey, 2007.
Contributors: Jonathan Nguyen (UCD), Garrett Larimer (UCD)
|
You are right. The $i$ indices of the occupation numbers $n_i$ can be thought of as multi-indices meaning that each $i$ encodes both the wavevector $k$ (which is quantized once boundary conditions are imposed) and the spin sign $\sigma$. In general, $i$ will enumerate all the allowed combinations of quantum numbers that characterize the single particle states.
It is also true that only the entries $n_i$ of those states which are populated will be non-zero. If the total particle number $N$ is fixed, the constraint $\sum_in_i=N$ has to hold.
I added this in response to your comment:
The space the occupation number states naturally live in is referred to as Fock space. It can be written as
$\mathcal F^\pm = \bigoplus_{N=0}^\infty\mathcal H_N^\pm = \mathcal H_0 \oplus \mathcal H_1 \oplus \mathcal H_2^\pm \oplus \dots$
where $\mathcal H_N^\pm$ refers to the usual $N$ particle Hilbert spaces and the $\pm$ indicates whether we restrict ourselves to symmetric or antisymmetric states, i.e. bosons and fermions respectively. It is convenient to work in Fock space because annihilation and creation operators will take you from one Hilbert space with fixed particle number to another.
Now, assuming that you mean 'dimension' when you talk about the 'size' of a Hilbert space, keep in mind that you have to obey the (anti-)symmetry constraint. This reduces the dimension of each of the $N$ particle subspaces.
As an example, consider a two-state fermionic system, e.g. electrons with spin either $\downarrow$ ($i=1$) or $\uparrow$ ($i=2$). Due to the Pauli principle (i.e. as a consequence of the antisymmetry requirement) the only occupation numbers allowed are 0 and 1. $\mathcal H_0$ is one-dimensional as always, it contains only the vacuum state $|00\rangle$ (and multiples thereof, but we are interested in normalized linearly independent states). The one particle Hilbert space is two-dimensional as the electron can be in either one of the states (and symmetry still doesn't matter since there are no two particles to swap). The two particle Hilbert space $\mathcal H_2^-$ has again only one linearly independent state, $|11\rangle$, because Mr. Pauli doesn't allow us to populate any of the two states twice. Thus, the two-state fermionic Fock space is four-dimensional.
More generally, the $M$-state fermionic Fock space will be $\sum_{N=0}^M\binom{M}{N}=2^M$ dimensional (which follows from simple combinatorics) and bosonic Fock spaces are infinite dimensional because bosons can occupy the same state as many times as they want.
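To connect the counting to something tangible, here is a small Python sketch that enumerates fermionic occupation-number states and checks the dimensions quoted above (the helper function is my own):

```python
from itertools import product
from math import comb

def fermionic_states(M):
    """All occupation-number tuples (n_1, ..., n_M) with each n_i in {0, 1}."""
    return list(product((0, 1), repeat=M))

M = 2                               # the two-state (spin down / spin up) example above
states = fermionic_states(M)
print(states)                       # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(len(states), 2**M)            # both 4: the Fock space is four-dimensional

# Dimension of each fixed-N subspace H_N^- equals the binomial coefficient C(M, N).
for N in range(M + 1):
    dim = sum(1 for occ in states if sum(occ) == N)
    print(N, dim, comb(M, N))
```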
|
I expected to get 27.6 mH. By doing what you guys said I get to a turns ratio of 2 (dividing the voltages). That means the current I2rms is 2 times the current I1rms, which equals 10 V. That means L22 = LM/2 = 15.9 mH
1. Homework StatementOk I have the following circuit and data (when the subscript is "ef" it means "rms" values):I am asked to determine the parameters of the transformer r1 L11, L22 and LM with the given experimental data.2. Homework Equations3. The Attempt at a SolutionI had...
1. Homework StatementSo I'm really confused with mutual inductors and dot convention. If your answer is going to be a link to any website I can assure you I read them all and that only left me more confused. So here are my questions:2. Homework Equations3. The Attempt at a Solution->...
1. Homework StatementDetermine the parameters of the PI controller such that two of the closed-loop poles of the transfer function Gclr(s) correspond to the poles of a second order LTI system with the following specifications: i) overshoot S% = 25%; and ii) settling time ts(5%) = 120 s.2...
1. Homework StatementI have a coaxial cable with internal conductor of radius r1 and external conductor of radii r2 and r3. The material of the conductors has a conductivity ##\sigma_1##. Between the conductors there is a imperfect dielectric of conductivity ##\sigma_2##.Consider the...
1. Homework StatementThe following figure represent the traversal cut of a system with two cylindrical equal conductors of radius r0 length l at a distance d from one another and at the same distance h of a plane conductor (conductor zero). The dielectric that surrounds the conductors is the...
Hey berkeman! Thank you for trying to help.Yes partial capacitances are a little bit different than "regular capacitances" (i.e. the capacitances that figure out on a capacitance matrix). What my book says is" An alternative method – the so-called partial capacitance scheme – consists of...
1. Homework StatementI have a small question about the following problem. The figure represents the cross-section of a three-conductor system comprising a communications coaxial cable of length l running parallel to a conducting wall (reference conductor). Determine the partial capacitance...
@Charles Link as you advised me I gave a look into this problem. I have a few questions though that I hope you can help me clarify. My doubts are in the expression you use (the constant ##\mu_0##). Shouldn't we use ? And ? However by doing that I get to ##B= \mu_0 M##. Is that wrong?
Hi! Yes I considered the theoretical model given in class, that takes beta as a constant. I also noticed that the LTSPICE model of the transistor includes a finite value for the Early voltage, while in the theoretical analysis we are instructed to assume that this voltage is infinite (therefore...
Hi! I did a new measure considering the peak of the output voltage as the frequency to measure my resistances and obtained 19.6 k for Ri1 (which is pretty satisfactory)... But 200 kohm for Ri2! What am I doing wrong?
1. Homework StatementI'm studying a circuit with BJT's and I'm asked to determine the input resistances of the two amplification steps of the circuit. The circuit I'm analyzing is the following one:2. Homework Equations3. The Attempt at a SolutionTo determine the input resistances I...
1. Homework StatementI have the following circuit with MOSFET (cascode amplifier).[![enter image description here][1]][1][1]: https://i.stack.imgur.com/BAqQh.pngInformation given:##V_t=1V####I_1=20\mu A####k=10 \mu AV^{-2}##Knowing that ##V_b## is 5V determine the minimum value...
1. Homework StatementI'm studying the following circuit with a MOSFET[![enter image description here][1]][1][1]: https://i.stack.imgur.com/gEEdQ.png2. Homework Equations3. The Attempt at a SolutionNow for analyzing this circuit my book came out with various equations (which I...
Hi! Oh yes, that's the better approach, I didn't think about it, thanks! The phase difference between the voltages across the inductor and the capacitor is $\pi$. Since the overall effective voltage is 1, that leaves us with two options: the voltage on the capacitor is zero (this is the case...
1. Homework StatementConsider the following circuit where i(t) is sinusoidal and exists across both components. (1) is an inductor and (2) is a capacitor. The ideal voltmeters measure effective value. What is the value measured by V2:[![enter image description here][1]][1][1]...
1. Homework Statement 2. Homework Equations I'm studying the mathematical behaviour of a servo motor and I need some help to understand it. The output signal is $\beta(t)$, representing the angle rotated by the axis at instant t, in relation to the equilibrium position. On the servomotor...
1. Homework StatementI have the following problem. Consider a circuit node where 3 sinusoidal currents with the same frequency converge, i1 i2 and i3. Knowing that the effective values of i1 and i2 are I1ef=1A and I2ef=2A. What can we say about I3ef:Options:$$(a)1A \leq I_{3ef} \leq 3A$$...
|
Variable Importance in Random Forests can suffer from severe overfitting Predictive vs. interpretational overfitting
There appears to be broad consensus that random forests rarely suffer from the “overfitting” which plagues many other models. (We define
overfitting as choosing a model flexibility which is too high for the data-generating process at hand, resulting in non-optimal performance on an independent test set.) By averaging many (hundreds of) separately grown deep trees, each of which inevitably overfits the data, one often achieves a favorable balance in the bias-variance tradeoff. For similar reasons, the need for careful parameter tuning also seems less essential than in other models.
This post does not attempt to contribute to this long standing discussion (see e.g. https://stats.stackexchange.com/questions/66543/random-forest-is-overfitting) but points out that random forests’ immunity to overfitting is restricted to the predictions only and not to the default variable importance measure!
We assume the reader is familiar with the basic construction of random forests which are averages of large numbers of individually grown regression/classification trees. The random nature stems from both “row and column subsampling”: each tree is based on a random subset of the observations, and each split is based on a random subset of candidate variables. The tuning parameter mtry, the number of candidate variables considered at each split, which for popular software implementations has the default \(\lfloor p/3 \rfloor\) for regression and \(\sqrt{p}\) for classification trees, can have profound effects on prediction quality as well as the variable importance measures outlined below.
At the heart of the random forest library is the CART algorithm, which chooses the split for each node such that the maximum reduction in overall node impurity is achieved. Due to the bootstrap row sampling, about \(36.8\%\) of the observations are (on average) not used for an individual tree; those “out of bag” (OOB) samples can serve as a validation set to estimate the test error, e.g.:
\[\begin{equation} E\left( Y - \hat{Y}\right)^2 \approx OOB_{MSE} = \frac{1}{n} \sum_{i=1}^n{\left( y_i - \overline{\hat{y}}_{i, OOB}\right)^2} \end{equation}\]
where \(\overline{\hat{y}}_{i, OOB}\) is the average prediction for the \(i\)th observation from those trees for which this observation was OOB.
Variable Importance
The default method to compute variable importance is the
mean decrease in impurity (or gini importance) mechanism: At each split in each tree, the improvement in the split-criterion is the importance measure attributed to the splitting variable, and is accumulated over all the trees in the forest separately for each variable. Note that this measure is quite like the \(R^2\) in regression on the training set.
The widely used alternative measure of variable importance, the
permutation importance, is defined as follows: \[\begin{equation} \label{eq:VI} \mbox{VI} = OOB_{MSE, perm} - OOB_{MSE} \end{equation}\] Gini importance can be highly misleading
We use the well known titanic data set to illustrate the perils of putting too much faith into the Gini importance which is based entirely on training data – not on OOB samples – and makes no attempt to discount impurity decreases in deep trees that are pretty much frivolous and will not survive in a validation set.
In the following model we include
passengerID as a feature along with the more reasonable Age, Sex and Pclass:
randomForest(Survived ~ Age + Sex + Pclass + PassengerId, data=titanic_train[!naRows,], ntree=200,importance=TRUE,mtry=2)
The figure below shows both measures of variable importance and surprisingly
passengerID turns out to be ranked number 2 for the Gini importance (right panel). This unexpected result is robust to random shuffling of the ID.
The permutation based importance (left panel) is not fooled by the irrelevant ID feature. This is maybe not unexpected as the IDs should bear no predictive power for the out-of-bag samples.
Noise Feature
Let us go one step further and add a Gaussian noise feature, which we call PassengerWeight:
titanic_train$PassengerWeight = rnorm(nrow(titanic_train),70,20)
rf4 =randomForest(Survived ~ Age + Sex + Pclass + PassengerId + PassengerWeight, data=titanic_train[!naRows,], ntree=200,importance=TRUE,mtry=2)
Again, the blatant “overfitting” of the Gini variable importance is troubling whereas the permutation based importance (left panel) is not fooled by the irrelevant features. (Encouragingly, the importance measures for ID and weight are even negative!)
In the remainder we investigate if other libraries suffer from similar spurious variable importance measures.
h2o library
Unfortunately, the h2o random forest implementation does not offer permutation importance:
Coding passenger ID as integer is bad enough:
Coding passenger ID as factor makes matters worse:
Let’s look at a single tree from the forest:
If we scramble ID, does it hold up?
partykit
conditional inference trees are not being fooled by ID:
And the variable importance in cforest is indeed unbiased.
python's sklearn
Unfortunately, like h2o the python random forest implementation offers only Gini importance, but this insightful post offers a solution:
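In that spirit, here is a minimal hand-rolled permutation importance for a scikit-learn forest (the feature layout only loosely follows the titanic example, and all names in the commented usage are assumptions; recent scikit-learn releases also ship a built-in sklearn.inspection.permutation_importance):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def permutation_importance_manual(model, X_val, y_val, n_repeats=10, seed=0):
    """Mean drop in validation accuracy when each column is shuffled in turn."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X_val, y_val)
    importances = np.zeros(X_val.shape[1])
    for j in range(X_val.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X_val.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the feature/target link
            drops.append(baseline - model.score(X_perm, y_val))
        importances[j] = np.mean(drops)
    return importances

# Hypothetical usage with a numeric feature matrix X (Age, Sex, Pclass, PassengerId) and labels y:
# X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
# rf = RandomForestClassifier(n_estimators=200, max_features=2).fit(X_tr, y_tr)
# print(permutation_importance_manual(rf, X_val, y_val))
# An irrelevant column such as PassengerId should come out near zero (or slightly negative).
```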
Gradient Boosting
Boosting is highly robust against frivolous columns:
mdlGBM = gbm(Survived ~ Age + Sex + Pclass + PassengerId +PassengerWeight, data= titanic_train, n.trees = 300, shrinkage = 0.01, distribution = "gaussian")
Conclusion
Sadly, this post is 12 years behind:
It has been known for a while now that the Gini importance tends to inflate the importance of continuous or high-cardinality categorical variables:
the variable importance measures of Breiman’s original Random Forest method … are not reliable in situations where potential predictor variables vary in their scale of measurement or their number of categories.
Single Trees
I am still struggling with the extent of the overfitting. It is hard to believe that passenger ID could be chosen as a split point
early in the tree building process given the other informative variables! Let us inspect a single tree
## rowname left daughter right daughter split var split point status
## 1 1 2 3 Pclass 2.5 1
## 2 2 4 5 Pclass 1.5 1
## 3 3 6 7 PassengerId 10.0 1
## 4 4 8 9 Sex 1.5 1
## 5 5 10 11 Sex 1.5 1
## 6 6 12 13 PassengerId 2.5 1
## prediction
## 1 <NA>
## 2 <NA>
## 3 <NA>
## 4 <NA>
## 5 <NA>
## 6 <NA>
This tree splits on passenger ID at the second level !! Let us dig deeper:
The help page states
For numerical predictors, data with values of the variable less than or equal to the splitting point go to the left daughter node.
So we have the 3rd class passengers on the right branch. Compare subsequent splits on (i) sex, (ii) Pclass and (iii) passengerID:
Starting with a parent node Gini impurity of 0.184
Splitting on sex yields a Gini impurity of 0.159
     1    2
0   72  303
1   71   50
Splitting on passengerID yields a Gini impurity of 0.183
   FALSE  TRUE
0      2   373
1      3   118
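Reading the rows of these tables as Survived = 0/1, and assuming that the impurity quoted here is $p(1-p)$ for the survival fraction $p$ of a node (with a size-weighted average over the two children of a split), a short sketch reproduces the three numbers above:

```python
def impurity(n_died, n_survived):
    p = n_survived / (n_died + n_survived)
    return p * (1 - p)

def weighted_split_impurity(children):
    """Size-weighted average impurity over (n_died, n_survived) child counts."""
    total = sum(d + s for d, s in children)
    return sum((d + s) * impurity(d, s) for d, s in children) / total

# Parent node (3rd class): 375 died, 121 survived -> 0.184
print(round(impurity(375, 121), 3))

# Split on sex, children (died, survived) = (72, 71) and (303, 50) -> 0.159
print(round(weighted_split_impurity([(72, 71), (303, 50)]), 3))

# Split on PassengerId <= 10, children (2, 3) and (373, 118) -> 0.183
print(round(weighted_split_impurity([(2, 3), (373, 118)]), 3))
```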
And how could passenger ID accrue more importance than sex ?
|
Now showing items 1-10 of 155
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
|
ZFR
I am math graduate student in one of the U.S. universities and I am deeply interested in Number theory, Additive combinatorics, Ramsey theory, Polynomial Method, Group and Ring theory, Finite Fields, Real and Fourier analysis, Measure Theory.
New York, NY, USA
Top network posts:
17 Sum of series $\sum \limits_{k=1}^{\infty}\frac{\sin^3 3^k}{3^k}$
15 $A+B+C+7$ is perfect square where $A,B,C$ numbers with repeating digits.
12 Infinite group has infinitely many subgroups, namely cyclic subgroups.
12 Solve an equation of 4th degree
10 If $\lim_{x \to \infty} (f(x)-g(x)) = 0$ then $\lim_{x \to \infty} (f^2(x)-g^2(x)) = 0\ $: False?
10 Looking for a non-combinatorial proof that $a! \cdot b! \mid (a+b)!$
9 A finite set is closed
|
How is the ordinal $\omega_1$ defined? I know that it is a supremum of all smaller ordinals, but then $\omega^\omega$ is also a supremum of all smaller ordinals. How can we distinguish these two numbers? Edit: changing the question into how $\omega^{\omega}$ can be shown countable, and how other countable ordinals can be shown to be countable.
$\omega^\omega$ is countable because it is a union of a countable collection of sets, each of which is itself countable: $$\omega^\omega = \bigcup_{i<\omega} \omega^i$$
So it suffices to show that:
A union of a countable collection of countable sets is countable. $\omega^i$ is countable whenever $i$ is finite.
(1) should be familiar to you; it is the usual Cantor argument for showing that the rationals are countable. If the countable sets are $S_0, S_1\ldots$ with elements $S_i = \{s_{i0}, s_{i1}, \ldots\}$, then we can enumerate the union of the $S_i$ as $s_{00}; s_{10}, s_{01}; s_{20}, s_{11}, s_{02}; s_{30}, \ldots$.
(2) is not hard either; you can prove that the product of two countable sets is countable (essentially as in the previous paragraph) and then show that since $\omega^{i+1} = \omega\times\omega^i $, countability of $\omega^{i+1}$ follows from that of $\omega^i$, which is enough to establish the result.
(You may want to look up the notion of cofinality. An ordinal $X$ has countable cofinality if it is a countable union of smaller ordinals. If those smaller ordinals are themselves countable, then $X$ is countable. So to show that $\omega^\omega$ is countable, it is enough to show that it has countable cofinality, which we can do by observing that it is the union of the countable ordinals $\omega^i$ for finite $i$.)
Then similarly if $C$ is some countable ordinal, $\omega^C$ is countable. For we can write some countable sequence $c_0, c_1,\ldots$ whose limit is $C$, and then $\omega^C = \bigcup \omega^{c_i}$ expresses $\omega^C$ as a countable union of countable sets. So not only are $\omega$ and $\omega^\omega$ countable, so are $\omega^{\omega^\omega}, \omega^{\omega^{\omega^\omega}} \ldots$. And then we can take the union of countable the sequence of countable sets $\omega, \omega^\omega, \omega^{\omega^\omega}, \omega^{\omega^{\omega^\omega}} \ldots$ and conclude that this union, usually written $\epsilon_0$, is countable as well.
$\epsilon_0$ has the property that it is the smallest ordinal $x$ for which $x=\omega^x$. There is an infinite sequence of countable ordinals with this property, and their union is still countable.
Generally speaking, the best course of action when trying to show that $\alpha$ is a countable ordinal, is to show that:
$\alpha = \delta+n$ where $\delta$ is a limit ordinal (if $n=0$ then $\alpha=\delta$); $\delta$ is the limit of $\{f(\beta)\mid \beta<\gamma\}$ for some countable $\gamma$; and $f(\beta)$ is countable whenever $\beta$ is countable.
For example, $\epsilon_0$ is the limit of $\omega^\omega,\omega^{\omega^\omega},\omega^{\omega^{\omega^\omega}},\ldots$. We can show that this is the limit of the recursive function $f(n+1)=\omega^{f(n)}$ and $f(0)=\omega$. We can further show that if $\beta$ is countable then $\omega^\beta$ is also countable (note that this is ordinal exponentiation) and therefore $\epsilon_0$ is the countable limit of countable ordinals and thus countable.
The same method can be applied to general ordinals, although obtaining such $f$ is often harder than it is in the case of $\epsilon_0$.
One caveat is that this is true in ZFC but not necessarily in ZF, where a countable union of countable ordinals may fail to be countable.
|
Suppose that $G$ is a Lie group with Lie algebra $\mathfrak{g}$ and the center of $G$ is denoted by $Z(G)$ with its Lie algebra denoted $Z(\mathfrak{g})$. It's easy to show that $Z(\mathfrak{g})\subset\big\{X\in\mathfrak{g}\;|\;[X,Y]=0,\forall Y\in\mathfrak{g} \big\}$, my problem is that, when are they equal?
One can show the following:
Let $G$ be a connected Lie group. Then $Z(\operatorname{Lie}G) = \operatorname{Lie}Z(G)$.
Here, $\operatorname{Lie}$ denotes the functor from Lie groups to Lie algebras, i.e. $\operatorname{Lie}G$ is the Lie algebra of $G$.
There is a nice
corollary:
Let $G$ be a connected Lie group. Then $G$ is Abelian if and only if $\operatorname{Lie}G$ is.
To prove the statement, we'll need the following
lemma:
Let $G$ be a connected Lie group. Then $Z(G) = \operatorname{ker}(Ad:G \rightarrow GL(\operatorname{Lie}G))$
Proof of the lemma:
"$\subset$": This follows immediately from the fact that conjugation by an element that lies in the center is the identity on $G$, in particular its differential is just $\operatorname{id}_{\operatorname{Lie}G}$.
"$\supset$": Let $g$ be in the kernel, i.e. $(\alpha_g)_* \equiv \operatorname{Ad}(g) = \operatorname{id}_{\operatorname{Lie}G}$, where $\alpha_g$ denotes conjugation by g. But the functor Lie is faithful on the subcategory of connected Lie groups
(*), so since $(\alpha_g)_* = \operatorname{id}_{\operatorname{Lie}G} = (\operatorname{id}_G)_*$, it follows $\alpha_g = \operatorname{id}_G$, thus $g \in Z(G)$.
This proves the lemma. In particular, since the kernel is a Lie subgroup of $G$, this shows that $Z(G)$ is a Lie subgroup, as well, and thus a Lie group, so the notion of $\operatorname{Lie}Z(G)$ in the original claim is in fact well-defined.
Proof of the original claim:
Since $Z(G)$ is a Lie subgroup of $G$, we can consider $\operatorname{Lie}Z(G) \subset \operatorname{Lie}G$ to be a subalgebra in the canonical way and $\exp_{Z(G)} \equiv \exp_G|_{Z(G)}$ (which we'll just denote by $\exp$ in the following).
"$\subset$": Let $X \in Z(\operatorname{Lie}G)$, i.e. $\operatorname{ad}X = 0$. If we find a curve $\gamma$ in $Z(G)$ whose derivative is $X$ we're done. The somewhat obvious idea is $\gamma(t) := \exp(tX)$ which has derivative $X$. So it remains to show that indeed $\gamma(t) \in Z(G)$. But $\operatorname{Ad}(\gamma(t)) = \operatorname{Ad}(\exp(tX)) = \exp_{GL(\operatorname{Lie}G)}(\operatorname{ad}(tX))$ by a commutative diagram which, in turn, equals $\exp_{GL(\operatorname{Lie}G)}(0) = \operatorname{id}_{GL(\operatorname{Lie}G)}$ and the claim follows by the lemma.
"$\supset$": Let $X \in \operatorname{Lie}Z(G)$, in particular $\exp(tX) \in Z(G)\; \forall t$. We have to show that $\operatorname{ad}X = 0$. Now, using commutative diagrams one can show in a straightforward way that $\forall x, y \in \operatorname{Lie}G$:
$$ \exp x · \exp y · (\exp x)^{-1} = \exp(e^{\operatorname{ad}x}(y))$$
Here, $e \equiv \exp_{GL(\operatorname{Lie}G)}$ is the matrix exponential. We apply this identity in the following way: Let $Y \in \operatorname{Lie}G$ and let $U$ be a neighborhood of $0 \in \operatorname{Lie}G$ s.t. $\exp|U: U \rightarrow \exp U$ is a diffeomorphism. Pick $s, t \in \mathbb{K}, s,t \neq 0$ small enough s.t. $sX, tY, e^{\operatorname{ad}(sX)}(tY) \in U$. Then plugging in $x := sX, y := tY$ above and using $\exp(sX) \in Z(G)$ results in:
$$ \exp(tY) = \exp(e^{\operatorname{ad}(sX)}(tY))\,, $$
so by injectivity of $\exp$ on $U$ (cancel $t$ using linearity):
$$ Y = e^{\operatorname{ad}(sX)}(Y)$$
Since $Y$ was arbitrary, this implies $e^{\operatorname{ad}(sX)} = \operatorname{id}_{\operatorname{Lie}G}$ for all $s$ small enough. By choosing $s$ even smaller (if necessary), we achieve that $\operatorname{ad}(sX)$ is in a neighborhood of $0 \in \operatorname{gl}(\operatorname{Lie}G)$ where $e$ is injective as well and we obtain $\operatorname{ad}(sX) = 0$, implying the claim.
Proof of the corollary:
Let $G$ be Abelian. Then $Z(G) = G$, so by the above we have $Z(\operatorname{Lie}G) = \operatorname{Lie}G$, proving that $\operatorname{Lie}G$ is Abelian.
Let $\operatorname{Lie}G$ be Abelian, i.e. $\operatorname{Ad}_* = \operatorname{ad}: \operatorname{Lie}G \rightarrow \operatorname{gl}(\operatorname{Lie}G)$ is the zero map. But the map $G \rightarrow \operatorname{GL}(\operatorname{Lie}G), g \mapsto \operatorname{id}_{\operatorname{Lie}G}$ has differential $0$ as well, so once again by the faithfulness of the functor Lie we have: $\operatorname{Ad}(g) = \operatorname{id}_{\operatorname{Lie}G}$, i.e. $g \in Z(G)$, so $G$ is Abelian.
(*) The functor Lie is faithful on the category of connected Lie groups:
Let $G, H$ be two Lie groups, $G$ connected, and $f_{1,2}: G \rightarrow H$ two morphisms with $f_{1*} = f_{2*}$. Then $f_1 = f_2$.
Sketch of proof:
1) Pick an open neighborhood $U$ of $0 \in \operatorname{Lie}G$ s.t. the exponential map maps $U$ diffeomorphically to an open neighborhood $V := \exp{U} \subset G$ of $e \in G$.
2) Show that $f_1|_V = f_2|_V$. (Use the standard commutative diagram involving $\exp_G, \exp_H$, $f_{1,2}$ and $f_{1,2*}$.)
3) Since $G$ is connected, it is generated by the elements in $V$, i.e. $G = \{g_1 \cdots g_n \mid n \in \mathbb{N},\ g_1, \ldots, g_n \in V \cap V^{-1}\}$. To see this, show that the right-hand side is an open subgroup of $G$, thus a closed subgroup, so it equals $G$ by connectedness.
4) Use 3) to extend the statement $f_1|_V = f_2|_V$ from 2) to all of $G$.
I don't see an easy way to do this (I hope there is one), but
1) you can reduce the problem to the case of a simply connected Lie group (because non-simply-connected Lie groups are quotients of simply connected ones by discrete subgroups of the center)
2) For simply connected groups one can use the BCH formula to show that the exponential of an element of the center of the Lie algebra lies in the center of the group.
This implies that $\dim Z(G) \geq N$, where $N$ is the dimension of the center of the Lie algebra. But you already have the reverse inequality.
|
The Annals of Mathematical Statistics Ann. Math. Statist. Volume 31, Number 1 (1960), 222-224. Sums of Small Powers of Independent Random Variables Abstract
Let $(x_{nk}), k = 1, 2, \cdots, k_n; n = 1, 2, \cdots$ be a double sequence of infinitesimal random variables which are rowwise independent (i.e. $\lim_{n \rightarrow \infty} \max_{1 \leqq k \leqq k_n} P(|x_{nk}| > \epsilon) = 0$ for every $\epsilon > 0$, and for each $n, x_{n1}, \cdots, x_{nk_n}$ are independent). Let $S_n = x_{n1} + \cdots + x_{nk_n} - A_n$ where the $A_n$ are constants and let $F_n(x)$ be the distribution function of $S_n$. In a previous paper [3] the system of infinitesimal, rowwise independent random variables $(|x_{nk}|^r)$ was studied for $r \geqq 1$. Specifically, let $S^r_n = |x_{n1}|^r + \cdots + |x_{nk_n}|^r - B_n(r),$ where the $B_n(r)$ are suitably chosen constants. Let $F_n^r(x)$ be the distribution function of $S^r_n$. Necessary and sufficient conditions for $F_n^r(x)$ to converge $(n \rightarrow \infty)$ to a distribution function $F^r(x)$ and for $F^r(x)$ to converge $(r \rightarrow \infty)$ to a distribution function $H(x)$ were given, together with the form that $H(x)$ must take. In Section 2 of this paper we consider the system $(|x_{nk}|^r)$ for $0 < r < 1$. Results similar to the above are found, replacing $(r \rightarrow \infty)$ by $(r \rightarrow 0^+)$. However different assumptions must be made at certain points. Various remarks are made in this paper to show where the results here differ from [3]. In particular it is shown that, if $F^r(x)$ converges $(r \rightarrow 0^+)$ to a distribution function $H(x)$, then $H(x)$ will be the distribution function of the sum of two independent random variables, one Poisson and the other Gaussian. Furthermore, while the Gaussian summand may or may not be degenerate, the Poisson summand will be nondegenerate in all but one special case.
Article information Source Ann. Math. Statist., Volume 31, Number 1 (1960), 222-224. Dates First available in Project Euclid: 27 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aoms/1177705999 Digital Object Identifier doi:10.1214/aoms/1177705999 Mathematical Reviews number (MathSciNet) MR119237 Zentralblatt MATH identifier 0100.34602 JSTOR links.jstor.org Citation
Shapiro, J. M. Sums of Small Powers of Independent Random Variables. Ann. Math. Statist. 31 (1960), no. 1, 222--224. doi:10.1214/aoms/1177705999. https://projecteuclid.org/euclid.aoms/1177705999
|
How to solve this logarithmic equation? $8n^2 = 64n\log n$, ($\log n$ here is base 2) I have tried to convert it to $n-8\log n = 0$, but how to solve the latest?
This doesn't have any solutions using elementary functions. But, using the Lambert W function, we get: $$n = -\frac {8}{\ln 2} \operatorname{W} \left (-\frac {\ln 2}{8} \right)$$ and $$n = -\frac {8}{\ln 2} \operatorname{W}_{-1} \left (-\frac {\ln 2}{8} \right)$$
If you cannot use Lambert function, consider that you look for the zero's of $$f(x)=x-8\log_2(x)$$ The first derivative $$f'(x)=1-\frac{8}{x \log (2)}$$ cancels for $x_*=\frac{8}{\log (2)}$ and $$f(x_*)=\frac{8-8 \log \left(\frac{8}{\log (2)}\right)}{\log (2)}\approx -16.6886$$ The second derivative test shows that this corresponds to a minimum. So, there are two roots to the equation.
If you plot the function, you will see that the roots are close to $1$ and $40$. So, start Newton method and below are given the iterates $$\left( \begin{array}{cc} n & x_n \\ 0 & 1.000000000 \\ 1 & 1.094862617 \\ 2 & 1.099983771 \\ 3 & 1.099997030 \end{array} \right)$$ $$\left( \begin{array}{cc} n & x_n \\ 0 & 40.00000000 \\ 1 & 43.61991000 \\ 2 & 43.55927562 \\ 3 & 43.55926044 \end{array} \right)$$
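For completeness, a few lines of Python reproduce the Newton iteration (the starting points follow the answer; the rest is a straightforward implementation of $f$ and $f'$):

```python
import numpy as np

def f(x):
    return x - 8 * np.log2(x)

def f_prime(x):
    return 1 - 8 / (x * np.log(2))

def newton(x0, n_iter=6):
    x = x0
    for _ in range(n_iter):
        x = x - f(x) / f_prime(x)
    return x

print(newton(1.0))    # ~ 1.09999703, the root near 1
print(newton(40.0))   # ~ 43.5592604, the root near 43.56
```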
|
Firstly, least squares (or sum of squared errors) is a possible loss function to use to fit your coefficients. There's nothing technically wrong about it.
However there are number of reasons why MLE is a more attractive option. In addition to those in the comments, here are two more:
Computational efficiency
Because the likelihood of a logistic regression model comes from an exponential-family (Bernoulli) distribution, we can use Fisher's Scoring algorithm to efficiently solve for $\beta$. In my experience, this algorithm converges in only a few steps. To solve least squares numerically will likely take longer.
Lest this gets lost, per @vbox's comment:
learning parameters for any machine learning model (such as logistic regression) is much easier if the cost function is convex. And, it's not too difficult to show that, for logistic regression, the cost function for the sum of squared errors is not convex, while the cost function for the log-likelihood is.
MLE has very nice properties
Solutions using MLE have nice properties such:
consistency: meaning that with more data, our estimate of $\beta$ gets closer to the true value. asymptotic normality: meaning that with more data, our estimate of $\beta$ is approximately normally distributed, with variance that decreases with $O(\frac{1}{n})$. functional invariance: a nice property to have when dealing with multiple parameters (nuisance parameters) and calculating the profile likelihood.
Among others.
However using Least Squares does have some benefits
Least squares tends to be more robust to outliers because an outlier can be wrong by at most 1 (because $(1-0)^2 = 1$), whereas under a negative log likelihood loss function, the distance can be arbitrarily large.
For more information check this or this out.
Edited
My interpretation of the OPs question is why do we use MLE instead of a square loss function to determine $\beta$ in a logistic regression model of the form:
$$logit(P(Y=1|X)) = x\beta$$
Where $P(Y=1|X) = f(x;\beta) = \frac{e^{x\beta}}{1 + e^{x\beta}} = \frac{1}{1 + e^{-x\beta}}$
So the loss function looks like:
$$\sum_{i} (y_i - f(x_i;\beta))^2 = \sum_{i} (y_i - \frac{1}{1 + e^{-x\beta}})^2$$
where $y_i$'s take values 0/1.
When I talk about computational efficiency, I mean finding the $\beta$ which minimizes the above vs. Fisher Scoring on the likelihood function.
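To make the computational point concrete, here is a minimal sketch of Fisher scoring (equivalently, iteratively reweighted least squares) for logistic regression on simulated data; the data-generating parameters are arbitrary, and in practice one would of course use an established library:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + two covariates
beta_true = np.array([-0.5, 1.0, -2.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

beta = np.zeros(X.shape[1])
for it in range(25):
    p = 1 / (1 + np.exp(-X @ beta))       # fitted probabilities
    W = p * (1 - p)                       # IRLS weights (diagonal of the Fisher information)
    grad = X.T @ (y - p)                  # score vector
    info = X.T @ (X * W[:, None])         # Fisher information matrix
    step = np.linalg.solve(info, grad)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-8:       # typically converges in a handful of iterations
        break

print(it, beta)   # beta should be close to beta_true for this sample size
```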
|
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why in def of algebraic closure, do we need $\overline F$ is algebraic over $F$? That is, if we remove '$\overline F$ is algebraic over $F$' condition from def of algebraic closure, do we get a different result?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition or attained from the definition (I don't see how it could be the latter).
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where there definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$, so $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z_0}{z}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z_0}{z}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable
Let $g : [0,\frac{ 1} {2} ] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{ 1} {2} ] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s) ds,$ for all $n ≥ 1.$ Show that $lim_{n→∞} n!g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$ .
Can you give some hint?
My attempt:- $t\in [0,1/2]$ Consider the sequence $a_n(t)=n!g_n(t)$
If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
I have a bilinear functional that is bounded from below
I try to approximate the minimum by an ansatz function that is a linear combination
of any independent functions of the proper function space
I now obtain an expression that is bilinear in the coeffcients
using the stationarity condition (all derivaties of the functional w.r.t the coefficients = 0)
I get a set of $n$ equations with the $n$ the number of coefficients
a set of n linear homogeneus equations in the $n$ coefficients
Now instead of "directly attempting to solve" the equations for the coefficients I rather look at the secular determinant that should be zero, otherwise no non trivial solution exists
This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz.
Avoiding the neccessity to solve for the coefficients.
I have problems now to formulated the question. But it strikes me that a direct solution of the equation can be circumvented and instead the values of the functional are directly obtained by using the condition that the derminant is zero.
I wonder if there is something deeper in the background, or so to say a more very general principle.
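A minimal numerical sketch of the procedure described above, assuming the stationarity conditions take the generalized-eigenvalue form $(H-\lambda S)\,c=0$; the matrices $H$ and $S$ below are made up for illustration and are not taken from the question:

```python
import numpy as np

# Toy stationarity system (H - lam * S) c = 0.  H and S stand in for the
# bilinear form and the overlap of the basis functions; both are invented here.
H = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
S = np.eye(3) + 0.2 * np.diag([1.0, 1.0], 1) + 0.2 * np.diag([1.0, 1.0], -1)

M = np.linalg.solve(S, H)          # det(H - lam S) = det(S) * det(M - lam I)

# Route 1: roots of the secular determinant det(H - lam S) = 0,
# i.e. roots of the characteristic polynomial of M.
secular_values = np.sort(np.roots(np.poly(M)).real)

# Route 2: solve the homogeneous system directly as an eigenvalue problem,
# which also yields the coefficient vectors c.
lam, c = np.linalg.eig(M)
idx = np.argsort(lam.real)

print("roots of the secular determinant:", secular_values)
print("eigenvalues (the same numbers)  :", lam.real[idx])
print("one set of coefficients         :", c[:, idx[0]].real)
```

If the functional is of this quadratic/Rayleigh-quotient type, the "deeper principle" is just the linear variational (Rayleigh-Ritz) method: the secular equation is the characteristic equation of a generalized eigenvalue problem, its roots are the stationary values, and the coefficients one avoided computing are the corresponding eigenvectors.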
If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies $\text{digitsum}(z)=\text{digitsum}(x)$.
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function with fourier series being divergent on a point. Afterwards, Hermann Amandus Schwarz gave an easier example.)
It's discussed very carefully (though no formula is explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
|
My favourite one: $[0, 1]$ is compact, i.e. every open cover of $[0, 1]$ has a finite subcover.
Proof: Suppose for a contradiction that there is an open cover $\mathcal{U}$ which does not admit any finite subcover. Thus, either $\left[ 0, \frac{1}{2} \right]$ or $\left[ \frac{1}{2}, 1 \right]$ cannot be covered with a finite number of sets from $\mathcal{U}$; call it $I_1$. Again, one of $I_1$'s two subintervals of length $\frac{1}{4}$ can't be covered with a finite number of sets from $\mathcal{U}$. Continuing, we get a descending sequence of intervals $I_n$ of length $\frac{1}{2^n}$ each, each of which cannot be finitely covered.
By the Cantor Intersection Theorem,
$$\bigcap_{n=1}^{\infty} I_n = \{ x \}$$
for some $x \in [0, 1]$. But there is some $U \in \mathcal{U}$ such that $x \in U$, and so $I_n \subseteq U$ for sufficiently large $n$. That's a contradiction.
But given an arbitrary cover $\mathcal{U}$, I think finding a finite subcover may be a somewhat tedious task. :p
P.S. The proof above actually yields a procedure:
See if $[0, 1]$ itself is contained in one set from $\mathcal{U}$. If so, we're done. If not, apply the same step to $\left[ 0, \frac{1}{2} \right]$ and $\left[ \frac{1}{2}, 1 \right]$ to get their finite subcovers, then unite them.
The proof guarantees that you will eventually find a finite subcover (i.e. you will never end up subdividing infinitely), but you cannot tell in advance how long it will take. So it is not as constructive as one would expect.
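A small Python sketch of this procedure, assuming the cover is handed to us as a finite list of open intervals $(a,b)$; the particular cover at the bottom is made up for illustration:

```python
def finite_subcover(cover, lo=0.0, hi=1.0, depth=0, max_depth=60):
    """Return finitely many intervals from `cover` whose union contains [lo, hi].

    `cover` is a list of open intervals (a, b).  This mirrors the bisection in
    the proof: if no single interval contains [lo, hi], split in half and
    cover each half separately.
    """
    for (a, b) in cover:
        if a < lo and hi < b:       # [lo, hi] fits inside one set of the cover
            return [(a, b)]
    if depth >= max_depth:          # guard in case `cover` is not actually a cover
        raise ValueError("no finite subcover found; is this really an open cover?")
    mid = (lo + hi) / 2
    return (finite_subcover(cover, lo, mid, depth + 1, max_depth)
            + finite_subcover(cover, mid, hi, depth + 1, max_depth))

# One big interval covering the left part plus many tiny ones for the right part.
cover = [(-0.1, 0.6)] + [(0.5 + 0.005 * k - 0.01, 0.5 + 0.005 * k + 0.01)
                         for k in range(1, 121)]
print(len(finite_subcover(cover)), "intervals suffice")
```

As the proof says, termination is guaranteed for a genuine open cover, but the depth reached (and hence the running time) cannot be bounded in advance from the cover alone.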
|
For my own convenience I'll work in $\infty$-categories, feel free to answer in whatever framework best suits you. My question is essentially how to show, given an $E_\infty$-ring object $R$ in an $\infty$-category $C$ and another object $M$ with a map $R\otimes M\to M$, that the pair $(R,M)$ is in fact an algebra, as Lurie calls it in Higher Algebra, for the $\infty$-operad $LM^\otimes$. In other words, how to show that $M$ is a module over $R$ with respect to all of $R$'s structure (not just the structure of being "homotopy associative and commutative").
One idea I have is the following: I'd like to produce a map of $\infty$-operads $F:LM^\otimes\to C^\otimes$, where $C^\otimes\to Fin_\ast$ is the coCartesian fibration determining the symmetric monoidal structure on $C$. To produce $F$ I need to have that it takes inert morphisms of $LM^\otimes$ to inert morphisms of $C^\otimes$. Now, recall that the inert morphisms of $LM^\otimes$ are precisely morphisms which map to inert morphisms in $Fin_\ast$ along the forgetful map $LM^\otimes\to Fin_\ast$, which are morphisms $\langle n\rangle\to\langle m\rangle$ that represent $\langle m\rangle$ as a "quotient" of $\langle n\rangle$, where everything we quotient out by gets sent to $\ast\in\langle m\rangle=\{\ast,1,\ldots,m\}$.
However, to describe an $R$-action on $M$, we certainly never send $M$ to $\ast$. As such, the inert morphisms in $LM^\otimes$ shouldn't have anything to do with $M$ (it's just coming along for the ride, so-to-speak). In other words, we need that the morphisms describing the monoidal structure on $R$ (inside of $LM^\otimes$) go to the right place, but we don't have to check anything involving inert morphisms with respect to $M$. Hence a functor $LM^\otimes\to C^\otimes$ which picks out an associative algebra object $R$ of $C$ also picks out an object of $LMod(R)$. If $R$ is in fact $E_\infty$ then we also know that $LMod(R)\simeq Mod^{Comm}(R)$, giving us what we need to know.
Does this seem valid? I have very little experience working with $\infty$-operads, and can't find this sort of thing anywhere anyway, so I might be making some very silly mistakes.
Assuming my argument above doesn't make any sense, does anyone have any other ideas?
Thanks!
|
I) In this answer we will only discuss the equilibrium shape. Recall that when we discussed the shape of Earth in this Phys.SE post, the gravitational quadrupole moment was important. Unlike the Earth, from a surface perspective, it is a very good approximation to assume that all the mass of the Sun sits in the center, cf. below graph.
Moreover, Newton's shell theorem helps out here. We conclude that it is enough to consider the gravitational monopole field
$$\tag{1} g(r)~=~\frac{GM}{r^2}$$
of the Sun. From Wikipedia, we get that
$$\tag{2} G~=~ 6.674\cdot 10^{-11} {\rm Nm}^2/{\rm kg}^2\quad\text{and}\quad M~=~(1.98855 \pm 0.00025)\cdot 10^{30} {\rm kg}. $$ The equatorial radius and period are $$\tag{3} r_e~=~(696342\pm 65)~{\rm km} \quad\text{and}\quad T_e~=~25.05 ~{\rm days},$$
respectively. The equatorial speed is
$$\tag{4} v_e~=~\omega_e r_e~=~\frac{2\pi r_e}{T_e}~\approx~2.02 ~{\rm km/s}.$$
The equatorial surface gravity is then
$$\tag{5} g_e~=~\frac{GM}{r_e^2}~\approx~274~{\rm m/s^2}. $$
Repeating Mark Eichenlaub's monopole argument for the Sun, the height difference between the equatorial and polar radius becomes
$$\tag{6} h~:=~r_e-r_p~=~\frac{v_e^2}{2g_e}~\approx~7.5 ~{\rm km}, $$
leading to a flattening
$$\tag{7} f~=~\frac{h}{r_e}~\approx~ 11 \cdot 10^{-6} .$$
This estimate overshoots by 20% the actual observed flattening, which is only $9 \cdot 10^{-6}$.
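A quick numerical check of the rigid-rotation estimate, using only the constants quoted in eqs. (2)-(3) (a sketch in Python):

```python
import math

G   = 6.674e-11          # N m^2 / kg^2, eq. (2)
M   = 1.98855e30         # kg, eq. (2)
r_e = 696342e3           # m, eq. (3)
T_e = 25.05 * 86400      # s, eq. (3)

v_e = 2 * math.pi * r_e / T_e      # eq. (4): ~2.02 km/s
g_e = G * M / r_e**2               # eq. (5): ~274 m/s^2
h   = v_e**2 / (2 * g_e)           # eq. (6): ~7.5 km
f   = h / r_e                      # eq. (7): ~11e-6

print(f"v_e = {v_e/1e3:.2f} km/s, g_e = {g_e:.0f} m/s^2")
print(f"h = {h/1e3:.1f} km, f = {f:.1e}")
```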
II) In the remainder of this answer, we would like to argue that the 20% difference in eq. (7) is mainly due to the fact that the Sun does not spin as a rigid body, which we implicitly assumed in Section I. The polar period
$$\tag{8} T_p~=~34.4 ~{\rm days} $$
is slower than the equatorial period (3). To proceed, let us for simplicity assume that the square $T^2$ of the period $T$ depends on the polar angle $\theta$ in the following way$^1$
$$\tag{9} T^2~=~T_p^2 + s (T_e^2-T_p^2) , \qquad s~\equiv~\sin^2\theta,\qquad \omega~\equiv~ \frac{2\pi}{T}. $$
Analogously, define for later convenience the quantity
$$\tag{10} A~:=~\frac{GM}{\omega^2}~=~ A_p + s A^{\prime}, \qquad A^{\prime}~:=~A_e-A_p~<~0, $$
which is proportional to $T^2$. The centrifugal acceleration is $$\tag{11} a_{\rm cf}~=~\omega^2 r\sin\theta.$$
Using arguments similar to my Phys.SE answer here, the total force should be perpendicular to the surface
$$\tag{12} \left(g -a_{\rm cf} \sin \theta \right)\mathrm{d}r -a_{\rm cf} \cos \theta ~r\mathrm{d}\theta~=~0.$$
The differential (12) is inexact. After multiplying by an integrating factor, we have
$$ \tag{13} \mathrm{d}U~=~\lambda(u) \left[\left(\frac{A}{r^2}-sr\right)\mathrm{d}r -\frac{r^2}{2}\mathrm{d}s\right], $$
where
$$ \tag{14} \lambda(u)~:=~\exp\left(\frac{2}{3}A^{\prime} u^3 \right) , \qquad u~\equiv~\frac{1}{r}. $$
The potential becomes
$$ \tag{15} U~=~ -A_p \int_0^u \! du^{\prime} ~ \lambda(u^{\prime}) - s\frac{\lambda(u)}{2u^2} . $$
The difference between equatorial and polar potential should be zero:
$$\tag{16} 0~=~U_e-U_p~=~ A_p \int_{u_e}^{u_p} \! du~\lambda(u) -\frac{\lambda(u_e)}{2u_e^2} , $$
or equivalently,
$$ \frac{1}{2 A_p u_e^2} ~\stackrel{(16)}{=}~ \int_{u_e}^{u_p} \! du~ \exp\left(\frac{2}{3}A^{\prime} (u^3-u_e^3) \right)~$$$$ \tag{17}\approx~ \int_{u_e}^{u_p} \! du~ e^{2 A^{\prime} (u-u_e) u_e^2}~=~\frac{e^{2 A^{\prime} (u_p-u_e) u_e^2}-1}{2A^{\prime} u_e^2}.$$
The height difference becomes
$$\tag{18} h~:=~r_e-r_p~\approx~\frac{u_p-u_e}{u_e^2}~\stackrel{(17)}{\approx}~\frac{r_e^4}{2A^{\prime}} \ln \left(1 + \frac{A^{\prime}}{A_p}\right)~\approx~5.3~{\rm km},$$
leading to a flattening
$$\tag{19} f~=~\frac{h}{r_e}~\approx~ 8 \cdot 10^{-6} ,$$
which is 10% below the observed flattening. Anyway, the above simple model demonstrates that it is important to take into account the non-rigid differential rotation of the Sun.
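And a short numerical check of the differential-rotation estimate, evaluating eqs. (10) and (17)-(19) directly with the same constants (again just a sketch):

```python
import math

G, M = 6.674e-11, 1.98855e30
r_e  = 696342e3
T_e  = 25.05 * 86400                       # equatorial period, eq. (3)
T_p  = 34.4 * 86400                        # polar period, eq. (8)

A_e = G * M * T_e**2 / (4 * math.pi**2)    # A = GM/omega^2, eq. (10)
A_p = G * M * T_p**2 / (4 * math.pi**2)
A_prime = A_e - A_p                        # negative, as noted in eq. (10)

h = r_e**4 * math.log(1 + A_prime / A_p) / (2 * A_prime)   # eq. (18): ~5.3 km
f = h / r_e                                                # eq. (19): ~8e-6

print(f"h = {h/1e3:.1f} km, f = {f:.1e}")
```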
--
$^1$ Besides fulfilling the correct boundary conditions, the ansatz (9) is admittedly chosen to make the integrating factor (14) simple (rather than being based on observations or astrophysical models).
|
The CHOLMOD library provides a CHOLMOD_rcond function that estimates the reciprocal condition number (in the one norm) of a symmetric positive definite matrix from its Cholesky decomposition. This estimate is defined as the ratio between the smallest and largest entries on the diagonal of the L factor. Why does this give a valid estimate?
This question is covered in the following paper:
N. J. Higham, "A survey of condition number estimation for triangular matrices," SIAM Review, vol. 29, no. 4, pp. 575–596, Dec. 1987. Also available as an epub from The University of Manchester.
Given:
The condition number of matrix $A\in\mathbb R^{n\times n}$ (wrt to inversion) is defined, as follows:
$$ \kappa(A)=\big\lVert A\big\rVert\,\big\lVert A^{-1}\big\rVert $$ where $\left\lVert \cdot \right\rVert$ is a matrix norm.
The Cholesky decomposition of $A=LL^T$, where $L\in \mathbb R^{n\times n}$, lower-triangular, with positive diagonal entries.
Let's see what we can say about $L$ just by looking at its entries (in particular, its diagonal entries). First of all, we have the general inequality $$ \big\lVert L \big\rVert_{1,2,\infty,F}\geq|\ell_{ij}|,\quad i,j=1,\ldots,n \tag{1} \label{eq1} $$
which, restricted to the diagonal, gives $$ \begin{aligned} \big\lVert L \big\rVert_{1,2,\infty,F}&\geq\ell_{ii},\quad i=1,\ldots,n \\ &\geq \max\limits_{1\leq i \leq n} \ell_{ii} \end{aligned} \tag{2} \label{eq2} $$
In $\eqref{eq1}$ and $\eqref{eq2}$, $\ell_{ij}$ is the $(i,j)$th entry of the matrix $L$. The subscripts $1$, $2$, $\infty$, and $F$ denote the 1-norm, 2-norm, infinity-norm, and Frobenius norm, respectively. The absolute value is dropped in $\eqref{eq2}$, since the diagonal entries of $L$ arising from a Cholesky factorization are positive.
Now, let's think if we can say something about $L^{-1}$. From $\eqref{eq1}$ and the fact that the reciprocals of the diagonal elements of $L$ are themselves elements of $L^{-1}$, we can also say that:
$$ \big\lVert L^{-1} \big\rVert_{1,2,\infty,F}\geq \left( \min\limits_{1\leq i \leq n }\ell_{ii}\right)^{-1} \tag{3} \label{eq3} $$
Equation $\eqref{eq3}$ is denoted as equation (2.1) in Higham's paper.
Now, since $A=LL^T$, in the 2-norm we have $\big\lVert A\big\rVert_{2}=\big\lVert L\big\rVert_{2}^2$ and $\big\lVert A^{-1}\big\rVert_{2}=\big\lVert L^{-1}\big\rVert_{2}^2$ (the extreme singular values of $L$ are the square roots of the extreme eigenvalues of $A$), so $$ \kappa_2(A)=\big\lVert L\big\rVert_{2}^2 \big\lVert L^{-1}\big\rVert_{2}^2 \tag{4} \label{eq4} $$
therefore, considering also $\eqref{eq2}$ and $\eqref{eq3}$ (for the 2-norm):
$$ \begin{aligned} \frac{1}{\kappa_2(A)}&= \frac{1}{\big\lVert L\big\rVert_{2}^2 \big\lVert L^{-1}\big\rVert_{2}^2}\\ &\leq \left( \frac{\min\limits_{1\leq i \leq n }\ell_{ii}}{\max\limits_{1\leq i \leq n} \ell_{ii}} \right)^2 \end{aligned} \tag{5} \label{eq5} $$
In other words, the squared diagonal ratio is an upper bound on the reciprocal condition number, i.e. $\kappa_2(A)\geq\left(\max_i \ell_{ii}/\min_i \ell_{ii}\right)^2$; since the 1-norm and the 2-norm of an $n\times n$ matrix differ by at most a factor of $\sqrt n$, the same bound holds for the 1-norm up to a factor of $n$.
The right-hand side of $\eqref{eq5}$ is exactly what CHOLMOD's cholmod_rcond() returns.
Is this a good estimate? Not necessarily. You may want to look at better ways (and discussion on the reliability of this particular estimate) to estimate the condition number from the aforementioned paper or newer research. Is it a useful one? Certainly. Because if you already have a Cholesky factorization, it is $\mathcal O(n)$ cheap.
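A small numerical illustration of this (just a sketch, not CHOLMOD itself): factor a random SPD matrix and compare the diagonal-ratio quantity from $\eqref{eq5}$ with the true reciprocal condition number.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)            # symmetric positive definite

L = np.linalg.cholesky(A)                # A = L L^T, diag(L) > 0
d = np.diag(L)
rcond_est = (d.min() / d.max()) ** 2     # the quantity on the right of eq. (5)

rcond_true = 1.0 / np.linalg.cond(A)     # true 1/kappa_2(A)
print(f"diagonal-ratio estimate: {rcond_est:.3e}")
print(f"true 1/kappa_2(A):       {rcond_true:.3e}")
# Per eq. (5) the estimate can only overestimate 1/kappa, and it costs O(n)
# once the factorization is already available.
```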
|
Divergence and curl are two measurements of vector fields that are very useful in a variety of applications. Both are most easily understood by thinking of the vector field as representing a flow of a liquid or gas; that is, each vector in the vector field should be interpreted as a velocity vector. Roughly speaking, divergence measures the tendency of the fluid to collect or disperse at a point, and curl measures the tendency of the fluid to swirl around the point. Divergence is a scalar, that is, a single number, while curl is itself a vector. The magnitude of the curl measures how much the fluid is swirling, the direction indicates the axis around which it tends to swirl. These ideas are somewhat subtle in practice, and are beyond the scope of this course. You can find additional information on the web, for example at
Div, Grad, Curl, and All That: An Informal Text on Vector Calculus, by H. M. Schey.
Recall that if $f$ is a function, the gradient of $f$ is given by $$\nabla f=\left\langle {\partial f\over\partial x},{\partial f\over\partial y},{\partial f\over\partial z}\right\rangle.$$ A useful mnemonic for this (and for the divergence and curl, as it turns out) is to let $$\nabla = \left\langle{\partial \over\partial x},{\partial \over\partial y},{\partial \over\partial z}\right\rangle,$$ that is, we pretend that $\nabla$ is a vector with rather odd looking entries. Recalling that $\langle u,v,w\rangle a=\langle ua,va,wa\rangle$, we can then think of the gradient as $$\nabla f=\left\langle{\partial \over\partial x},{\partial \over\partial y},{\partial \over\partial z}\right\rangle f = \left\langle {\partial f\over\partial x},{\partial f\over\partial y},{\partial f\over\partial z}\right\rangle,$$ that is, we simply multiply the $f$ into the vector.
The divergence and curl can now be defined in terms of this same odd vector $\nabla$ by using the cross product and dot product. The divergence of a vector field ${\bf F}=\langle f,g,h\rangle$ is $$\nabla \cdot {\bf F} = \left\langle{\partial \over\partial x},{\partial \over\partial y},{\partial \over\partial z}\right\rangle\cdot \langle f,g,h\rangle = {\partial f\over\partial x}+{\partial g\over\partial y}+{\partial h\over\partial z}.$$ The curl of $\bf F$ is $$\nabla\times{\bf F} = \left|\matrix{{\bf i}&{\bf j}&{\bf k}\cr {\partial \over\partial x}&{\partial \over\partial y}&{\partial \over\partial z}\cr f&g&h\cr}\right| = \left\langle {\partial h\over\partial y}-{\partial g\over\partial z}, {\partial f\over\partial z}-{\partial h\over\partial x}, {\partial g\over\partial x}-{\partial f\over\partial y}\right\rangle.$$
Here are two simple but useful facts about divergence and curl.
Theorem 18.5.1 $\nabla\cdot(\nabla\times{\bf F})=0$.
In words, this says that the divergence of the curl is zero.
Theorem 18.5.2 $\nabla\times(\nabla f)={\bf 0}$.
That is, the curl of a gradient is the zero vector. Recalling that gradients are conservative vector fields, this says that the curl of a conservative vector field is the zero vector. Under suitable conditions, it is also true that if the curl of $\bf F$ is $\bf 0$ then $\bf F$ is conservative. (Note that this is exactly the same test that we discussed in section 18.3.)
Example 18.5.3 Let ${\bf F} = \langle e^z,1,xe^z\rangle$. Then $\nabla\times{\bf F} = \langle 0,e^z-e^z,0\rangle = {\bf 0}$. Thus, $\bf F$ is conservative, and we can exhibit this directly by finding the corresponding $f$.
Since $f_x=e^z$, $f=xe^z+g(y,z)$. Since $f_y=1$, it must be that $g_y=1$, so $g(y,z)=y+h(z)$. Thus $f=xe^z+y+h(z)$ and $$xe^z = f_z = xe^z + 0 + h'(z),$$ so $h'(z)=0$, i.e., $h(z)=C$, and $f=xe^z+y+C$.
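A quick symbolic check of this example, sketched with sympy (the variable names are just for illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([sp.exp(z), 1, x * sp.exp(z)])      # F = <e^z, 1, x e^z>

# Curl computed from the determinant formula above.
curl_F = sp.Matrix([
    sp.diff(F[2], y) - sp.diff(F[1], z),
    sp.diff(F[0], z) - sp.diff(F[2], x),
    sp.diff(F[1], x) - sp.diff(F[0], y),
])
print(curl_F.T)                                   # Matrix([[0, 0, 0]])

f = sp.integrate(F[0], x) + y                     # f = x e^z + y (up to a constant)
grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
print(sp.simplify(grad_f - F).T)                  # Matrix([[0, 0, 0]]), so grad f = F
```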
We can rewrite Green's Theorem using these new ideas; these rewritten versions in turn are closer to some later theorems we will see.
Suppose we write a two dimensional vector field in the form ${\bf F}=\langle P,Q,0\rangle$, where $P$ and $Q$ are functions of $x$ and $y$. Then $$\nabla\times {\bf F} = \left|\matrix{{\bf i}&{\bf j}&{\bf k}\cr {\partial \over\partial x}&{\partial \over\partial y}&{\partial \over\partial z}\cr P&Q&0\cr}\right|= \langle 0,0,Q_x-P_y\rangle,$$ and so $(\nabla\times {\bf F})\cdot{\bf k}=\langle 0,0,Q_x-P_y\rangle\cdot \langle 0,0,1\rangle = Q_x-P_y$. So Green's Theorem says $$\eqalignno{ \int_{\partial D} {\bf F}\cdot d{\bf r}&= \int_{\partial D} P\,dx +Q\,dy = \dint{D} Q_x-P_y \,dA =\dint{D}(\nabla\times {\bf F})\cdot{\bf k}\,dA.& (18.5.1)\cr }$$ Roughly speaking, the right-most integral adds up the curl (tendency to swirl) at each point in the region; the left-most integral adds up the tangential components of the vector field around the entire boundary. Green's Theorem says these are equal, or roughly, that the sum of the "microscopic'' swirls over the region is the same as the "macroscopic'' swirl around the boundary.
Next, suppose that the boundary $\partial D$ has a vector form ${\bf r}(t)$, so that ${\bf r}'(t)$ is tangent to the boundary, and ${\bf T}={\bf r}'(t)/|{\bf r}'(t)|$ is the usual unit tangent vector. Writing ${\bf r}=\langle x(t),y(t)\rangle$ we get $${\bf T}={\langle x',y'\rangle\over|{\bf r}'(t)|}$$ and then $${\bf N}={\langle y',-x'\rangle\over|{\bf r}'(t)|}$$ is a unit vector perpendicular to $\bf T$, that is, a unit normal to the boundary. Now $$\eqalign{ \int_{\partial D} {\bf F}\cdot{\bf N}\,ds&= \int_{\partial D} \langle P,Q\rangle\cdot{\langle y',-x'\rangle\over|{\bf r}'(t)|} |{\bf r}'(t)|dt= \int_{\partial D} Py'\,dt - Qx'\,dt\cr &=\int_{\partial D} P\,dy - Q\,dx =\int_{\partial D} - Q\,dx+P\,dy.\cr }$$ So far, we've just rewritten the original integral using alternate notation. The last integral looks just like the right side of Green's Theorem (18.4.1) except that $P$ and $Q$ have traded places and $Q$ has acquired a negative sign. Then applying Green's Theorem we get $$ \int_{\partial D} - Q\,dx+P\,dy=\dint{D} P_x+Q_y\,dA= \dint{D} \nabla\cdot{\bf F}\,dA.$$ Summarizing the long string of equalities, $$\eqalignno{ \int_{\partial D} {\bf F}\cdot{\bf N}\,ds&=\dint{D} \nabla\cdot{\bf F}\,dA.& (18.5.2)\cr }$$ Roughly speaking, the first integral adds up the flow across the boundary of the region, from inside to out, and the second sums the divergence (tendency to spread) at each point in the interior. The theorem roughly says that the sum of the "microscopic'' spreads is the same as the total spread across the boundary and out of the region.
Exercises 18.5
Ex 18.5.1 Let ${\bf F}=\langle xy,-xy\rangle$ and let $D$ be given by $0\le x\le 1$, $0\le y\le 1$. Compute $\ds\int_{\partial D} {\bf F}\cdot d{\bf r}$ and $\ds\int_{\partial D} {\bf F}\cdot{\bf N}\,ds$. (answer)
Ex 18.5.2 Let ${\bf F}=\langle ax^2,by^2\rangle$ and let $D$ be given by $0\le x\le 1$, $0\le y\le 1$. Compute $\ds\int_{\partial D} {\bf F}\cdot d{\bf r}$ and $\ds\int_{\partial D} {\bf F}\cdot{\bf N}\,ds$. (answer)
Ex 18.5.3 Let ${\bf F}=\langle ay^2,bx^2\rangle$ and let $D$ be given by $0\le x\le 1$, $0\le y\le x$. Compute $\ds\int_{\partial D} {\bf F}\cdot d{\bf r}$ and $\ds\int_{\partial D} {\bf F}\cdot{\bf N}\,ds$. (answer)
Ex 18.5.4 Let ${\bf F}=\langle \sin x\cos y,\cos x\sin y\rangle$ and let $D$ be given by $0\le x\le \pi/2$, $0\le y\le x$. Compute $\ds\int_{\partial D} {\bf F}\cdot d{\bf r}$ and $\ds\int_{\partial D} {\bf F}\cdot{\bf N}\,ds$. (answer)
Ex 18.5.5 Let ${\bf F}=\langle y,-x\rangle$ and let $D$ be given by $x^2+y^2\le 1$. Compute $\ds\int_{\partial D} {\bf F}\cdot d{\bf r}$ and $\ds\int_{\partial D} {\bf F}\cdot{\bf N}\,ds$. (answer)
Ex 18.5.6 Let ${\bf F}=\langle x,y\rangle$ and let $D$ be given by $x^2+y^2\le 1$. Compute $\ds\int_{\partial D} {\bf F}\cdot d{\bf r}$ and $\ds\int_{\partial D} {\bf F}\cdot{\bf N}\,ds$. (answer)
Ex 18.5.7 Prove theorem 18.5.1.
Ex 18.5.8 Prove theorem 18.5.2.
Ex 18.5.9 If $\nabla \cdot {\bf F}=0$, $\bf F$ is said to be incompressible. Show that any vector field of the form ${\bf F}(x,y,z) = \langle f(y,z),g(x,z),h(x,y)\rangle$ is incompressible. Give a non-trivial example.
|
Prove that $\frac{1}{n}+\frac{1}{n+1}+\frac{1}{n+2}+...+ \frac{1}{n^2} \ge 1$, for all natural numbers $n$. I tried to use mathematical inducion but failed and I tried to figure out a short formula for the sum but I couldnt find any.
Without integrals. If $n\ge4$ $$ \sum_{k=n}^{n^2}\frac1k>\sum_{k=n}^{2n-1}\frac1k+\sum_{k=2n}^{3n-1}\frac1k+\sum_{k=3n}^{4n-1}\frac1k>n\,\frac{1}{2\,n}+n\,\frac{1}{3\,n}+n\,\frac{1}{4\,n}=\frac12+\frac13+\frac14=\frac{13}{12}. $$
$$ \sum_{k=n}^{n^2} \frac{1}{k} = \frac{1}{n} + \sum_{k=n+1}^{n^2} \frac{1}{k} \ge \frac{1}{n} + \frac{n^2-(n+1)+1}{n^2} = 1.$$ The final fraction is due to there being $n^2-(n+1)+1$ terms, each of which is at least $\frac{1}{n^2}$.
Using some calculus you can see that $$\sum_{k=n}^{n^2} \frac{1}{k} \ge \int_{n}^{n^2} \frac{1}{x} dx = \ln(n^2) - \ln(n) = \ln(n).$$
Certainly this is larger than 1 for $n\ge 3$. For $n=1$ the claim follows automatically. For $n=2$ we check $$\frac12 + \frac13 + \frac14 = \frac{6+4+3}{12} = \frac{13}{12}.$$
Since we know this holds for $n=2$, another approach is to show that $f(n)=\sum_{k=n}^{n^2}1/k$ is an increasing function. Since between $f(n)$ and $f(n+1)$ we lose $1/n$ and gain $$\frac{1}{n^2+1} + \frac{1}{n^2+2} + \cdots + \frac{1}{(n+1)^2}$$ we need to show that the sum of the new terms is larger than $1/n$.
We can get a lower bound on this tail: $$\frac{1}{n^2+1} + \frac{1}{n^2+2} + \cdots + \frac{1}{(n+1)^2} > \frac{2n+1}{(n+1)^2}.$$ If this left term is larger than $1/n$ we win. Using some precalculus type reasoning we can see that $$\frac{2n+1}{(n+1)^2} \ge \frac1n$$ is equivalent to $$n^2 - n - 1 \ge 0.$$
This is satisfied for $n \ge 2$.
Just another way, using Cauchy-Schwarz inequality: $$\frac1{n+k}+\frac1{n^2-k} \ge \frac4{n^2+n}$$
$$\implies 2\sum_{k=0}^{n^2-n}\frac1{n+k} \ge \frac4{n^2+n}(n^2-n+1)$$
So all we need to show is $2(n^2-n+1) \ge n^2+n \iff (n-1)(n-2)\ge 0$ which is evidently true for integer $n$.
Using the Arithmetic-Harmonic Inequality: $$\frac{n^2-n+1}{n+(n+1)+\cdots+n^2}\leq\frac{1}{n^2-n+1}\left(\frac{1}{n}+\frac{1}{n+1}+\cdots+\frac{1}{n^2}\right)$$
Now, the denominator of the left-hand fraction is: $$\sum_{k=0}^{n^2-n}(n+k)=\sum_{k=1}^{n^2-n+1}(n+k-1)=\sum_{k=1}^{n^2-n+1}(n-1)+\sum_{k=1}^{n^2-n+1}k=\frac{n^4+n}{2}$$
Hence, we have that $$1\leq\frac{2(n^2-n+1)}{n(n+1)}=\frac{2(n^2-n+1)^2}{n(n+1)(n^2-n+1)}=\frac{2(n^2-n+1)^2}{n^4+n}\leq\frac{1}{n}+\cdots+\frac{1}{n^2}$$
We can recover the first inequality here from the fact that $0\leq (n-1)(n-2)$ for all $n\geq 1$.
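For completeness, a quick numerical sanity check of the inequality for small $n$ (purely illustrative):

```python
from fractions import Fraction

# Exact rational evaluation of 1/n + 1/(n+1) + ... + 1/n^2 for small n.
for n in range(1, 8):
    s = sum(Fraction(1, k) for k in range(n, n * n + 1))
    print(n, s, s >= 1)
```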
|
I'm adding another answer since you've substantially changed the question. I feel we're getting out of the realm of appropriate stackexchange etiquette here with so much discussion and changing thing around. Maybe a mod can suggest how best to proceed. Anyways, here's my answer.
I agree whole-heartedly with your first paragraph
I disagree whole-heartedly with your second paragraph. your first statement is:
It seems to me that we would see the mirror moving (and measure it as moving) at $+np$ – which would be the radiation pressure from the reflection.
This is incorrect. If we measured the momentum of the mirror we could measure any momentum $0, +2p, +4p, \ldots, +np, \ldots, +2np$. As you've pointed out, it is most likely that we measure $+np$. However, since the mirror is in a superposition of many momentum states it is not possible for us to predict beforehand what momentum we will measure if we measure it. We can only ascribe probabilities to each possible momentum, where the probability is given by the weighting of that term in the description of the state of the mirror.
Consider a mirror which is first hit by one photon, $\gamma_1$ and then a second photon, $\gamma_2$. The initial state of this system is
$$\lvert 0 \rangle_M \lvert +p\rangle_{\gamma_1} \lvert+p\rangle_{\gamma_2}$$
Where the subscripts refer to the state of the mirror, $M$, and the two photons, $\gamma_1$ and $\gamma_2$.
After the first photon hits the mirror the state is
$$\frac{1}{\sqrt{2}} \big( \lvert 0 \rangle_M \lvert +p\rangle_{\gamma_1} \lvert+p\rangle_{\gamma_2} +\lvert +2p \rangle_M \lvert -p \rangle_{\gamma_1} \lvert +p \rangle_{\gamma_2} \big) $$
After the second photon hits the mirror the quantum state is
$$\frac{1}{\sqrt{2}}\Big(\frac{1}{\sqrt{2}}\big(\lvert 0 \rangle_M \lvert +p \rangle_{\gamma_1} \lvert +p \rangle_{\gamma_2} + \lvert +2p \rangle_M \lvert +p \rangle_{\gamma_1} \lvert -p \rangle_{\gamma_2}\big) + \frac{1}{\sqrt{2}}\big(\lvert +2p \rangle_M \lvert -p \rangle_{\gamma_1} \lvert +p \rangle_{\gamma_2} + \lvert +4p \rangle_M \lvert -p \rangle_{\gamma_1} \lvert -p \rangle_{\gamma_2}\big)\Big)$$
You see that upon each reflection each term splits into two terms. One where the mirror had no change in momentum and the photons momentum was not changed and one where the mirror got a kick of $+2p$ and the photon was reflected.
Say we now perform a measurement on the momentum of the mirror. The possible outcomes are $0$, $+2p$, or $+4p$.
If we measure $0$ then we know we have "collapsed" the quantum state into the first term or first "branch". This means that we know both $\gamma_1$ and $\gamma_2$ would reveal momentum $+p$ upon measurement of their momenta. The state has collapsed to $\lvert 0 \rangle_M \lvert +p \rangle_{\gamma_1} \lvert +p \rangle_{\gamma_2}$. Note that momentum is conserved.
If we measure the mirror to have momentum $+4p$ we know we are in the last branch and thus both $\gamma_1$ and $\gamma_2$ would reveal momentum $-p$ upon measurement of their momenta. The state has collapsed to $\lvert +4p \rangle_M \lvert -p \rangle_{\gamma_1} \lvert -p \rangle_{\gamma_2}$. Note that momentum is conserved.
Now, if we measure the momentum of the mirror to be $+2p$, then intuitively we know if we were to measure the momentum of the photons one of them would have been transmitted and one of them would have been reflected, but, just by measuring the momentum of the mirror we cannot determine which. This means that the state of the system after the measurement would be
$$\frac{1}{\sqrt{2}}\big(\lvert +2p \rangle_M \lvert +p \rangle_{\gamma_1} \lvert -p \rangle_{\gamma_2} + \lvert +2p \rangle_M \lvert -p \rangle_{\gamma_1} \lvert +p \rangle_{\gamma_2} \big)$$
That is, even after measurement the system is still in a superposition. This is because the measurement didn't give us FULL information about the quantum state. You can see that the momentum is definite but the photon is still in a superposition state.
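Here is a small numerical sketch of this two-photon example, truncating the mirror to the three momentum values $\{0,+2p,+4p\}$ and each photon to $\{+p,-p\}$ (the labels and dimensions are chosen purely for illustration):

```python
import numpy as np

def basis(dim, i):
    v = np.zeros(dim)
    v[i] = 1.0
    return v

def kron_all(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

m0, m2, m4 = (basis(3, i) for i in range(3))   # mirror: 0, +2p, +4p
pp, pm = basis(2, 0), basis(2, 1)              # photon: +p, -p

# State after both photons have hit the mirror (the four branches above):
psi = 0.5 * (kron_all(m0, pp, pp) + kron_all(m2, pp, pm)
             + kron_all(m2, pm, pp) + kron_all(m4, pm, pm))
print("norm:", np.linalg.norm(psi))            # 1.0

# Measure the mirror momentum and obtain +2p: project and renormalize.
P2 = np.kron(np.outer(m2, m2), np.eye(4))      # projector onto mirror = +2p
post = P2 @ psi
prob = np.linalg.norm(post) ** 2
post = post / np.linalg.norm(post)
print("P(+2p) =", prob)                        # 0.5
# `post` is (|2p,+p,-p> + |2p,-p,+p>)/sqrt(2): the photons remain entangled.
```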
Perhaps this explication helps you already?
Anyways, back to your question and the second paragraph. It is not clear what you mean when you talk about 'halves' of the photon or the 'overall' photon momentum. I think what is confusing you is whatever you mean by 'overall' momentum. I'm pretty sure what you are referring to as 'overall' momentum is not a thing. Instead, you should think about the photon's momentum as I have illustrated it above. The total state of the system is a superposition of different terms in which different things happened. In each of these terms the photon has a well-defined momentum. Whenever an interaction happens each term can split into multiple other terms. These different branches split and split until a measurement is made. When a measurement is made the state "collapses" into the subspace of branches which are consistent with that measurement.
The language I am using here is borrowed from the many-worlds interpretation of quantum mechanics, but you need not adopt that interpretation for this simple description of superposition/entanglement to make sense.
Let's extend this example a little more. Imagine we DON'T measure the momentum of the mirror or the first photon, but instead measure the momentum of photon 2, $\gamma_2$. Imagine we measure $+p$. Then the quantum state collapses to
$$\frac{1}{\sqrt{2}}\big(\lvert 0 \rangle_M \lvert +p \rangle_{\gamma_1} \lvert +p \rangle_{\gamma_2} + \lvert +2p \rangle_M \lvert -p \rangle_{\gamma_1} \lvert +p \rangle_{\gamma_2} \big)$$
We know the momentum of the second photon, but the first photon and the mirror remain in a superposition state and the whole system remains in an entangled state.
You keep asking if this has been tested. I'm not sure exactly what experiment you are imagining, but I can tell you that if you shine light on a beamsplitter and then use one output of the beamsplitter to illuminate atoms the light will certainly impart the expected momentum onto the atoms. I have performed this experiment.
In the comments you ask about an experiment in which a single photon hits a beamsplitter and confirmation that beamsplitter is only seen to be in the $\lvert 0 \rangle_M$ or $\lvert +2p \rangle_M$ and never $\lvert +p \rangle_M$. I can't think off of the top of my head of an experiment that does PRECISELY this. The first reason is that it is very difficult to measure the recoil of a massive mirror due to a single photon. I think many would say it is impossible. However, I have worked in the field of optomechanics where people regularly see single-photon, single-phonon interactions between an optical field (photons) and some mechanical object such as a mirror. Perhaps you can look up experiments on optomechanics to see if there is a specific experiment which satisfies your question.
What I can say is that the concepts of superposition, entanglement, and radiation pressure have been thoroughly studied and the theory underlying countless experiments relies on these concepts. The measurement of the mirror in state $\lvert +p \rangle_M$ would contradict all of these experimental results so I can say with certainty that if this experiment was able to be performed with the required precision you would not measure the mirror to be in state $\lvert +p \rangle_M$.
What I can also say is that the single photon interaction is very similar to an EPR experiment, for example. Notice that the photon and mirror form an EPR entangled state after the interaction. Many EPR pair experiments have been performed to test Bell's inequality for example, and these are all consistent with the usual results of quantum mechanics. These EPR experiments also demonstrate a kind of conservation law. If $\lvert \uparrow \rangle$ and $\lvert \downarrow \rangle$ represent angular momentum states then the EPR state
$$\frac{1}{\sqrt{2}}\big(\lvert \uparrow \rangle_1 \lvert \downarrow \rangle_2 + \lvert \downarrow \rangle_1 \lvert \uparrow \rangle_2 \big)$$
exhibits conservation of momentum in each "branch" just like the photons and beamsplitter. That is, if one of the particles is measured in state $\lvert \uparrow \rangle$ then we KNOW we could not measure the other particle to be in state $\lvert \rightarrow \rangle$, for example, because that would violate conservation of momentum. That is, unless the particles interact with something else which can carry away momentum.
Anyways, the point is these are basic results in superposition/entanglement upon which a lot of quantum theory and quantum experiments rely so I am certain of these results. There is probably a specific experiment out there in the field of single photon single phonon optomechanics or atom interferometry but I can't point to it now. EPR/Bell's inequality measurements may be of interest to you as well.
|
Well, for the longest coherence time ever, I'm finding this Science from 2013 entitled Room-Temperature Quantum Bit Storage Exceeding 39 Minutes Using Ionized Donors in Silicon-28, which indicates qubits that lasted for over 39 minutes; these, however, only had an 81% fidelity rate. (This is for qubits used in computation, not memory storage. For memory ...
Nielsen and Chuang in their book "Quantum Computation and Quantum Information" have section (Chapter 9) on distance measures for quantum information.Surprisingly they say in Section 9.3 " How well does a quantum channel preserve information?" that when comparing fidelity to the trace norm:Using the properties of the trace distance established in the ...
I guess you're looking at equations (130) and (131)? So, here, you have $|\psi\rangle=(|0\rangle|a\rangle+|1\rangle|b\rangle)/\sqrt{2}$ and $|\phi\rangle=|a| |0\rangle+|b| |1\rangle$. When it says to calculate $\langle\phi|\psi\rangle$, what it really means is$$(\langle\phi|\otimes\mathbb{I})|\psi\rangle,$$padding everything with identity matrices to ...
Simply it is the distance (similarity measure) between two quantum states, for example the fidelity between $|0\rangle$ and $|1\rangle$ is less than the fidelity between $|0\rangle$ and $\frac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big)$. or you can say it is the cosine of the smallest angle between two states, also called the cosine similarity
When you ask about an 'ideal' fidelity measure, it assumes that there is one measure which inherently is the most meaningful or truest measure. But this isn't really the case.For unitary operators, our analysis of the error used in approximating one unitary by another involves the distance induced by the operator norm: $$ \bigl\lVert U - V \bigr\rVert_\...
The quantity $\text{Tr}(\sqrt{A}\sqrt{B})$ that you defined there is actually referred to as the "just-as-good fidelity" (see 1801.02800) because it does have a relationship with the trace distance very similar to the standard fidelity and is therefore "just as good" for quantifying the distinguishability of states. There is no intrinsic reason to prefer the ...
(I will give the argument with formulas for now, hopefully I find time for some pictures later.)Let $|m\rangle$ be the (unnormalized) maximally entangled state. Then, a purification of $\rho$ is given by$$|\rho\rangle_{AB}=(\sqrt{\rho}_A\otimes1\!\!1_B)|m\rangle\ ,$$and correspondingly for $\sigma$ -- this can be seen most easily by first tracing the $...
I'll provide a slightly different (but of course equivalent) way to prove Uhlmann's theorem, which I personally find more explicit than the standard one, and might help to understand what is going on.I don't know if this qualifies as sufficient "intuition" (it certainly doesn't fully satisfy me), but I at least prefer it to the standard approach with ...
Purifications play an important role in the theory of density matrices (or more generally quantum states) because they provide a geometric tool in the explanation and description of algebraic relations. (I'll be following here Bengtsson and Życzkowski's reasoning in the derivation of the fidelity formula (section 9.4)). A positive $ N \times N$ matrix $\rho$ ...
Actually, there should be a minus. There is a mistake in the paper. Wittek uses a minus in his (expensive) book. Indeed, say: $$ |\psi\rangle = \frac{1}{\sqrt{2}} (|0,a\rangle + |1,b\rangle) $$$$ |\phi\rangle = \frac{1}{\sqrt{Z}} (|a||0\rangle - |b||1\rangle) $$ Then: $$ \langle \phi |\psi\rangle = \frac{1}{\sqrt{2Z}} (|a|\langle 0| - |b|\langle 1|) (|...
Fidelity is a single-number measure of how good a gate is. Since there are many ways that a gate can go wrong, there are multiple ways that the fidelity can be defined. The exact answer to your question will therefore depend on which kind of fidelity you want.Any measure of fidelity will typically involve comparing the gate that you wanted to the channel ...
A few thoughts: It mostly depends on what you are trying to quantify. The inner product of states, $\text{Tr}(\rho\sigma)$, is used to quantify the distance in state space. More precisely, the squared distance between two states is commonly defined as $$D(\rho,\sigma)^2\equiv \|\rho-\sigma\|_2^2=1-\text{Tr}(\rho\sigma).$$ This is useful and used for ...
It might be worth mentioning the physical motivation for these definitions and the concept of fidelity itself. Unlike the classical computers we all know and love, quantum computers are fundamentally analog machines. What that means practically is that the gates you apply when you run code on a real quantum computer are going to be parameterized by a real ...
You are correct in both assumptions.A total phase on a qubit state $|{\psi}>$ is often referred to as the global phase. Any measurement of a quantum state is the expectation value $\lambda_{M}$ of some (Hermitian) observable $M$: $\lambda_{M} = |<\psi|M|\psi>|^{2}$. Because this is invariant to the global phase, there is no physical meaning to ...
Qualitatively, fidelity is the measure of the distance between two quantum states. Fidelity equal to 1, means that two states are equal.In the case of a density matrix, fidelity represents the overlap with a reference pure state. Fidelity equal to 1, means that the density matrix is diagonal and equivalent to the pure reference state.Like every distance ...
The idea is to use CS inequality in the form$\newcommand{\tr}{\operatorname{Tr}}\lvert \sum_{ij}A_{ij}^* B_{ij}\rvert\le\sqrt{\sum_{ij} \left\lvert A_{ij}\right\rvert^2}\sqrt{\sum_{ij}\left\lvert B_{ij}\right\rvert^2}$, which in matrix formalism reads $\lvert\tr(A^\dagger B)\rvert\le\sqrt{\tr(A^\dagger A})\sqrt{\tr(B^\dagger B)}$.Therefore,$$\lvert\tr(...
The best I have it's this generic answer, which I put here for clarity, hoping for improvements/corrections or even to be superseded by something better:If the limiting factor for fidelity in a given architecture+algorithm are the single-qubit gates, or the two-qubit gates, or the measurement, and if this limiting factor is not optimized in a ZEFOZ point,...
|
In Miles Reid's Undergraduate Commutative Algebra he defines a ring $B$ to be finite as an $A$-algebra if it is finite as an $A$-module. Now what I don't understand is suppose we look at the polynomial ring $k[x_1,\ldots,x_n]$ where $k$ is a field. Then as a $k$-algebra it is finitely generated. Is this the same as being a finite $k$-algebra? For if it is the same this means that $k[x_1,\ldots,x_n]$ is finitely generated as a $k$-module, which is just a $k$-vector space. However this cannot be possible because the $x_i$'s are not even algebraic over $k$. What am I misunderstanding here?
An $A$-algebra $B$ is called finite if $B$ is a finitely generated $A$-module, i.e. there are elements $b_1,\dotsc,b_n \in B$ such that $B=A b_1 + \dotsc + A b_n$. It is called finitely generated / of finite type if $B$ is a finitely generated $A$-algebra, i.e. there are elements $b_1,\dotsc,b_n$ such that $B=A[b_1,\dotsc,b_n]$. Clearly every finite algebra is also a finitely generated one. The converse is not true (consider $B=A[T]$). However, there is the following important connection:
An algebra $A \to B$ is finite iff it is of finite type and integral.
For example, $\mathbb{Z}[\sqrt{2}]$ is of finite type over $\mathbb{Z}$ and integral, thus finite. In fact, $1,\sqrt{2}$ is a basis as a module. You can find the proof of the claim above in every introduction to commutative algebra.
|
Let
$E_4(z)= - \frac{B_4}{8}+ \sum_{n=1}^\infty \sigma_3(n) q^n$
and
$E_6(z)= - \frac{B_6}{12}+ \sum_{n=1}^\infty \sigma_5(n) q^n$
How does one show they are algebraically independent over $\mathbb{C}$?
First of all, $E_4$ and $E_6$ are modular forms of weights 4 and 6. Therefore, if we have $P(E_4,E_6)=0$ for some nonzero polynomial $P$, then there exists some polynomial $G$ such that $G(t^4x,t^6y)=t^kG(x,y)$ for all $x,y,t$. (That is true, because $P(E_4,E_6)$ is always equal to the sum of some modular forms of different weights, which are polynomials in $E_4$ and $E_6$, and modular forms of different weights are obviously linearly independent). So, we should have $G(E_4,E_6)=0$ for some polynomial $G$ of the form
$$G(x,y)=\sum_{4n+6m=k} a_{nm} x^ny^m,$$
where $k$ is some positive integer. Let us choose $G$ with minimal possible degree. Now, $E_4(e^{2\pi i/3})=0$ and $E_6(i)=0$. But $G(x,y)=Ax^u+By^v+xyF(x,y)$ for some complex $A$ and $B$ and $4u=6v=k$. If $A$ or $B$ is nonzero, then we would have $G(E_4,E_6)(e^{2\pi i/3})\neq 0$ or $G(E_4,E_6)(i)\neq 0$, which is impossible. Therefore, $G(x,y)=xyF(x,y)$, so $F(E_4,E_6)=0$, which contradicts the minimality.
|
We have seen how to compute certain areas by using integration; some volumes may also be computed by evaluating an integral. Generally, the volumes that we can compute this way have cross-sections that are easy to describe.
Example 8.3.1 Find the volume of a pyramid with a square base that is 20 meters tall and 20 meters on a side at the base. As with most of our applications of integration, we begin by asking how we might approximate the volume. Since we can easily compute the volume of a rectangular prism (that is, a "box''), we will use some boxes to approximate the volume of the pyramid, as shown in figure 8.3.1: on the left is a cross-sectional view, on the right is a 3D view of part of the pyramid with some of the boxes used to approximate the volume.
Each box has volume of the form $\ds (2x_i)(2x_i)\Delta y$. Unfortunately, there are two variables here; fortunately, we can write $x$ in terms of $y$: $x=10-y/2$ or $\ds x_i=10-y_i/2$. Then the total volume is approximately $$\sum_{i=0}^{n-1} 4(10-y_i/2)^2\Delta y$$ and in the limit we get the volume as the value of an integral: $$ \int_0^{20} 4(10-y/2)^2\,dy=\int_0^{20} (20-y)^2\,dy= \left.-{(20-y)^3\over3}\right|_0^{20}= -{0^3\over3}- -{20^3\over3}={8000\over3}. $$ As you may know, the volume of a pyramid is $(1/3)(\hbox{height})(\hbox{area of base})=(1/3)(20)(400)$, which agrees with our answer.
Example 8.3.2 The base of a solid is the region between $\ds f(x)=x^2-1$ and $\ds g(x)=-x^2+1$, and its cross-sections perpendicular to the $x$-axis are equilateral triangles, as indicated in figure 8.3.2. Find the volume of the solid.
A cross-section at a value $\ds x_i$ on the $x$-axis is a triangle with base $\ds 2(1-x_i^2)$ and height $\ds \sqrt3(1-x_i^2)$, so the area of the cross-section is $$ {1\over2}(\hbox{base})(\hbox{height})= (1-x_i^2)\sqrt3(1-x_i^2), $$ and the volume of a thin "slab'' is then $$(1-x_i^2)\sqrt3(1-x_i^2)\Delta x.$$ Thus the total volume is $$\int_{-1}^1 \sqrt3(1-x^2)^2\,dx={16\over15}\sqrt3.$$
One easy way to get "nice'' cross-sections is by rotating a plane figure around a line. For example, in figure 8.3.3 we see a plane region under a curve and between two vertical lines; then the result of rotating this around the $\ds x$-axis, and a typical circular cross-section.
Of course a real "slice'' of this figure will not have straight sides, but we can approximate the volume of the slice by a cylinder or disk with circular top and bottom and straight sides; the volume of this disk will have the form $\ds \pi r^2\Delta x$. As long as we can write $r$ in terms of $x$ we can compute the volume by an integral.
Example 8.3.3 Find the volume of a right circular cone with base radius 10 and height 20. (A right circular cone is one with a circular base and with the tip of the cone directly over the center of the base.) We can view this cone as produced by the rotation of the line $y=x/2$ rotated about the $x$-axis, as indicated in figure 8.3.4.
At a particular point on the $x$-axis, say $\ds x_i$, the radius of the resulting cone is the $y$-coordinate of the corresponding point on the line, namely $\ds y_i=x_i/2$. Thus the total volume is approximately $$\sum_{i=0}^{n-1} \pi (x_i/2)^2\,dx$$ and the exact volume is $$ \int_0^{20} \pi {x^2\over4}\,dx={\pi\over4}{20^3\over3}={2000\pi\over3}. $$ Note that we can instead do the calculation with a generic height and radius: $$ \int_0^{h} \pi{r^2\over h^2}x^2\,dx ={\pi r^2\over h^2}{h^3\over3}={\pi r^2h\over3}, $$ giving us the usual formula for the volume of a cone.
Example 8.3.4 Find the volume of the object generated when the area between $\ds y=x^2$ and $y=x$ is rotated around the $x$-axis. This solid has a "hole'' in the middle; we can compute the volume by subtracting the volume of the hole from the volume enclosed by the outer surface of the solid. In figure 8.3.5 we show the region that is rotated, the resulting solid with the front half cut away, the cone that forms the outer surface, the horn-shaped hole, and a cross-section perpendicular to the $x$-axis.
We have already computed the volume of a cone; in this case it is $\pi/3$. At a particular value of $x$, say $\ds x_i$, the cross-section of the horn is a circle with radius $\ds x_i^2$, so the volume of the horn is $$\int_0^1 \pi(x^2)^2\,dx=\int_0^1 \pi x^4\,dx=\pi{1\over 5},$$ so the desired volume is $\pi/3-\pi/5=2\pi/15$.
As with the area between curves, there is an alternate approach that computes the desired volume "all at once'' by approximating the volume of the actual solid. We can approximate the volume of a slice of the solid with a washer-shaped volume, as indicated in figure 8.3.5.
The volume of such a washer is the area of the face times the thickness. The thickness, as usual, is $\Delta x$, while the area of the face is the area of the outer circle minus the area of the inner circle, say $\ds \pi R^2-\pi r^2$. In the present example, at a particular $\ds x_i$, the radius $R$ is $\ds x_i$ and $r$ is $\ds x_i^2$. Hence, the whole volume is $$ \int_0^1 \pi x^2-\pi x^4\,dx= \left.\pi\left({x^3\over3}-{x^5\over5}\right)\right|_0^1= \pi\left({1\over3}-{1\over5}\right)={2\pi\over15}. $$ Of course, what we have done here is exactly the same calculation as before, except we have in effect recomputed the volume of the outer cone.
Suppose the region between $f(x)=x+1$ and $\ds g(x)=(x-1)^2$ is rotated around the $y$-axis; see figure 8.3.6. It is possible, but inconvenient, to compute the volume of the resulting solid by the method we have used so far. The problem is that there are two "kinds'' of typical rectangles: those that go from the line to the parabola and those that touch the parabola on both ends. To compute the volume using this approach, we need to break the problem into two parts and compute two integrals: $$ \pi\int_0^1 (1+\sqrt{y})^2-(1-\sqrt{y})^2\,dy+ \pi\int_1^4 (1+\sqrt{y})^2-(y-1)^2\,dy={8\over3}\pi + {65\over6}\pi ={27\over2}\pi. $$ If instead we consider a typical vertical rectangle, {but still rotate around the $y$-axis,} we get a thin "shell'' instead of a thin "washer''. If we add up the volume of such thin shells we will get an approximation to the true volume. What is the volume of such a shell? Consider the shell at $\ds x_i$. Imagine that we cut the shell vertically in one place and "unroll'' it into a thin, flat sheet. This sheet will be almost a rectangular prism that is $\Delta x$ thick, $\ds f(x_i)-g(x_i)$ tall, and $\ds 2\pi x_i$ wide (namely, the circumference of the shell before it was unrolled). The volume will then be approximately the volume of a rectangular prism with these dimensions: $\ds 2\pi x_i(f(x_i)-g(x_i))\Delta x$. If we add these up and take the limit as usual, we get the integral $$ \int_0^3 2\pi x(f(x)-g(x))\,dx= \int_0^3 2\pi x(x+1-(x-1)^2)\,dx={27\over2}\pi. $$ Not only does this accomplish the task with only one integral, the integral is somewhat easier than those in the previous calculation. Things are not always so neat, but it is often the case that one of the two methods will be simpler than the other, so it is worth considering both before starting to do calculations.
Example 8.3.5 Suppose the area under $\ds y=-x^2+1$ between $x=0$ and $x=1$ is rotated around the $x$-axis. Find the volume by both methods.
Disk method: $\ds \int_0^1 \pi(1-x^2)^2\,dx={8\over15}\pi$.
Shell method: $\ds \int_0^1 2\pi y \sqrt{1-y}\,dy={8\over15}\pi$.
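A quick symbolic check that the two methods agree here (a sketch using sympy):

```python
import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)

# Disk method and shell method integrals from Example 8.3.5.
disk  = sp.integrate(sp.pi * (1 - x**2) ** 2, (x, 0, 1))
shell = sp.integrate(2 * sp.pi * y * sp.sqrt(1 - y), (y, 0, 1))

print(disk, shell, sp.simplify(disk - shell) == 0)   # 8*pi/15  8*pi/15  True
```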
Exercises 8.3
Ex 8.3.1 Verify that $\ds\pi\int_0^1 (1+\sqrt{y})^2-(1-\sqrt{y})^2\,dy+\pi\int_1^4 (1+\sqrt{y})^2-(y-1)^2\,dy={8\over3}\pi + {65\over6}\pi={27\over2}\pi$.
Ex 8.3.2 Verify that $\ds\int_0^3 2\pi x(x+1-(x-1)^2)\,dx={27\over2}\pi$.
Ex 8.3.3 Verify that $\ds \int_0^1 \pi(1-x^2)^2\,dx={8\over15}\pi$.
Ex 8.3.4 Verify that $\ds \int_0^1 2\pi y \sqrt{1-y}\,dy={8\over15}\pi$.
Ex 8.3.5 Use integration to find the volume of the solid obtained by revolving the region bounded by $x+y=2$ and the $x$ and $y$ axes around the $x$-axis. (answer)
Ex 8.3.6 Find the volume of the solid obtained by revolving the region bounded by $\ds y=x-x^2$ and the $x$-axis around the $x$-axis. (answer)
Ex 8.3.7 Find the volume of the solid obtained by revolving the region bounded by $\ds y=\sqrt{\sin x}$ between $x=0$ and $x=\pi/2$, the $y$-axis, and the line $y=1$ around the $x$-axis. (answer)
Ex 8.3.8 Let $S$ be the region of the $xy$-plane bounded above by the curve $\ds x^3y=64$, below by the line $y=1$, on the left by the line $x=2$, and on the right by the line $x=4$. Find the volume of the solid obtained by rotating $S$ around (a) the $x$-axis, (b) the line $y=1$, (c) the $y$-axis, (d) the line $x=2$. (answer)
Ex 8.3.9 The equation $\ds x^2/9+y^2/4=1$ describes an ellipse. Find the volume of the solid obtained by rotating the ellipse around the $x$-axis and also around the $y$-axis. These solids are called ellipsoids; one is vaguely rugby-ball shaped, one is sort of flying-saucer shaped, or perhaps squished-beach-ball-shaped. (answer)
Ex 8.3.10 Use integration to compute the volume of a sphere of radius $r$. You should of course get the well-known formula $\ds 4\pi r^3/3$.
Ex 8.3.11 A hemispheric bowl of radius $r$ contains water to a depth $h$. Find the volume of water in the bowl. (answer)
Ex 8.3.12 The base of a tetrahedron (a triangular pyramid) of height $h$ is an equilateral triangle of side $s$. Its cross-sections perpendicular to an altitude are equilateral triangles. Express its volume $V$ as an integral, and find a formula for $V$ in terms of $h$ and $s$. Verify that your answer is $(1/3)(\hbox{area of base})(\hbox{height})$.
Ex 8.3.13 The base of a solid is the region between $f(x)=\cos x$ and $g(x)=-\cos x$, $-\pi/2\le x\le\pi/2$, and its cross-sections perpendicular to the $x$-axis are squares. Find the volume of the solid. (answer)
|
As you might guess, a first order linear differential equation has the form $\ds \dot y + p(t)y = f(t)$. Not only is this closely related in form to the first order homogeneous linear equation, we can use what we know about solving homogeneous equations to solve the general linear equation.
Suppose that $y_1(t)$ and $y_2(t)$ are solutions to $\ds \dot y + p(t)y = f(t)$. Let $\ds g(t)=y_1-y_2$. Then $$\eqalign{ g'(t)+p(t)g(t)&=y_1'-y_2'+p(t)(y_1-y_2)\cr &=(y_1'+p(t)y_1)-(y_2'+p(t)y_2)\cr &=f(t)-f(t)=0.\cr} $$ In other words, $\ds g(t)=y_1-y_2$ is a solution to the homogeneous equation $\ds \dot y + p(t)y = 0$. Turning this around, any solution to the linear equation $\ds \dot y + p(t)y = f(t)$, call it $y_1$, can be written as $y_2+g(t)$, for some particular $y_2$ and some solution $g(t)$ of the homogeneous equation $\ds \dot y + p(t)y = 0$. Since we already know how to find all solutions of the homogeneous equation, finding just one solution to the equation $\ds \dot y + p(t)y = f(t)$ will give us all of them.
How might we find that one particular solution to $\ds \dot y + p(t)y= f(t)$? Again, it turns out that what we already know helps. We know that the general solution to the homogeneous equation $\ds \dot y + p(t)y = 0$ looks like $\ds Ae^{P(t)}$. We now make an inspired guess: consider the function $\ds v(t)e^{P(t)}$, in which we have replaced the constant parameter $A$ with the function $v(t)$. This technique is called variation of parameters. For convenience write this as $s(t)=v(t)h(t)$ where $\ds h(t)=e^{P(t)}$ is a solution to the homogeneous equation. Now let's compute a bit with $s(t)$: $$\eqalign{s'(t)+p(t)s(t)&=v(t)h'(t)+v'(t)h(t)+p(t)v(t)h(t)\cr&=v(t)(h'(t)+p(t)h(t)) + v'(t)h(t)\cr&=v'(t)h(t).\cr}$$ The last equality is true because $\ds h'(t)+p(t)h(t)=0$, since $h(t)$ is a solution to the homogeneous equation. We are hoping to find a function $s(t)$ so that $\ds s'(t)+p(t)s(t)=f(t)$; we will have such a function if we can arrange to have $\ds v'(t)h(t)=f(t)$, that is, $\ds v'(t)=f(t)/h(t)$. But this is as easy (or hard) as finding an anti-derivative of $\ds f(t)/h(t)$. Putting this all together, the general solution to $\ds \dot y + p(t)y = f(t)$ is $$v(t)h(t)+Ae^{P(t)} = v(t)e^{P(t)}+Ae^{P(t)}.$$
Example 19.3.1 Find the solution of the initial value problem $\ds \dot y+3y/t=t^2$, $y(1)=1/2$. First we find the general solution; since we are interested in a solution with a given condition at $t=1$, we may assume $t>0$. We start by solving the homogeneous equation as usual; call the solution $g$: $$g=Ae^{-\int (3/t)\,dt}=Ae^{-3\ln t}=At^{-3}.$$ Then as in the discussion, $\ds h(t)=t^{-3}$ and $\ds v'(t)=t^2/t^{-3}=t^5$, so $\ds v(t)=t^6/6$. We know that every solution to the equation looks like $$v(t)t^{-3}+At^{-3}={t^6\over6}t^{-3}+At^{-3}={t^3\over6}+At^{-3}.$$ Finally we substitute to find $A$: $$\eqalign{ {1\over 2}&={(1)^3\over6}+A(1)^{-3}={1\over6}+A\cr A&={1\over 2}-{1\over6}={1\over3}.\cr} $$ The solution is then $$y={t^3\over6}+{1\over3}t^{-3}.$$
Here is an alternate method for finding a particular solution to the differential equation, using an integrating factor. In the differential equation $\ds \dot y+p(t)y=f(t)$, we note that if we multiply through by a function $I(t)$ to get $\ds I(t)\dot y+I(t)p(t)y=I(t)f(t)$, the left hand side looks like it could be a derivative computed by the product rule: $${d\over dt}(I(t)y)=I(t)\dot y+I'(t)y.$$ Now if we could choose $I(t)$ so that $I'(t)=I(t)p(t)$, this would be exactly the left hand side of the differential equation. But this is just a first order homogeneous linear equation, and we know a solution is $\ds I(t)=e^{Q(t)}$, where $\ds Q(t)=\int p\,dt$; note that $Q(t)=-P(t)$, where $P(t)$ appears in the variation of parameters method and $P'(t)=-p$. Now the modified differential equation is $$\eqalign{e^{-P(t)}\dot y+e^{-P(t)}p(t)y&=e^{-P(t)}f(t)\cr{d\over dt}(e^{-P(t)}y)&=e^{-P(t)}f(t).\cr}$$ Integrating both sides gives $$\eqalign{e^{-P(t)}y&=\int e^{-P(t)}f(t)\,dt\cr y&=e^{P(t)}\int e^{-P(t)}f(t)\,dt.\cr}$$ If you look carefully, you will see that this is exactly the same solution we found by variation of parameters, because $\ds e^{-P(t)}f(t)=f(t)/h(t)$.
Some people find it easier to remember how to use the integrating factor method than variation of parameters. Since ultimately they require the same calculation, you should use whichever of the two you find easier to recall. Using this method, the solution of the previous example would look just a bit different: Starting with $\ds \dot y+3y/t=t^2$, we recall that the integrating factor is $\ds e^{\int 3/t}=e^{3\ln t}=t^3$. Then we multiply through by the integrating factor and solve: $$\eqalign{ t^3\dot y+t^3 3y/t&=t^3t^2\cr t^3\dot y+t^2 3y&=t^5\cr {d\over dt}(t^3 y)&=t^5\cr t^3 y&=t^6/6\cr y&=t^3/6.\cr} $$ This is the same answer, of course, and the problem is then finished just as before.
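As a check of Example 19.3.1, here is a sketch using sympy's dsolve (assuming a sympy version that accepts the ics argument for initial conditions):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

# The IVP from Example 19.3.1: y' + 3y/t = t^2, y(1) = 1/2.
ode = sp.Eq(y(t).diff(t) + 3 * y(t) / t, t**2)
sol = sp.dsolve(ode, y(t), ics={y(1): sp.Rational(1, 2)})
print(sol)   # y(t) = t**3/6 + 1/(3*t**3), possibly printed in an equivalent form
```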
Exercises 19.3
In problems 1–10, find the general solution of the equation.
Ex 19.3.1 $\ds\dot y +4y=8$ (answer)
Ex 19.3.2 $\ds\dot y-2y=6$ (answer)
Ex 19.3.3 $\ds\dot y +ty=5t$ (answer)
Ex 19.3.4 $\ds\dot y+e^ty=-2e^t$ (answer)
Ex 19.3.5 $\ds\dot y-y=t^2$ (answer)
Ex 19.3.6 $\ds 2\dot y +y=t$ (answer)
Ex 19.3.7 $\ds t\dot y -2y=1/t$, $t>0$ (answer)
Ex 19.3.8 $\ds t\dot y+y=\sqrt{t}$, $t>0$ (answer)
Ex 19.3.9 $\ds\dot y\cos t+y\sin t=1$, $-\pi/2< t< \pi/2$ (answer)
Ex 19.3.10 $\ds\dot y + y\sec t=\tan t$, $-\pi/2< t< \pi/2$ (answer)
|
"Gold and silver ornaments are precious".
The following notations are used:
$G(x):x$ is a gold ornament
$S(x):x$ is a silver ornament
$P(x):x$ is precious
Options are $:$
$\forall x(P(x) \implies (G(x) \wedge S(x)))$ $\forall x((G(x) \wedge S(x)) \implies P(x))$ $\exists x((G(x) \wedge S(x)) \implies P(x))$ $\forall x((G(x) \vee S(x)) \implies P(x))$
I try to explain $:$ an ornament cannot be gold and silver at the same time; it should be either gold or silver, therefore option $(4)$ is correct.
Please check whether my solution is correct.
|
Consider the helium atom with two electrons, but ignore coupling of angular momenta, relativistic effects, etc.
The spin state of the system is a combination of the triplet states and the singlet state. I will denote a linear combination of the three triplet states as $\lvert\chi_+\rangle$ (because it's symmetrical under exchange of electrons) and $\lvert\chi_-\rangle$ the singlet state (because it's anti-symmetric).
Then, the orbital state of the electrons. Suppose one electron is in the state $\lvert\phi_a\rangle$; the other in the state $\lvert\phi_b\rangle$. The orbital state of the system is:
$$ \lvert\phi_{\pm}\rangle = \frac{1}{\sqrt 2} \left (\lvert\phi_a\rangle\lvert\phi_b\rangle \pm \lvert\phi_b\rangle\lvert\phi_a\rangle \right )$$
Because the overall state $\lvert\psi\rangle$ of the electrons must be anti-symmetric, is it correct to construct it as follows:
$$\lvert\psi\rangle = \lvert\phi_{\pm}\rangle \lvert\chi_{\mp}\rangle \text{ ?}$$
|
I try to find the local minimum of a function $$\Pi(\vec{d})$$ where $$\vec{d} = (d_1,d_2,...,d_n)$$ should satisfy $$d_i>0$$ for $i \in [1,n]$.
I use the
FindMinimum function in
Mathematica; i.e., use something like $$\mathtt{FindMinimum}[\{\Pi(\vec{d}),d_i>0\},\vec{d}].$$ But I don't know how to apply the constraints $d_i>0$. If it is indeed possible, it would very much help me to know how to formulate such a command. Thanks!
|
Here's one way of interpreting this statement (this is essentially an elaboration of the comment by user Learning is a mess). The basic idea behind Wilsonian renormalization is that renormalization can be regarded as the process by which we understand how the theory
effectively behaves at different scales, like momentum scales. Disclaimer: the math in this answer is schematic.
Consider a quantum theory of fields $\phi$ with a hard momentum space cutoff $\Lambda$. Such a QFT would be described, in the Euclideanized functional integral picture, by its partition function\begin{align} Z(\mathbf u, \Lambda) = \int [d\tilde\phi]_0^\Lambda e^{-S[\mathbf u, \Lambda,\tilde\phi]}\end{align}Here, $\mathbf u = (u_1, \dots, u_n)$ represents the parameters on which the action of the theory depends (like coupling constants), $S[\mathbf u, \Lambda,\phi]$ is the Euclidean action, and\begin{align} [d\tilde\phi]_{k_a}^{k_b} = \prod_{k_a<|k|<k_b}d\tilde\phi(k)\end{align}schematically denotes integration over field degrees of freedom corresponding to momenta between $k_a$ and $k_b$.
Now, let any real number $0<s\leq 1$ be given. We will call $s$ the "scale." Then we note that we can split the functional integration measure into an integration over momenta in the range $(0,s\Lambda)$ and those in the range $(s\Lambda, \Lambda)$, and we imagine defining a rescaled action $S_s[\mathbf u, \Lambda,\phi]$ by\begin{align} e^{-S_s[\mathbf u, \Lambda, \phi]} = \int [d\tilde\phi]_{s\Lambda}^\Lambda e^{-S[\mathbf u, \Lambda,\tilde\phi]}\end{align}We can now write the original partition function as an integral over only the momentum modes in the range $(0, s\Lambda)$ provided we use the action $S_s$;\begin{align} Z(\mathbf u, \Lambda) = \int [d\tilde\phi]_0^{s\Lambda} e^{-S_s[\mathbf u, \Lambda,\tilde\phi]}\end{align}and we say that we have "integrated out" the higher momentum modes. The action $S_s$ can now be regarded as the action that effectively governs the physics for momentum scales below $s\Lambda$.
Let's call $S_s$ the action "at scale $s$". Then in some situations, the action at scale $s$ can simply be related to the original action at scale $s=1$ by taking the couplings to depend on the scale $s$;\begin{align} S_s[\mathbf u, \Lambda, \phi] = S[\mathbf u_s, \Lambda, \phi_s]\end{align}This is often referred to as the "running of the couplings." In other words, the process of renormalization which leads to the running of the couplings is simply the process by which we examine how the field theory effectively behaves at different scales; this is conceptually distinct from the issue of removing infinities.
|
Intersecting a ray with a sphere is probably the simplest form of ray-geometry intersection test, which is the reason why so many raytracers show images of spheres. It also has the advantage (because of its simplicity) of being very fast. However, to get it working reliably, there are always a few subtleties which are important to give some attention to.
This test can be implemented using essentially two methods. The first one solves the problem using geometry. The second technique, which is often the preferred solution (because it can be reused for a wider variety of surfaces, called quadric surfaces), uses an analytic (or algebraic, i.e. it can be resolved using a closed-form expression) solution.
Geometric Solution
The geometric solution to the ray-sphere intersection test relies on simple maths. Mainly geometry, trigonometry and the
Pythagorean theorem. If you look at figure 1, you will understand that to find the position of the point P and P' which corresponds to the points where the ray intersects with the sphere, we need to find value for \(t_0\) and \(t_1\).
Remember that a ray can be expressed using the following
parametric form:
Where \(O\) represents the origin of the ray and \(D\) is the ray direction (usually normalized). Changing the value for \(t\) makes it possible to define any position along the ray. When \(t\) is greater than 0, the point is located in front of the ray's origin (looking down the ray's direction), when \(t\) is equal to 0, the point coincides with the ray's origin (O), and when \(t\) is negative the point is located behind its origin. By looking at figure 1, you can see that \(t_0\) can be found by subtracting \(t_{hc}\) from \(t_{ca}\) and \(t_1\) can be found by adding this time, \(t_{hc}\) to \(t_{ca}\). All we need to do is find ways of computing these two values (\(t_{hc}\) and \(t_{ca}\)) from which we can find \(t_0\), \(t_1\), and then P and P' using the ray parametric equation:$$ \begin{array}{lcl} P & = & {O+t_{0}D}\\ P' &= & {O+t_{1}D} \end{array} $$
We will start by noting that the triangle formed by the edges \(L\), \(t_{ca}\) and \(d\) is a right triangle. We can easily compute \(L\) which is just the vector between \(O\) (the ray's origin) and C (the sphere's center). We don't know anything about \(t_{ca}\) though, but we can use trigonometry to solve this problem.
We know \(L\) and we know \(D\), the ray's direction. We also know that the
dot (or scalar) product of a vector \(\vec{b}\) and \(\vec{a}\), corresponds to projecting \(\vec{b}\) onto the line defined by the vector \(\vec{a}\), and the result of this projection is the length of the segment AB as shown in figure 2 (for more information on the properties of the dot product, check the Geometry lesson):
In other words, the dot product of \(L\) and \(D\) simply gives us \(t_{ca}\). Note that there can only be an intersection between the ray and the sphere if \(t_{ca}\) is positive (if it is negative, it means that the vector \(L\) and the vector \(D\) point in opposite directions. If there is an intersection, it could potentially be behind the ray's origin but anything that happens behind the ray's origin is of no use to us). We now have \(t_{ca}\) and \(L\).$$ \begin{array}{l} L=C-O\\ t_{ca}=L \bullet D\\ if \; (t_{ca} \lt 0) \; return \; false \end{array} $$
There is a second right triangle in this construction which is defined by \(d\), \(t_{hc}\) and the radius of the sphere. We know the radius of the sphere already, and we are looking for \(t_{hc}\) which we need to find \(t_0\) and \(t_1\). To get there, we need to compute \(d\). Remember that \(d\) is also the opposite side of the right triangles defined by \(d\), \(t_{ca}\) and \(L\). The Pythagorean theorem says that:$$opposite \; side^2+adjacent \; side^2=hypotenuse^2$$
We can replace the opposite side, the adjacent side and the hypotenuse respectively by \(d\), \(t_{ca}\) and \(L\) and we get:$$ \begin{array}{l} d^2+t_{ca}^2=L^2\\ d=\sqrt{L^2-t_{ca}^2}=\sqrt{L \bullet L - t_{ca} \bullet t_{ca} }\\ if \; (d > radius)\; return \; false \end{array} $$
Note that if \(d\) is greater than the sphere radius, the ray misses the sphere and there's no intersection (the ray overshoots the sphere). We finally have all the terms we need to compute \(t_{hc}\). We can use the Pythagorean theorem again:$$ \begin{array}{l} d^2+t_{hc}^2=radius^2\\ t_{hc}=\sqrt{radius^2-d^2}\\ t_{0}=t_{ca}-t_{hc}\\ t_{1}=t_{ca}+t_{hc} \end{array} $$
In the last paragraph of this section we will show how to implement this algorithm in C++ and make a few optimisations to speed things up.
Analytic Solution
Remember that a ray can be expressed using the following function: \(O+tD\) (equation 1) where \(O\) is a point and corresponds to the origin of the ray, \(D\) is a vector and corresponds to the direction of the ray, and \(t\) is a parameter of the function. By varying \(t\) (which can be either positive or negative) we can compute any point on the line defined by the ray origin and direction. When \(t\) is greater than 0, then the point on the ray is in "front" of the ray's origin. When \(t\) is negative, the point is behind the ray's origin. When \(t\) is exactly 0, the point and the ray's origin are the same.
The idea behind solving the ray-sphere intersection test, is that spheres too can be defined using an algebraic form. The equation for a sphere is:$$ \begin{array}{l} x^2+y^2+z^2=R^2 \end{array} $$
Where x, y and z are the coordinates of a Cartesian point and \(R\) is the radius of a sphere centred at the origin (we will see later how to change the equation so that it works with spheres which are not centred at the origin). It says that there is a set of points for which the above equation is true. This set of points defines the surface of a sphere which is centred at the origin and has radius \(R\). Or more simply, if we consider that x, y, z are the coordinates of point P, we can write (equation 2):$$ \begin{array}{l} P^2-R^2=0 \end{array} $$
This equation is typical of what we call in Mathematics and CG an
implicit function and a sphere expressed in this form is also called an implicit shape or surface. Implicit shapes are shapes which can be defined not in terms of polygons connected to each other for instance (which is the type of geometry you might be familiar with if you have modelled objects in a 3D application such as Maya or Blender) but simply in terms of equations. Many shapes (often quite simple though) can be defined in terms of a function (cube, cone, sphere, etc.). However simple, these shapes can be combined together to create more complex forms. This is the idea behind modeling geometry using blobs for instance (blobby surfaces are also called metaballs). But before we get too far off course here, let's get back to the ray-sphere intersection test (check the advanced section for a lesson on Implicit Modeling).
All we need to do now is to substitute equation 1 into equation 2, that is, to replace P in equation 2 with the equation of the ray (remember that O+tD defines all points along the ray):$$ \begin{array}{l} \begin{matrix}{|O+tD|}\end{matrix}^2-R^2=0 \end{array} $$
When we develop this equation we get (equation 3):$$ \begin{array}{l} O^2+(Dt)^2+2ODt-R^2=O^2+D^2t^2+2ODt-R^2=0 \end{array} $$
which in itself is an equation of the form (equation 4):$$ \begin{array}{l} f(x)=ax^2+bx+c \end{array} $$
with \(a=D^2\), b=2OD and \(c=O^2-R^2\) (remember that x in equation 4 corresponds to \(t\) in equation 3 which is the unknown). Being able to re-write equation 3 into equation 4 is important because equation 4 is known as a
quadratic function. It is a function for which the roots (when x takes a value for which f(x) = 0) can easily be found using the following equations (equation 5):$$ \begin{array}{l} x_{1,2}=\dfrac{-b \pm \sqrt{\Delta}}{2a}, \qquad \Delta=b^2-4ac. \end{array} $$
Note the +/- sign in the formula. The first root uses the sign + and the second root uses the sign -. The letter \(\Delta\) (Greek letter delta) is called the
discriminant. The sign of the discriminant indicates whether there are two, one or no roots to the equation:
When \(\Delta\) > 0 there are two roots, which can be computed with: $$ \begin{array}{l} \dfrac{-b+\sqrt{\Delta}}{2a}\quad and \quad\dfrac{-b-\sqrt{\Delta}}{2a} \end{array} $$ In that case, the ray intersects the sphere in two places (at \(t_0\) and \(t_1\)).
When \(\Delta\) = 0 there is one root, which can be computed with: $$ \begin{array}{l} -\dfrac{b}{2a} \end{array} $$ The ray intersects the sphere in one place only (\(t_0\)=\(t_1\)).
When \(\Delta\) < 0, there is no root (which means that the ray doesn't intersect the sphere).
Since we have a, b and c, we can easily compute these equations to get the values for \(t\) which correspond to the two intersections point of the ray with the sphere (\(t_0\) and \(t_1\) in figure 1). Note that the root values can be negative which means that the ray intersects the sphere but behind the origin. One of the roots can be negative and the other positive which means that the origin of the ray is inside the sphere. There also might be no solution to the quadratic equations which means that the ray doesn't intersect the sphere at all (no intersection between the ray and the sphere).
Before we see how to implement this algorithm in C++, let's see how we can solve the same problem when the sphere is not centred at the origin. We can simply rewrite equation 2 as:$$ \begin{array}{l} |P-C|^2-R^2=0 \end{array} $$
where C is the location of the center of the sphere in 3D space. Equation 4 can now be re-written as:$$|O+tD-C|^2-R^2=0.$$
Which gives us:$$ \begin{array}{l} a =1\\ b=2D(O-C)\\ c=|O-C|^2-R^2 \end{array} $$
In a more intuitive form, this comes back to say that we can translate the ray by -C and test this transformed ray against the sphere as if it was centered at the origin.
"Why a=1?" Because r is a vector which is normally normalized. The result of a vector raised to the power of 2 is the same as a dot product of the vector with itself. We know that dot product of a normalised vector with itself is 1 hence setting a=1. However, you must be very careful in your code because the rays which are tested for intersections with a sphere don't always have their direction vector normalised, in which case you will have to compute the value for a (check code further down). This is a pitfall which is often the source of bugs in the code.
Computing the Intersection Point
Once we know the value for \(t_0\) computing the position of the intersection or hit point is straightforward. We just need to use the ray parametric equation:$$P_{hit} = O + t_0 D.$$
Computing the Normal at the Intersection Point
There are different methods to compute the normal of a point lying on the surface of an implicit shape. One of these methods uses differential geometry which, as mentioned in the first chapter of this lesson, is mathematically quite complex. You can find this solution explained in the lesson on Differential Geometry [link]. For this series of basic lessons on rendering, we will use a much simpler solution instead. The normal of a point on a sphere can simply be computed as the point position minus the sphere centre (don't forget to normalize the resulting vector):$$N = \dfrac{P - C}{\|P - C\|}.$$
Computing the Texture Coordinates at the Intersection Point
Texture coordinates are, interestingly enough, just the spherical coordinates of the point on the sphere remapped to the range [0, 1]. As recalled in the previous chapter and the lesson on Geometry, the Cartesian coordinates of a point can be computed from its spherical coordinates as follows:$$ \begin{array}{l} P.x = \sin(\theta)\cos(\phi),\\ P.y = \cos(\theta),\\ P.z = \sin(\theta)\sin(\phi).\\ \end{array} $$
These equations might look different if you use a different convention. The spherical coordinates \(\theta\) and \(\phi\) can also be found from the point Cartesian coordinates using the following equations:$$ \begin{array}{l} \phi = atan(z, x),\\ \theta = acos(\dfrac{P.y}{R}). \end{array} $$
Where \(R\) is the radius of the sphere. These equations are explained in the lesson on Geometry. Sphere coordinates are useful for texture mapping or procedural texturing. The program of this lesson will show how they can be used to draw a pattern on the surface of the spheres.
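To make the remapping concrete, here is a small sketch (added here; it is not part of the original lesson, and the exact convention may differ from the lesson's program) that converts a hit point into texture coordinates in [0, 1]:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Spherical texture coordinates (u, v) of a hit point P on a sphere with
// centre C and radius R, using the y-up convention of the formulas above.
// phi in (-pi, pi] is remapped to u in [0, 1]; theta in [0, pi] to v in [0, 1].
void sphericalUV(const Vec3& P, const Vec3& C, double R, double& u, double& v) {
    const double kPi = 3.14159265358979323846;
    Vec3 d{P.x - C.x, P.y - C.y, P.z - C.z};   // vector from centre to hit point
    double phi   = std::atan2(d.z, d.x);       // azimuth
    double theta = std::acos(d.y / R);         // polar angle
    u = (phi + kPi) / (2.0 * kPi);
    v = theta / kPi;
}
```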
Implementing the Ray-Sphere Intersection Test in C++
Let's now see how we can implement the ray-sphere intersection test using the analytic solution. We could use equation 5 directly (you can implement it and it will work) to compute the roots but, on computers, we have a limited capacity to represent real numbers with the precision needed to keep the calculation of these roots as accurate as possible. Thus the formula suffers from the effect of what we call a
loss of significance. This happens for instance when $b$ and $\sqrt{\Delta}$ are very close to each other in magnitude: one of the two roots then requires subtracting two nearly equal numbers. Because of the limited precision used to represent floating-point numbers on the computer, in that particular case the significant digits would largely cancel out (this is called catastrophic cancellation), or the result would round off to an unacceptable error (you will easily find more information related to this topic on the internet). However, equation 5 can easily be replaced with a slightly different formulation that proves to be more stable when implemented on computers. We will use instead:
$$ \begin{array}{l} q=-\dfrac{1}{2}\left(b+sign(b)\sqrt{\Delta}\right), \qquad t_0=\dfrac{q}{a}, \qquad t_1=\dfrac{c}{q}. \end{array} $$ Where sign is -1 when b is lower than 0 and 1 otherwise. This formula ensures that the quantities added for q have the same sign, avoiding catastrophic cancellation. Here is how the routine looks in C++:
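(The lesson's original listing is not reproduced in this extract; the following is a minimal sketch of the routine just described, written to match the stable formulation above and kept deliberately simple.)

```cpp
#include <cmath>
#include <utility>

// Solve a*x^2 + b*x + c = 0. Returns false when the discriminant is negative.
// q = -0.5 * (b + sign(b) * sqrt(discr)) adds two quantities of the same sign,
// which avoids the catastrophic cancellation discussed above.
bool solveQuadratic(double a, double b, double c, double& x0, double& x1) {
    double discr = b * b - 4 * a * c;
    if (discr < 0) return false;
    if (discr == 0) { x0 = x1 = -0.5 * b / a; return true; }
    double q = (b > 0) ? -0.5 * (b + std::sqrt(discr))
                       : -0.5 * (b - std::sqrt(discr));
    x0 = q / a;
    x1 = c / q;
    if (x0 > x1) std::swap(x0, x1);
    return true;
}
```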
Finally here is the completed code for the ray-sphere intersection test. For the geometric solution, we have mentioned that we can reject the ray early on if \(d\) is greater than the sphere radius. However, that would require computing the square root of \(d^2\), which has a cost. Furthermore, \(d\) is actually never used in the code. Only \(d^2\) is. Instead of computing \(d\), we test if \(d^2\) is greater than \(radius^2\) (which is the reason why we compute \(radius^2\) in the constructor of the Sphere class) and reject the ray if this test is true. It is a simple way of speeding things up a little.
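(Again, the complete listing from the lesson is not included in this extract; the sketch below shows the geometric version with the \(d^2 > radius^2\) rejection described above. The vector type and the assumption of a normalized ray direction are illustrative choices.)

```cpp
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Geometric ray-sphere test: returns the nearest intersection with t >= 0,
// or false on a miss. radius2 is the precomputed squared radius and the ray
// direction dir is assumed to be normalized.
bool intersectGeometric(const Vec3& orig, const Vec3& dir,
                        const Vec3& center, double radius2, double& t) {
    Vec3 L = center - orig;
    double tca = L.dot(dir);
    if (tca < 0) return false;            // sphere centre is behind the ray
    double d2 = L.dot(L) - tca * tca;     // squared distance from centre to ray
    if (d2 > radius2) return false;       // compare d^2 with radius^2: no sqrt
    double thc = std::sqrt(radius2 - d2);
    double t0 = tca - thc, t1 = tca + thc;
    if (t0 < 0) t0 = t1;                  // first hit behind the origin: use t1
    if (t0 < 0) return false;
    t = t0;
    return true;
}
```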
Note that if the scene contains more than one sphere, the spheres are tested for any given ray in the order they were added to the scene. The spheres are thus unlikely to be sorted in depth (with respect to the camera position). The solution to this problem is to keep track of the sphere with the closest intersection distance, in other words, with the closest \(t\). In the image below, you can see on the left a render of the scene in which we display the latest sphere in the object list that the ray intersected (even if it is not the closest object). On the right, we keep track of the object with the closest distance to the camera and only display this object in the final image, which gives us the correct result. An implementation of this technique is provided in the next chapter.
|
No such liquid, safe or otherwise, can exist. Evaporation is a strictly endothermic process in all cases. The change in state from liquid to gas is marked by the individual particles gaining enough translational kinetic energy to overcome the mutual attractions present in the liquid phase to "fly free" in the gas phase. It is logically inconsistent for a ...
According to the Intergovernmental Panel on Climate Change (IPCC): "Greenhouse gases are those that absorb and emit infrared radiation in the wavelength range emitted by Earth." In order for a molecule to absorb and emit in the infrared (IR) region, its chemical bonds must rotate and vibrate in a manner that affects something called the molecule's ...
Thermite is a solid-solid reaction that I think would be greatly inhibited by any sort of intermediate. In either case, the smoke, the dust and the copious amounts of nitrous oxides produced will not make your asthma any better off than if you were to pick up a shovel. And then there will likely be other safety-, legal-, environment- and relationship-with-...
The wick temperature does not have to be the same as the flame temperature. The flame is hottest at the bottom, but the wick is hottest at the top. For a candle, the wick burning isn't the intended purpose of the wick; light comes from burning wax (more generally: fuel), you want to burn the wax not the wick. Rather the purpose of a wick is to help fuel ...
When you add salt to an ice cube, you end up with an ice cube whose temperature is above its melting point. This ice cube will do what any ice cube above its melting point will do: it will melt. As it melts, it cools down, since energy is being used to break bonds in the solid state. (Note that the above point can be confusing if you're new to thinking ...
Heating water on a hot plate is safe, because the hottest point is at the bottom of the pot. A lot of relatively small bubbles appear there without much overheating of the water, because there is a lot of nucleation at the uneven phase boundary steel-water. In a microwave, the hottest place is IN the water. The glass does not get heated by microwave (at ...
Because fire is not the same thing as light. Michael Faraday did a wonderful job of explaining how the candle works, and I direct you to look at it (there are also Youtube videos giving a modern take on this work) if you're interested. In short, the candle produces light, not because it is hot, but because it is sooty. The particles of soot glow when they ...
Well, let's do some math: Assuming 30 mL of water is 30 g, and we want to heat our water from 20 °C to 90 °C, the energy required is:$$\begin{align}E&=C_Pm\Delta K \\&=\left(4.18 \mathrm{\frac{ J}{gK}}\right)(30\mathrm{\ g})(70\mathrm{\ K})\\&=8.778\mathrm{\ kJ}\end{align}$$ So how much power do we need to do this in a given time? "Instant" ...
Diamond is one of the best thermal conductors known, in fact diamond is a better thermal conductor than many metals (thermal conductivity (W/m-K): aluminum=237, copper=401, diamond=895). The carbon atoms in diamond are $\ce{sp^3}$ hybridized and every carbon is bonded to 4 other carbon atoms located at the vertices of a tetrahedron. Hence the bonding in ...
In addition to the points Stian raised, it's important to realize that thermite is very difficult to ignite. The ignition temperature is very, very high, higher than you can easily get by burning ordinary fuels such as butane, which means that whatever ignition method you use, if it's hot enough to ignite the thermite, would most likely set off the butane ...
Let’s consider the following cases:
getting $1\,\mathrm{mol}$ of $100\,\mathrm{°C}$ water on one’s skin
getting $1\,\mathrm{mol}$ of $100\,\mathrm{°C}$ air on one’s skin
getting $1\,\mathrm{mol}$ of $100\,\mathrm{°C}$ water vapour on one’s skin
With the slightly unrealistic assumption that all of these liberate all their thermal energy to the skin while ...
Why won't water freeze if you put ice in it? It will, even at room temperature. You just need a big enough, cold enough ice cube. Don't believe it? Add a few drops of water to an ice cube in an ice cube tray (which is the same as adding an ice cube to a few drops of water). Wait a few seconds, turn the tray upside down. No water will fall, presumably ...
Many organic reactions are unreasonably slow and can take an extended period of time to achieve any noticeable effect so heating is often used to increase the rate of reaction. However, many organic compounds have low boiling points and will vaporise upon exposure to such high heat, preventing the reaction from proceeding in full. To address this, heating ...
When heat is leaving earth it leaves as infrared radiation. Greenhouse gases are gases that are able to absorb this infrared radiation. If we look at the infrared emission spectrum from Earth[1]: We can see that between $\pu{400 cm-1}$ and $\pu{700 cm-1}$, a lot of the infrared radiation is absorbed by $\ce{CO2}$. Gases like $\ce{N2}$ and $\ce{O2}$ don't ...
A container that is not at ambient temperature will generate air currents around it. If you place such a non-ambient container in a balance, air currents will develop around the container as it heats or cools to ambient. These air currents will cause the balance to read incorrectly.
Absolutely yes. Lighting a torch in such an environment would simply be the reverse physical process (and same chemical process) of what is done in our oxygen-containing atmosphere. In the chamber or alien world of hydrogen gas, providing an ignition source to a stream of oxygen would give a flame. The chemical reaction would actually be the same as if ...
Ron's answer is great, but I'd just like to touch on the mechanisms behind thermal conductivity so we can rationalize the differences between the behaviour of diamond, graphite, and metals: There are two ways in which heat is transmitted through solids: phonons and electronic conductivity. The latter occurs in electrically conductive solids, where ...
Your approach is quite correct, but as Jan already pointed out, it is incomplete. What you calculated is the difference in the electronic energy of the reaction $$\ce{H2 (g) + 1/2 O2 (g,\,{}^1\Delta_{g}) -> H2O (g)}.$$Let's turn this into some kind of a tutorial and a little exercise, that you can try to reproduce at home. For a more detailed ...
Most heat capacities go through a maximum as the temperature increases. $C_V = \left( \frac{dU}{dT} \right)_V$ so a maximum in $C_V$ corresponds to a minimum in $\left( \frac{dT}{dU} \right)_V$, i.e. the point where the temperature changes very little as energy is being supplied to the system. At this point (most of) the energy is being used to excite ...
Preliminaries
Consider $U = U(V,T, p)$. However, assuming that it is possible to write an equation of state of the form $p = f(V,T)$, I don't have to explicitly address the $p$ dependence of $U$, and I can write the following differential:$$\mathrm{d}U = \underbrace{\left ( \frac{\partial U}{\partial V} \right)_T}_{\pi_T} \mathrm{d}V + \underbrace{\...
What actually happens in real life depends on a lot of things. Factors like the shape of the pot can make a big difference to how much of that 2500 W actually goes into vaporizing the water and how much of it is lost to the surroundings. If we assume that all 2500 W is going into the water, it makes things a lot simpler. If the water is already at the ...
Water has hydrogen bonding in it. Hydrogen bonding is some kind of intermolecular force (a tutorial and the wikipedia page) that is usually seen in molecules that have $\ce{OH}$, $\ce{NH}$ or $\ce{FH}$ somewhere in their structure. How does it happen? The hydrogen atom is really small (atomic radius: about 37 pm). When it bonds with some very electronegative ...
The heat capacities are defined as$$C_p = \left(\frac{\partial H}{\partial T}\right)_{\!p} \qquad \qquad C_V = \left(\frac{\partial U}{\partial T}\right)_{\!V} \tag{1}$$and since $H = U + pV$, we have$$\begin{align}C_p - C_V &= \left(\frac{\partial H}{\partial T}\right)_{\!p} - \left(\frac{\partial U}{\partial T}\right)_{\!V} \tag{2} \\&= \...
I'm not at all sure that thermite is a useful solution to this problem. The thermite reaction takes place between iron oxide and aluminium, producing a lot of heat and molten iron as a result. This makes it useful for applications like demolition of steel structures and on-site welding of railway rails. In this sort of situation it is a reasonably compact, ...
From the comments: What do you want to do with the hot water? Swim in it. I was thinking of thousands of liters. That's an interesting idea, but unfortunately, I don't think adding chemicals to a pool in order to heat it is a good idea (especially yellowish chemicals). The water temperature would drop in a few hours, tops, and you'd have to be ...
You’re looking for materials that either react with water with a large reaction enthalpy $\Delta H_\mathrm{r}$, or have a large enthalpy of solution $\Delta H_\mathrm{sol}$ (which really amounts to the same thing but with a slightly different scope).The problem with this as a general method, though, is that water has a really high heat capacity: take a ...
The Enthalpy $H$ is defined as $H=U+PV$. Therefore,$$\Delta H=\Delta U + P\Delta V +V\Delta P$$For an adiabatic process, $q=0$. therefore from the first law of thermodynamics,$$\Delta U = q +w =q-P\Delta V$$$$\Delta U=w=-P\Delta V$$Substituting this in the first equation you get,$$\Delta H=V\Delta P$$If $\Delta P$ is zero during the process (...
The mode of heating of a water glass in a microwave and on a stove is actually very similar. While it's true that microwave radiation penetrates somewhat into the body of water, the penetration depth is rather small.The main problem is that on a stove, you get uniform heating from the bottom, with temperature usually far higher than the boiling point of ...
Compounds A-D all have the same molecular formula, $\ce{C6H12}$. We can burn each compound and measure the heat given off (heat of combustion). Since they are isomers, they will each burn according to the same equation$$\ce{C6H12 + 9O2 -> 6CO2 + 6H2O + heat}$$Any differences in the heat given off can be used to say that a compound is more stable (it ...
|
I am trying to show that for a convex set $A$ and $s,r>0$ positive real numbers we have $rA+sA = (r+s)A$.
Clearly $(r+s)A$ is contained in $rA+sA$ but I am having trouble showing the other inclusion.
Here's one trick—If you divide each of $r$ and $s$ by $r+s$, you get a number between 0 and 1, which allows you to make a convex sum:
Suppose $z \in rA + sA$. Then $z = rx + sy$ for some $x,y \in A$.
Because $r$ and $s$ are positive, the numbers $t_1 \equiv \frac{r}{r+s}$ and $t_2 \equiv \frac{s}{r+s}$ are between 0 and 1.
Because $A$ is convex and $x,y\in A$, we know that $t_1 x + t_2 y \in A$.
But $z = (r+s)(t_1x + t_2 y)$, hence $z \in (r+s)A$.
To show the other inclusion $ rA+sA \subset (r+s)A $:
let $\zeta \in rA+sA $, then there exist $ x,y \in A$, such that $ \zeta=rx+sy$.
Now we set $\eta =\frac{r}{r+s}x+\frac{s}{r+s}y \in A $(convexity), obviously, $$ \zeta =(r+s)\eta \in (r+s)A $$ that is, $$ rA+sA \subset (r+s)A $$
let $a \in rA + sA$
then $\exists b \in A, c \in A, a=rb+sc$
$$a= (r+s) \left(\frac{r}{r+s}b + \frac{s}{r+s}c \right)$$
Note that we have $\left(\frac{r}{r+s}b + \frac{s}{r+s}c \right) \in A$
|
I have a function $f:\mathbb{C}\to\mathbb{C}$ that is analytic on $A = \{s\mid \mathfrak{R}(s)\geq1\}$ and does not vanish on $A$.
Intuitively, I would expect this to mean that $f$ has an analytic logarithm on $A$, but I am not sure if that is true.
I know that it has a holomorphic logarithm on $\{s\mid\mathfrak{R}(s)>1\}$, since that is open and simply connected.
It is also clear that there exists an open set $B \supseteq A$ in which $f$ is holomorphic and does not vanish, but I am not sure if I can assume $B$ to be simply connected, so I cannot simply conclude that $f$ has a holomorphic logarithm on $B$.
It is also clear that, at every point on the $\mathfrak{R}(s) = 1$ line, I can find some open neighbourhood in which $f$ is holomorphic and does not vanish, so I can extend the logarithm of $f$ on $A$ to that neighbourhood analytically. But then I have a ‘different’ logarithm for each point on that line and I need to somehow stitch them together, which is not clear to me.
So my question is: Is my intuition correct? Can I somehow justify the existence of an analytic logarithm of $f$ on $A$?
|
Given a general 2nd degree homogeneous differential equation $y'' +p(x)y' +q(x)y=0$. Change the independent variable from $x$ to $z=z(x)$. Show that the above given homogeneous 2nd order differential equation can be transformed into an equation with constant coefficients if and only if $(q' + 2pq)/q^{3/2}$ is constant.
When we change variables from $x$ to $z(x)$ what we're doing is effectively looking for a function $z$ that shifts $y$ and its derivatives so that $p(x)$ and $q(x)$ are fixed relative to them, allowing us to consider $p$ and $q$ as constants in this new variable.
By the chain rule, \begin{equation} \frac{\mathrm d}{\mathrm{d}x} y(z(x)) = \frac{\mathrm d}{\mathrm{d}z} y(z) \frac{\mathrm{d}z}{\mathrm{d}x} \end{equation} and \begin{equation} \frac{\mathrm d}{\mathrm{d}x} (\frac{\mathrm d}{\mathrm{d}z} y(z) \frac{\mathrm{d}z}{\mathrm{d}x}) = \frac{\mathrm{d}^2}{\mathrm{d}z^2} y(z) (\frac{\mathrm{d}z}{\mathrm{d}x})^2 +\frac{\mathrm d}{\mathrm{d}z} y(z) \frac{\mathrm{d}^2 z}{\mathrm{d}x^2} \end{equation}.
Substituting into the original equation and rearranging, we get
\begin{equation} y''(z) +y'(z)(\frac{z''(x) +p(x)z'(x)}{(z'(x))^2}) + \frac{q(x)}{(z'(x))^2} y(z) = 0 \end{equation}
And so for the new coefficients to be constant, we require $\frac{z''(x) +p(x)z'(x)}{(z'(x))^2}= A_1$ and $\frac{q(x)}{(z'(x))^2} = \frac{1}{A_2}$ where $A_1,A_2$ are constants.
the second of these is easier to solve, giving us $z'(x) = \sqrt{A_2q(x)}$, which we can differentiate to give \begin{equation}z''(x) = \frac{1}{2}\frac{A_2}{\sqrt{A_2 q(x)}} q'(x) \end{equation} We can then plug this into the equation for the other coefficient and solve for $z$, \begin{equation} \frac{\frac{1}{2}\frac{A_2}{\sqrt{A_2 q(x)}} q'(x) +p(x)\sqrt{A_2 q(x)}}{A_2 q(x)} = A_1 \end{equation} Multiplying the top and bottom of the LHS by $\sqrt{A_2 q(x)}$ and rearranging gives us \begin{equation} \frac{q'(x) + 2p(x)q(x)}{q(x)^\frac{3}{2}} = 2\sqrt{A_2}A_1 \end{equation} and so the desired expression is clearly constant.
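As a quick illustration of this criterion (this example is an addition, not part of the original answers, and assumes $b>0$ so that $q^{3/2}$ is real): for Euler's equation $$y'' + \frac{a}{x}\,y' + \frac{b}{x^2}\,y = 0, \qquad p(x)=\frac{a}{x},\quad q(x)=\frac{b}{x^2},$$ we get $$\frac{q'+2pq}{q^{3/2}} = \frac{-\dfrac{2b}{x^3}+\dfrac{2ab}{x^3}}{\dfrac{b^{3/2}}{x^3}} = \frac{2(a-1)}{\sqrt{b}},$$ which is constant; and indeed the substitution $z=\ln x$ (so that $z'(x)=1/x \propto \sqrt{q(x)}$, matching the condition $z'=\sqrt{A_2 q}$ above) turns Euler's equation into one with constant coefficients.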
In the question nothing about the domain. It becomes problematic when $q^{3/2}$ appears. What can we say if $q<0$ and everything is real? Using the calculations of Baron we can write $$ \begin{align} A_2q &=(z')^2, \\ A_2q'&=2z'z'',\\ z''&=\frac{A_2q'}{2z'}. \end{align}$$ Substituting back we obtain $$ \frac{A_2q'+2p(z')^2}{2(z')^3}=A_1. $$ Since $(z')^2=A_2q$, from the last formula we have $$ A_2(q'+2pq)=2A_1(z')^3. $$ Squaring both side and using again the expression for $(z')^2$ we arrive at $$ \frac{(q'+2pq)^2}{q^3}=4A_1^2A_2. $$
Chris Cunningham's answer here seems to have an alternative proof.
(That's probably just stupidity, but I still can't follow Baron Mingus' proof - shouldn't it be p(z) when you substitute?)
|
Informally, two functions $f$ and $g$ are
inverses if each reverses, or undoes, the other. More precisely:
Definition 9.1.1 Two functions $f$ and $g$ are inverses if for all $x$ in the domain of $g$, $f(g(x))=x$, and for all $x$ in the domain of $f$, $g(f(x))=x$.
Example 9.1.2 $f=x^3$ and $g=x^{1/3}$ are inverses, since $\ds (x^3)^{1/3}=x$ and $(x^{1/3})^3=x$.
Example 9.1.3 $f=x^2$ and $g=x^{1/2}$ are not inverses. While $(x^{1/2})^2=x$, it is not true that $(x^2)^{1/2}=x$. For example, with $x=-2$, $((-2)^2)^{1/2}=4^{1/2}=2$.
The problem in the previous example can be traced to the fact that there are two different numbers with square equal to 4. This turns out to be precisely descriptive of functions without inverses.
Definition 9.1.4 Let $A$ and $B$ be sets and let $f:A\to B$ be afunction. We say that $f$ is
injective or one-to-one if $f(x)=f(y)$ implies that $x=y$.
We say that $f$ is
surjective or onto if for every $b\in B$ there is an $a\in A$ such that $f(a)=b$.
If $f$ is both injective and surjective, then $f$ is
bijective or one-to-one and onto.
We are interested only in the case that $A$ and $B$ are sets of real numbers, and in this case there is a nice geometric interpretation of injectivity. It is often easy to use this interpretation to decide whether a function is or is not 1-1.
Theorem 9.1.5 (Horizontal line test) If $f$ is a function defined on some subset of the real numbers, then $f$ is injective if and only if every horizontal line intersects the graph of $f$ at most once.
Example 9.1.6 The function $f=x^2$ fails this test: horizontal lines $y=k$ for $k>0$ intersect the graph of $f$ twice. (The horizontal line $y=0$ does intersect it only once, and lines $y=k$, $k< 0$, do not intersect the graph at all.)
Example 9.1.7 In each of these cases, we assume that $f\colon A\to \R$, where $A$ is the set of all real numbers for which $f$ makes sense.
The function $f(x)=x $ is bijective.
The function $f(x) = x^2 $ is neither injective nor surjective. If we think of $f$ as a function from $\R$ to the non-negative real numbers, then $f$ is surjective; in other words, if a function is not surjective this is not a major stumbling block.
The function $f(x) =1/x$ is injective but not surjective since there is no value of $x$ such that $f(x)=0$.
The function $f(x) = x(x-1)(x+1) $ is surjective but not injective; $f(x) =0 $ for three different values of $x$. On the other hand $\ds\lim_{x\to \infty} f(x)=\infty$ and $\ds\lim _{x\to -\infty} f(x)=-\infty$. Since $f$ is continuous on $\R$, the intermediate value theorem (2.5.6) guarantees that $f$ takes all values between $-\infty $ and $\infty$.
The derivative furnishes us with a convenient criterion for injectivity without explicitly looking for points where injectivity may fail.
Theorem 9.1.8 If $f'(x)>0$ for every $x$ in an interval, then $f$ is injective on that interval.

Proof. Suppose that $f(a) =f(b) $ for some $a< b$. By Rolle's theorem (6.5.1) there exists $c\in (a,b) $ such that $f'(c) =(f(b) -f(a))/(b-a)=0$, which contradicts the hypothesis that $f'(x) >0 $. Hence, if $f(a)=f(b)$ then $a=b$.
In the same way, we can see that if $f'(x)< 0$ then $f$ is injective.
Example 9.1.9 Let $f(x) =x^5 + x $. Since $f'(x) = 5x^4 + 1 >0$, $f$ is injective.
Example 9.1.10 Let $f(x) =2x+\sin x $. Then $f'(x) = 2+\cos x \geq 1$ for every $x$. Hence, $f$ is injective.
Example 9.1.11 Let $f(x) =x^3 $. This $f$ is injective although the above theorem does not apply, since $f'(0)=0$. Therefore, the conditions in the theorem are sufficient but not necessary.
Our knowledge of derivatives can also lead us to conclude that a function is not injective.
Proposition 9.1.12 If $f$ is continuous on an interval and has a local maximum or minimum at an interior point of the interval, then $f$ is not injective on that interval.

Proof. Suppose that $f$ has a local maximum at $x=c$; the case of a local minimum is similar. Then in some interval $(c-h, c+h)$, $f(x)\le f(c)$. Let $a\in (c-h, c)$. If $f(a) =f(c)$ then $f$ is not injective; otherwise, $f(a) < f(c)$.
Let $b\in (c,c+h)$. If $f(b) = f(c)$ or $f(b) =f(a) $ then $f$ is not injective. Otherwise, either $f(b) < f(a)< f(c) $ or $f(a) < f(b)< f(c) $. If $f(b) < f(a) $ then by the intermediate value theorem, there is a number $d\in (c, b) $ such that $f(d) =f(a)$ and so $f$ is not injective. Likewise, if $f(a) < f(b) $ then there is a number $d$ in $(a,c) $ such that $f(d) = f(b)$ and so $f$ is not injective.
In every case, we see that $f$ is not injective.
To return to our principal interest, inverse functions, we now connect bijections and inverses.
Theorem 9.1.13 Suppose $f\colon A\to B$ is a bijection. Then $f$ has an inverse function $g\colon B\to A$.
Proof. Suppose $b\in B$. Since $f$ is onto, there is an $a\in A$ such that $f(a)=b$. Since $f$ is 1–1, $a$ is the only element of $A$ with this property. We let $g(b)=a$. Now it is easy to see that for all $a\in A$, $g(f(a))=a$ and for all $b\in B$, $f(g(b))=b$.
We really don't have any choice about how to define $g$ in this proof; if $f$ is a bijection, its inverse is completely determined. Thus, instead of using a new symbol $g$, we normally refer to the inverse of $f$ as $f^{-1}$.
Unfortunately, it is often difficult to find an explicit formula for the inverse of a given function, $f$, even if it is known that $f$ is bijective. Generally, we attempt to find an inverse in this way:
1. Write $y=f(x)$.
2. Interchange $x$ and $y$.
3. Solve for $y$.
4. Replace $y$ with $f^{-1}(x)$.
Step 3 is the hard part; indeed it is sometimes impossible to perform using algebraic operations.
Example 9.1.14 Find the inverse of $f(x) =(2x-6)/(3x+7)$. First we write $x=(2y-6)/(3y+7)$. Now we solve for $y$: $$\eqalign{ x&= {2y-6\over3y+7}\cr x(3y+7) &= 2y-6\cr 3xy+7x &= 2y - 6\cr 7x+6 &= 2y-3xy\cr 7x+6 &= y(2-3x)\cr {7x+6\over 2-3x} &= y\cr }$$ Finally, we say $f^{-1} (x) =(7x+6)/(2-3x)$
Example 9.1.15 Find the inverse function of $f(x) =x^2 - 4x + 8 $ where $x \geq 2 $. What are the domain and range of the inverse function?
First, $y =x^2 - 4x + 8$ becomes $x=y^2-4y+8$. Now we complete the square: $x=(y-2)^2 +4$ and rearrange to get $x-4=(y-2)^2$. Since in the original function $x-2\ge 0$, and we have switched $x$ and $y$, we know that $y-2\ge 0$. Thus taking the square root, we know $y-2=\sqrt{x-4}$, {\bf not} $y-2=-\sqrt{x-4}$. Finally we write $y=f^{-1}(x)=2+\sqrt{x-4}$. The domain of $f^{-1}$ is $x\geq 4$ and the range is $y\geq 2$.
While simple in principle, this method is sometimes difficult or impossible to apply. For example, consider $f(x)=x^3 + x $. Since $f'(x) = 3x^2 +1 >0 $ for every $x$, $f$ is injective. (In fact it is bijective.) To find the inverse as above, we would need to solve $x=y^3+y$ for $y$; while possible, this is considerably more difficult than solving the quadratic of the previous example. Some simple looking equations are impossible to solve using algebraic manipulation.
For example, consider $f(x) =x^5 + x^3 + x +1$ a "quintic'' polynomial (i.e., a fifth degree polynomial). Since $f'(x)= 5x^4 + 3x^2 + 1>0$, $f$ is injective (and indeed $f$ is bijective). If there were a quintic formula, analogous to the quadratic formula, we could use that to compute $f^{-1}$. Unfortunately, no such formula exists—fifth degree equations cannot in general be solved. (There are exceptions; $x^5=1$ can be solved, for example.)
Fortunately, it is often more important to know that a function has an inverse than to be able to come up with an explicit formula. Once an inverse is known to exist, numerical techniques can often be employed to obtain approximations of the inverse function. Thus, theorem 9.1.8 and proposition 9.1.12 provide useful criteria for deciding whether a function is invertible.
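As an illustration of this last remark (added here; not part of the original text), the sketch below approximates $f^{-1}(y)$ for the increasing function $f(x)=x^5+x^3+x+1$ discussed above by bisection. The bracket $[-10,10]$ is an arbitrary choice for the demonstration; since $f(1)=4$, the call below should return a value very close to $1$.

```cpp
#include <cstdio>

double f(double x) { return x * x * x * x * x + x * x * x + x + 1; }  // strictly increasing

// Approximate f^{-1}(y) by bisection on [lo, hi], assuming f(lo) <= y <= f(hi).
double inverse(double y, double lo, double hi) {
    for (int i = 0; i < 60; ++i) {        // 60 halvings: close to machine precision
        double mid = 0.5 * (lo + hi);
        if (f(mid) < y) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    double x = inverse(4.0, -10.0, 10.0); // solve x^5 + x^3 + x + 1 = 4
    std::printf("f^{-1}(4) ~= %.10f, check: f(x) = %.10f\n", x, f(x));
    return 0;
}
```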
We now turn to the calculus of inverse functions.
Theorem 9.1.16 Let $A$ be an open interval and let $f:A\to \R$ be injective and continuous. Then $f^{-1}$ is continuous on $f(A)$.
Proof. Since $A$ is an open interval and $f$ is injective and continuous it follows by proposition 9.1.12 that $f$ has no local maxima or minima. Hence, $f$ is either strictly increasing or strictly decreasing. Without loss of generality, $f$ is strictly increasing.
Fix $b\in f(A)$ . Then there exists a unique $a\in A$ such that $f(a) = b $. Let $\epsilon > 0 $ and we may assume that $(a-\epsilon , a+\epsilon ) \subseteq A $. Let $\delta =\min \{ b- f(a-\epsilon ) , f(a+\epsilon ) -b \}$ and note that $\delta >0 $ since $f$ is increasing. Then the interval $(b-\delta , b+\delta ) $ is mapped by $f^{-1} $ into $(a-\epsilon , a+\epsilon ) $. Since $\epsilon $ was arbitrary, it follows that $f^{-1 }$ is continuous at $b$.
Our principal interest in inverses is the simple relationship between the derivative of a function and its inverse.
Theorem 9.1.17 (Inverse function theorem) Let $A$ be an open interval and let $f:A\to \R$ be injective and differentiable. If $f'(x) \neq 0 $ for every $x\in A $ then $f^{-1}$ is differentiable on $f(A)$ and $(f^{-1})'(x) = 1/f'(f^{-1}(x))$.
Proof. Fix $b\in f(A)$. Then there exists a unique $a\in A$ such that $f(a)=b$. For $y\neq b $, let $x=f^{-1} (y) $. Since $f$ is differentiable, it follows that $f$ and hence $f^{-1}$ are continuous.
Then $$\lim_{y\to b}{f^{-1} (b) - f^{-1} (y)\over b-y} = \lim_{x\to a}{a-x\over f(a) -f(x)} ={1\over f'(a)}.$$
In Leibniz notation, this can be written as $\ds{dx\over dy} ={1\over dy/dx}$, which is easy to remember since it looks like ordinary fractional algebra.
Example 9.1.18 Let $f(x) = 3x^3 + 5x - 7$. Since $f(0)=-7$, $f^{-1} (-7) =0$. Since $f'(x) = 9x^2 + 5$, $f'(0)=5$ and so $(f^{-1})' (-7)=1/f'(0)=1/5$.
Exercises 9.1
Ex 9.1.1 Which of the following functions are injective? Which are surjective? Which are bijective? Sketch the graph of each function to illustrate your answers.
a. $f\colon \R\to\R$, $f(x)=x^2$.
b. $f\colon [0, \infty )\to\R$, $f(x)=x^2$.
c. $f\colon \R\to[0, \infty )$, $f(x)=x^2$.
d. $f\colon (-\infty,0]\to[0, \infty )$, $f(x)=x^2$.
Ex 9.1.2 Which of the following functions are injective? Which are surjective? Which are bijective? Sketch the graph of each function to illustrate your answers.
a. $f\colon \R\to\R$, $f(x)=x^3$.
b. $f\colon [0, \infty )\to\R$, $f(x)=\sqrt{x}$.
c. $f\colon \R\to[-1, 1]$, $f(x)=\sin x$.
d. $f\colon [0,\pi]\to[-1, 1]$, $f(x)=\cos x$.
Ex 9.1.3 Define $$f(x)=\cases{\ds-{1\over x}&$x\neq 0$\cr\ds 1&$x=0$\cr}$$
Show that $f$ is not injective on $\R$. Show that $f'(x) >0 $ for $x\neq 0 $. Why does this not contradict theorem 9.1.8?
Ex 9.1.4 Define $$f(x)=\cases{\ds 1-x&$-2\leq x< 0$\cr\ds 0&$x=0$\cr\ds 10+x&$0< x \leq 2$\cr}$$
Show that $f$ is injective and has a local minimum. Why does this not contradict theorem 9.1.12?
Ex 9.1.5 If $A=\R$ sketch the graph of the identity function on $A$.
Ex 9.1.6 Find the inverse function of $f(x) =(4x-6)/(7x+ 5)$. What are the domain and range of $f^{-1}$?
Ex 9.1.7 Find the inverse function of $f(x) = 11x/(13x-6)$. What are the domain and range of $f^{-1}$?
Ex 9.1.8 Find the inverse function of $f(x)=ax+b $ when $a\neq0$. What are the domain and range of $f^{-1}$?
Ex 9.1.9 Find the inverse function of $f(x) = 1/(cx+d)$ when $c\neq 0$. What are the domain and range of $f^{-1}$?
Ex 9.1.10 Suppose that $ad-bc \neq 0$. Find the inverse function of $f(x) = (ax+b)/(cx+d)$. What are the domain and range of $f^{-1}$? (The domain and range will depend on which if any of $a,b,c$, and $d$ are zero.)
Note: The condition $ad-bc \neq 0 $ is a technical condition which ensures both that the domain of $f$ will be all real numbers with perhaps one exception and that the range of $f$ will be all real numbers with perhaps one exception.
Ex 9.1.11 Find the inverse function of $f(x) =|x-4|$ for $x\leq4$. What are the domain and range of $f^{-1}$?
Ex 9.1.12 Find the inverse function of $f(x) = \sqrt{x-5}$. What are the domain and range of $f^{-1}$?
Ex 9.1.13 Find the inverse function of $f(x) = x^3 - 5$. What are the domain and range of $f^{-1}$?
Ex 9.1.14 Find the inverse function of $f(x) =x^7 - 2$. What are the domain and range of $f^{-1}$?
Ex 9.1.15 Find the inverse function of $f(x) =2x^2 +8x - 4 $ for $x\geq -2 $. What are the domain and range of $f^{-1}$?
Ex 9.1.16 Find the inverse function of $f(x) =x^2 -9x + 10 $ for $x\leq 3 $. What are the domain and range of $f^{-1}$?
Ex 9.1.17 Find the inverse function of $f(x)= x^2 +bx+ c$ for $x\geq-b/2$. What are the domain and range of $f^{-1}$?
Ex 9.1.18 Find the inverse function of $f(x)= x^2 +bx+ c$ for $x\leq -b/2$. What are the domain and range of $f^{-1}$?
Ex 9.1.19 Find the inverse function of $f(x) =(1+\sqrt{x})/(1-\sqrt{x})$. What are the domain and range of $f^{-1}$?
Ex 9.1.20 Show that $f(x) =x^7 + 3x $ has an inverse function on $\R$.
Ex 9.1.21 Show that $f(x) =x^{19/9} +x^5 $ has an inverse function on $\R$.
Ex 9.1.22 Note that the point $P(a, f(a)) $ is on the graph of $f$ and that $Q(f(a), a) $ is the corresponding point on the graph of $f^{-1} $.
a. Show that if $a\neq f(a)$ then the slope of the line segment $PQ$ is $-1 $.
b. Conclude that if $a\neq f(a)$ the line segment $PQ$ is perpendicular to the graph, $L$, of the identity function on $\R$.
c. Show that the midpoint of $PQ$ is on $L$.
d. Conclude that the graph of $f^{-1}$ is the graph of $f$ reflected through $L$.
Ex 9.1.23 Let $f(x) =x^3 +x $. Sketch the graph of $f$ and $f^{-1} $ on the same diagram.
Ex 9.1.24 Let $f(x) =x^5 + x^3 +1 $. Sketch the graph of $f$ and $f^{-1} $ on the same diagram.
Ex 9.1.25
a. Suppose that $f$ is an increasing function on $\R$. What can you say about $f^{-1}$?
b. Suppose that $f$ is a concave up function on $\R$. What can you say about $f^{-1} $?
In both parts, use exercise 22 to illustrate your claim.
Ex 9.1.26 Let $f(x) = 3x^3 + 9x + 4$. Compute $(f^{-1})'(4)$.
Ex 9.1.27 Let $f(x) = 2x^2 + 11$. Show that $f$ is increasing at $x=0$. Thus, there is an interval $I$ containing $0$ such that $f$ is injective on $I$. Compute $(f^{-1})'(11)$.
Ex 9.1.28 Let $f(x) = ax+b$ with $a\neq 0$. Compute $(f^{-1})'(b)$. Why do we need the condition $a\neq 0$?
Ex 9.1.29 Let $f(x) =ax^2 +bx + c $ with $b\neq 0$. Compute $(f^{-1})'(c)$. Why do we need the condition $b\neq 0$?
Ex 9.1.30 Let $f(x) =a_n x^n + a_{n-1} x^{n-1 } + \cdots + a_1 x +a_0 =\sum_{k=0}^n a_k x^k$ with $a_1 \neq 0$. Compute $(f^{-1} )'(a_0)$. Why do we need the condition $a_1 \neq 0$?
Ex 9.1.31 Suppose that $f$ is injective on some interval containing $3$. If $f(3)=4$ and $f'(3)=6$ what is $(f^{-1})'(4)$?
|
Visitors sometimes feel bored with our blog because of too much material that does not often appear in their casual work or study. I just want to post a simple tutorial for beginners, so if you are experienced in CTF pwn then just skip it. Enjoy!
Chinese Remainder Theorem =========================
Suppose $m_1, m_2, \ldots, m_k$ are positive integers that are pairwise coprime. For any sequence of integers $a_1, a_2, \ldots, a_k$, there exists an integer $x$ solving the following system of congruence equations: $$x \equiv a_1 \pmod{m_1}, \quad x \equiv a_2 \pmod{m_2}, \quad \ldots, \quad x \equiv a_k \pmod{m_k}.$$
There exists a unique solution modulo $M = m_1 m_2 \cdots m_k$ of the system of simultaneous congruences above: $$x \equiv a_1 M_1 y_1 + a_2 M_2 y_2 + \cdots + a_k M_k y_k \pmod{M},$$
in which:
$$\begin{aligned} M &= m_1 \cdots m_k, \\ M_1 &= \frac{M}{m_1}, \; \cdots, \; M_k = \frac{M}{m_k}, \\ y_1 &\equiv (M_1)^{-1} \pmod{m_1}, \; \cdots, \; y_k \equiv (M_k)^{-1} \pmod{m_k}. \end{aligned}$$
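For concreteness, here is a small sketch (an addition to the original post) that computes this unique solution. It assumes pairwise-coprime moduli and small non-negative inputs, with no protection against 64-bit overflow.

```cpp
#include <cstdint>
#include <vector>

// Extended Euclid: returns g = gcd(a, b) and sets x, y so that a*x + b*y = g.
int64_t extGcd(int64_t a, int64_t b, int64_t& x, int64_t& y) {
    if (b == 0) { x = 1; y = 0; return a; }
    int64_t x1, y1;
    int64_t g = extGcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

// Solve x = a[i] (mod m[i]) for pairwise-coprime m[i]; returns x mod M.
int64_t crt(const std::vector<int64_t>& a, const std::vector<int64_t>& m) {
    int64_t M = 1;
    for (int64_t mi : m) M *= mi;
    int64_t x = 0;
    for (std::size_t i = 0; i < m.size(); ++i) {
        int64_t Mi = M / m[i];
        int64_t yi, tmp;
        extGcd(Mi % m[i], m[i], yi, tmp);              // yi = Mi^{-1} (mod m[i])
        yi = ((yi % m[i]) + m[i]) % m[i];
        x = (x + a[i] % m[i] * Mi % M * yi) % M;       // accumulate a_i * M_i * y_i
    }
    return (x % M + M) % M;
}
```

For example, crt({2, 3, 2}, {3, 5, 7}) returns 23, which indeed satisfies all three congruences.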
THRESHOLD SIGNATURE SCHEME
Introduction
Assume there are 20 employees in a company. If each employee has his or her own copy of the secret key, then it is hard to rely on individuals, because of key compromise and machine breakdown. On the other hand, if a valid signature requires the signatures of all 20 employees in the company, then it will be very secure but not easy to use. Therefore we can implement a scheme in which a signature by any 5 or more of the 20 employees is valid, and that is exactly what a (5,20) threshold signature scheme tries to achieve. In addition, if a threat agent wants to compromise the system and obtain a message, he must compromise at least 5 people in the scheme, and that is a harder thing to do compared to a traditional public-key scheme.
Deciphering
Ciphertext: “ VaqrprzoreoeratraWhyvhfraJnygreUbyynaqreqrgjrroebrefinaRqvguZnetbganne
The given ciphertext has only letters, without spaces, punctuation or a separate key. There are two classic cipher systems, substitution ciphers and transposition ciphers, which are known to be easy to attack using frequency analysis or brute-force techniques.
|
In Gali chapter 2 we have the following constraint to the classical monetary model
$P_t C_t + Q_t B_t \leq B_{t-1} + W_t N_t-T_t$.
Then it seems that this is treated as an equality. Therefore my question is: are we assuming that $\partial U/\partial B_t >0$? So households would achieve the optimum at the equality.
Another question is, in general if I have the problem
$Max E_0 \sum_{t=0}^{\infty}\beta^t U(C_t,N_t)$
Should I solve the OP
$Max \sum_{t=0}^{\infty}\beta^t U(C_t,N_t)$
And then take expectation to the relations I obtain by optimizing?. For example if we add the constraint
$P_t C_t + Q_t B_t \leq B_{t-1} + W_t N_t-T_t$ to the OPs, in the problem without expectation I get $Q_t=\beta \frac{\partial U/\partial C_{t+1}}{\partial U/\partial C_{t}} \frac{P_t}{P_{t+1}}$ and taking $E_t$ I obtain Euler's equation. Is the procedure of solving the problem without expectation and then take expectation correct?
EDIT: If it is incorrect to forget the expectation and optimize, then I am confused about this: How to relate real rate of return on capital to bond interest rate: Lagrangian. As there, the procedure to solve the problem seems to forget the expectation.
Second, if I do $\frac{\partial }{\partial C_t}E[U]=E\left[\frac{\partial U}{\partial C_t}\right]$. I obtain $Q_t=\beta\frac{ E_0\left[\partial U/\partial C_{t+1} \right]}{E_0\left[\partial U/\partial C_{t} \right]}\frac{P_t}{P_{t+1}}$ because I can not get rid off $E_0$. Nevertheless this would be equivalent to $Q_t=\beta E_0 \left[\frac{\partial U/\partial C_{t+1} }{\partial U/\partial C_{t} }\right]\frac{P_t}{P_{t+1}}$ if $E_0 \left[ \frac{\partial U/\partial C_{t+1} }{\partial U/\partial C_{t} }\right]=\frac{ E_0\left[\partial U/\partial C_{t+1} \right]}{E_0\left[\partial U/\partial C_{t} \right]}$. Is this because $C_t$ is independent of $C_{t+1}$?.
Third, I realized that if I solve $Max \sum_{t=0}^{\infty}\beta^t E_tU(C_t,N_t)$ s.t the constraint as before and proceed with $\frac{\partial }{\partial C_t}E[U]=E\left[\frac{\partial U}{\partial C_t}\right]$ I obtain the correct Euler's equation. Is this the correct problem to optimize? but what about Gali's propose?
|
Wikidot uses a markup language called $\LaTeX$ (pronounced "lay-tech") along with jsMath to generate properly typeset mathematics. $\LaTeX$ requires memorizing (or looking up) "commands" for creating certain kinds of characters. Since you'll be posting to the forum and creating wiki pages with serious mathematical content, it's good that you get some practice writing mathematical expressions in $\LaTeX$.
the basics
When you're in the editing panel, you can insert mathematical expressions within your text (i.e., "inline") by using the code
[[$ your-mathematical-expression-here $]]
For instance, this sentence — which includes the equation $x^{2}+y^{2} = r^{2}$ — is typeset as
For instance, this sentence -- which includes the equation [[$ x^{2}+y^{2} = r^{2} $]] -- is typeset as
You can also have your mathematical expressions separated from the text and placed on their own line for emphasis. For instance, if you wanted to type:
Here's some fancy mathematics that I don't really understand(1)
Man, that's complicated!
then you'd use the code
Here's some fancy mathematics that I don't really understand [[math]] \log \zeta(s) = s\int_{2}^{\infty} \frac{\pi(x)}{x(x^{s}-1)}~dx = \log \prod_{p} (1-p^{-s})^{-1}.[[/math]] Man, that's complicated!
general comments
Here are a few things to keep in mind:
All wiki commands are of the form
[[blah]].
All inline mathematical notation must be framed by dollar signs. That is, in the wiki, all inline mathematical notation is of the form
[[$math-stuff$]].
All displayed mathematical notation (i.e., on its own line and centered) is of the form
[[math]] math-stuff [[/math]].
All special symbols in $\LaTeX$ are of the form
\some-command. Once you've used $\LaTeX$ enough, you can almost guess what the command is for a certain symbol.
some examples
Here are a few more examples that illustrate some of the mathematical notation we may want to use:
| expression you want | code you type |
| --- | --- |
| $n \in \mathbb{N} \subseteq \mathbb{Z}$ | [[$n \in \mathbb{N} \subseteq \mathbb{Z} $]] |
| $\sum_{i=1}^n i^2=1^2+2^2+ \cdots +(n-1)^2+n^2$ | [[$\sum_{i=1}^n i^2=1^2+2^2+ \cdots +(n-1)^2+n^2 $]] |
| $\sqrt{2} \notin \mathbb{Q}$ | [[$ \sqrt{2} \notin \mathbb{Q}$]] |
| $2\in \{2,3,4\} \cap \{1,2,3\}$ | [[$2\in \{2,3,4\} \cap \{1,2,3\}$]] |
| $f:A\to B$ | [[$f:A\to B$]] |
| $f(x_1)\neq f(x_2)$ | [[$f(x_1)\neq f(x_2)$]] |
| $\{a_n\}_{n=1}^{\infty}$ | [[$\{a_n\}_{n=1}^{\infty}$]] |
| $(f\circ g)(x)=f(g(x))$ | [[$(f\circ g)(x)=f(g(x))$]] |
| $\frac{a}{b}+\frac{c}{d}\neq \frac{a+b}{c+d}$ | [[$\frac{a}{b}+\frac{c}{d}\neq \frac{a+b}{c+d}$]] |
Greek letters are typeset using
\name: for example,
\theta produces $\theta$. In order to produce a left or right brace, the brace needs to be preceded by a backslash. For example, $\mathbb{N}=\{1,2,3,\ldots\}$ is typeset with
[[$\mathbb{N}=\{1,2,3,\ldots\}$]] and notice the use of
\{ and
\}.
Using $\LaTeX$ allows you to do fancy things like the following:(2)
which is typeset using
[[math]]\begin{align*}\sum_{i=1}^{k+1}i & = \left(\sum_{i=1}^{k}i\right) +(k+1)\\ & = \frac{k(k+1)}{2}+k+1 & (\text{by inductive hypothesis})\\& = \frac{k(k+1)+2(k+1)}{2}\\& = \frac{(k+1)(k+2)}{2}\\& = \frac{(k+1)((k+1)+1)}{2}.\end{align*}[[/math]]
more information
Here is a more comprehensive list of symbols taken from http://www.math.harvard.edu/texman/node21.html.
In addition, Wikidot provides a brief summary of how to include mathematical expressions into the wiki. For a list of some of the more common $\LaTeX$ symbols, see here. If you want to see a really, really, really long list of symbols, go here.
|
Let $G=(V,E)$ be a directed graph, $\omega : E \rightarrow R$ a weight function, and $s,t \in V$ a pair of different nodes. It's given that $G$ doesn't have a negative cycle. Moreover, 10 of its edges are colored in red (let's say that the rest are colored in blue).
I want to find an efficient algorithm that finds the shortest path between $s$ and $t$ that goes through at least 5 different red edges.
Notice that going more than once through the same red edge is still considered as going through 1 red edge in the path.
One idea that I had was creating a graph $G'$ that will have a copy of every node for every possible set of red edges in $G$. That means that if we go through some red edge $e$, then the path will "move" to a variation of $G$ where $e$ is colored in blue and all the rest is the same.
However in this solution I will have to copy each node and each edge around 400 times. This will result in the same complexity asymptotically, but with such a big constant it seems really not efficient.
Another idea, was to somehow build the new graphs "on the run" of Bellman-Ford algorithm, but I don't really know how to do it.
I'll appreciate some help.
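For what it's worth, here is a minimal sketch (in Python, with hypothetical input conventions) of exactly the product-graph idea described above: Bellman-Ford over states (node, set of red edges used), with the set stored as a bitmask. It does not remove the $2^{10}$ blow-up, it only makes the construction implicit.

from math import inf

def shortest_with_red_edges(n, edges, num_red, s, t, need=5):
    """Bellman-Ford on the product graph (node, bitmask of red edges already used).

    edges: list of (u, v, w, red_id), where red_id is -1 for a blue edge and an
    index in 0..num_red-1 for a red edge.  Revisiting a red edge leaves the mask
    unchanged, so it is counted only once, as required.  Assumes (as in the
    question) that the original graph has no negative cycle.
    """
    masks = 1 << num_red
    dist = [[inf] * masks for _ in range(n)]
    dist[s][0] = 0
    # The product graph has n * 2^num_red states, so that many - 1 rounds suffice.
    for _ in range(n * masks - 1):
        changed = False
        for (u, v, w, rid) in edges:
            for mask in range(masks):
                du = dist[u][mask]
                if du == inf:
                    continue
                nmask = mask if rid < 0 else mask | (1 << rid)
                if du + w < dist[v][nmask]:
                    dist[v][nmask] = du + w
                    changed = True
        if not changed:
            break
    good = (m for m in range(masks) if bin(m).count("1") >= need)
    return min((dist[t][m] for m in good), default=inf)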
|
Using a randomized approach we can guarantee that the equality problem has O(1) complexity (in communication). With other definitions of equality (not strictly equal), is there a general approach to designing a randomized algorithm with bounded error? Or how to determine if there is no way such an algorithm can be designed?
Suppose that the inputs are $x,y \in \{0,1\}^n$.
Consider the following protocol:
The parties decide on a random string $z \in \{1,2\}^n$. Alice sends Bob $\alpha = \sum_{i=1}^n x_i z_i \pmod{3}$, and Bob sends Alice $\beta = \sum_{i=1}^n y_i z_i \pmod{3}$. The parties compute $\gamma = \alpha - \beta \pmod{3}$.
Suppose that the Hamming distance between $x$ and $y$ is $d$. Then $\gamma$ is the sum of $d$ random values in $\{1,2\}$. Using linear algebra, we can calculate $$ \Pr[\gamma = 0] = \frac{1}{3} + \frac{2}{3} \left(-\frac{1}{2}\right)^d.$$ (You can also prove this formula by induction.)
In particular, when $d=0$ the probability is $1$, when $d=1$ the probability is $0$, and otherwise the probability is in the range $[1/4,1/2]$ (corresponding to $d=3$ and $d=2$, respectively).
This suggests a protocol of the following form:
The parties decide on $N$ random strings $z_1,\ldots,z_N \in \{1,2\}^n$. Alice sends Bob $\alpha_j = \sum_{i=1}^n x_i z_{j,i} \pmod{3}$ and Bob sends Alice $\beta_j = \sum_{i=1}^n y_i z_{j,i} \pmod{3}$, for $1 \leq j \leq N$. The parties compute $\gamma_j = \alpha_j - \beta_j \pmod{3}$ for $1 \leq j \leq N$. If all $\gamma_j$ are zero or all are non-zero, output "Hamming distance is probably at most 1", otherwise output "Hamming distance is definitely more than 1".
As in the case of equality, this protocol has only one-sided error: whenever the algorithm outputs "Hamming distance is more than 1", it is always correct. It remains to determine the connection between $N$ and the error probability $\epsilon$. Let $p = \Pr[\gamma = 0]$. If $p \neq 0,1$, the probability that the algorithm outputs the incorrect answer is $$ p^N + (1-p)^N \leq (1/2)^N + (3/4)^N. $$ This shows that $\epsilon$ is exponentially small in $N$, and so in order to guarantee an error probability of $\epsilon > 0$, it suffices to take $N = O(\log(1/\epsilon))$.
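As a sanity check on the formula for $\Pr[\gamma=0]$, here is a small simulation (assuming Python; it is only an illustration, not part of the protocol):

import random

def gamma_zero_prob(d, trials=200_000):
    """Estimate Pr[gamma = 0] when x and y differ in exactly d coordinates."""
    hits = 0
    for _ in range(trials):
        # Only the d differing coordinates contribute; each adds +/- z_i with
        # z_i uniform in {1,2}, and +/-z_i mod 3 is again uniform in {1,2}.
        gamma = sum(random.choice((1, 2)) for _ in range(d)) % 3
        hits += (gamma == 0)
    return hits / trials

for d in range(5):
    print(d, round(gamma_zero_prob(d), 3), round(1/3 + (2/3) * (-0.5) ** d, 3))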
|
This is what we call in the pitch-detection biz, the "
octave problem".
First of all, I would change the AMDF to ASDF. And I would not reduce the window size as the lag increases. (Also, I am changing notation to what I consider to be more conventional. "$x[n]$" is a discrete-time signal.)
The Average Squared Difference Function (ASDF) of $x[n]$ in the neighborhood of sample $x[n_0]$ is:
$$ Q_x[k, n_0] \triangleq \frac{1}{N} \sum\limits_{n=0}^{N-1} \left(x[n+n_0-\left\lfloor \tfrac{N+k}{2}\right\rfloor] \ - \ x[n+n_0-\left\lfloor \tfrac{N+k}{2}\right\rfloor + k] \right)^2 $$
$\left\lfloor \cdot \right\rfloor$ is the
floor() function and, if $k$ is even then $ \left\lfloor \frac{k}{2}\right\rfloor = \left\lfloor \frac{k+1}{2}\right\rfloor = \frac{k}{2} $.
Now, expand the square and consider what the summations look like as $N \to \infty$ (not that $N$
is going to infinity, but to give you an idea if $N$ is large). The ASDF is directly related to the autocorrelation. It is essentially the autocorrelation turned upside down. These steps I will leave to you; take a look at this answer.
So now consider this finite-length
"autocorrelation" (in the neighborhood of sample $x[n_0]$) defined from the ASDF:
$$ R_x[k,n_0] = R_x[0,n_0] - \tfrac12 Q_x[k, n_0] $$
where
$$ R_x[0, n_0] \triangleq \frac{1}{N} \sum\limits_{n=0}^{N-1} \Big(x[n+n_0-\left\lfloor \tfrac{N}{2}\right\rfloor]\Big)^2 $$
Since $Q_x[0, n_0] = 0$ and $Q_x[k, n_0] \ge 0$ for all lags $k$, that means that $ R_x[k, n_0] \le R_x[0, n_0] $ for all lags $k$.
Suppose for a minute that $x[n]$ is periodic with period $P$ (and $P$ happens to be an integer), then
$$ x[n+P] = x[n] \quad \forall n $$
and $Q_x[mP, n_0] = 0$ and $R_x[mP, n_0] = R_x[0, n_0] \ge R_x[k, n_0]$ for any integer number of periods ($m$ is an integer). So you get a peak at $k=0$ and at $k$ equal to any other multiple of $P$ if $x[n]$ is periodic. If $x[n]$ is
not perfectly periodic, what we might expect is the biggest peak at $k=0$, another peak (but slightly smaller) at $k=P$ (the period we are looking for) and progressively smaller peaks for larger multiples of $P$.
So the
octave problem comes about because of a couple of reasons. First of all, $P$ is not necessarily an integer. That is an interpolation problem, not a big deal.
The second reason and more difficult problem is that of
subharmonics. Consider that you're listening to a nice periodic tone at exactly A-440 Hz and it sounds like an A that is 9 semitones above middle C. Now suppose someone adds to that tone a very tiny-amplitude (like down 60 dB) A-220? What will it sound like and mathematically what is the "true" period?
Choosing the "right" peak for the period.
Let's say you run your note through a DC-blocking filter, so that the mean of $x[n]$ is zero. It turns out that causes the mean of the autocorrelation $R_x[k, n_0]$ for every $n_0$ to also be zero (or close to it if $N$ is large). That means $R_x[k, n_0]$ must sum (over $k$) to be about zero which means there is as much area above zero as below.
Okay, so $R_x[0, n_0]$ represents the power of $x[n]$ in the vicinity around $n=n_0$ and must be non-negative. $R_x[k, n_0]$ never exceeds $R_x[0, n_0]$ but can get as large as it when $x[n]$ is periodic. $R_x[P, n_0] = R_x[0, n_0]$ if $x[n+P]=x[n]$. So if $x[n]$ is periodic with period $P$ and you have a bunch of peaks spaced apart by $P$ and you have an idea for how high those peaks should be. And if the DC component of $R_x[k, n_0]$ is zero, that means in-between the peaks, it
must have negative values.
If $x[n]$ is "quasi-periodic", one cycle of $x[n]$ will look a lot like an adjacent cycle, but not so much like a cycle of $x[n]$ farther down the signal in time. That means the first peak $R_x[P, n_0]$ will be higher than the second at $R_x[2P, n_0]$ or the third $R_x[3P, n_0]$. One could use the rule to always pick the highest peak and expect the highest peak to always be the first one. But, because of inaudible subharmonics, sometimes that is not the case: sometimes the second or possibly the third peak is oh-so-slightly higher. Also, the period $P$ is likely not an integer number of samples while $k$ in $R_x[k, n_0]$ is always an integer, so the true peak will likely be in between integer values of $k$. Even if you were to interpolate where the smooth peak is and how high it really is between integer $k$ (which I recommend, and quadratic interpolation is good enough), your interpolation algorithm could make a peak slightly higher or slightly lower than it really is. So choosing the absolutely highest peak can result in spuriously picking the second peak over the first (or vice versa) when you really wanted the other.
So somehow you have to
handicap the peaks at increasing $k$ so that the first peak has a slight advantage over the second, and the second over the fourth (the next octave down), etc. How do you do that?
You do that by multiplying $R_x[k, n_0]$ with a decreasing function of $k$ so that the peak at $k=2P$ is reduced by some factor, relative to an identical peak at $k=P$. It turns out that the power function (not the exponential) does that. So compute
$$ k^{-\alpha} \ R_x[k, n_0] $$
So, if $x[n]$ were perfectly periodic with period $P$, and ignoring interpolation issues for non-integer $P$, then
$$ R_x[2P, n_0] = R_x[P, n_0] $$
but
$$ (2P)^{-\alpha} R_x[2P, n_0] = (2P)^{-\alpha} R_x[P, n_0] < P^{-\alpha} R_x[P, n_0] $$
The factor by which the peak for a pitch of one octave lower is reduced is the ratio
$$ \frac{(2P)^{-\alpha} R_x[2P, n_0]}{P^{-\alpha} R_x[P, n_0]} = \frac{(2P)^{-\alpha}}{P^{-\alpha}} = 2^{-\alpha} $$
So if you want to give your first peak a 1% boost over the second peak, which means you will not choose the pitch to be the sub-harmonic pitch, unless the sub-harmonic pitch autocorrelation is at least 1% more than the first peak, you would solve for $\alpha$ from
$$ 2^{-\alpha} = 0.99 $$
That is the consistent way to weight or de-emphasize or handicap the peak corresponding to the subharmonic pitch one octave below.
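To make the weighting concrete, here is a rough sketch (in Python/NumPy, with simplified windowing and no sub-sample interpolation) of the ASDF-based picker with the $k^{-\alpha}$ handicap. Here $\alpha=-\log_2(0.99)\approx 0.0145$ corresponds to the 1% figure above, and the search range $[k_{\min},k_{\max}]$ is an assumption standing in for the plausible pitch range:

import numpy as np

def estimate_period(x, n0, N, k_min, k_max, alpha=0.0145):
    """Pick the period as the lag maximizing k**(-alpha) * R_x[k, n0]."""
    x = np.asarray(x, dtype=float)
    seg = lambda start: x[start:start + N]
    R0 = np.mean(seg(n0 - N // 2) ** 2)
    R = np.empty(k_max + 1)
    for k in range(1, k_max + 1):
        start = n0 - (N + k) // 2
        Q = np.mean((seg(start) - seg(start + k)) ** 2)   # ASDF
        R[k] = R0 - 0.5 * Q                               # derived autocorrelation
    lags = np.arange(k_min, k_max + 1)
    return int(lags[np.argmax(lags ** (-alpha) * R[k_min:k_max + 1])])

# A pure tone with period 100 samples: the weighting picks 100, not 200 or 300.
sig = np.sin(2 * np.pi * np.arange(20000) / 100.0)
print(estimate_period(sig, n0=10000, N=2048, k_min=50, k_max=400))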
It still leaves you with a thresholding issue. You have to choose $\alpha$ well. But this is a consistent way to emphasize the first peak over the second, which is an octave lower, but not so much that if the note really
is an octave lower, but the energy in all of the even harmonics was strong, compared to the odd harmonics, this will still leave a possibility for the second peak being chosen.
|
I could understand the derivation of the "bulk-to-boundary" propagators ($K$) for scalar fields in $AdS$ but the iterative definition of the "bulk-to-bulk" propagators is not clear to me.
One is using the notation that $K^{\Delta_i}(z,x;x')$ is the bulk-to-boundary propagator, i.e. it solves $(\Box -m^2)K^{\Delta_i}(z,x;x') = \delta (x-x')$ and it decays as $cz^{-\Delta _i}$ (for some constant $c$) for $z \rightarrow 0$. Specifically one has the expression, $K^{\Delta_i}(z,x;x') = c \frac {z^{\Delta _i}}{(z^2 + (x-x')^2)^{\Delta_i}}$
Given that this $K$ is integrated with boundary fields at $x'$ to get a bulk field at $(z,x)$, I don't understand why this is called a bulk-to-boundary propagator. I would have thought that this is the "boundary-to-bulk" propagator! I would be glad if someone can explain this terminology.
Though the following equation is very intuitive, I am unable to find a derivation for this and I want to know the derivation for this more generalized expression which is written as,
$\phi_i(z,x) = \int d^Dx'K^{\Delta_i}(z,x;x')\phi^0_i(x') + b\int d^Dx' dz' \sqrt{-g}G^{\Delta_i}(z,x;z',x') \times$ $\int d^D x_1 \int d^D x_2 K^{\Delta_j}(z',x';x_1)K^{\Delta_k}(z',x';x_2)\phi^0_j(x_1) \phi^0_k(x_2) + ...$
where the "b" is as defined below in the action $S_{bulk}$, the fields with superscript of $^0$ are possibly the values of the fields at the boundary and $G^{\Delta_i}(z,x;z',x')$ - the "bulk-to-bulk" propagator is defined as the function such that,
$(\Box - m_i^2)G^{\Delta_i}(z,x;z',x') = \frac{1}{\sqrt{-g}} \delta(z-z')\delta^D(x-x')$
Here what is the limiting value of this $G^{\Delta_i}(z,x;z',x')$ that justifies the subscript of $\Delta_i$.
Also in this context one redefined $K(z,x;x')$ as,
$K(z,x;x') = \lim_{z' \rightarrow 0} \frac{1}{\sqrt{\gamma}}\, \vec{n}\cdot\partial\, G(z,x;z',x')$ where $\gamma$ is the metric $g$ restricted to the boundary.
How does one show that this definition of $K$ and the one given before are the same? (..though its very intuitive..)
I would also like to know if the above generalized expression is somehow tied to the following specific form of the Lagrangian,
$S_{bulk} = \frac{1}{2} \int d^{D+1}x \sqrt{-g} \left [ \sum _{i=1}^3 \left\{ (\partial \phi)^2 + m^2 \phi_i^2 \right\} + b \phi_1\phi_2 \phi_3 \right ]$
Is it necessary that for the above expression to be true one needs multiple fields/species? Isn't the equation below the italicized question a general expression for any scalar field theory in any space-time?
Is there a general way to derive such propagator equations for lagrangians of fields which keep track of the behaviour at the boundary?
|
So my question is: is Pauli-repulsion a phenomenon that has also not
yet been explained in terms of any of the three other forces that we
know of?
$\def\ket#1{|#1\rangle} \let\up=\uparrow \let\dn=\downarrow \def\PD#1#2{{\partial#1\over\partial#2}}$There is no repulsion and no unexplained force. I would also add that PEP is an outdated way of describing the matter. In QM you should rather speak of antisymmetry* of fermion states. It's only when we build up a many particle state as a tensor product of one particle states that antisymmetry forces us to keep only different states for each single particle. A simple example with two particles will explain this (I hope).
Two identical fermions in an infinite well
Consider particles in one dimension, constrained in a segment $0\le x\le L$ (what is usually called an "infinite potential well"). Energy eigenfunctions (standing waves) are sinusoidal waves vanishing at boundaries:$$\psi_n = \sin {n\,\pi\,x \over L} \qquad (n = 1,2,\ldots)$$(these aren't normalized, but it's of no consequence for my present purposes.) The corresponding energy eigenvalues are$$E_n = {n^2 h^2 \over 8\,m\,L^2}.\tag1$$A short derivation follows, which you may skip with no harm.
$\psi_n$ has wavelength $2L/n$, then momentum$$p = {h \over \lambda} = {n\,h \over 2\,L}.$$Then energy (only kinetic) is$$E_n = {p^2 \over 2\,m} = {n^2 h^2 \over 8\,m\,L^2}.$$
Assume your particles are non-interacting* spin 1/2 fermions. Then the above expression for the energy eigenfunctions is to be supplemented by specifying the spin state. Then Dirac's ket notation is preferable:$$\ket{n\up} \quad \hbox{or} \quad \ket{n\dn}$$both belonging to the eigenvalue $E_n$.
If your system consists of just two particles, a set of base kets would be obtained by taking tensor products, which in Dirac's notation are written by just putting two kets one after another. E.g.$$\ket{m\up} \ket{n\up} \quad \ket{m\up} \ket{n\dn} \quad \ket{m\dn} \ket{n\up} \quad \ket{m\dn} \ket{n\dn}$$for all positive integers $m$, $n$. A shorthand may be used:$$\ket{m\up\,;\,n\up} \ \ket{m\up\,;\,n\dn} \ \ket{m\dn\,;\,n\up} \ \ket{m\dn\,;\,n\dn} \tag2$$where labels preceding ";" refer to first particle, those following to the second.
But states in (2) are wrong for identical fermion particles, as they aren't antisymmetrized. The right ones are$$\eqalign{ &\ket{m\up\,;\,n\up} - \ket{n\up\,;\,m\up} \qquad \ket{m\up\,;\,n\dn} - \ket{n\dn\,;\,m\up} \cr &\ket{m\dn\,;\,n\up} - \ket{n\up\,;\,m\dn} \qquad \ket{m\dn\,;\,n\dn} - \ket{n\dn\,;\,m\dn} \cr}$$(once again I'm neglecting normalization).
Observe however that if $m=n$ the first and fourth expressions are identically zero, whereas the second and third are the same apart from sign, thus representing the same state. This is the mathematical form PEP assumes in QM: for $m=n$ just one state exists for two particles, for $m\ne n$ there are four.
For more particles we would proceed analogously, with a somewhat higher complication.
Let's compute pressure
First of all let me remark that not fermions alone exert a pressure when confined in a finite volume. Bosons do as well. Radiation pressure is an example, and photons are bosons. So let's compute the pressure exerted by a gas of non-interacting bosons at $0\,$K, when all particles are in the ground state (this isn't forbidden for bosons).
If we have $N$ particles, overall energy is given by (1) taken for $n=1$ and multiplied by $N$; $$E = {N h^2 \over 8\,m\,L^2}.$$As we are in one dimension we'll speak of force, not of pressure. It's most easily computed by$$F = -\PD EL = {N h^2 \over 4\,m\,L^3}.\tag3$$
For those who find too abstract the above derivation I'll add a semiclassical one. In our box we have free particles bouncing back and forth between boundaries. Their momentum is $p=h/(2L)$. A particle hits one boundary (e.g. the left one) once in a time$${2L \over v} = {2mL \over p} = {4 m L^2 \over h}$$and every time it exchanges with the boundary a momentum $2p$. Then the momentum exchanged per unit of time, i.e. the force, is $$f = 2p\, {h \over 4 m L^2} = {h^2 \over 4 m L^3}.$$This holds for one particle. It's only left to multiply by $N$ to get (3).
Now for fermions
What's the difference? Simply that even at $0\,$K a fermion gas doesn't have all particles in the ground state. We've seen why it's forbidden by antisymmetry. So we have the task to arrange an antisymmetrical ket for $N$ particles, which sounds prohibitive. Actually it's not so much so, but we'll follow a roundabout way, in principle an approximated one but absolutely adequate to our purposes.
For each $n$ there are two states allowed, spin up and spin down. We already saw that for $m=n=1$ and two particles only one state is possible, whereas none is possible for three. If we accept values 1 and 2 for $m$, $n$ we can accommodate up to four particles$$\ket{1\up;1\dn;2\up;2\dn}$$(to be antisymmetrized). So we see that for $N$ particles all states from 1 to $N/2$ will be occupied, each by two particles with opposite spins.
And now we are able to compute the energy:$$E = 2\,\sum_{n=1}^{N/2} E_n = 2\,\sum_{n=1}^{N/2} {n^2 h^2 \over 8\,m\,L^2} = {h^2 \over 4\,m\,L^2} \sum_{n=1}^{N/2} n^2$$
(the sum has to be multiplied by 2 since for every $n$ there are two spin states). If $N$ is large we may approximate the sum to ${1 \over 24}\,N^3$ and get$$E = {N^3 h^2 \over 96\,m\,L^2}.$$As before$$F = -\PD EL = {N^3 h^2 \over 48\,m\,L^3}.\tag4$$
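A quick symbolic check of the large-$N$ approximation of the sum and of the derivative leading to (4), assuming Python with SymPy:

from sympy import symbols, summation, diff, limit, oo, simplify

n, N, L, m, h = symbols('n N L m h', positive=True)
S = summation(n**2, (n, 1, N/2))            # exact sum 1^2 + ... + (N/2)^2
print(limit(S / N**3, N, oo))               # -> 1/24, the large-N coefficient
E = h**2 / (4*m*L**2) * N**3 / 24           # E = N^3 h^2 / (96 m L^2)
print(simplify(-diff(E, L)))                # -> N^3 h^2 / (48 m L^3), i.e. eq. (4)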
You can see the difference between (3) and (4). Whereas for bosons the force is $\propto N$, for fermions it's $\propto N^3$, hence much larger if $N$ is large. Actually extremely larger for a white dwarf: try to estimate how much is $N$ (the number of electrons) for a star having the Sun's mass.
To be sure we should reason about pressure, not about force. This requires leaving our naive 1D model for a more realistic 3D one. I'll content myself to give the result$$P = {(3\,\pi^2)}^{2/3} \left(\!{\hbar^2 \over m}\!\right)\,\left({N \over V}\right)^{\!5/3}.$$
The most important difference is in the dependence on $N$: $N^{5/3}$ instead of $N^3$. I can't explain its origin (it has to do with the different accounting in 1D and in 3D for the one-particle states up to $N/2$). I'll only say that even with the smaller exponent the resulting pressure is enough to counterbalance gravity for dwarfs of mass near the Sun's and size about the Earth's.
A final comment
It should be clear that no mysterious force could account for our results. Note that the total energy of $N$ particles depends on a power of $N$ and it would be hard to explain that with some interaction between particles. Instead everything depends on which and how many independent states are allowed when identical particles are concerned. In a different way for bosons as against fermions, and both different from the one that would be used for classical particles.
As Feynman liked to say, this is the way things are.
|
The idea for estimating the mean is roughly as follows:
For any $f(x)$ that gives outputs in the reals, define a rescaled $F(x)$ that gives outputs in the range 0 to 1. We aim to estimate the mean of $F(x)$.
Define a unitary $U_a$ whose operation is $$U_a:|0\rangle|0\rangle\mapsto\frac{1}{2^{n/2}}\sum_x|x\rangle(\sqrt{1-F(x)}|0\rangle+\sqrt{F(x)}|1\rangle).$$ It is important to note that this unitary is easily implemented. You start with a Hadamard transform on the first register, perform a computation of $f(x)$ on an ancilla register, use this to implement a controlled-rotation of the second register, and then uncompute the ancilla register.
Define the unitary $G=U_a (\mathbb{I}-2|0\rangle\langle 0|\otimes |0\rangle\langle 0|)U_a^\dagger\, (\mathbb{I}\otimes Z)$.
Starting from a state $U_a|0\rangle|0\rangle$, use $G$ much like you would use the Grover iterator to estimate the number of solutions to a search problem.
The main bulk of this algorithm is amplitude amplification, as described here. The main idea is that you can define two states$$|\psi\rangle=\frac{1}{\sqrt{\sum_x F(x)}}\sum_x\sqrt{F(x)}|x\rangle|1\rangle \qquad |\psi^\perp\rangle=\frac{1}{\sqrt{\sum_x 1-F(x)}}\sum_x\sqrt{1-F(x)}|x\rangle|0\rangle,$$and this defines a subspace for the evolution. The initial state is $U_a|0\rangle|0\rangle=(\sqrt{\sum_x F(x)}|\psi\rangle+\sqrt{\sum_x 1-F(x)}|\psi^\perp\rangle)2^{-n/2}$. The amplitude of the $|\psi\rangle$ term clearly contains the information about the mean of $F(x)$, if we could just estimate it. You could just repeatedly prepare this state and measure the probability of getting a $|1\rangle$ on the second register, but Grover's search gives you a quadratic improvement. If you compare to the way Grover's is usually set up, the amplitude of this $|\psi\rangle$ which you can 'mark' (in this case by applying $\mathbb{I}\otimes Z$) would be $\sqrt{\frac{m}{2^n}}$ where $m$ is the number of solutions.
Incidentally, this is interesting to compare to the "power of one clean qubit", also known as DQC1. There, if you apply $U_a$ to $\frac{\mathbb{I}}{2^n}\otimes|0\rangle\langle 0|$, the probability of getting the 1 answer is just the same as the non-accelerated version, and gives you an estimate of the mean.
For the median, it can apparently be defined as the value $z$ that minimises$$\sum_x|f(x)-f(z)|.$$There are two steps here. The first is to realise that the function we're trying to minimise over is basically just a mean. Then the second step is to use a minimisation algorithm which can also be accelerated by a Grover search. The idea here is to use a Grover's search, and mark all items for which the function evaluation gives a value less than some threshold $T$. You can estimate the number of inputs $x$ that give $f(x)\leq T$, then repeat for a different $T$ until you localise the minimum value sufficiently.
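A purely classical skeleton of that threshold search (assuming Python; the counting step, done here by brute force, is what the quantum algorithm replaces with amplitude estimation to obtain the speed-up):

def threshold_minimum(f, xs, lo, hi, rounds=40):
    """Bisect on a threshold T: the minimum of f over xs is <= T iff at least
    one x has f(x) <= T.  Returns an upper bound converging to min f(x)."""
    for _ in range(rounds):
        T = (lo + hi) / 2
        count = sum(1 for x in xs if f(x) <= T)   # quantumly: amplitude estimation
        lo, hi = (lo, T) if count >= 1 else (T, hi)
    return hi

print(threshold_minimum(lambda x: (x - 3.2) ** 2, range(10), lo=0.0, hi=100.0))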
Of course, I am skipping over some details of precise running times, error estimates etc.
|
High Dimensional LDA
This note is for Cai, T. T., & Zhang, L. (n.d.). High dimensional linear discriminant analysis: Optimality, adaptive algorithm and missing data. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 0(0).
Introduction
Suppose $Z$ is drawn with equal probability from one of the two Gaussian distributions $N_p(\mu_1,\Sigma)$ (class 1) and $N_p(\mu_2,\Sigma)$ (class 2). If all the parameters $\theta = (\mu_1,\mu_2,\Sigma)$ are known, Fisher’s linear discriminant rule is given by
where $\delta = \mu_2-\mu_1$, and $\Omega = \Sigma^{-1}$.
Note that
The misclassification error is given by $R_{\mathrm{opt}}(\theta)=\Phi(-\Delta/2)$.
Let $Y=\delta^T\Omega Z$, then
thus,
However, the parameters $\mu_1,\mu_2$ and $\Sigma$ are typically unknown and need to estimated.
- in the low dimensional setting, plug the sample means and pooled sample covariance matrix into Fisher’s rule.
- in the high dimensional setting, the sample covariance matrix is not even invertible.
- regularized classification methods: regularized LDA, covariance-regularized classification, hard thresholding (all of them rely on individual sparsity assumptions on $\Omega$ (or $\Sigma$) and $\delta$).
- a more flexible (really?) assumption is on the sparsity of $\beta=\Omega\delta$. Someone proposes to estimate the discriminant direction $\beta$ directly instead of estimating $\Sigma$ and $\delta$ separately, under the assumption that $\beta$ is sparse. (similarity with inverse method?)
Much recent progress in methodological development on high dimensional classification problems, but relatively little fundamental study on optimality theory for discriminant analysis:
someone conduct minimax study in the special case where $\Sigma=\sigma^2I$ for some $\sigma > 0$.
Questions:
even in the above simple setting, still a gap between the minimax upper and lower bounds unclear what the optimal rate of convergence for the minimax misclassification risk which classification rule is rate optimal under the general Gaussian distribution
Goal of the paper:
- answer the above questions
- there is a paucity of methods for inference with incomplete high dimensional data, so develop optimality theory for high dimensional analysis with incomplete data
- construct a data-driven adaptive classifier with theoretical guarantees.
Methodology
Data-driven adaptive classifier for complete data
Linear programming discriminant (LPD) directly estimates the discriminant direction $\beta$ through solving the following optimization problem:
Three drawbacks:
- it uses a common constraint $\lambda_n$ for all co-ordinates of $a=\hat\Sigma\beta-(\hat\mu_2-\hat\mu_1)$.
- it is not adaptive, in the sense that the tuning parameter $\lambda_n$ is not fully specified and needs to be chosen through an empirical method such as cross-validation.
- it does not come with theoretical optimality guarantees.
To resolve these drawbacks, the paper introduces an adaptive algorithm for high dimensional LDA with complete data, called AdaLDA.
AdaLDA classifier relies on an accurate estimate of the right-hand side of \eqref{eq:7}, where $\sigma_{jj}$ can be easily estimated by the sample variances, but $\Delta^2$ is more difficult to estimate.
Construct a preliminary estimator $\tilde \beta$, estimating $\Delta^2$ by $\vert\tilde \beta^T(\hat\mu_2-\hat\mu_1)\vert$, then apply the above lemma to refine the estimation of $\beta$:
- (estimating $\Delta^2$) Construct a preliminary estimator $\tilde \beta$, estimating $\Delta^2$ by $\vert\tilde \beta^T(\hat\mu_2-\hat\mu_1)\vert$.
- (adaptive estimate of $\beta$) Construct $\hat\beta_{\text{AdaLDA}}=\arg\min_\beta\Vert\beta\Vert_1$ through the linear optimization.
- Plug into Fisher’s rule.
Concern related to the implementation of the algorithm. (Resolved)
Given a linear program
we can convert the inequality constraints to equalities by introducing a vector of
slack variables $z$ and writing
This form is still not quite standard, since not all the variables are constrained to be nonnegative. We deal with this by splitting $x$ into its nonnegative and nonpositive parts, $x=x^+-x^-$, then we have
Here, the implementation of the algorithm also splits $x$. At first, I was wondering whether this is proper. Now I see that, although the variable space is enlarged, the solution set still agrees with that of the original problem.
Take a toy example: suppose $\beta = 4-0 = 5-1$. Both splittings satisfy the constraints, but the objective function satisfies $4+0 < 5+1$, so the second case is discarded.
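For illustration, here is a tiny sketch (assuming Python with SciPy; a generic $\ell_1$ problem, not the exact AdaLDA program) of the $x=x^+-x^-$ splitting, with the solver handling the slack variables internally:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
p = 5
A = rng.normal(size=(p, p))
b = rng.normal(size=p)
lam = 0.1

# minimize ||x||_1  s.t.  |A x - b| <= lam (elementwise), via x = xp - xm, xp, xm >= 0
c = np.ones(2 * p)                         # objective becomes sum(xp) + sum(xm)
A_split = np.hstack([A, -A])               # A x expressed in the (xp, xm) variables
A_ub = np.vstack([A_split, -A_split])      # encodes  A x - b <= lam  and  b - A x <= lam
b_ub = np.concatenate([b + lam, lam - b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p))
x_hat = res.x[:p] - res.x[p:]
print(res.status, np.round(x_hat, 3))      # status 0 means the LP solved successfully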
ADAM with randomly missing data
estimate the generalized sample mean and the generalized sample covariance matrix
ADAM:
- estimate $\beta$ by a preliminary estimator, and then estimate $\Delta^2$.
- adaptively estimate $\beta$.
- plug into Fisher’s rule.
Extensions to other missingness mechanisms such as missingness not at random is possible but challenging. The paper claims that, the consistency of their algorithm relies only on consistent estimation of the mean vectors and the covariance matrix, thus if the means and the covariance matrix can be estimated consistently under some missingness not at random model, they can construct a consistent classification rule based on these estimators.
extensions?
Theoretical properties
Develop an optimality theory for high dimensional LDA for both the complete data and the incomplete data settings.
Theoretical analysis of AdaLDA
Consider the parameter space
Then characterize the accuracy of the classification rule $\hat C_{\text{AdaLDA}}$, measured by the excess misclassification risk $R_\theta(\hat C)-R_{\mathrm{opt}}(\theta)$.
Theoretical analysis of ADAM
Under the MCR model, suppose the missingness pattern $S\in\{0,1\}^{n_1\times p}\times \{0,1\}^{n_2\times p}$ is a realization of a distribution $\mathcal{F}$. Consider the distribution space
Two theorems talk about $\mathbb{E}\Vert \hat\beta_{\text{ADAM}}-\beta\Vert_2$ and $R_\theta(\hat C_{\text{ADAM}})-R_{\mathrm{opt}}(\theta)$, respectively.
Minimax lower bounds
Only the results for the missing data setting (complete-data is a special case).
The lower bound results show that the rates of convergence that are obtained by the AdaLDA and ADAM algorithm are indeed optimal, for both estimation of the discriminant direction $\beta$ and classification.
Reduce the loss $R_\theta(\hat C)-R_{\mathrm{opt}}(\theta)$ to the risk function $L_\theta(\hat C)$.
|
This will be a talk for the CUNY Logic Workshop, September 16, 2016, at the CUNY Graduate Center, Room 6417, 2-3:30 pm.
Abstract. In analogy with the ancient views on potential as opposed to actual infinity, set-theoretic potentialism is the philosophical position holding that the universe of set theory is never fully completed, but rather has a potential character, with greater parts of it becoming known to us as it unfolds. In this talk, I should like to undertake a mathematical analysis of the modal commitments of various specific natural accounts of set-theoretic potentialism. After developing a general model-theoretic framework for potentialism and describing how the corresponding modal validities are revealed by certain types of control statements, which we call buttons, switches, dials and ratchets, I apply this analysis to the case of set-theoretic potentialism, including the modalities of true-in-all-larger-$V_\beta$, true-in-all-transitive-sets, true-in-all-Grothendieck-Zermelo-universes, true-in-all-countable-transitive-models and others. Broadly speaking, the height-potentialist systems generally validate exactly S4.3 and the height-and-width-potentialist systems generally validate exactly S4.2. Each potentialist system gives rise to a natural accompanying maximality principle, which occurs when S5 is valid at a world, so that every possibly necessary statement is already true. For example, a Grothendieck-Zermelo universe $V_\kappa$, with $\kappa$ inaccessible, exhibits the maximality principle with respect to assertions in the language of set theory using parameters from $V_\kappa$ just in case $\kappa$ is a $\Sigma_3$-reflecting cardinal, and it exhibits the maximality principle with respect to assertions in the potentialist language of set theory with parameters just in case it is fully reflecting $V_\kappa\prec V$.
This is current joint work with Øystein Linnebo, in progress, which builds on some of my prior work with George Leibman and Benedikt Löwe in the modal logic of forcing.
CUNY Logic Workshop abstract | link to article will be posted later
This will be a talk for the workshop conference Mathematical Logic and Its Applications, which will be held at the Research Institute for Mathematical Sciences, Kyoto University, Japan, September 26-29, 2016, organized by Makoto Kikuchi. The workshop is being held in memory of Professor Yuzuru Kakuda, who was head of the research group in logic at Kobe University during my stay there many years ago.
Abstract. Set-theoretic potentialism is the ontological view in the philosophy of mathematics that the universe of set theory is never fully completed, but rather has a potential character, with greater parts of it becoming known to us as it unfolds. In this talk, I should like to undertake a mathematical analysis of the modal commitments of various specific natural accounts of set-theoretic potentialism. After developing a general model-theoretic framework for potentialism and describing how the corresponding modal validities are revealed by certain types of control statements, which we call buttons, switches, dials and ratchets, I apply this analysis to the case of set-theoretic potentialism, including the modalities of true-in-all-larger-$V_\beta$, true-in-all-transitive-sets, true-in-all-Grothendieck-Zermelo-universes, true-in-all-countable-transitive-models and others. Broadly speaking, the height-potentialist systems generally validate exactly S4.3 and the height-and-width-potentialist systems validate exactly S4.2. Each potentialist system gives rise to a natural accompanying maximality principle, which occurs when S5 is valid at a world, so that every possibly necessary statement is already true. For example, a Grothendieck-Zermelo universe $V_\kappa$, with $\kappa$ inaccessible, exhibits the maximality principle with respect to assertions in the language of set theory using parameters from $V_\kappa$ just in case $\kappa$ is a $\Sigma_3$-reflecting cardinal, and it exhibits the maximality principle with respect to assertions in the potentialist language of set theory with parameters just in case it is fully reflecting $V_\kappa\prec V$.
This is joint work with Øystein Linnebo, which builds on some of my prior work with George Leibman and Benedikt Löwe in the modal logic of forcing. Our research article is currently in progress.
Slides | Workshop program
Erin Carmody successfully defended her dissertation under my supervision at the CUNY Graduate Center on April 24, 2015, and she earned her Ph.D. degree in May, 2015. Her dissertation follows the theme of
killing them softly, proving many theorems of the form: given $\kappa$ with large cardinal property $A$, there is a forcing extension in which $\kappa$ no longer has property $A$, but still has large cardinal property $B$, which is very slightly weaker than $A$. Thus, she aims to enact very precise reductions in large cardinal strength of a given cardinal or class of large cardinals. In addition, as a part of the project, she developed transfinite meta-ordinal extensions of the degrees of hyper-inaccessibility and hyper-Mahloness, giving notions such as $(\Omega^{\omega^2+5}+\Omega^3\cdot\omega_1^2+\Omega+2)$-inaccessible among others.
G+ profile | math genealogy | MathOverflow profile | NY Logic profile | ar$\chi$iv
Erin Carmody, “Forcing to change large cardinal strength,” Ph.D. dissertation for The Graduate Center of the City University of New York, May, 2015. ar$\chi$iv | PDF
Erin has accepted a professorship at Nebraska Wesleyan University for the 2015-16 academic year.
Erin is also an accomplished artist, who has had art shows of her work in New York, and she has pieces for sale. Much of her work has an abstract or mathematical aspect, while some pieces exhibit a more emotional or personal nature. My wife and I have two of Erin’s paintings in our collection:
A. W. Apter, J. Cummings, and J. D. Hamkins, “Singular cardinals and strong extenders,” Central European J.~Math., vol. 11, iss. 9, pp. 1628-1634, 2013.
@article {ApterCummingsHamkins2013:SingularCardinalsAndStrongExtenders,
AUTHOR = {Apter, Arthur W. and Cummings, James and Hamkins, Joel David},
TITLE = {Singular cardinals and strong extenders},
JOURNAL = {Central European J.~Math.},
FJOURNAL = {Central European Journal of Mathematics},
VOLUME = {11},
YEAR = {2013},
NUMBER = {9},
PAGES = {1628--1634},
ISSN = {1895-1074},
MRCLASS = {03E55 (03E35 03E45)},
MRNUMBER = {3071929},
MRREVIEWER = {Samuel Gomes da Silva},
DOI = {10.2478/s11533-013-0265-1},
URL = {http://jdh.hamkins.org/singular-cardinals-strong-extenders/},
eprint = {1206.3703},
archivePrefix = {arXiv},
primaryClass = {math.LO},
}
Brent Cody asked the question whether the situation can arise that one has an elementary embedding $j:V\to M$ witnessing the $\theta$-strongness of a cardinal $\kappa$, but where $\theta$ is regular in $M$ and singular in $V$.
In this article, we investigate the various circumstances in which this does and does not happen, the circumstances under which there exist a singular cardinal $\mu$ and a short $(\kappa, \mu)$-extender $E$ witnessing “$\kappa$ is $\mu$-strong”, such that $\mu$ is singular in $Ult(V, E)$.
|
I'm trying to solve a full-vectorial wave equation for an arbitrarily shaped wave guide, by using
NDSolve and perfectly matched layer (PML) conditions.
The PML conditions can be stated as a coordinate transformation of the form $\partial/\partial x \to \alpha_x(x)\,\partial/\partial x$ and so on. As the functionality of the Mathematica built-in
Curl doesn't extend so far I set up a function that acts as intended:
generalizedCurl3D[coordTransfVector_,applicationVector_,coordNamesVector_]:={ coordTransfVector[[2]] D[applicationVector[[3]],coordNamesVector[[2]]] - coordTransfVector[[3]] D[applicationVector[[2]],coordNamesVector[[3]]], coordTransfVector[[3]] D[applicationVector[[1]],coordNamesVector[[3]]] - coordTransfVector[[1]] D[applicationVector[[3]],coordNamesVector[[1]]], coordTransfVector[[1]] D[applicationVector[[2]],coordNamesVector[[1]]] - coordTransfVector[[2]] D[applicationVector[[1]],coordNamesVector[[2]]]}
If the coordinate transformation is set to unity $\{1,1,1\}$ the function is identical to the normal Curl, as the comparison yields
True:
generalizedCurl3D[{1, 1, 1}, {ψx[x, y, z], ψy[x, y, z], ψz[x, y, z]}, {x, y, z}] == {-Derivative[0, 0, 1][ψy][x, y, z] + Derivative[0, 1, 0][ψz][x, y, z], Derivative[0, 0, 1][ψx][x, y, z] - Derivative[1, 0, 0][ψz][x, y, z], -Derivative[0, 1, 0][ψx][x, y, z] + Derivative[1, 0, 0][ψy][x, y, z]}
Now I derive the wave equation I need to solve. In the following I will consider only the magnetic field $\psi[x,y,z]$ because it is continuous at the boundary of the wave guide. The refractive index profile $n$ depends on $x$ and $y$ ($n=n[x,y]$) and is constant along the $z$ direction. The functions for PML will be set up later, but at this moment it is only important that the coordinate transformation functions $\alpha_x$ and $\alpha_y$ depend only on $x$ and $y$ ($\alpha_x[x]$ and $\alpha_y[y]$) while $\alpha_z=1$.
I start with the Helmholtz equation, assuming that the time separation ansatz works. This way I only have to assume that the field will have fast oscillations in z direction with the effective refractive index $n0$ and the wave number $k$ ($e^{-i \cdot k \cdot \text{n0}\cdot z}$). The rest should be only slow oscillations of the field components in $x$ and $y$ direction. I also tend to set the $\psi_z$ component of the $\psi$-vector to zero, because probably the cross-talk of $x$,$y$ components to the $z$ field component should be negligible.
Edit: In case you try to check my calculation, please make sure that the 3rd component of $\psi $ is defined to be zero. To keep it general for 3dimensional analysis (with easy "switching on or off") I define it nevertheless, but multiply it with 0.
cTV = {αx[x], αy[y], 1};cNV = {x, y, z};ψ = {ψx[x, y, z] E^(-I k n0 z), ψy[x, y, z] E^(-I k n0 z), 0 ψz[x, y, z] E^(-I k n0 z)};
Plugging now the still analytical expressions into the Helmholtz equation
($\nabla \times \nabla \times \psi = -\mu\,\epsilon\,\partial^2\psi/\partial t^2 = +k^2\,\psi$)
and dividing all by the fast changing $z$-term ($e^{-i \cdot k \cdot \text{n0}\cdot z}$) gives me the following :
eqs = FullSimplify[(generalizedCurl3D[cTV, 1/n[x, y]^2 generalizedCurl3D[cTV, ψ, cNV], cNV] - k^2 ψ ) 1/E^(-I k n0 z)]
Now I replace the analytical expressions by numerical values and functions and define the size of the boundary and the PML-layer (all in SI units). (Note that $n0$ is effectively a propagation constant that has to be found by trial and error: if it is chosen wrong, then the field should spread outside the waveguide.)
Edit: in the meanwhile I found out that I should use numerical quantities close to 1 instead of SI units for micrometer length scales, because obviously Mathematica performs numerical integration with machine precision and if the numbers are too close to machine precision the numerical "signal-to-noise-ratio" can cause singularities in the solution which causes in turn
NDSolve to get stuck or to blow up the required memory.
So now I am using the following values (instead of 10^-6):
xBound = 12; yBound = 12; zBound = 5;nVak = 1.; nMat = 1.5; λ = 0.8; k = (2 π)/λ;n0 = 1.48 (*this is the "arbitrarily chosen" because yet unknown propagation constant*);waveGuideR = 1; (*wave guide radius*)n[x_, y_] := If[x^2 + y^2 <= waveGuideR^2, nMat, nVak];(*let's have a circular waveguide with refractive index of 1.5*)theoReflCoeff = 10.^-2;(*1/theoReflCoeff is the theoretical dampingcoefficient of the PML layer*)pmlWidth = 1;(*size of the PML-layer*)αx[x_] := Piecewise[{{1 - I 3 λ (x + (xBound - pmlWidth))^2/(4 π nVak pmlWidth^3) Log[1/theoReflCoeff], x < -(xBound - pmlWidth)}, {nVak, -(xBound - pmlWidth) <= x <= (xBound - pmlWidth)}, {1 - I 3 λ (x - (xBound - pmlWidth))^2/(4 π nVak pmlWidth^3) Log[1/theoReflCoeff], x > (xBound - pmlWidth)}}];αy[y_] := Piecewise[{{1 - I 3 λ (y + (yBound - pmlWidth))^2/(4 π nVak pmlWidth^3) Log[1/theoReflCoeff], y < -(yBound - pmlWidth)}, {nVak, -(yBound - pmlWidth) <= y <= (yBound - pmlWidth)}, {1 - I 3 λ (y - (yBound - pmlWidth))^2/(4 π nVak pmlWidth^3) Log[1/theoReflCoeff], y > (yBound - pmlWidth)}}];
The PML - coordinate transformation is formed such that outside the PML layer the derivatives are multiplied by one and inside the PML are multiplied by a complex value (so that they should cause damping on the wave). For clarity the imaginary part and the absolute part of the PML layer is shown in the following:
Of course the Helmholtz equation in input line 6 is actually a vector in 3 dimensions. However, I am interested only in the evolution of the field in $x$ and $y$ direction. That is why I will neglect the calculation for the field in $z$-dimension. (Here I am not totally sure that what I am doing is physically correct, since I chose for $\psi_z$ to be zero in the initial calculation, but due to the
Curl-operation I still get the contribution in the z-dimension. Anyway, if plug
eqs[[3]] into
NDSolve Mathematica tells me that the system is overdetermined).
The 2 coupled differential equations that have to be solved are then:
diffEq1 = eqs[[1]] == 0;diffEq2 = eqs[[2]] == 0;
Now I define at the edges (after the PML layers) periodic boundary conditions for $x$ and $y$ directions and fields.
boundDirectionXfieldX = ψx[-xBound, y, z] == ψx[xBound, y, z];boundDirectionYfieldX = ψx[x, -yBound, z] == ψx[x, yBound, z];boundDirectionXfieldY = ψy[-xBound, y, z] == ψy[xBound, y, z];boundDirectionYfieldY = ψy[x, -yBound, z] == ψy[x, yBound, z];
Furthermore I need to define the starting field that is launched into the waveguide. In this case I assume the y-component of the field to be zero, because due to the coupling of the equations also higher and more complicated transversal field modes should develop, as they indeed do in real experiments. Since I am looking only for solutions that will be guided it is unimportant what kind of initial field I choose. (Even if it is the wrong field distribution for the correct $n0$, a stable distribution in the waveguide should evolve.)
startCondDirectionZfieldX = ψx[x, y, 0] == E^(-((x^2 + y^2)/waveGuideR^2));startCondDirectionZfieldY = ψy[x, y, 0] == 0;
Because there is a second derivative of $\psi$ in $z$ - direction, I need additionally a Neumann - boundary condition for the field launched into the waveguide at $z=0$. I think this is physically correct, because if the waveguide is infinite in z-direction and I manage somehow to generate a field with the above shape along an extended length of the waveguide then all the physical effects should still occur after this. (Correct me if I'm wrong here)
neumannCondFieldX = Derivative[0, 0, 1][ψx][x, y, 0] == 0;neumannCondFieldY = Derivative[0, 0, 1][ψy][x, y, 0] == 0;
With this I can set up an equation system for NDSolve to solve. Additionally I used EvaluationMonitor to have at least an inkling of where NDSolve currently is, as well as a MemoryConstrained evaluation limited to 13 GB of RAM (but I don't think it works the way I implemented it)
MemoryConstrained[ Monitor[Fkt = {ψx, ψy} /. First@NDSolve[{diffEq1, diffEq2, boundDirectionXfieldX, boundDirectionYfieldX, boundDirectionXfieldY, boundDirectionYfieldY, startCondDirectionZfieldX, startCondDirectionZfieldY, neumannCondFieldX, neumannCondFieldY}, {ψx, ψy}, {z, 0., zBound}, {x, -xBound, xBound}, {y, -yBound, yBound}, Method -> {"MethodOfLines","SpatialDiscretization" -> {"TensorProductGrid", "DifferenceOrder" -> "Pseudospectral"}}, EvaluationMonitor :> (stepz = z)], AngularGauge[Dynamic[stepz/zBound], {0, 1}, GaugeLabels -> Automatic]], 13 2^30]
The most puzzling part during is the error message:
NDSolve::mxsst: Using maximum number of grid points 100 allowed by the MaxPoints or MinStepSize options for independent variable y.
This is where the strange things happen and
I would like to know how to arrive at a proper well-behaved solution: Mathematica gives me an answer in spite of the above error message. The solution is not well behaved because it changes depending on how large I choose the size of the boundaries. Even more troubling is the fact that for some combinations of the boundary sizes the Mathematica kernel hangs itself up as it uses almost the full available memory. The obvious thing to do would be to change the number of grid points. However, if I choose a smaller or larger number of "MaxPoints" than the default 100, NDSolve always runs into a memory limit. I wonder how this can be if I choose a smaller size? I would have thought that in such a case the solution would just be less precise? The setting "DifferenceOrder"->"Pseudospectral" seems to be paramount. Anything else runs into the above memory problems. Only thanks to the postings of Complex valued 2+1D PDE Schrödinger equation, numerical method for `NDSolve`? was I able to get any result at all.
Here is the result calculated and displayed:
zSlices = Table[Plot3D[Abs[Fkt[[1]][x, y, z]], {x, -xBound, xBound}, {y, -yBound, yBound}, PlotRange -> {0, All}, PlotPoints -> 200], {z, 0, zBound, zBound/6.}];Export["UnstableSolution.gif", zSlices, AnimationRepetitions -> Infinity, "DisplayDurations" -> .4]
Edit: Another culprit in my calculations seems to be the PML itself. If I discard the PML and use the normal Helmholtz-equation without any coordinate transformations and just pure periodic boundary condition I get a physically plausible solution:
But my joy is slightly marred by the fact that since I want to find stable propagating solutions in the waveguide, I must somehow get rid off the "reflected noisy waves". I would greatly appreciate if somebody could help me with the perfectly matched layer conditions. Thank you!
Edit: you can probably skip the next part, because as long as my university doesn't give me access to Mathematica 10.4 I will be stuck as far as the Finite Element Method is concerned. But it would still be cool to know whether the Finite Element Method does a better job with the PML conditions than the pseudospectral decomposition I'm forced to use above.. ;-)
Ok, the next thing I thought of was to use the new capabilities of Mathematica concerning Finite Elements for the calculation, because probably the observed issues are caused by the discontinuities of the waveguide or by the PML transformation of the Cartesian coordinates. In such a case the Method in NDSolve must be changed as in the following, if I understand the Mathematica help correctly. Although I had to change the boundary conditions to be zero instead of periodic and the starting condition to be 0 at the boundaries, to be consistent everywhere...
Fkt = {ψx, ψy} /. First@NDSolve[{diffEq1, diffEq2, ψx[-xBound, y, z] == ψx[xBound, y, z] == 0, ψx[x, -yBound, z] == ψx[x, yBound, z] == 0, ψy[-xBound, y, z] == ψy[xBound, y, z] == 0, ψy[x, -yBound, z] == ψy[x, yBound, z] == 0, ψx[x, y, 0] == If[x^2 + y^2 <= waveGuideR^2, 1, 0], ψy[x, y, 0] == 0, Derivative[0, 0, 1][ψx][x, y, 0] == 0, Derivative[0, 0, 1][ψy][x, y, 0] == 0}, {ψx, ψy}, {z, 0., 20 zBound}, {x, -xBound, xBound}, {y, -yBound, yBound}, Method -> {"PDEDiscretization" -> {"MethodOfLines", "SpatialDiscretization" -> "FiniteElement"}}]
Now I get two new errors:
CompiledFunction::cfex: Could not complete external evaluation at instruction 10; proceeding with uncompiled evaluation. NDSolve::femdpop: The FEMStiffnessElements operator failed.
where one of them has already been asked about in Error message for FEMStiffnessElements. @ilian mentioned that this problem has been resolved in Mathematica 10.4.
The problem is that the newest Mathematica version my university is able to provide is 10.3 (I don't know how long it will take for the data processing center to distribute the 10.4 version).
So currently I have no opportunity to see whether 10.4 can solve my problem, so that I should just wait a little bit, or whether I have to find a different way, e.g. to use COMSOL or something similar, with which I'm not really familiar...
Thanks for bearing with me and my rantings up to the end!
|
This question is a continuation of the discussion at How can a linear operator on DFT vector produce the same vector using only half of the DFT vector?, where the higher level motivation is better explained, in case there is interest.
Question:
How to take the positive frequencies from an hermitian symmetric DFT vector, take the IDFT of this (which is complex), multiply by a carrier to shift the spectrum, resample, take the real part of it, and obtain the same IDFT result as the result that would be obtained by the IDFT of the original hermitian symmetric symbol?
Details:
I would really appreciate to have my understanding and the concepts I mention checked.
Consider that the DFT vector $\mathbf{X}$ of a certain signal is hermitian symmetric and represents a bandwidth of $B$ Hz. If I consider a signal represented by the positive frequencies (from DC to Nyquist), it is the same as shifting the positive side of the spectrum, and centralizing it at frequency 0. Now, the new spectrum is not hermitian symmetric anymore, yields a complex IDFT and represents a Bandwidth of $\frac{B}{2}$.
To use a numerical example, for $N=8$, considering from DC to Nyquist, that would give 5 positive frequencies and 3 negative. Then, I would shift the 5 DFT samples (correspondent to the positive frequencies) to the center of the spectrum. In this case, the new DFT size would be odd, and the Nyquist frequency (at k= 5/2 = 2.5) wouldn't fall on a DFT sample
At this point, I have a signal bandlimited to $\frac{B}{2}$. Then, if I multiply this signal with a carrier of frequency $\frac{B}{2}$, I shift the spectrum again to the positive and negative side, such that the original spectrum is reconstructed. However, since this operation would increase the bandwidth back to $B$ Hz, the product $x[n]*c[n]$ would need to have its sampling frequency increased by a factor of 2, so that the frequency spectrum can actually represent the spectrum of the signal without aliasing.
The carrier should be an exponential:
\begin{align} c[n] = e^{j2\pi n k_0/N} \end{align} where $k_0$ is the number of DFT samples to shift. In this case, $k_0 = \frac{N}{2}$, such that: \begin{align} c[n] = e^{j \pi n} \end{align}
Finally, after doing resampling, I should take the real part of the signal and that should be equal to the original IDFT (the one obtained by simpling taking the IDFT of the original hermitian symmetric symbol).
However, I am confused mainly by the following issues:
1. From DC to Nyquist there are 5 DFT samples. That would yield a complex IFFT with size 5. Then, I would multiply point-wise by 5 samples of the carrier and upsample by 2, resulting in 10 samples. On the other hand, the original IDFT has length 8.
2. I'm not sure about the rationale behind taking the real part.
3. I'm not sure if the exponential is the correct carrier.
MATLAB implementation:
Here is the code I'm trying to make work.
N=4;
A=rand(N-1,1) + j*rand(N-1,1); % Random positive carrier
X=[0; A; 0; flipud(conj(A))]; % Hermitian symmetric DFT vector
x = ifft(X)
% Positive side of the spectrum shifted to the center
Xp = [X((N/2 + 1):N+1); X(1:N/2);];
% Complex IFFT:
xp = ifft(Xp);
% Carrier
n=(0:N);
k0 = (N+1)/2;
carrier = exp(j*2*pi*k0*n/(N+1)).';
% Modulation
x_up = xp.*carrier;
% Increase the sampling rate by a factor of two
x_re = resample(x_up,2,1);
real(x_re)
I basically want the two vectors that are printed to the workspace to be equal:
x and real(x_re)
|
Suppose $f(x):=a_0+a_1x+\cdots+a_nx^n$ is a polynomial in $\mathbb{Z}[x]$ and $|a_i|\leq M$ for each $i=0,\ldots ,n.$ Now suppose $g(x)$ is a factor of $f(x)$ in $\mathbb{Z}[x]$, then is it possible to get a bound on the coefficients of $g(x)$ in terms of $M$ i.e. if $g(x)=\sum_{i=0}^mb_ix^i$ then does there exist some $M^\prime $, which depends only on $M,n$ and $m$, such that $|b_i|\leq M^\prime$ for all $i=0,\ldots ,m$ ?
Gelfond's inequality is probably what you want here; see for example my book with Hindry,
Diophantine Geometry: An Introduction, Proposition B.7.3. I'll state it for polynomials in $\mathbb{Z}[X_1,\ldots,X_m]$, although there's a version that's true over $\overline{\mathbb{Q}}$. The statement uses the projective height, so for a polynomial $f$ with coefficients $a_i\in\mathbb{Z}$, we let $$H(f) = \frac{\max|a_i|}{\gcd(a_i)}.$$ Then Proposition B.7.3 (Gelfond's inequality) Let $f_1,\ldots,f_r\in \mathbb{Z}[X_1,\ldots,X_m]$, and for $1\le i\le m$, let $d_i$ denote the $X_i$ degree of $f_1f_2\cdots f_r$. Then$$ H(f_1)H(f_2)\cdots H(f_r) \le e^{d_1+\cdots+d_m}H(f_1f_2\cdots f_r).$$
For the OP's question, we have $f$ is divisible by $g$, say $f=gg'$, so $$ H(g) \le H(g)H(g') \le e^{\deg f}H(gg') = e^{\deg f}H(f). $$
Completely explicit bounds are given by Granville (bounding the coefficients of the divisor of a given polynomial, 1990, Monat. Math).
Define the height of a polynomial $f$ as $h(f)$, the max of the absolute values of its coefficients. Then (see e.g. Bombieri-Gubler Thm 1.7.2) $h(fg) \ge c(\deg f, \deg g)h(f)h(g)$, for some function $c(.,.)$ of the degrees. This gives what you want, except that $c$ is not very explicit.
I do not see clearly how to use the information that the coefficients are integers, apart from the fact that $1\le |b_m|\le |a_n|$; therefore the following estimate is possibly non-optimal. However, it gives a simple and explicit bound that also holds for complex coefficients.
By a classic estimate on the zeros of a polynomial, all (complex) roots of $f$ are bounded in modulus by $$1+\max_{0\le i < n}\frac{|a_i|}{|a_n|}\le M+1\, ,$$ because $|a_n|\ge1$. Since $b_i$ is, up to sign, an elementary symmetric polynomial of some $m$ of the roots of $f$, times $b_m$, which is at most $M$ in size, we have
$$|b_i|\le M {m \choose i}(M+1)^i\, ,$$ so for instance $$\sum_{i=0}^n|b_i|\le M':=M(M+2)^n\, .$$
Any such bound will depend only on $n$ and $m$ since once can take for instance $f(x) = x^n-1$ (so $M = 1$). Then depending on $n$, the cyclotomic factor $\Phi_n(x)$ has coefficients as large as you like: for example $\Phi_{105}(x)$ has coefficients of modulus $2$, $\Phi_{385}(x)$ has coefficients of modulus $3$, for further references see OEIS sequence A013594 (http://oeis.org/A013594).
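A quick check of those cyclotomic examples (assuming Python with SymPy):

from sympy import cyclotomic_poly, Poly
from sympy.abc import x

for n in (105, 385):
    height = max(abs(c) for c in Poly(cyclotomic_poly(n, x), x).all_coeffs())
    print(n, height)   # 105 -> 2 and 385 -> 3, although x^n - 1 has height 1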
Obviously yes, since there are only finitely many such $f,g$ for each $M,n,m$.
|
Answer
91.9 Hz
Work Step by Step
We find the resonant frequency: $f = \frac{1}{2\pi}\sqrt{\frac{k}{m}}\\ f = \frac{1}{2\pi}\sqrt{\frac{2.5 \times 10^3}{7.5\times10^{-3}}}=\fbox{91.9 Hz} $
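A one-line numerical check of the value above (assuming Python):

import math
k, m = 2.5e3, 7.5e-3                       # spring constant in N/m, mass in kg
print(math.sqrt(k / m) / (2 * math.pi))    # ~91.89 Hz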
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
I am having some trouble knowing how to correctly start a problem of finding the Fourier Coefficients using complex exponential form. The problem is given below:
$$g_1(t)=\begin{cases} 1,~~\qquad t<1\\2-t,\quad t\ge 1\end{cases}$$
and
$$g_2(t)=\begin{cases} 1+2t, \qquad -0.5<t\le 0.5\\\tfrac{(7-2t)}{3},~~~\,\qquad0.5<t<3.5\\0,~~~\qquad\qquad \mbox{elsewhere}\end{cases}$$
over the interval of $0\le t\le2$.
General complex Fourier series representation is: $g(t)=\displaystyle\sum_{n=-\infty}^{\infty} C_ne^{int}$
I tried using the formula to compute the DC term or zeroth term from the following:
$C_0=\displaystyle\frac{1}{T_0}\int_{<T_0>} x(t)e^{int}\,\mathrm{d}t$, where $n=0$. (zero coefficient, DC term)
The result I get for $g_1(t)$ from this is: $C_0=\displaystyle \frac{1}{2}\int_0^2 1\,\mathrm{d}t+\frac{1}{2}\int_0^2 (2-t)\,\mathrm{d}t=2$, where $T_0$ is the interval in which it runs from $[0,2]$, but not sure if its correct and how to go about setting up for the $C_n$ term.
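For cross-checking that arithmetic, here is a short symbolic evaluation of the defining integral of the zeroth coefficient over $0\le t\le 2$ (assuming Python with SymPy; it only evaluates $\frac{1}{T_0}\int_0^2 g_1(t)\,\mathrm{d}t$ piecewise):

from sympy import symbols, integrate

t = symbols('t')
C0 = (integrate(1, (t, 0, 1)) + integrate(2 - t, (t, 1, 2))) / 2
print(C0)   # DC term of g1 over the period [0, 2]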
I have seen different forms of how to compute the coefficients and the general series representation in books, but I do not know which ones to appropriately apply to this problem. The other problem is defining a period for a constant function such as $1$ in the case of $g_1(t)$, since this is needed to compute the coefficients of the series.
|
I've been trying to prove it for a while, but can't seem to get anywhere.
$$\frac{1}{\sin^2\theta} + \frac{1}{\cos^2\theta} = (\tan \theta + \cot \theta)^2$$
Could someone please provide a valid proof?
I am not allowed to work on both sides of the equation.
Work so far:
RS:
$$ \begin{align} & \frac{\sin^2\theta}{\cos^2\theta} + \frac{\cos^2\theta}{\sin^2\theta} + 2 \\[10pt] & = \frac{\sin^4\theta}{(\cos^2\theta)(\sin^2\theta)} + \frac{\cos^4\theta}{(\sin^2\theta) (\cos^2\theta)} + \frac{(\sin^4\theta)(\cos^2\theta)}{(\sin^2\theta)(\cos^2\theta)} + \frac{(\sin^2\theta)(\cos^4\theta)}{(\sin^2\theta)(\cos^2\theta)} \\[10pt] & = \frac{\sin^4\theta + \cos^4\theta + (\sin^4\theta)(\cos^2\theta) + (\sin^2\theta)(\cos^4\theta)}{(\cos^2\theta)(\sin^2\theta)} \end{align} $$
I am completely lost after this.
|
Definition: Continuous Mapping (Metric Space)
Let $M_1 = \left({A_1, d_1}\right)$ and $M_2 = \left({A_2, d_2}\right)$ be metric spaces.
Let $f: A_1 \to A_2$ be a mapping from $A_1$ to $A_2$.
Let $a \in A_1$ be a point in $A_1$.
Then $f$ is $\left({d_1, d_2}\right)$-continuous at $a$ if and only if:
$\forall \epsilon \in \R_{>0}: \exists \delta \in \R_{>0}: \forall x \in A_1: \map {d_1} {x, a} < \delta \implies \map {d_2} {\map f x, \map f a} < \epsilon$
Let $M_1 = \left({A_1, d_1}\right)$ and $M_2 = \left({A_2, d_2}\right)$ be metric spaces.
Let $f: A_1 \to A_2$ be a mapping from $A_1$ to $A_2$.
Let $Y \subseteq A_1$.
By definition, $\left({Y, d_Y}\right)$ is a metric subspace of $A_1$.
Let $a \in Y$ be a point in $Y$.
Then $f$ is
$\left({d_Y, d_2}\right)$-continuous at $a$ if and only if:
$\forall \epsilon > 0: \exists \delta > 0: d_Y \left({x, a}\right) < \delta \implies d_2 \left({f \left({x}\right), f \left({a}\right)}\right) < \epsilon$
Similarly, $f$ is $\left({d_Y, d_2}\right)$-continuous if and only if:
$\forall a \in Y: f$ is $\left({d_Y, d_2}\right)$-continuous at $a$
Also known as
A mapping which is
continuous from $\left({A_1, d_1}\right)$ to $\left({A_2, d_2}\right)$ can also be referred to as $\left({d_1, d_2}\right)$-continuous.
|
This is a pretty interesting problem. First, define the right-hand side.
f[θ_, μ_] := Sin[θ]/(μ + Cos[θ]);
If $|\mu|>1$, there are only the proper equilibria you noted at $\theta=n\pi$:
Plot[f[θ, 1.5], {θ, 0, 4 π}, AxesLabel -> {"θ", "θ'"}]
However if $|\mu|<1$ the denominator might equal zero, resulting in the following:
Plot[f[θ, 0.5], {θ, 0, 4 π}, AxesLabel -> {"θ", "θ'"}]
Those $\theta$ values where the denominator equals zero must also behave like stable equilibria, since $d\theta/dt>0$ below and $d\theta/dt<0$ above. I'll call them
improper equilibria for lack of a better term. Any dynamicists, please comment on the real name. They occur at $\theta=\arccos(-\mu)$.
The following plots the proper and improper equilibria. I use linearization to assess the stability of the proper equilibria and just assume the improper ones are stable. Solid = stable, dashed = unstable.
λ[θ_?NumericQ, μ_?NumericQ] = D[f[θ, μ], θ];
eq = ContourPlot[{
ConditionalExpression[f[θ, μ], λ[θ, μ] < 0] == 0,
ConditionalExpression[f[θ, μ], λ[θ, μ] > 0] == 0,
(μ + Cos[θ]) == 0}, {μ, -2, 2},
{θ, -10^-5, 4 π + 10^-5}, ContourStyle -> {Black, {Black, Dashed}},
MaxRecursion -> 3, FrameLabel -> {μ, θ}]
Finally, add some flow arrows.
streams = VectorPlot[{0, Sign[f[θ, μ]]}, {μ, -2, 2}, {θ, 0, 4 π},
VectorMarkers -> "Arrow", VectorScale -> Tiny,
VectorPoints -> Flatten[Table[{μ, θ}, {μ, -1.8, 1.8, 0.2}, {θ, 0.25 π, 3.75 π, 0.5 π}], 1]];
Show[streams, eq, PlotRange -> {{-2, 2}, {0, 4 π}}, FrameLabel -> {μ, θ}]
Verifying the behavior of the improper equilibria with
NDSolve requires some trickery.
Edit: My previous attempt at getting
NDSolve to work was wrong, so I removed it.
|
What makes it occur? How do the protons and nucleus know that they have to lose mass to produce energy...? And is the mass of a compressed spring more than that of an uncompressed one? Does a body which has greater energy have more mass than one which has less energy?
"How do the protons and nucleus know that they have to lose mass to produce energy...?"
Notwithstanding the comments that this is a silly philosophy question, I think it is a good question for precisely that reason: you know that protons cannot "know" things and therefore we must find an explanation not involving "knowledge". It sounds as though you're thinking something along the lines of everything's getting to the end of the reaction and saying "hey, we need to have less mass now", which of course you know is preposterous. So maybe it helps to think of fusion / fission as a
process: the protons are in constant contact with the process, and it is their loss of energy that "drives" the process (I use the quotes because the energy loss is necessary, but not sufficient, to make the process happen). At the risk of being too colloquial, you can almost say the protons use a small piece of themselves up in completing the reaction.
(Rest) Mass is simply the property that something has if it has a nonzero energy content as measured from a frame that the particle is at rest relative to. As in the other answers, this notion is neatly expressed in the equation (for the "four-momentum's Minkowski norm"(see footnote)) $E^2 - p^2\, c^2 = m^2\,c^4$: if you are in a rest frame relative to a particle, then its momentum $p$ is nought and it has an energy content if and only if $m\neq0$. So yes, mass really does measure an energy content. So after having taken part in the process, the protons have a smaller energy content than before as measured from a relatively stationary frame, so that they have less rest mass.
But not only does the property rest mass measure energy. It also gives rise to the property of inertia, or resistance of state of motion change to external force,
i.e. to the proportionality constant $m$ in Newton's second law. For instance, if we confine a quantity of light inside a perfect, lossless resonant cavity, we can show that the system's inertia increases by an amount $E/c^2$ when we confine the light, where $E$ is the light's energy. I talk about the thought experiment that shows this in my answer here. Indeed, in the same way, most of the mass in your body is owing to the massless, but confined, gluons in the nuclei of your body's atoms.
For accuracy, I should say that particle physics thinks of many conversions as noncontinuous events: simply branches in a Feynman diagram and does not try to penetrate the "process" or think of it as a continuous evolution as I have implied. But the key idea is that everything is connected and interacting, with the transfer of energy. For example, the fusion reaction of four protons to yield helium in the Sun is thought of as three discrete events (the $pp$ chain): two protons fuse to give deuterium ($p+p\to{}^2\mathrm{H}+e^++\nu_e$), the deuterium captures a proton (${}^2\mathrm{H}+p\to{}^3\mathrm{He}+\gamma$), and two ${}^3\mathrm{He}$ nuclei fuse to give helium (${}^3\mathrm{He}+{}^3\mathrm{He}\to{}^4\mathrm{He}+2p$).
The proposal of this process as the source of energy in stars led to the award of the 1967 Nobel prize to Hans Bethe
Footnote: I appreciate this phrase is likely to be jargon to you at this stage - I'm not trying to be a bothersome git- I use the phrase because you might like to use it as a phrase to google on as your understanding builds and you want to know where it comes from. See here.
A compressed (or stretched) spring conceptually has a greater mass than does a spring in its minimum potential energy state. Similarly, a hot piece of iron conceptually has a greater mass than does a colder piece of iron with the same number of iron atoms as the hot piece of iron. I wrote "conceptually" in the above because the change in mass is unmeasurably small.
The change in mass is measurable when it comes to atomic nuclei. This change in mass is the basis for fission power (traditional nuclear reactors and the bombs that ended WWII) and fusion power (e.g., the Tsar Bomba, fusion reactors such as ITER, and our Sun).
Consider the processes by which a star converts four protons into an alpha particle. The alpha particle that results from these processes has a mass of 3.9726 protons. What happened to the 4-3.9726=0.0274 proton masses? (Answer: It turned into energy. That 0.0274 proton mass is equivalent to 25.7 MeV. That's the source of energy that makes the Sun shine.) The alpha particle is in a reduced energy state compared to that of the four individual protons.
Mass and energy are distinct concepts in classical physics. This is no longer true in relativity theory and in high energy particle physics. The distinction becomes blurred; mass and energy are flip sides of the same coin. Mass is just one of a number of forms of energy. Mass is bound energy.
So what makes this occur? It's the nuclear force, aka the residual strong force. Physicists have known since early in the 20th century that a rather powerful interaction was needed to bind together the protons and neutrons that form the nucleus of an atom. Electromagnetism is not the answer; it would make the nucleus blow apart. Whatever binds the nucleus together had to be much stronger than electromagnetism. It turns out that the nuclear force that binds the protons and neutrons that comprise an atomic nucleus is a residual effect of the strong force that binds the quarks that comprise individual protons and neutrons (but that wasn't known until the 1960s).
With regard to your second question, "How do the protons and nucleus know that they have to lose mass to produce energy?", it's best not to go there. Protons and neutrons don't know anything. They obey the laws of physics. Why? That's philosophy. Philosophy questions are not appreciated at this site.
With regard to your first question, "Why does it occur?", it's best to translate that to "What makes it occur?" This is an answerable question. The answer lies in the laws of physics. Physicists have developed detailed models of the forces that hold a nucleus together. "Why" questions are the purview of metaphysics (aka philosophy). Translating a "Why" question into a "What" question makes the question scientific.
This is a case in which anthropomorphizing language like
"How do the protons and nucleus know [...]" is not merely unhelpful but positively harmful. I'd recommend not doing it even in the privacy of your own thoughts.
Understand that the mass defect is a feature of the system (the nucleus) as a whole, not of any one part of it.
The system has the property of having less total energy than the component parts would have if they were free---something that is true of all bound systems and is in fact what it means for the system to be bound.
Einstein tells us that mass is a kind of energy, so less total energy (in the nuclear rest frame) is equivalent to saying less total mass. The difference is called the "mass defect".
People will occasionally divide the mass defect up by the number of nucleons and talk about defect per nucleon but it still does not mean that protons have some mechanism to simply ignore some of their mass: the
system is lighter, but not the components. 1
There is nothing special about nuclear systems in this regard. The solar system treated as a whole is less massive (by an unmeasurably small fraction) than the total mass of the sun, the planets, the moons and all the smaller hangers-on taken separately,
simply because the system has less total energy in its current configuration.
1 Nuclear systems are a little tricky here because the binding force (the residual strong attraction) actually alters the nature of the bound proton a bit---its form factors change---but it is still useful to talk of these states as "protons" and "neutrons".
(The other answers are fine, but I wanted to answer this question for myself once and for all as well.)
Why does it occur?
The mass defect results from the energy conservation in special relativity: in some process the whole energy
including the rest energy is conserved, where the rest energy is
$$E=m_0 c^2 ~.$$
Now, usually you write the energy conservation -- for example in a billiard ball collision -- like this:
$$ E_{kin} = E'_{kin}+ \text{other energies}~,$$
where the prime indicates a quantity after the process.
Other energies means that in the process other energies besides the kinetic energies take part in the process. If you collide the two billiard balls for example, then you would have a $+\Delta E_{Heat}$ on the right side since a bit of kinetic energy is lost as heat when the balls collide.
In special relativity however, you write it like this:
$$ m_{0}c^2+ E_{kin} = m'_{0}c^2+ E'_{kin}+ \text{other energies} ~.$$
Next consider an atom in a bound state. As dmckee told you, the system is in a lower energy state than the combined free states. This tells you that you have to put energy
into the bound system if you want to break it apart. On the other hand energy is set free when you put the separated components together.
For our energy conservation equation we thus get
$$ M_{0}c^2 = m'_{0}c^2+ \Delta E_{kin}+\Delta E ~,$$
where $M_0$ is the sum of the masses of the components, $m'_0$ is the mass of the bound system and $\Delta E_{kin}$ is the difference in the kinetic energy between the two components and the bound system. Since the energy is set free during the process, $\Delta E$ is on the right hand side of the equation. To make things more clear, let's just neglect the kinetic energies, so that we have
$$ M_{0}c^2 = m'_{0}c^2+ \Delta E ~.$$
When we solve this equation for $m'_0$ we know how much the bound system weighs:
$$m'_{0}=M_{0}-\frac{\Delta E}{c^2}~, $$
which is obviously less than the combined masses.
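To put rough numbers on this (a back-of-the-envelope check using the solar-fusion figures quoted in another answer above, with a proton rest energy of about $938.3\ \mathrm{MeV}$):
$$m'_0 = M_0-\frac{\Delta E}{c^2} = 4m_p-\frac{25.7\ \mathrm{MeV}}{c^2} \approx 4m_p-0.0274\,m_p = 3.9726\,m_p\,,$$
which reproduces the alpha-particle mass of about $3.9726$ proton masses mentioned earlier.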
This is where the mass defect comes from! It comes from the fact that mass and energy are equivalent and that in a process the whole energy,
including the rest energy must be conserved. Conserved however means not that the energies involved can't change into other energies, just like in classical physics kinetic energy can change into potential energy or heat. In this case the energy set free (the binding energy) is converted from rest energy which the bound system misses.
How do the protons and nucleus know that they have to lose mass to produce energy...?
It's the other way around: the whole process sets energy free; for the energy conservation to hold, the bound system has to lose mass. You could think of this like this:
You have two atoms floating in space. Now you draw a box around it. This box has now the content of 100 energy dollars, those are distributed on the rest energies of the two atoms. Now the two atoms form a bound state and in the process the box loses energy dollars since energy is set free. Suddenly you have only 75 energy dollars left in the box. Since you have only 75 dollars to distribute to the rest energy of the bound system, the bound system has a mass of only 75 energy dollars $/c^2$.
does a body which has a greater energy has more mass than the one which has a less energy?
Yes. Again, this is because of the equivalence of mass and energy: since mass and energy are interchangeable in processes, you increase the mass of an object when you increase its energy. That's also the case for the spring.
Another way to say it: the zero point of energy is now not "$0$" but $m_0c^2$, so when you put energy (in any form) into the system, the new zero point of energy is at $m'_0c^2$, with $m'_0>m_0$.
|
Sub Gaussian
This post is based on Wainwright (2019).
A random variable $X$ with mean $\mu=\E X$ is sub-gaussian if there is a positive number $\sigma$ such that $$ \E [e^{\lambda(X-\mu)}]\le e^{\lambda^2\sigma^2/2}\;\forall \lambda\in \IR\,. $$
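The canonical example (a standard fact, not specific to Wainwright's treatment): a Gaussian $X\sim N(\mu,\sigma^2)$ is sub-Gaussian with parameter $\sigma$, since its moment generating function can be computed exactly,
$$\E\big[e^{\lambda(X-\mu)}\big]=e^{\lambda^2\sigma^2/2}\quad\forall\lambda\in\IR\,,$$
so the defining inequality holds with equality. A bounded example: a Rademacher variable ($\pm1$ with probability $1/2$ each) satisfies $\E e^{\lambda X}=\cosh\lambda\le e^{\lambda^2/2}$, so it is sub-Gaussian with $\sigma=1$.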
Several equivalent characterizations of sub-Gaussian variables.
Given any zero-mean random variable $X$, the following properties are equivalent:
1. There is a constant $\sigma$ such that $\E e^{\lambda X}\le e^{\lambda^2\sigma^2/2}$ for all $\lambda \in \IR$.
2. There is a constant $c$ and a Gaussian random variable $Z\sim N(0,\sigma^2)$ such that $P(\vert X\vert\ge s)\le cP(\vert Z\vert \ge s)$ for all $s\ge 0$.
3. There exists a number $\theta$ such that $\E X^{2k}\le\frac{(2k)!}{2^kk!}\theta^{2k}$ for all $k=1,2,\ldots$
4. We have $\E e^{\frac{\lambda X^2}{2\sigma^2}}\le \frac{1}{\sqrt{1-\lambda}}$ for all $\lambda \in [0, 1)$.
5. $\psi_2$-condition: $\exists a>0$ such that $\E e^{aX^2}\le 2$.
Refer to Subgaussian random variables: An expository note
|
It will come as no surprise that we can also do triple integrals—integrals over a three-dimensional region. The simplest application allows us to compute volumes in an alternate way.
To approximate a volume in three dimensions, we can divide the three-dimensional region into small rectangular boxes, each $\Delta x\times\Delta y\times\Delta z$ with volume $\Delta x\Delta y\Delta z$. Then we add them all up and take the limit, to get an integral: $$\int_{x_0}^{x_1}\int_{y_0}^{y_1}\int_{z_0}^{z_1} dz\,dy\,dx.$$ If the limits are constant, we are simply computing the volume of a rectangular box.
Example 17.5.1 We use an integral to compute the volume of the box with opposite corners at $(0,0,0)$ and $(1,2,3)$. $$\int_0^1\int_0^2\int_0^3 dz\,dy\,dx=\int_0^1\int_0^2\left.z\right|_0^3 \,dy\,dx =\int_0^1\int_0^2 3\,dy\,dx =\int_0^1 \left.3y\right|_0^2 \,dx =\int_0^1 6\,dx = 6. $$
Of course, this is more interesting and useful when the limits are not constant.
Example 17.5.2 Find the volume of the tetrahedron with corners at $(0,0,0)$, $(0,3,0)$, $(2,3,0)$, and $(2,3,5)$.
The whole problem comes down to correctly describing the region by inequalities: $0\le x\le 2$, $3x/2\le y\le 3$, $0\le z\le 5x/2$. The lower $y$ limit comes from the equation of the line $y=3x/2$ that forms one edge of the tetrahedron in the $x$-$y$ plane; the upper $z$ limit comes from the equation of the plane $z=5x/2$ that forms the "upper'' side of the tetrahedron; see figure 17.5.1. Now the volume is $$\eqalign{ \int_0^2\int_{3x/2}^3\int_0^{5x/2} dz\,dy\,dx &=\int_0^2\int_{3x/2}^3\left.z\right|_0^{5x/2} \,dy\,dx\cr &=\int_0^2\int_{3x/2}^3 {5x\over2}\,dy\,dx\cr &=\int_0^2 \left.{5x\over2}y\right|_{3x/2}^3 \,dx\cr &=\int_0^2 {15x\over2}-{15x^2\over4}\,dx\cr &=\left. {15x^2\over4}-{15x^3\over12}\right|_0^2\cr &=15-10=5.\cr }$$
Pretty much just the way we did for two dimensions we can use triple integration to compute mass, center of mass, and various average quantities.
Example 17.5.3 Suppose the temperature at a point is given by $T=xyz$. Find the average temperature in the cube with opposite corners at $(0,0,0)$ and $(2,2,2)$.
In two dimensions we add up the temperature at "each'' point and divide by the area; here we add up the temperatures and divide by the volume, $8$: $$\eqalign{ {1\over8}\int_{0}^2\int_{0}^2\int_{0}^2 xyz\,dz\,dy\,dx &={1\over8}\int_{0}^2\int_{0}^2\left.{xyz^2\over2}\right|_0^2\,dy\,dx ={1\over16}\int_{0}^2\int_{0}^2 xy\,dy\,dx\cr &={1\over4}\int_{0}^2\left.{xy^2\over2}\right|_0^2\,dx ={1\over8}\int_{0}^2 4x\,dx ={1\over2}\left.{x^2\over2}\right|_0^2 =1.\cr }$$
Example 17.5.4 Suppose the density of an object is given by $xz$, and the object occupies the tetrahedron with corners $(0,0,0)$, $(0,1,0)$, $(1,1,0)$, and $(0,1,1)$. Find the mass and center of mass of the object.
As usual, the mass is the integral of density over the region: $$\eqalign{ M&=\int_{0}^1\int_{x}^1\int_{0}^{y-x} xz\,dz\,dy\,dx =\int_{0}^1\int_{x}^1 {x(y-x)^2\over2}\,dy\,dx ={1\over2}\int_{0}^1 {x(1-x)^3\over3}\,dx\cr &={1\over6}\int_{0}^1 x-3x^2+3x^3-x^4\,dx ={1\over120}.\cr }$$ We compute moments as before, except now there is a third moment: $$\eqalign{ M_{xy} &= \int_{0}^1\int_{x}^1\int_{0}^{y-x} xz^2\,dz\,dy\,dx ={1\over360},\cr M_{xz} &= \int_{0}^1\int_{x}^1\int_{0}^{y-x} xyz\,dz\,dy\,dx ={1\over144},\cr M_{yz} &= \int_{0}^1\int_{x}^1\int_{0}^{y-x} x^2z\,dz\,dy\,dx ={1\over360}.\cr }$$ Finally, the coordinates of the center of mass are $\bar x=M_{yz}/M=1/3$, $\bar y=M_{xz}/M=5/6$, and $\bar z=M_{xy}/M=1/3$.
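If a computer algebra system is available, results like these are easy to cross-check. A minimal sketch in Mathematica (assuming the same integrands and limits as examples 17.5.2 and 17.5.4; iterators are listed outermost first, so later limits may depend on earlier variables):
Integrate[1, {x, 0, 2}, {y, 3 x/2, 3}, {z, 0, 5 x/2}]      (* 5, the tetrahedron volume of example 17.5.2 *)
Integrate[x z, {x, 0, 1}, {y, x, 1}, {z, 0, y - x}]         (* 1/120, the mass of example 17.5.4 *)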
Exercises 17.5
Ex 17.5.1 Evaluate $\ds\int_{0}^{1}\int_{0}^{x}\int_{0}^{x+y}2x+y-1 \,dz\,dy\,dx$. (answer)
Ex 17.5.2 Evaluate $\ds\int_{0}^{2}\int_{-1}^{x^2}\int_{1}^{y}xyz \,dz\,dy\,dx$. (answer)
Ex 17.5.3 Evaluate $\ds\int_{0}^{1}\int_{0}^{x}\int_{0}^{\ln y}e^{x+y+z}\,dz\,dy\,dx$. (answer)
Ex 17.5.4 Evaluate $\ds\int_{0}^{\pi/2}\int_{0}^{\sin\theta}\int_{0}^{r\cos\theta}r^2\,dz\,dr\,d\theta$. (answer)
Ex 17.5.5 Evaluate $\ds\int_{0}^{\pi}\int_{0}^{\sin\theta}\int_{0}^{r\sin\theta}r\cos^2\theta\,dz\,dr\,d\theta$. (answer)
Ex 17.5.6 Evaluate $\ds\int_{0}^{1}\int_{0}^{y^2}\int_{0}^{x+y}x\,dz\,dx\,dy$. (answer)
Ex 17.5.7 Evaluate $\ds\int_{1}^{2}\int_{y}^{y^2}\int_{0}^{\ln(y+z)}e^x\,dx\,dz\,dy$. (answer)
Ex 17.5.8 Compute $\ds\int_0^\pi\int_0^{\pi/2}\int_0^1 z\sin x+z\cos y\,dz\,dy\,dx$. (answer)
Ex 17.5.9 For each of the integrals in the previous exercises, give a description of the volume (both algebraic and geometric) that is the domain of integration.
Ex 17.5.10 Compute $\ds\int\int\int x+y+z\,dV$ over the region $x^2+y^2+z^2\le 1$ in the first octant. (answer)
Ex 17.5.11 Find the mass of a cube with edge length 2 and density equal to the square of the distance from one corner. (answer)
Ex 17.5.12 Find the mass of a cube with edge length 2 and density equal to the square of the distance from one edge. (answer)
Ex 17.5.13 An object occupies the volume of the upper hemisphere of $x^2+y^2+z^2=4$ and has density $z$ at $(x,y,z)$. Find the center of mass. (answer)
Ex 17.5.14 An object occupies the volume of the pyramid with corners at $(1,1,0)$, $(1,-1,0)$, $(-1,-1,0)$, $(-1,1,0)$, and $(0,0,2)$ and has density $x^2+y^2$ at $(x,y,z)$. Find the center of mass. (answer)
Ex 17.5.15 Verify the moments $M_{xy}$, $M_{xz}$, and $M_{yz}$ of example 17.5.4 by evaluating the integrals.
Ex 17.5.16 Find the region $E$ for which $\ds\tint{E} (1-x^2-y^2-z^2) \; dV$ is a maximum.
|
See section 4.4 to review some basic terminology about graphs.
A graph $G$ consists of a pair $(V,E)$, where $V$ is the set of vertices and $E$ the set of edges. We write $V(G)$ for the vertices of $G$ and $E(G)$ for the edges of $G$ when necessary to avoid ambiguity, as when more than one graph is under discussion.
If no two edges have the same endpoints we say there are no
multiple edges, and if no edge has a single vertex as both endpoints we say there are no loops. A graph with no loops and no multiple edges is a simple graph. A graph with no loops, but possibly with multiple edges, is a multigraph. The condensation of a multigraph is the simple graph formed by eliminating multiple edges, that is, removing all but one of the edges with the same endpoints. To form the condensation of a graph, all loops are also removed. We sometimes refer to a graph as a general graph to emphasize that the graph may have loops or multiple edges.
The edges of a simple graph can be represented as a set of two element sets; for example, $$(\{v_1,\ldots,v_7\},\{\{v_1,v_2\},\{v_2,v_3\},\{v_3,v_4\},\{v_3,v_5\},\{v_4,v_5\},\{v_5,v_6\},\{v_6,v_7\}\})$$ is a graph that can be pictured as in figure 5.1.1. This graph is also a
connected graph: each pair of vertices $v$, $w$ is connected by a sequence of vertices and edges, $v=v_1,e_1,v_2,e_2,\ldots,v_k=w$, where $v_i$ and $v_{i+1}$ are the endpoints of edge $e_{i}$. The graphs shown in figure 4.4.2 are connected, but the figure could be interpreted as a single graph that is not connected.
A graph $G=(V,E)$ that is not simple can be represented by using multisets: a loop is a multiset $\{v,v\}=\{2\cdot v\}$ and multiple edges are represented by making $E$ a multiset. The condensation of a multigraph may be formed by interpreting the multiset $E$ as a set.
The degree of a vertex $v$, $\d(v)$, is the number of times it appears as an endpoint of an edge. If there are no loops, this is the same as the number of edges incident with $v$, but if $v$ is both endpoints of an edge, namely, of a loop, then this contributes 2 to the degree of $v$. The
degree sequence of a graph is a list of its degrees; the order does not matter, but usually we list the degrees in increasing or decreasing order. The degree sequence of the graph in figure 5.1.2, listed clockwise starting at the upper left, is $0,4,2,3,2,8,2,4,3,2,2$. We typically denote the degrees of the vertices of a graph by $d_i$, $i=1,2,\ldots,n$, where $n$ is the number of vertices. Depending on context, the subscript $i$ may match the subscript on a vertex, so that $d_i$ is the degree of $v_i$, or the subscript may indicate the position of $d_i$ in an increasing or decreasing list of the degrees; for example, we may state that the degree sequence is $d_1\le d_2\le\cdots\le d_n$.
Our first result, simple but useful, concerns the degree sequence.
Theorem 5.1.1 In any graph, the sum of the degrees of the vertices is twice the number of edges: $\sum_{i=1}^n d_i=2|E|$.
Proof. Let $d_i$ be the degree of $v_i$. The degree $d_i$ counts the number of times $v_i$ appears as an endpoint of an edge. Since each edge has two endpoints, the sum $\sum_{i=1}^n d_i$ counts each edge twice.
An easy consequence of this theorem:
Corollary 5.1.2 The number of odd numbers in a degree sequence is even.
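As a quick illustration using the degree sequence quoted above:
$$0+4+2+3+2+8+2+4+3+2+2 = 32 = 2\cdot 16,$$
so that graph has $16$ edges, and the sequence contains exactly two odd entries (the two $3$s), an even number, as the corollary requires.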
An interesting question immediately arises: given a finite sequence of integers, is it the degree sequence of a graph? Clearly, if the sum of the sequence is odd, the answer is no. If the sum is even, it is not too hard to see that the answer is yes, provided we allow loops and multiple edges. The sequence need not be the degree sequence of a simple graph; for example, it is not hard to see that no simple graph has degree sequence $0,1,2,3,4$. A sequence that is the degree sequence of a simple graph is said to be
graphical. Graphical sequences have been characterized; the most well known characterization is given by this result:
Theorem 5.1.3 A sequence $d_1\ge d_2\ge \ldots\ge d_n$ is graphical if and only if $\sum_{i=1}^n d_i$ is even and for all $k\in \{1,2,\ldots,n\}$, $$\sum_{i=1}^k d_i\le k(k-1)+\sum_{i=k+1}^n \min(d_i,k).$$
It is not hard to see that if a sequence is graphical it has the property in the theorem; it is rather more difficult to see that any sequence with the property is graphical.
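As a small worked illustration (my own toy example, not from the text): the sequence $2,2,2$ has even sum $6$, and the condition holds for every $k$:
$$k=1:\ 2\le 0+\min(2,1)+\min(2,1)=2,\qquad k=2:\ 4\le 2+\min(2,2)=4,\qquad k=3:\ 6\le 6.$$
It is indeed graphical, being the degree sequence of a triangle.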
What does it mean for two graphs to be the same? Consider these three graphs: $$\eqalign{ G_1&=(\{v_1,v_2,v_3,v_4\},\{\{v_1,v_2\},\{v_2,v_3\},\{v_3,v_4\},\{v_2,v_4\}\})\cr G_2&=(\{v_1,v_2,v_3,v_4\},\{\{v_1,v_2\},\{v_1,v_4\},\{v_3,v_4\},\{v_2,v_4\}\})\cr G_3&=(\{w_1,w_2,w_3,w_4\},\{\{w_1,w_2\},\{w_1,w_4\},\{w_3,w_4\},\{w_2,w_4\}\})\cr}$$ These are pictured in figure 5.1.4. Simply looking at the lists of vertices and edges, they don't appear to be the same. Looking more closely, $G_2$ and $G_3$ are the same except for the names used for the vertices: $v_i$ in one case, $w_i$ in the other. Looking at the pictures, there is an obvious sense in which all three are the same: each is a triangle with an edge (and vertex) dangling from one of the three vertices. Although $G_1$ and $G_2$ use the same names for the vertices, they apply to different vertices in the graph: in $G_1$ the "dangling'' vertex (officially called a
pendant vertex) is called $v_1$, while in $G_2$ it is called $v_3$. Finally, note that in the figure, $G_2$ and $G_3$ look different, even though they are clearly the same based on the vertex and edge lists.
So how should we define "sameness'' for graphs? We use a familiar term and definition: isomorphism.
Definition 5.1.4 Suppose $G_1=(V,E)$ and $G_2=(W,F)$. $G_1$ and $G_2$ are
isomorphic if there is a bijection $f\colon V\to W$ such that $\{v_1,v_2\}\in E$ if and only if $\{f(v_1),f(v_2)\}\in F$. In addition, the repetition numbers of $\{v_1,v_2\}$ and $\{f(v_1),f(v_2)\}$ are the same if multiple edges or loops are allowed. This bijection $f$ is called an isomorphism. When $G_1$ and $G_2$ are isomorphic, we write $G_1\cong G_2$.
Each pair of graphs in figure 5.1.4 are isomorphic. For example, to show explicitly that $G_1\cong G_3$, an isomorphism is $$\eqalign{ f(v_1)&=w_3\cr f(v_2)&=w_4\cr f(v_3)&=w_2\cr f(v_4)&=w_1.\cr }$$
Clearly, if two graphs are isomorphic, their degree sequences are the same. The converse is not true; the graphs in figure 5.1.5 both have degree sequence $1,1,1,2,2,3$, but in one the degree-2 vertices are adjacent to each other, while in the other they are not. In general, if two graphs are isomorphic, they share all "graph theoretic'' properties, that is, properties that depend only on the graph. As an example of a non-graph theoretic property, consider "the number of times edges cross when the graph is drawn in the plane.''
In a more or less obvious way, some graphs are contained in others.
Definition 5.1.5 Graph $H=(W,F)$ is a
subgraph of graph $G=(V,E)$ if $W\subseteq V$ and $F\subseteq E$. (Since $H$ is a graph, the edges in $F$ have their endpoints in $W$.) $H$ is an induced subgraph if $F$ consists of all edges in $E$ with endpoints in $W$. See figure 5.1.6. Whenever $U\subseteq V$ we denote the induced subgraph of $G$ on vertices $U$ as $G[U]$.
A path in a graph is a subgraph that is a path; if the endpoints of the path are $v$ and $w$ we say it is a path from $v$ to $w$. A cycle in a graph is a subgraph that is a cycle. A
clique in a graph is a subgraph that is a complete graph.
If a graph $G$ is not connected, define $v\sim w$ if and only if there is a path connecting $v$ and $w$. It is not hard to see that this is an equivalence relation. Each equivalence class corresponds to an induced subgraph of $G$; these subgraphs are called the
connected components of the graph.
Exercises 5.1
Ex 5.1.1 The complement $\overline G$ of the simple graph $G$ is a simple graph with the same vertices as $G$, and $\{v,w\}$ is an edge of $\overline G$ if and only if it is not an edge of $G$. A graph $G$ is self-complementary if $G\cong \overline G$. Show that if $G$ is self-complementary then it has $4k$ or $4k+1$ vertices for some $k$. Find self-complementary graphs on 4 and 5 vertices.
Ex 5.1.2 Prove that if $\sum_{i=1}^n d_i$ is even, there is a graph (not necessarily simple) with degree sequence $d_1,d_2,\ldots,d_n$.
Ex 5.1.3 Suppose $d_1\ge d_2\ge\cdots\ge d_n$ and $\sum_{i=1}^n d_i$ is even. Prove that there is a multigraph (no loops) with degree sequence $d_1,d_2,\ldots,d_n$ if and only if $d_1\le \sum_{i=2}^n d_i$.
Ex 5.1.4 Prove that $0,1,2,3,4$ is not graphical.
Ex 5.1.5 Is $4,4,3,2,2,1,1$ graphical? If not, explain why; if so, find a simple graph with this degree sequence.
Ex 5.1.6 Is $4,4,4,2,2$ graphical? If not, explain why, and find a multigraph (no loops) with this degree sequence; if so, find a simple graph with this degree sequence.
Ex 5.1.7 Prove that a simple graph with $n\ge 2$ vertices has two vertices of the same degree.
Ex 5.1.8 Prove the "only if'' part of theorem 5.1.3.
Ex 5.1.9 Show that the condition on the degrees in theorem 5.1.3 is equivalent to this condition: $\sum_{i=1}^n d_i$ is even and for all $k\in\{1,2,\ldots,n\}$ and all $\{i_1,i_2,\ldots, i_k\}\subseteq [n]$, $$\sum_{j=1}^k d_{i_j}\le k(k-1)+\sum_{i\notin \{i_1,i_2,\ldots, i_k\}} \min(d_i,k).$$ Do not use theorem 5.1.3.
Ex 5.1.10 Draw the 11 non-isomorphic graphs with four vertices.
Ex 5.1.11 Suppose $G_1\cong G_2$. Show that if $G_1$ contains a cycle of length $k$ so does $G_2$.
Ex 5.1.12 Define $v\sim w$ if and only if there is a path connecting vertices $v$ and $w$. Prove that $\sim$ is an equivalence relation.
Ex 5.1.13 Prove the "if'' part of theorem 5.1.3, as follows:
The proof is by induction on $s=\sum_{i=1}^n d_i$. This is easy to see if $s=2$, so suppose $s>2$. Without loss of generality we may suppose that $d_n>0$. Let $t$ be the least integer such that $d_t>d_{t+1}$, or $t=n-1$ if there is no such integer. Let $d_t'=d_t-1$, $d_n'=d_n-1$, and $d_i'=d_i$ for all other $i$. Note that $d_1'\ge d_2'\ge\cdots\ge d_n'$. We want to show that the sequence $\{d_i'\}$ satisfies the condition of the theorem, that is, that for all $k\in \{1,2,\ldots,n\}$, $$\sum_{i=1}^k d_i'\le k(k-1)+\sum_{i=k+1}^n \min(d_i',k).$$ There are five cases:
1. $k\ge t$
2. $k< t$, $d_k< k$
3. $k< t$, $d_k=k$
4. $k< t$, $d_n>k$
5. $k< t$, $d_k > k$, $d_n\le k$
By the induction hypothesis, there is a simple graph with degree sequence $\{d_i'\}$. Finally, show that there is a graph with degree sequence $\{d_i\}$.
This proof is due to S. A. Choudum,
A Simple Proof of the Erdős–Gallai Theorem on Graph Sequences, Bulletin of the Australian Mathematical Society, vol. 33, 1986, pp. 67–70. The proof by Paul Erdős and Tibor Gallai was long; Berge provided a shorter proof that used results in the theory of network flows. Choudum's proof is both short and elementary.
|
I would like to numerically solve a system of differential equations that describes the dynamics of $N$ coupled oscillatory units (Kuramoto model) via their phase variables $\phi_i$:
$\frac{\partial\phi_i}{\partial t} = \omega_i + \frac{K}{N}\sum_{j=1}^N \sin(\phi_j-\phi_i)$
A simple implementation with NDSolve works nicely:
tfinal = 20;
stdϕ = 2.5;
n = 100;
k = 1.2;
SeedRandom[0];
ϕ0s = Mod[RandomVariate[NormalDistribution[0, stdϕ], n], 2 π];
ω = RandomVariate[NormalDistribution[1, 0.1], n];
trajs = NDSolve[{
    Table[{
      ϕ[i]'[t] == ω[[i]] + k/n Sum[Sin[ϕ[j][t] - ϕ[i][t]], {j, n}],
      ϕ[i][0] == ϕ0s[[i]]
      }, {i, n}]
    }, Table[ϕ[i], {i, n}], {t, 0, tfinal}][[1]];
For suitable coupling strengths, the oscillators synchronize as shown in the following animation:
It is possible to simplify the differential equations via a mean field interaction.
Mean field:
$R e^{\text{i}\psi} = \frac{1}{N} \sum_{j=1}^N e^{\text{i}\phi_j}$
Mean field coupling:
$\frac{\partial\phi_i}{\partial t} = \omega_i + R K \sin(\psi-\phi_i)$
The corresponding mathematica code for this differential-algebraic equation is:
trajs = NDSolve[{
    Table[{
      ϕ[i]'[t] == ω[[i]] + Abs[r[t]] k Sin[Arg[r[t]] - ϕ[i][t]],
      ϕ[i][0] == ϕ0s[[i]]
      }, {i, n}],
    r[t] == Sum[Exp[I ϕ[i][t]], {i, n}]/n
    }, Join[Table[ϕ[i], {i, n}], {r}], {t, 0, tfinal}][[1]];
Problem: Instead of a solution that is computed in less time, NDSolve throws an error:
NDSolve::icfail: Unable to find initial conditions that satisfy the residual function within specified tolerances. Try giving initial conditions for both values and derivatives of the functions.
I also tried adding initial conditions for both function values and derivatives, but that did not help. Different "EquationSimplification" options (as listed here) also did not help.
How can I solve this?
Edit
Here is the code for the animation:
animationFrames = Table[
   Show[
    ParametricPlot[
     Evaluate@ReIm[Sum[Exp[I ϕ[i][t] /. trajs], {i, n}]/n], {t, 0, tt},
     PlotStyle -> Lighter@ColorData[97, 1],
     PlotRange -> 1.1 {{-1, 1}, {-1, 1}}, Axes -> None,
     ImageSize -> {Automatic, 300}],
    Graphics[{
      Gray, AbsoluteThickness[2], Circle[],
      Orange, Disk[#, 0.03] & /@ Table[{Cos[#], Sin[#]} &@ϕ[i][tt] /. trajs, {i, n}],
      ColorData[97, 1],
      Disk[ReIm[Sum[Exp[I Evaluate[ϕ[i][tt] /. trajs]], {i, n}]/n], 0.1]
      }]
    ], {tt, 0.001, tfinal, 0.01 tfinal}];
Export["kuramoto_sync.gif", animationFrames, AnimationRepetitions -> Infinity]
Bonus: Is there a way to "precompute" the trajectory of the centroid (blue bobble) and store it in an interpolating function, so it's not required to calculate
ReIm[Sum[Exp[I ϕ[i][t] /. trajs], {i, n}]/n] in each frame? Of course, one option is to sample the centroid trajectory in advance like:
{rx, ry} = Table[ListInterpolation[ri, {0, tfinal}], {ri, Table[ReIm[ N@Sum[Exp[I Evaluate[ϕ[i][t] /. trajs]], {i, n}]/n], {t, 0, tfinal, 0.001 tfinal}]\[Transpose]}];
|
Setup
Let $G$ be a semisimple algebraic group over an algebraically closed field of arbitrary characteristic with Borel subgroup $B$. Let $\Lambda$ denote the weight lattice of $G$; we write elements of the group ring $\mathbb Z[\Lambda]$ of $\Lambda$ as linear combinations of elements of the form $e^\lambda$, $\lambda \in \Lambda$. In particular, characters of finite-dimensional $B$-modules are elements of $\mathbb Z[\Lambda]$.
For dominant $\lambda \in \Lambda$ let $ V(\lambda) $ denote the Weyl module for $G$ with highest weight $\lambda$ and for any element $w$ in the Weyl group $W$ of $G$ let $V_w(\lambda)$ denote the Demazure submodule of $V(\lambda)$ associated to $w$; this is the $B$-submodule of $V(\lambda)$ generated by an extremal vector of weight $w\lambda$. (Remark in particular that $V_{w_0}(\lambda) = V(\lambda)$).
For any simple root $\alpha_i$ of $G$ with associated simple reflection $s_i \in W$ define the Demazure operator $D_{s_i} : \mathbb Z[\Lambda] \to \mathbb Z[\Lambda]$ by $$ D_{s_i}(e^\lambda) = \frac{ e^\lambda - e^{s_i \lambda - \alpha_i} }{ 1-e^{-\alpha_i} } . $$ It is easy to see that this is well-defined. For any word $\mathfrak w = (s_{i_1}, \ldots, s_{i_k})$ of simple reflections in $W$ we have a Demazure operator $D_{\mathfrak w}$ defined by the obvious composition.
We now have the following theorem: Choose $w \in W$ and let $ \mathfrak w $ be any (not necessarily reduced!) word of simple reflections representing $w$. Then the character of $V_w(\lambda)$ is $D_{\mathfrak w}(e^\lambda)$. [A reference for this is, say, section 3.3 of Brion-Kumar's Frobenius splitting book].
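For orientation, the rank-one case can be worked out by hand (a standard computation, included here only as a sanity check): for $G=\mathrm{SL}_2$ with fundamental weight $\omega$, simple root $\alpha=2\omega$ and $\lambda=m\omega$,
$$ D_s(e^{m\omega}) = \frac{e^{m\omega}-e^{s(m\omega)-\alpha}}{1-e^{-\alpha}} = \frac{e^{m\omega}-e^{-(m+2)\omega}}{1-e^{-2\omega}} = e^{m\omega}+e^{(m-2)\omega}+\cdots+e^{-m\omega}, $$
which is the character of $V_s(m\omega)=V(m\omega)$, as the theorem predicts.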
Question
Has anyone studied $q$-analogues of these Demazure operators? In light of recent work on $q$-character formulas and Kazhdan-Lusztig polynomials, this seems like a natural combinatorial thing to consider. For example, I would (perhaps naively) expect that an appropriate $q$-analogue of the Demazure operators computes, say, the $q$-analog of weight multiplicity considered by Kazhdan-Lusztig, R. Brylinski, Joseph, and others. (Also, I don't know much about the path model or crystal bases, but it seems as though there may be a connection to those as well).
|
We show here how any interpretation of $\operatorname{Tr} A$ when $A : V \to V$ is an isomorphism can be extended to an interpretation of the trace of an arbitrary endomorphism.
As described elsewhere, if you view $A : V \to V$ as a vector field on $V$ in the canonical way then the trace of $A$ is the same as its divergence so in the case where $A$ is an isomorphism there is a pleasing geometric interpretation readily available, which I'll assume that you're comfortable with. However, this interpretation is not satisfactory when $A$ is not surjective, as shown by this example:
If $A : \mathbb{R}^3 \to \mathbb{R}^3$ is such that $\operatorname{Im} A$ is $2$-dimensional and $A$ is volume-increasing (i.e. $\operatorname{div}(A) = \operatorname{Tr}(A) > 0$) then $A$ takes a bucket of $3$-d water (i.e. a subset of $\mathbb{R}^3$) and ''compresses it down'' into a $2$-d ''paper'' (i.e. into $\operatorname{Im} A$); but it is not clear (at least to me) how anyone could be expected to say that $A$ has increased the volume of this bucket of water simply because the quantity $\operatorname{div}(A) = \operatorname{Tr}(A)$ happens to be positive!
Nevertheless, the equality $\operatorname{div}(A) = \operatorname{Tr}(A)$ is our best bet at finding a geometric interpretation (Spoiler: we should look at a particular vector subspace of $V$). To begin, let $$V^{(0)} = \operatorname{domain}(A) = V,\;\; V^{(i+1)} = A\left(V^{(i)}\right),\; \textrm{ and }d^i = \dim V^{(i)}$$so that $V^{(1)} = \operatorname{Im} A = A\left(V^{(0)}\right)$, $V^{(i+1)} \subseteq V^{(i)}$, and $d^{i+1} \leq d^i$. Let $N \geq 0$ be the smallest integer s.t. $d^{N+1} = d^N$ and denote this common value by $d$. Let $W := V^{(N)}$. As discussed above, we may assume that $N \geq 1$ (i.e. that $A$ is not surjective) although this is not necessary.
To cut to the chase, what is shown after this paragraph is that the restriction $A\big\vert_W : W \to W$ of $A$ onto $W := V^{(N)}$ is an isomorphism. Furthermore, $\operatorname{Tr}(A) = \operatorname{Tr}\left(A\big\vert_W\right)$ and it will be clear that $W$ is the unique largest vector subspace $S$ of $V$ on which $A$ restricts to an isomorphism $A\big\vert_S : S \to S$. All of this allows us to conclude that to geometrically interpret $\operatorname{Tr}(A)$, one should restrict their focus to geometrically interpreting the isomorphism $A\big\vert_W : W \to W$ rather than $A : V \to V$ itself. This isn't surprising since just as the trace of a matrix does not depend on the "elements off the diagonal", so too does the geometric interpretation of trace not depend on the "space off of $W$."
We now prove the above claim. Inductively construct a basis $\left(e_1, \dots, e_{\dim V}\right)$ for $V$ such that for all $i \geq 0$, $\left(e_1, \dots, e_{d^i}\right)$ is a basis for $V^{(i)}$. Let $\left(\varepsilon^1,\dots, \varepsilon^{\dim V}\right)$ be the dual basis of $e_{\bullet}$ and note in particular that: $$\textrm{(1) whenever }d^{i + 1} < l \leq d^i\textrm{ then }\varepsilon^l\textrm{ vanishes on }V^{(i + 1)}.$$
Since $(e_1, \dots, e_{d^1})$ is a basis for the range of $A$ we may, for any $v \in V^{(0)},$ write $$A(v) = \varepsilon^1(A(v)) e_1 + \cdots + \varepsilon^{d^1}(A(v)) e_{d^1}$$ so that $A = (\varepsilon^l \circ A) \otimes e_l$ (the sum ranging over $l = 1, \dots, d^1$) and hence $$\operatorname{Tr}(A) = (\varepsilon^l \circ A)(e_l) = \varepsilon^1(A(e_1)) + \cdots + \varepsilon^{d^1}\left(A\left( e_{d^1} \right)\right)$$which shows that $\operatorname{Tr}(A)$ actually depends only on the range of $A$ (i.e. $V^{(1)}$). Now since $e_1, \dots, e_{d^1}$ are (by definition) in $V^{(1)}$, all of $A\left(e_1\right), \dots, A\left(e_{d^1}\right)$ belong to $A\left(V^{(1)}\right) = V^{(2)}$ so that from $(1)$ it follows that $$\operatorname{Tr}(A) = \varepsilon^1\left(A\left(e_1\right)\right) + \cdots + \varepsilon^{d^2}\left(A\left( e_{d^2} \right)\right)$$
Continuing this inductively $N \leq \dim V$ times shows that $$\operatorname{Tr}(A) = \varepsilon^1\left(A\left(e_1\right)\right) + \cdots + \varepsilon^{d}\left(A\left(e_d\right)\right)$$so that $\operatorname{Tr}(A)$ depends
only on $W = V^{(N)}$. Since by definition of $N$, the map $A\big\vert_W : W \to W$ is surjective, it is an isomorphism and furthermore, it should be clear that $W$ is the unique largest subspace of $V$ on which $A$ restricts to an isomorphism. QED
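A small concrete illustration (my own example, not from the original post): take
$$A=\begin{pmatrix}1&0&0\\0&0&1\\0&0&0\end{pmatrix}\ \text{on}\ V=\mathbb{R}^3 .$$
Here $V^{(1)}=\operatorname{Im}A=\operatorname{span}\{e_1,e_2\}$, $V^{(2)}=A\big(V^{(1)}\big)=\operatorname{span}\{e_1\}=V^{(3)}$, so $W=\operatorname{span}\{e_1\}$ and $A\big\vert_W$ is the identity on $W$. Accordingly $\operatorname{Tr}(A)=1=\operatorname{Tr}\big(A\big\vert_W\big)$, as the argument above predicts.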
To summarize, going back to the divergence ''bucket of water'' example above, in the case where $A : V \to V$ is an arbitrary linear map we can imagine being given some initial ''bucket of water'' $V = V^{(0)}$ and then imagine $A$ as repeatedly (and eternally) deforming this same water until eventually (i.e. after $N$ iterations) $A$ would have ''pushed'' or ''compressed'' all of $V$ onto some vector subspace $W = V^{(N)}$ (which is also the unique largest subspace $W$ of $V$ that $A$ maps back onto itself), i.e. all of $V$ would eventually ''flow into'' $W$. It is at this point that $A$ no longer ''compresses'' this water down by some dimension(s), so that $A$ does nothing more than bijectively move this $d = \dim W$-dimensional water around. It now makes sense to ask by how much the isomorphism $A\big\vert_W : W \to W$ is increasing or decreasing this $d$-dimensional volume, which is what $\operatorname{Tr}(A) = \operatorname{Tr}\left(A\big\vert_W\right) = \operatorname{div}\left(A\big\vert_W\right)$ represents.
Remark: This may not really answer your question since you stated that "The divergence application of trace is somewhat interesting, but again, not really what we are looking for." Nevertheless, whatever alternative non-divergence based interpretation of the trace of an isomorphism you choose, I hope that this will help you to extend it to the case where the map isn't an isomorphism.
|
So, Bob is given the following state (also called the
maximally-mixed state):
$\rho = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1| = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix}$
As you noticed, one nice feature of density matrices is they enable us to capture the uncertainty of an outcome of a measurement and account for the different possible outcomes in a single equation. Projective measurement is defined by a set of measurement operators $P_i$, one for each possible measurement outcome. For example, when measuring in the computational basis (collapsing to $|0\rangle$ or $|1\rangle$) we have the following measurement operators:
$P_0 = |0\rangle\langle0| = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, $P_1 = |1\rangle\langle1| = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$
where $P_0$ is associated with outcome $|0\rangle$ and $P_1$ is associated with outcome $|1\rangle$. These matrices are also called
projectors.
Now, given a single-qubit density operator $\rho$, we can calculate the probability of it collapsing to some value with the following formula:
$p_i = Tr(P_i \rho)$
where $Tr(M)$ is the
trace, which is the sum of the elements along the main diagonal of matrix $M$. So, we calculate the probability of your example collapsing to $|0\rangle$ as follows:
$p_0 = Tr(P_0 \rho)= Tr \left( \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix} \right)= Tr \left( \begin{bmatrix} \frac 1 2 & 0 \\ 0 & 0 \end{bmatrix} \right) = \frac 1 2$
And the formula for the post-measurement density operator is:
$\rho_i = \frac{P_i \rho P_i}{p_i}$
which in your example is:
$\rho_0 = \frac{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}}{\frac 1 2} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$
which is indeed the density matrix for the pure state $|0\rangle$.
We don't just want the density operator for a certain measurement outcome, though - what we want is a density operator which captures the branching nature of measurement, representing it as an ensemble of possible collapsed states! For this, we use the following formula:
$\rho' = \sum_i p_i \rho_i = \sum_i P_i \rho P_i$
in our example:
$\rho' = \left( \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \right) + \left( \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right)$
$\rho' = \begin{bmatrix} \frac 1 2 & 0 \\ 0 & 0 \end{bmatrix}+ \begin{bmatrix} 0 & 0 \\ 0 & \frac 1 2 \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix} = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1|$
Your final density matrix is unchanged! This should actually be unsurprising, because we started out with the maximally-mixed state and performed a further randomizing operation on it.
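If you want to double-check this arithmetic numerically, here is a minimal Mathematica sketch (same $\rho$ and projectors as above; the variable names are my own):
ρ = {{1/2, 0}, {0, 1/2}};
proj0 = {{1, 0}, {0, 0}}; proj1 = {{0, 0}, {0, 1}};
Tr[proj0.ρ]                       (* 1/2, probability of outcome |0⟩ *)
proj0.ρ.proj0/Tr[proj0.ρ]         (* {{1, 0}, {0, 0}}, the post-measurement state for outcome |0⟩ *)
proj0.ρ.proj0 + proj1.ρ.proj1     (* {{1/2, 0}, {0, 1/2}}, the ensemble after measurement *)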
Much of this answer copied from my previous detailed answer here, which in turn is based off of another answer here.
|
The second qubit of a two-qubit system in the Bell state
$$|\beta_{01}\rangle= \frac{1}{\sqrt{2}}(|01\rangle+|10\rangle)$$ is sent through an error channel which introduces a bit flip error with probability $p$. I want to calculate the state of the system after the second qubit exits the error channel.
I know how to do this for a single qubit, but there isn't anything in my notes about doing it on a single qubit of a two-qubit system (or on two qubits for that matter).
This question is from a past exam paper (my exam is next week) could anyone show me the method of how to perform this operation?
|
A
directed graph, also called a digraph, is a graph in which the edges have a direction. This is usually indicated with an arrow on the edge; more formally, if $v$ and $w$ are vertices, an edge is an unordered pair $\{v,w\}$, while a directed edge, called an arc, is an ordered pair $(v,w)$ or $(w,v)$. The arc $(v,w)$ is drawn as an arrow from $v$ to $w$. If a graph contains both arcs $(v,w)$ and $(w,v)$, this is not a "multiple edge'', as the arcs are distinct. It is possible to have multiple arcs, namely, an arc $(v,w)$ may be included multiple times in the multiset of arcs. As before, a digraph is called simple if there are no loops or multiple arcs.
We denote by $E\strut_v^-$ the set of all arcs of the form $(w,v)$, and by $E_v^+$ the set of arcs of the form $(v,w)$. The
indegree of $v$, denoted $\d^-(v)$, is the number of arcs in $E\strut_v^-$, and the outdegree, $\d^+(v)$, is the number of arcs in $E_v^+$. If the vertices are $v_1,v_2,\ldots,v_n$, the degrees are usually denoted $d^-_1,d^-_2,\ldots,d^-_n$ and $d^+_1,d^+_2,\ldots,d^+_n$. Note that both $\sum_{i=1}^n \d^-_i$ and $\sum_{i=1}^n \d^+_i$ count the number of arcs exactly once, and of course $\sum_{i=1}^n \d^-_i=\sum_{i=1}^n\d^+_i$. A walk in a digraph is a sequence $v_1,e_1,v_2,e_2,\ldots,v_{k-1},e_{k-1},v_k$ such that $e_i=(v_i,v_{i+1})$; if $v_1=v_k$, it is a closed walk or a circuit. A path in a digraph is a walk in which all vertices are distinct. It is not hard to show that, as for graphs, if there is a walk from $v$ to $w$ then there is a path from $v$ to $w$.
Many of the topics we have considered for graphs have analogues in digraphs, but there are many new topics as well. We will look at one particularly important result in the latter category.
Definition 5.11.1 A
network is a digraph with a designated source $s$ and target $t\not=s$. In addition, each arc $e$ has a positive capacity, $c(e)$.
Networks can be used to model transport through a physical network, of a physical quantity like oil or electricity, or of something more abstract, like information.
Definition 5.11.2 A
flow in a network is a function $f$ from the arcs of the digraph to $\R$, with $0\le f(e)\le c(e)$ for all $e$, and such that $$\sum_{e\in E_v^+}f(e)=\sum_{e\in E_v^-}f(e),$$ for all $v$ other than $s$ and $t$.
We wish to assign a value to a flow, equal to the net flow out of the source. Since the substance being transported cannot "collect'' or "originate'' at any vertex other than $s$ and $t$, it seems reasonable that this value should also be the net flow into the target.
Before we prove this, we introduce some new notation. Suppose that $U$ is a set of vertices in a network, with $s\in U$ and $t\notin U$. Let $\overrightharpoon U$ be the set of arcs $(v,w)$ with $v\in U$, $w\notin U$, and $\overleftharpoon U$ be the set of arcs $(v,w)$ with $v\notin U$, $w\in U$.
Theorem 5.11.3 For any flow $f$ in a network, the net flow out of the source is equal to the net flow into the target, namely, $$\sum_{e\in E_s^+} f(e)-\sum_{e\in E_s^-}f(e)= \sum_{e\in E_t^-} f(e)-\sum_{e\in E_t^+}f(e).$$
Proof. We will show first that for any $U$ with $s\in U$ and $t\notin U$, $$\sum_{e\in E_s^+} f(e)-\sum_{e\in E_s^-}f(e)= \sum_{e\in\overrightharpoon U} f(e)-\sum_{e\in\overleftharpoon U}f(e).$$
Consider the following: $$S=\sum_{v\in U}\left(\sum_{e\in E_v^+}f(e)-\sum_{e\in E_v^-}f(e)\right).$$ The quantity $$\sum_{e\in E_v^+}f(e)-\sum_{e\in E_v^-}f(e)$$ is zero except when $v=s$, by the definition of a flow. Thus, the entire sum $S$ has value $$\sum_{e\in E_s^+} f(e)-\sum_{e\in E_s^-}f(e).$$ On the other hand, we can write the sum $S$ as $$ \sum_{v\in U}\sum_{e\in E_v^+}f(e)- \sum_{v\in U}\sum_{e\in E_v^-}f(e). $$ Every arc $e=(x,y)$ with both $x$ and $y$ in $U$ appears in both sums, that is, in $$\sum_{v\in U}\sum_{e\in E_v^+}f(e),$$ when $v=x$, and in $$\sum_{v\in U}\sum_{e\in E_v^-}f(e),$$ when $v=y$, and so the flow in such arcs contributes $0$ to the overall value. Thus, only arcs with exactly one endpoint in $U$ make a non-zero contribution, so the entire sum reduces to $$\sum_{e\in\overrightharpoon U} f(e)-\sum_{e\in\overleftharpoon U}f(e).$$ Thus $$\sum_{e\in E_s^+} f(e)-\sum_{e\in E_s^-}f(e)=S= \sum_{e\in\overrightharpoon U} f(e)-\sum_{e\in\overleftharpoon U}f(e),$$ as desired.
Now let $U$ consist of all vertices except $t$. Then $$ \sum_{e\in E_s^+} f(e)-\sum_{e\in E_s^-}f(e)= \sum_{e\in\overrightharpoon U} f(e)-\sum_{e\in\overleftharpoon U}f(e)= \sum_{e\in E_t^-} f(e)-\sum_{e\in E_t^+}f(e), $$ finishing the proof.
Definition 5.11.4 The
value of a flow, denoted $\val(f)$, is $\sum_{e\in E_s^+} f(e)-\sum_{e\in E_s^-}f(e)$. A maximum flow in a network is any flow $f$ whose value is the maximum among all flows.
We next seek to formalize the notion of a "bottleneck'', with the goal of showing that the maximum flow is equal to the amount that can pass through the smallest bottleneck.
Definition 5.11.5 A
cut in a network is a set $C$ of arcs with the property that every path from $s$ to $t$ uses an arc in $C$, that is, if the arcs in $C$ are removed from the network there is no path from $s$ to $t$. The capacity of a cut, denoted $c(C)$, is $$\sum_{e\in C} c(e).$$ A minimum cut is one with minimum capacity. A cut $C$ is minimal if no cut is properly contained in $C$.
Note that a minimum cut is a minimal cut. Clearly, if $U$ is a set of vertices containing $s$ but not $t$, then $\overrightharpoon U$ is a cut.
Lemma 5.11.6 If $C$ is a minimal cut, then there is a set $U$ of vertices containing $s$ but not $t$ such that $C=\overrightharpoon U$.
Proof. Let $U$ be the set of vertices $v$ such that there is a path from $s$ to $v$ using no arc in $C$.
Suppose that $e=(v,w)\in C$. Since $C$ is minimal, there is a path $P$ from $s$ to $t$ using $e$ but no other arc in $C$. Thus, there is a path from $s$ to $v$ using no arc of $C$, so $v\in U$. If there is a path from $s$ to $w$ using no arc of $C$, then this path followed by the portion of $P$ that begins with $w$ is a walk from $s$ to $t$ using no arc in $C$. This implies there is a path from $s$ to $t$ using no arc in $C$, a contradiction. Thus $w\notin U$ and so $e\in \overrightharpoon U$. Hence, $C\subseteq \overrightharpoon U$.
Suppose that $e=(v,w)\in \overrightharpoon U$. Then $v\in U$ and $w\notin U$, so every path from $s$ to $w$ uses an arc in $C$. Since $v\in U$, there is a path from $s$ to $v$ using no arc of $C$, and this path followed by $e$ is a path from $s$ to $w$. Hence the arc $e$ must be in $C$, so $\overrightharpoon U\subseteq C$.
We have now shown that $C=\overrightharpoon U$.
Now we can prove a version of the important max-flow, min cut theorem.
Theorem 5.11.7 Suppose in a network all arc capacities are integers. Then the value of a maximum flow is equal to the capacity of a minimum cut. Moreover, there is a maximum flow $f$ for which all $f(e)$ are integers.
Proof. First we show that for any flow $f$ and cut $C$, $\val(f)\le c(C)$. It suffices to show this for a minimum cut $C$, and by lemma 5.11.6 we know that $C=\overrightharpoon U$ for some $U$. Using the proof of theorem 5.11.3 we have: $$ \val(f) = \sum_{e\in\overrightharpoon U} f(e)-\sum_{e\in\overleftharpoon U}f(e) \le \sum_{e\in\overrightharpoon U} f(e) \le \sum_{e\in\overrightharpoon U} c(e) = c(\overrightharpoon U). $$ Now if we find a flow $f$ and cut $C$ with $\val(f)=c(C)$, it follows that $f$ is a maximum flow and $C$ is a minimum cut. We present an algorithm that will produce such an $f$ and $C$.
Given a flow $f$, which may initially be the zero flow, $f(e)=0$ for all arcs $e$, do the following:
0. Let $U=\{s\}$.
Repeat the next two steps until no new vertices are added to $U$.
1. If there is an arc $e=(v,w)$ with $v\in U$ and $w\notin U$, and $f(e)< c(e)$, add $w$ to $U$.
2. If there is an arc $e=(v,w)$ with $v\notin U$ and $w\in U$, and $f(e)>0$, add $v$ to $U$.
When this terminates, either $t\in U$ or $t\notin U$. If $t\in U$, there is a sequence of distinct vertices $s=v_1,v_2,v_3,\ldots,v_k=t$ such that for each $i$, $1\le i< k$, either $e=(v_i,v_{i+1})$ is an arc with $f(e)< c(e)$ or $e=(v_{i+1},v_i)$ is an arc with $f(e)>0$. Update the flow by adding $1$ to $f(e)$ for each of the former, and subtracting $1$ from $f(e)$ for each of the latter. This new flow $f'$ is still a flow: In the first case, since $f(e)< c(e)$, $f'(e)\le c(e)$, and in the second case, since $f(e)>0$, $f'(e)\ge 0$. It is straightforward to check that for each vertex $v_i$, $1< i< k$, that $$\sum_{e\in E_{v_i}^+}f'(e)=\sum_{e\in E_{v_i}^-}f'(e). $$ In addition, $\val(f')=\val(f)+1$. Now rename $f'$ to $f$ and repeat the algorithm.
Eventually, the algorithm terminates with $t\notin U$ and flow $f$. This implies that for each $e=(v,w)$ with $v\in U$ and $w\notin U$, $f(e)=c(e)$, and for each $e=(v,w)$ with $v\notin U$ and $w\in U$, $f(e)=0$. The capacity of the cut $\overrightharpoon U$ is $$\sum_{e\in\overrightharpoon U} c(e).$$ The value of the flow $f$ is $$ \sum_{e\in\overrightharpoon U} f(e)-\sum_{e\in\overleftharpoon U}f(e)= \sum_{e\in\overrightharpoon U} c(e)-\sum_{e\in\overleftharpoon U}0= \sum_{e\in\overrightharpoon U} c(e). $$ Thus we have found a flow $f$ and cut $\overrightharpoon U$ such that $$ \val(f) = c(\overrightharpoon U), $$ as desired.
The max-flow, min-cut theorem is true when the capacities are any positive real numbers, though of course the maximum value of a flow will not necessarily be an integer in this case. It is somewhat more difficult to prove, requiring a proof involving limits.
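As a tiny illustration (my own example, not from the text): take vertices $s,a,t$ with arcs $(s,a)$, $(a,t)$, $(s,t)$ of capacities $2$, $1$, $2$. The flow with $f((s,a))=f((a,t))=1$ and $f((s,t))=2$ has value $3$, and the cut $\overrightharpoon U$ for $U=\{s,a\}$ consists of $(a,t)$ and $(s,t)$ with capacity $1+2=3$, so this flow is maximum and this cut is minimum.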
We have already proved that in a bipartite graph, the size of a maximum matching is equal to the size of a minimum vertex cover, theorem 4.5.6. This turns out to be essentially a special case of the max-flow, min-cut theorem.
Corollary 5.11.8 In a bipartite graph $G$, the size of a maximum matching is the same as the size of a minimum vertex cover.
Proof. Suppose the parts of $G$ are $X=\{x_1,x_2,\ldots,x_k\}$ and $Y=\{y_1,y_2,\ldots,y_l\}$. Create a network as follows: introduce two new vertices $s$ and $t$ and arcs $(s,x_i)$ for all $i$ and $(y_i,t)$ for all $i$. For each edge $\{x_i,y_j\}$ in $G$, let $(x_i,y_j)$ be an arc. Let $c(e)=1$ for all arcs $e$. Now the value of a maximum flow is equal to the capacity of a minimum cut.
Let $C$ be a minimum cut. If $(x_i,y_j)$ is an arc of $C$, replace it by arc $(s,x_i)$. This is still a cut, since any path from $s$ to $t$ including $(x_i,y_j)$ must include $(s,x_i)$. Thus, we may suppose that $C$ contains only arcs of the form $(s,x_i)$ and $(y_i,t)$. Now it is easy to see that $$K=\{x_i\vert (s,x_i)\in C\}\cup\{y_i\vert (y_i,t)\in C\}$$ is a vertex cover of $G$ with the same size as $C$.
Let $f$ be a maximum flow such that $f(e)$ is an integer for all $e$, which is possible by the max-flow, min-cut theorem. Consider the set of edges $$M=\{\{x_i,y_j\}\vert f((x_i,y_j))=1\}.$$ If $\{x_i,y_j\}$ and $\{x_i,y_m\}$ are both in this set, then the flow out of vertex $x_i$ is at least 2, but there is only one arc into $x_i$, $(s,x_i)$, with capacity 1, contradicting the definition of a flow. Likewise, if $\{x_i,y_j\}$ and $\{x_m,y_j\}$ are both in this set, then the flow into vertex $y_j$ is at least 2, but there is only one arc out of $y_j$, $(y_j,t)$, with capacity 1, also a contradiction. Thus $M$ is a matching. Moreover, if $U=\{s,x_1,\ldots,x_k\}$ then the value of the flow is $$ \sum_{e\in\overrightharpoon U}f(e)-\sum_{e\in\overleftharpoon U}f(e)= \sum_{e\in\overrightharpoon U}f(e)=|M|\cdot1=|M|. $$ Thus $|M|=\val(f)=c(C)=|K|$, so we have found a matching and a vertex cover with the same size. This implies that $M$ is a maximum matching and $K$ is a minimum vertex cover.
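The construction in this proof also translates directly into code. The sketch below builds the network of the corollary and hands it to a max-flow routine such as the max_flow sketch given after the previous proof (that helper, and the names used here, are my own illustration rather than anything from the text).

def maximum_matching(X, Y, edges):
    # X, Y: the two parts of the bipartite graph; edges: iterable of pairs (x, y).
    # Assumes the labels "s" and "t" are not used as vertex names.
    capacity = {}
    for x in X:
        capacity[("s", x)] = 1        # arcs (s, x_i)
    for y in Y:
        capacity[(y, "t")] = 1        # arcs (y_j, t)
    for (x, y) in edges:
        capacity[(x, y)] = 1          # one arc per edge of G
    value, flow, cut = max_flow(capacity, "s", "t")
    # Edges carrying one unit of flow form a matching of size val(f) = c(C).
    return [(x, y) for (x, y) in edges if flow[(x, y)] == 1]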
Exercises 5.11
Ex 5.11.1 Connectivity in digraphs turns out to be a little more complicated than connectivity in graphs. A digraph is connected if the underlying graph is connected. (The underlying graph of a digraph is produced by removing the orientation of the arcs to produce edges, that is, replacing each arc $(v,w)$ by an edge $\{v,w\}$. Even if the digraph is simple, the underlying graph may have multiple edges.) A digraph is strongly connected if for every pair of vertices $v$ and $w$ there is a walk from $v$ to $w$. Give an example of a digraph that is connected but not strongly connected.
Ex 5.11.2 A digraph has an Euler circuit if there is a closed walk that uses every arc exactly once. Show that a digraph with no vertices of degree 0 has an Euler circuit if and only if it is connected and $\d^+(v)=\d^-(v)$ for all vertices $v$.
Ex 5.11.3 A tournament is an oriented complete graph. That is, it is a digraph on $n$ vertices, containing exactly one of the arcs $(v,w)$ and $(w,v)$ for every pair of vertices. A Hamilton path is a walk that uses every vertex exactly once. Show that every tournament has a Hamilton path.
Ex 5.11.4 Interpret a tournament as follows: the vertices are players. If $(v,w)$ is an arc, player $v$ beat $w$. Say that $v$ is a champion if for every other player $w$, either $v$ beat $w$ or $v$ beat a player who beat $w$. Show that a player with the maximum number of wins is a champion. Find a 5-vertex tournament in which every player is a champion.
|
So I'm having some trouble solving projective module problems. I will give an example, which extends to the other problems I wasn't able to solve.
Let A be a ring, $P$ a left $A$-module. We call a dual basis for $P$ to the family $\{(x_i,f_i)\}_{i \in I}$ where $(x_i,f_i)\in P \times P^*$ for every $i\in I$ such that
i) for every $x\in P$, the set $\{i \in I:f_i(x) \neq 0\}$ is finite
ii) for every $x\in P$, $x=\sum_{i \in I}f_i(x)x_i$
Show that $P$ is projective if and only if it has a dual basis.
So I started by trying to prove that if $P$ has a dual basis, then $P$ is projective. So let $f:T \to N$ be a surjective $A$-module homomorphism and $g: P \to N$ an $A$-module homomorphism. I want to see that there exist an $A$-module homomorphism $\overline{g}:P \to T$ such that $f \circ \overline{g} = g$.
So I'm thinking about the following: consider, for every $x \in P$, an element $t_x \in T$ such that $g(x) = f(t_x)$. This element exists since $f$ is surjective. Now the problem with defining $\overline{g}:P \to T$ as $\overline{g}(x)=t_x$ is well-definedness, since there can be different choices of $t_x$ with $f(t_x)=g(x)$. My problem is that I'm not able to deal with this, and furthermore, I don't understand why in some cases (for example, when proving that $\Bbb{Z}$ is a projective $\Bbb{Z}$-module) the uniqueness of the homomorphism leaving the module implies well-definedness.
Can anyone help me get out of this confusion?
|
Since my answer is long, here is a loose heuristic summary answer: A permutation $\pi$ such that $|\pi(i+1) - \pi(i)|$ is large for many values of $i$ occurs less frequently than permutations where such values are small.
The first results on this might be from Bandt and Shiha's 2005 paper Order patterns in time series, which gives the probability that $\pi \in S_n$ occurs as an ordinal pattern for AR(1) processes and fractional Brownian motion for $n = 3,4$. It includes Douglas Zare's answer for the case of ordinary Brownian motion. Another $n = 3,4$ answer is given in DeFord and Moore's Random walk null models for time series, which gives the probability that $\pi \in S_n$ occurs in a walk with uniform steps on $[b-1,b] \; \; \left(\frac{1}{2} \leq b \leq 1\right)$ as a piecewise-defined polynomial in $b$.
In John Mangual's 2012 answer, there is a suggestion to "look for general properties" instead of directly computing the distribution on $S_n$. This happens in The frequency of pattern occurrence in random walks, where Elizalde and Martinez showed that certain pairs of permutations occur with the same probability regardless of choice of density function for the steps. A matrix $L_\pi$ is introduced that maps the positive orthant to the region of steps that generate $\pi$. They show that if $L_\pi$ can be obtained from $L_\tau$ by permuting its rows and columns, then $\pi$ and $\tau$ occur with the same probability regardless of choice of step density function. The converse is conjectured to hold. A characterization for the pairs of permutations with equal frequencies independent of density choice is given in Martinez's thesis Equivalences on patterns in random walks via the all-subset (resonance) hyperplane arrangement.
A generic integral that calculates the probability that $\pi \in S_{n+1}$ occurs is $\int_{\mathbb{R}_{>0}^n} g(L_\pi(\mathbf{x})) \mathrm{d} \mathbf{x}$, where $g$ is the joint density function for the steps. The preprint Ordinal pattern probabilities for symmetric random walks uses this to compute or compare probabilities for various density functions. In particular, an answer is given to the question about steps drawn from the uniform distribution on $[-1,1]$. For that choice of density, the probability that $\pi \in S_{n+1}$ occurs is $\frac{K_\pi}{2^n n!}$, where $K_\pi$ is the size of a weak order interval in the affine Weyl group. A complicated elementary description of $K_\pi$ is that it counts the number of strictly upper triangular matrices $M$ such that $M_{i,j} = 0$ whenever there exists $k$ such that $\pi(k) \leq i < j \leq \pi(k+1)$ or $\pi(k+1) \leq i < j \leq \pi(k)$ subject to the constraint, due to Shi, that $M_{it} + M_{tj} \leq M_{ij} \leq M_{it} + M_{tj} + 1$ whenever $i < t < j$. Code that calculates this distribution is given here.
This provides a description of the extremes: The identity and its reverse occur with probability $\frac{1}{2^n}$, while any permutation where $1$ and $n+1$ are consecutive occurs with probability $\frac{1}{2^n n!}$. A permutation where $|\pi(i) - \pi(i+1)| = m$ for some $i$ is guaranteed to have probability less than or equal to $\frac{1}{2^n m!}$.
For the mean zero Gaussian, we can compare permutations by looking at the matrix $\mathrm{lev}(\pi)$ whose $ij$-th entry counts the number of positions $k$ such that $\pi(k) \leq i,j < \pi(k+1)$ or $\pi(k+1) \leq i,j < \pi(k)$. If $\mathrm{lev}(\pi)_{ij} \leq \mathrm{lev}(\tau)_{ij}$ for all $i,j$, then $\pi$ occurs more frequently than $\tau$. (This is a somewhat precise version of the heuristic this answer opened with.) For exact probabilities, there is a decent answer when $\pi$ is a permutation such that $|\pi^{-1}(i+1) - \pi^{-1}(i)| \leq 2$, which applies to any symmetric density function. For such permutations, the probability is given by$$\frac{1}{2^n \prod_{j=1}^n \mathrm{lev}(\pi)_j},$$where $\mathrm{lev}(\pi)_j$ is the number of positions $k$ such that $\pi(k) \leq j < \pi(k+1)$ or $\pi(k+1) \leq j < \pi(k)$. This expression applies to every $\pi \in S_{n+1}$ when the steps are drawn from a mean zero double-exponential distribution.
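As a sanity check on formulas like these, the pattern distribution is easy to estimate empirically. The sketch below is my own illustration (not code from the cited papers): it estimates the frequency of each pattern in $S_4$ for a walk with uniform steps on $[-1,1]$, where, for instance, each monotone pattern should occur with probability close to $1/2^3 = 0.125$.

import itertools
import numpy as np

def pattern(window):
    # Ordinal pattern of a window, encoded as the ranks of its entries.
    return tuple(int(r) for r in np.argsort(np.argsort(window)))

def pattern_frequencies(steps=3, trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    counts = {p: 0 for p in itertools.permutations(range(steps + 1))}
    for _ in range(trials):
        walk = np.concatenate(([0.0], np.cumsum(rng.uniform(-1, 1, size=steps))))
        counts[pattern(walk)] += 1
    return {p: c / trials for p, c in counts.items()}

freqs = pattern_frequencies()
print(freqs[(0, 1, 2, 3)])   # close to 0.125 for the increasing pattern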
|
Suppose an object moves so that its speed, or more properly velocity, is given by $\ds v(t)=-t^2+5t$, as shown in figure 7.3.1. Let's examine the motion of this object carefully. We know that the velocity is the derivative of position, so position is given by $\ds s(t)=-t^3/3+5t^2/2+C$. Let's suppose that at time $t=0$ the object is at position 0, so $\ds s(t)=-t^3/3+5t^2/2$; this function is also pictured in figure 7.3.1.
Between $t=0$ and $t=5$ the velocity is positive, so the object moves away from the starting point, until it is a bit past position 20. Then the velocity becomes negative and the object moves back toward its starting point. The position of the object at $t=5$ is exactly $s(5)=125/6$, and at $t=6$ it is $s(6)=18$. The total distance traveled by the object is therefore $125/6 + (125/6 - 18) = 71/3\approx 23.7$.
As we have seen, we can also compute distance traveled with an integral; let's try it. $$ \int_0^6 v(t)\,dt = \int_0^6 -t^2+5t\,dt = \left.{-t^3\over 3}+{5\over2}t^2\right|_0^6 = 18.$$ What went wrong? Well, nothing really, except that it's not really true after all that "we can also compute distance traveled with an integral''. Instead, as you might guess from this example, the integral actually computes the net distance traveled, that is, the difference between the starting and ending point.
As we have already seen, $$ \int_0^6 v(t)\,dt=\int_0^5 v(t)\,dt+\int_5^6 v(t)\,dt. $$ Computing the two integrals on the right (do it!) gives $125/6$ and $-17/6$, and the sum of these is indeed 18. But what does that negative sign mean? It means precisely what you might think: it means that the object moves backwards. To get the total distance traveled we can add $125/6+17/6=71/3$, the same answer we got before.
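These numbers are easy to confirm with a crude Riemann-sum computation; the snippet below is just an illustrative check, not part of the text.

import numpy as np

t, dt = np.linspace(0, 6, 600_000, endpoint=False, retstep=True)
v = -t**2 + 5*t

print(np.sum(v * dt))           # net distance: approximately 18
print(np.sum(np.abs(v) * dt))   # total distance: approximately 71/3 = 23.67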
Remember that we can also interpret an integral as measuring an area, but now we see that this too is a little more complicated than we had suspected. The area under the curve $v(t)$ from 0 to 5 is given by $$ \int_0^5 v(t)\,dt={125\over6}, $$ and the "area'' from 5 to 6 is $$ \int_5^6 v(t)\,dt=-{17\over 6}. $$ In other words, the area between the $x$-axis and the curve, but under the $x$-axis, "counts as negative area''. So the integral $$ \int_0^6 v(t)\,dt=18 $$ measures "net area'', the area above the axis minus the (positive) area below the axis.
If we recall that the integral is the limit of a certain kind of sum, this behavior is not surprising. Recall the sort of sum involved: $$ \sum_{i=0}^{n-1} v(t_i)\Delta t. $$ In each term $v(t)\Delta t$ the $\Delta t$ is positive, but if $\ds v(t_i)$ is negative then the term is negative. If over an entire interval, like 5 to 6, the function is always negative, then the entire sum is negative. In terms of area, $v(t)\Delta t$ is then a negative height times a positive width, giving a negative rectangle "area''.
So now we see that when evaluating $$\ds\int_5^6 v(t)\,dt=-{17\over 6}$$ by finding an antiderivative, substituting, and subtracting, we get a surprising answer, but one that turns out to make sense.
Let's now try something a bit different: $$ \int_6^5 v(t)\,dt=\left.{-t^3\over 3}+{5\over2}t^2\right|_6^5 = {-5^3\over 3}+{5\over2}5^2-{-6^3\over 3}-{5\over2}6^2 ={17\over 6}. $$ Here we simply interchanged the limits 5 and 6, so of course when we substitute and subtract we're subtracting in the opposite order and we end up multiplying the answer by $-1$. This too makes sense in terms of the underlying sum, though it takes a bit more thought. Recall that in the sum $$ \sum_{i=0}^{n-1} v(t_i)\Delta t, $$ the $\Delta t$ is the "length'' of each little subinterval, but more precisely we could say that $\ds \Delta t = t_{i+1}-t_i$, the difference between two endpoints of a subinterval. We have until now assumed that we were working left to right, but could as well number the subintervals from right to left, so that $\ds t_0=b$ and $\ds t_n=a$. Then $\ds \Delta t=t_{i+1}-t_i$ is negative and in $$ \int_6^5 v(t)\,dt=\sum_{i=0}^{n-1} v(t_i)\Delta t, $$ the values $\ds v(t_i)$ are negative but also $\Delta t$ is negative, so all terms are positive again. On the other hand, in $$ \int_5^0 v(t)\,dt=\sum_{i=0}^{n-1} v(t_i)\Delta t, $$ the values $\ds v(t_i)$ are positive but $\Delta t$ is negative,and we get a negative result: $$ \int_5^0 v(t)\,dt=\left.{-t^3\over 3}+{5\over2}t^2\right|_5^0 = 0-{-5^3\over 3}-{5\over2}5^2 = -{125\over6}. $$
Finally we note one simple property of integrals: $$ \int_a^b f(x)+g(x)\,dx=\int_a^b f(x)\,dx+\int_a^b g(x)\,dx. $$ This is easy to understand once you recall that $(F(x)+G(x))'=F'(x)+G'(x)$. Hence, if $F'(x)=f(x)$ and $G'(x)=g(x)$, then $$ \eqalign{ \int_a^b f(x)+g(x)\,dx&=\left.(F(x)+G(x))\right|_a^b\cr &=F(b)+G(b)-F(a)-G(a)\cr &=F(b)-F(a)+G(b)-G(a)\cr &=\left.F(x)\right|_a^b+\left.G(x)\right|_a^b\cr &=\int_a^b f(x)\,dx+\int_a^b g(x)\,dx.\cr } $$
In summary, we will frequently use these properties of integrals: $$\displaylines{ \int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx\cr \int_a^b f(x)+g(x)\,dx=\int_a^b f(x)\,dx+\int_a^b g(x)\,dx\cr \int_a^b f(x)\,dx=-\int_b^a f(x)\,dx\cr }$$ and if $a< b$ and $f(x)\le 0$ on $[a,b]$ then $$ \int_a^b f(x)\,dx\le 0$$ and in fact $$ \int_a^b f(x)\,dx=-\int_a^b |f(x)|\,dx. $$
Exercises 7.3
Ex 7.3.1 An object moves so that its velocity at time $t$ is $v(t)=-9.8t+20$ m/s. Describe the motion of the object between $t=0$ and $t=5$, find the total distance traveled by the object during that time, and find the net distance traveled. (answer)
Ex 7.3.2 An object moves so that its velocity at time $t$ is $v(t)=\sin t$. Set up and evaluate a single definite integral to compute the net distance traveled between $t=0$ and $t=2\pi$. (answer)
Ex 7.3.3 An object moves so that its velocity at time $t$ is $v(t)=1+2\sin t$ m/s. Find the net distance traveled by the object between $t=0$ and $t=2\pi$, and find the total distance traveled during the same period. (answer)
Ex 7.3.4 Consider the function $f(x)=(x+2)(x+1)(x-1)(x-2)$ on $[-2,2]$. Find the total area between the curve and the $x$-axis (measuring all area as positive). (answer)
Ex 7.3.5 Consider the function $\ds f(x)=x^2-3x+2$ on $[0,4]$. Find the total area between the curve and the $x$-axis (measuring all area as positive). (answer)
Ex 7.3.6 Evaluate the three integrals: $$ A=\int_0^3 (-x^2+9)\,dx\qquad B=\int_0^{4} (-x^2+9)\,dx\qquad C=\int_{4}^3 (-x^2+9)\,dx,$$ and verify that $A=B+C$. (answer)
|
Methods for Enforcing Inequality Constraints
How do you find the shortest overland distance between two points across a lake? Such obstacles and bounds on solutions are often called inequality constraints. Requirements for nonnegativity of gaps between objects in contact mechanics, species concentrations in chemistry, and population in ecology are some examples of inequality constraints. Previously in this series, we discussed equality constraints on variational problems. Today, we will show you how to implement inequality constraints when using equation-based modeling in COMSOL Multiphysics®.
A Penalty Method for Path Planning
Assume you want to go from a point (0,0.2) to (1,1), but there is a circular obstacle centered at (0.5,0.5) with radius 0.2. The curve (x,u(x)) that minimizes the distance between two end points, (a,u(a)) and (b,u(b)), should minimize the arc-length functional

\int_a^b \sqrt{1 + u^{\prime}(x)^2} \, dx.
In the Euclidean space, the shortest distance between two points is a straight line. As such, solving for this equation using what we have discussed so far should give a straight line. However, in our case, that straight line goes through the obstacle. This is not feasible. We have to respect the constraint that our path cannot go into the obstacle; that is, the distance from a point on the path to the center of the circular obstacle should be greater than or equal to the radius of the circle.
A basic schematic of the inequality constrained problem.
We want to add a term to our functional to penalize constraint violation. For an equality constraint g=0, we penalize both negative and positive values of g. For an inequality constraint g \leq 0, we have to penalize only positive values of g, while negative values are acceptable.
In the obstacle problem, this means we have to penalize only when the path tries to penetrate the obstacle. It is feasible, although maybe not optimal, to stray far from the shores of our circular lake, thus our penalty-regularized objective function is
where \mu is the penalty parameter and we used the ramp function

\big\langle x \big\rangle = \begin{cases} x & x \ge 0 \\ 0 & x \le 0. \end{cases}
For a general functional
to be minimized subject to the inequality constraint
the penalized functional becomes
The final step is to take the first variational derivative of the above functional and set it to zero. Here, we need to remember that the derivative of a ramp function is the Heaviside step function H.
In the above derivation, we used the fact that \big \langle x \big \rangle H(x) = xH(x). Note that we are discussing nonpositivity constraints here. Nonnegativity constraints can be dealt with by taking the negative of the constraint equation and solving for the equivalent nonpositivity constraint.
The first term in the above variational equation is the now-familiar contribution from the unconstrained problem. Let's see how to add the penalty term using the Weak Contribution node. We will use the simpler Dirichlet Boundary Condition node to fix the ends.

Implementing an inequality constraint using the penalty method.
Note that in the above weak contribution, we did not include \frac{\partial g}{\partial u^{\prime}}, since the constraint in this problem depends only on the solution and not on its spatial derivative.
Solving this problem for a sequence of increasing penalty parameters, while reusing the previous solution as an initial estimate, we get the result shown below.
Finding the shortest path around an obstacle using the penalty method. A zero penalty returns the unconstrained solution.
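Outside of COMSOL Multiphysics, the same penalty idea can be prototyped in a few lines. The sketch below is my own illustration (the discretization, penalty schedule, and parameter values are arbitrary choices): it minimizes the discretized path length plus a penalty on penetration of the circular obstacle, increasing the penalty parameter in stages and reusing the previous solution as the initial estimate.

import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 101)          # fixed horizontal grid
u_inner = (0.2 + 0.8 * x)[1:-1]         # straight line from (0, 0.2) to (1, 1)

def objective(u_inner, mu):
    u = np.concatenate(([0.2], u_inner, [1.0]))         # pinned end points
    length = np.sum(np.hypot(np.diff(x), np.diff(u)))   # discretized path length
    # Obstacle constraint g = r^2 - (distance to center)^2 <= 0, penalized via the ramp.
    g = 0.2**2 - (x - 0.5)**2 - (u - 0.5)**2
    return length + 0.5 * mu * np.sum(np.maximum(g, 0.0)**2)

for mu in [0.0, 1e1, 1e3, 1e5]:
    u_inner = minimize(objective, u_inner, args=(mu,), method="L-BFGS-B").x

With mu = 0 this returns the straight (unconstrained) line through the obstacle; as mu grows, the path is pushed to the shore of the lake, as in the figure above.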
In our discussion of the physical interpretation of the penalty method for equality constraints, we have said that the penalty term introduces a reaction proportional to the constraint violation. That interpretation applies, but for inequality constraints, the reaction is one-sided.
Imagine forcing a bead to stay in place using compressive springs. The springs are not hooked to the bead; they just touch. To keep the bead at x=0, we have to use two springs, one on each side of the origin. If it is acceptable for the bead to go to the left of the origin, our constraint is x \leq 0 and we have to place the compressive springs only on the right side. That is why we penalize \big \langle g \big \rangle instead of |g|.
A spring analogy of penalty enforcement of equality (multilateral) and inequality (unilateral) constraints.

The Lagrange Multiplier Method
When we discussed the numerical properties of different constraint enforcement strategies, we said that while the Lagrange multiplier method enforces constraints exactly, it has certain undesirable properties in numerical solutions. Namely, it is sensitive to initial estimates of the solution and can require a direct linear solver. These downsides are still there, but in inequality constraints, there is an additional challenge. Specifically, the constraint may not always be active.
Consider our path planning problem. In parts of the path not touching the obstacle, the constraint is not active. However, we do not know beforehand where the constraints will be active and not active. There are several strategies to deal with this, but we would like to point out two commonly used methods.
Active Set Strategy
Assume some constraints are active and some are not. In a distributed constraint, this means picking points where we expect g=0 and assuming the other points are inside the feasible region with g <0. At the inactive points, the Lagrange multiplier should be zero. On the active set, the Lagrange multiplier should not be negative. These are the so-called Karush-Kuhn-Tucker (KKT) conditions. If, after computation, the active set changes or the KKT conditions are violated, we have to appropriately update the active set and recompute. The following flowchart summarizes this procedure:
An active set strategy for the Lagrange multiplier enforcement of inequality constraints.
Through this iterative process, each iteration solves an equality constrained problem. Lagrange multiplier implementations of equality constraints have been discussed in the previous blog post in this series.
Slack Variables
Another strategy is introducing the so-called slack variables to change the inequality into equality. The constraint g \leq 0 is equivalent to g + s^2=0. Now we have an equality constraint involving a new slack variable. If the inequality is a distributed (pointwise) constraint, the slack variable as well as the Lagrange multiplier will be functions.
On one hand, the slack variable strategy introduces yet another unknown to be solved for, but solves the problem in one go. On the other hand, the active set strategy does not need this variable but has to solve multiple problems until the active set stops changing.
The Augmented Lagrangian Method
Like we did for equality constraints, let’s use a calculus problem to discuss the basic ideas of this method. Augmenting the unconstrained objective function with a Lagrange multiplier term with an estimated multiplier and a penalty term, we have
The first-order optimality condition for this constraint is
This suggests the multiplier update
If we start with a zero initial estimate for the Lagrange multiplier, the above equation updates the Lagrange multiplier only when g is positive. Thus, the Lagrange multiplier is never negative and is positive only on points that tend to violate the constraint.
The augmented Lagrangian method, being an approximate constraint enforcement strategy, will allow a small violation of the constraint. In fact, if there is no violation of the constraint, the Lagrange multiplier stays zero. As such, this strategy satisfies the dual feasibility part (\lambda \ge 0) of the KKT conditions exactly but satisfies the complementary slackness part (\lambda g = 0) only approximately. Both conditions are exactly satisfied with the Lagrange multiplier method.
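For a scalar calculus-style problem, the loop can be sketched as follows. This is my own illustration of the standard augmented Lagrangian recipe for an inequality constraint (not code from the blog post or from COMSOL); the objective, constraint, and parameter values are arbitrary.

from scipy.optimize import minimize_scalar

# Toy problem: minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0.
# The constrained minimum is x = 1, with multiplier lambda = 2.
f = lambda x: (x - 2.0)**2
g = lambda x: x - 1.0

lam, mu = 0.0, 10.0
for _ in range(15):
    aug = lambda x: f(x) + (max(0.0, lam + mu * g(x))**2 - lam**2) / (2 * mu)
    x = minimize_scalar(aug).x
    lam = max(0.0, lam + mu * g(x))   # the multiplier update never goes negative

print(x, lam)   # approximately 1.0 and 2.0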
Obstacles with Irregular Shapes
In today’s example, we used a very idealized lake whose boundaries could be described by a simple function. Or perhaps it was a pool. Sometimes, inequality constraints have such simple analytical forms. Nonnegativity constraints in chemical reactions or population dynamics are examples of this. However, other constraints do not have such simple forms.
For example, in contact mechanics, we want to keep the gap between contacting objects nonnegative, but the boundaries of the objects are rarely so simple that we can define them by smooth analytical functions. On top of that, for large deformation problems, we want to enforce the contact constraints on the deforming domains, but we definitely don’t have an analytical description of the deforming domains. As such, the constraints have to be satisfied on the discrete version of the objects obtained from the mesh. Complex search operations have to be used to find out, as the objects move and deform, what points come into and out of contact with other points. There are built-in features for this and other contact simulation procedures in the COMSOL Multiphysics® software.
For some simple geometries and deformations, we can conceivably use the General Extrusion operator to compute the distance between deforming objects without using the contact mechanics functionality. More complicated geometries would need the latter.
Up Next…
So far in this series, we have shown how to solve constrained and unconstrained variational problems using COMSOL Multiphysics and discussed the pros and cons of different numerical strategies for constraint enforcement. That said, we limited ourselves to 1D single-physics problems and variational problems with, at most, first-order derivatives in the functional.
With the basics of this subject covered, we hope we are ready to tackle higher dimensions, higher-order derivatives, and multiple fields. This will come in the next and final blog post in this series. Stay tuned!
|
The continuous time Hilbert transform is $$\hat x(t) := x(t) + j\left( p.v. \left\{\frac{1}{t\pi} \ast x \right\} (t)\right)$$
where $$ \theta(t) = \tan^{-1}\left( \frac{ p.v. \left\{\frac{1}{t\pi} \ast x \right\} (t)}{x(t)} \right)$$ and $$ \omega(t) = \theta'(t) $$
and since $\theta(t)$ is smooth, you can actually calculate the instantaneous frequency. But in the discrete-time domain, the angle function is discrete, so given that in the discrete-time domain your Hilbert transform filter is
$$ h_{HT}[n] = \left\{ \begin{array}{ll} \frac{1}{n\pi} & \mbox{if}\ \ n \neq 0 \\ 0 & \mbox{otherwise} \\ \end{array} \right.$$
How do you get the approximate frequency? Or are you really just getting $\Delta\theta[n]$? Sorry if this is me just getting hung up on something trivial, but most references don't seem to clearly address this.
$$\theta[n] = \tan^{-1}\left(\frac{(h_{HT} \ast x)[n]}{x[n]}\right)$$ $$\omega[n] = \left\{ \begin{array}{ll} \theta[n] \ast \frac{1}{T_s}(\delta[n] - \delta[n-1]) & ? \\ \theta[n] \ast \frac{1}{2T_s}(\delta[n+1] - \delta[n-1]) & ? \\ ??? \end{array} \right. $$
I'm looking for what is typically done. The "Standard" approach.
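For reference, what seems to be the usual practical recipe is: form the analytic signal, unwrap the sampled phase, and then take one of the finite differences above, scaled by the sample rate. A minimal sketch, assuming SciPy is available:

import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                    # sample rate in Hz
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 50 * t)                 # test tone at 50 Hz

analytic = hilbert(x)                          # x + j * (discrete Hilbert transform of x)
theta = np.unwrap(np.angle(analytic))          # unwrapped phase theta[n]
inst_freq = np.diff(theta) * fs / (2 * np.pi)  # backward difference, converted to Hz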
|
Image of Subset under Relation is Subset of Image/Corollary 3

Corollary of Image of Subset under Relation is Subset of Image
Let $S$ and $T$ be sets.
Let $f: S \to T$ be a mapping from $S$ to $T$.
Let $C, D \subseteq T$.

Then:

$C \subseteq D \implies f^{-1} \sqbrk C \subseteq f^{-1} \sqbrk D$

This can be expressed in the language and notation of inverse image mappings as:

$\forall C, D \in \powerset T: C \subseteq D \implies \map {f^\gets} C \subseteq \map {f^\gets} D$

Proof

The inverse of $f$ is a relation: $f^{-1} \subseteq T \times S$.
The result follows directly from Image of Subset under Relation is Subset of Image.
$\blacksquare$
|
We have seen how integration can be used to find an area between a curve and the $x$-axis. With very little change we can find some areas between curves; indeed, the area between a curve and the $x$-axis may be interpreted as the area between the curve and a second "curve'' with equation $y=0$. In the simplest of cases, the idea is quite easy to understand.
Example 8.1.1 Find the area below $\ds f(x)= -x^2+4x+3$ and above $\ds g(x)=-x^3+7x^2-10x+5$ over the interval $1\le x\le2$. In figure 8.1.1 we show the two curves together, with the desired area shaded, then $f$ alone with the area under $f$ shaded, and then $g$ alone with the area under $g$ shaded.
It is clear from the figure that the area we want is the area under $f$ minus the area under $g$, which is to say $$\int_1^2 f(x)\,dx-\int_1^2 g(x)\,dx = \int_1^2 f(x)-g(x)\,dx.$$ It doesn't matter whether we compute the two integrals on the left and then subtract or compute the single integral on the right. In this case, the latter is perhaps a bit easier: $$\eqalign{ \int_1^2 f(x)-g(x)\,dx&=\int_1^2 -x^2+4x+3-(-x^3+7x^2-10x+5)\,dx\cr &=\int_1^2 x^3-8x^2+14x-2\,dx\cr &=\left.{x^4\over4}-{8x^3\over3}+7x^2-2x\right|_1^2\cr &={16\over4}-{64\over3}+28-4-({1\over4}-{8\over3}+7-2)\cr &=23-{56\over3}-{1\over4}={49\over12}.\cr }$$
It is worth examining this problem a bit more. We have seen one way to look at it, by viewing the desired area as a big area minus a small area, which leads naturally to the difference between two integrals. But it is instructive to consider how we might find the desired area directly. We can approximate the area by dividing the area into thin sections and approximating the area of each section by a rectangle, as indicated in figure 8.1.2. The area of a typical rectangle is $\Delta x(f(x_i)-g(x_i))$, so the total area is approximately $$\sum_{i=0}^{n-1} (f(x_i)-g(x_i))\Delta x.$$ This is exactly the sort of sum that turns into an integral in the limit, namely the integral $$\int_1^2 f(x)-g(x)\,dx.$$ Of course, this is the integral we actually computed above, but we have now arrived at it directly rather than as a modification of the difference between two other integrals. In that example it really doesn't matter which approach we take, but in some cases this second approach is better.
Example 8.1.2 Find the area below $\ds f(x)= -x^2+4x+1$ and above $\ds g(x)=-x^3+7x^2-10x+3$ over the interval $1\le x\le2$; these are the same curves as before but lowered by 2. In figure 8.1.3 we show the two curves together. Note that the lower curve now dips below the $x$-axis. This makes it somewhat tricky to view the desired area as a big area minus a smaller area, but it is just as easy as before to think of approximating the area by rectangles. The height of a typical rectangle will still be $f(x_i)-g(x_i)$, even if $g(x_i)$ is negative. Thus the area is $$ \int_1^2 -x^2+4x+1-(-x^3+7x^2-10x+3)\,dx =\int_1^2 x^3-8x^2+14x-2\,dx. $$ This is of course the same integral as before, because the region between the curves is identical to the former region—it has just been moved down by 2.
Example 8.1.3 Find the area between $\ds f(x)= -x^2+4x$ and $\ds g(x)=x^2-6x+5$ over the interval $0\le x\le 1$; the curves are shown in figure 8.1.4. Generally we should interpret "area'' in the usual sense, as a necessarily positive quantity. Since the two curves cross, we need to compute two areas and add them. First we find the intersection point of the curves: $$\eqalign{ -x^2+4x&=x^2-6x+5\cr 0&=2x^2-10x+5\cr x&={10\pm\sqrt{100-40}\over4}={5\pm\sqrt{15}\over2}.\cr }$$ The intersection point we want is $\ds x=a=(5-\sqrt{15})/2$. Then the total area is $$\eqalign{ \int_0^a x^2-6x+5-(-x^2+4x)\,dx&+\int_a^1 -x^2+4x-(x^2-6x+5)\,dx\cr &=\int_0^a 2x^2-10x+5\,dx+\int_a^1 -2x^2+10x-5\,dx\cr &=\left.{2x^3\over3}-5x^2+5x\right|_0^a + \left.-{2x^3\over3}+5x^2-5x\right|_a^1\cr &=-{52\over3}+5\sqrt{15}, }$$ after a bit of simplification.
Example 8.1.4 Find the area between $\ds f(x)= -x^2+4x$ and $\ds g(x)=x^2-6x+5$; the curves are shown in figure 8.1.5. Here we are not given a specific interval, so it must be the case that there is a "natural'' region involved. Since the curves are both parabolas, the only reasonable interpretation is the region between the two intersection points, which we found in the previous example: $${5\pm\sqrt{15}\over2}.$$ If we let $\ds a=(5-\sqrt{15})/2$ and $\ds b=(5+\sqrt{15})/2$, the total area is $$\eqalign{ \int_a^b -x^2+4x-(x^2-6x+5)\,dx &=\int_a^b -2x^2+10x-5\,dx\cr &=\left.-{2x^3\over3}+5x^2-5x\right|_a^b\cr &=5\sqrt{15}.\cr }$$ after a bit of simplification.
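These closed forms are easy to check numerically; the following throwaway snippet (not part of the text) confirms the last answer.

import numpy as np
from scipy.integrate import quad

f = lambda x: -x**2 + 4*x
g = lambda x: x**2 - 6*x + 5

a = (5 - np.sqrt(15)) / 2
b = (5 + np.sqrt(15)) / 2

area, _ = quad(lambda x: np.abs(f(x) - g(x)), a, b)
print(area, 5 * np.sqrt(15))   # both approximately 19.36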
Exercises 8.1
Find the area bounded by the curves.
Ex 8.1.1 $\ds y=x^4-x^2$ and $\ds y=x^2$ (the part to the right of the $y$-axis) (answer)
Ex 8.1.2 $\ds x=y^3$ and $\ds x=y^2$ (answer)
Ex 8.1.3 $\ds x=1-y^2$ and $y=-x-1$ (answer)
Ex 8.1.4 $\ds x=3y-y^2$ and $x+y=3$ (answer)
Ex 8.1.5 $y=\cos(\pi x/2)$ and $\ds y=1- x^2$ (in the first quadrant) (answer)
Ex 8.1.6 $y=\sin(\pi x/3)$ and $y=x$ (in the first quadrant) (answer)
Ex 8.1.7 $\ds y=\sqrt{x}$ and $\ds y=x^2$ (answer)
Ex 8.1.8 $\ds y=\sqrt x$ and $\ds y=\sqrt{x+1}$, $0\le x\le 4$ (answer)
Ex 8.1.9 $x=0$ and $\ds x=25-y^2$ (answer)
Ex 8.1.10 $y=\sin x\cos x$ and $y=\sin x$, $0\le x\le \pi$ (answer)
Ex 8.1.11 $\ds y=x^{3/2}$ and $\ds y=x^{2/3}$ (answer)
Ex 8.1.12 $\ds y=x^2-2x$ and $y=x-2$ (answer)
|
We start by considering equations in which only the first derivative of the function appears.
Definition 19.1.1 A first order differential equation is an equation of the form $F(t, y, \dot{y})=0$. A solution of a first order differential equation is a function $f(t)$ that makes $\ds F(t,f(t),f'(t))=0$ for every value of $t$.
Here, $F$ is a function of three variables which we label $t$, $y$, and $\dot{y}$. It is understood that $\dot{y} $ will explicitly appear in the equation although $t$ and $y$ need not. The term "first order'' means that the first derivative of $y$ appears, but no higher order derivatives do.
Example 19.1.2 The equation from Newton's law of cooling, $\dot{y}=k(M-y)$ is a first order differential equation; $F(t,y,\dot y)=k(M-y)-\dot y$.
Example 19.1.3 $\ds\dot{y}=t^2+1$ is a first order differential equation; $\ds F(t,y,\dot y)= \dot y-t^2-1$. All solutions to this equation are of the form $\ds t^3/3+t+C$.
Definition 19.1.4 A first order initial value problem is a system of equations of the form $F(t, y, \dot{y})=0$, $y(t_0)=y_0$. Here $t_0$ is a fixed time and $y_0$ is a number. A solution of an initial value problem is a solution $f(t)$ of the differential equation that also satisfies the initial condition $f(t_0) = y_0$.
Example 19.1.5 The initial value problem $\ds\dot{y}=t^2+1$, $y(1)=4$ has solution $\ds f(t)=t^3/3+t+8/3$.
The general first order equation is rather too general, that is, we can't describe methods that will work on them all, or even a large portion of them. We can make progress with specific kinds of first order differential equations. For example, much can be said about equations of the form $\ds \dot{y} = \phi (t, y)$ where $\phi $ is a function of the two variables $t$ and $y$. Under reasonable conditions on $\phi$, such an equation has a solution and the corresponding initial value problem has a unique solution. However, in general, these equations can be very difficult or impossible to solve explicitly.
Example 19.1.6 Consider this specific example of an initial value problem for Newton's law of cooling: $\dot y = 2(25-y)$, $y(0)=40$. We first note that if $y(t_0) = 25$, the right hand side of the differential equation is zero, and so the constant function $y(t)=25$ is a solution to the differential equation. It is not a solution to the initial value problem, since $y(0)\not=40$. (The physical interpretation of this constant solution is that if a liquid is at the same temperature as its surroundings, then the liquid will stay at that temperature.) So long as $y$ is not 25, we can rewrite the differential equation as $$\eqalign{{dy\over dt}{1\over 25-y}&=2\cr {1\over 25-y}\,dy&=2\,dt,\cr} $$ so $$\int {1\over 25-y}\,dy = \int 2\,dt,$$ that is, the two anti-derivatives must be the same except for a constant difference. We can calculate these anti-derivatives and rearrange the results: $$\eqalign{ \int {1\over 25-y}\,dy &= \int 2\,dt\cr (-1)\ln|25-y| &= 2t+C_0\cr \ln|25-y| &= -2t - C_0 = -2t + C\cr |25-y| &= e^{-2t+C}=e^{-2t} e^C\cr y-25 & = \pm\, e^C e^{-2t} \cr y &= 25 \pm e^C e^{-2t} =25+Ae^{-2t}.\cr}$$ Here $\ds A = \pm\, e^C = \pm\, e^{-C_0}$ is some non-zero constant. Since we want $y(0)=40$, we substitute and solve for $A$: $$\eqalign{ 40&=25+Ae^0\cr 15&=A,\cr}$$ and so $\ds y=25+15 e^{-2t}$ is a solution to the initial value problem. Note that $y$ is never 25, so this makes sense for all values of $t$. However, if we allow $A=0$ we get the solution $y=25$ to the differential equation, which would be the solution to the initial value problem if we were to require $y(0)=25$. Thus, $\ds y=25+Ae^{-2t}$ describes all solutions to the differential equation $\ds\dot y = 2(25-y)$, and all solutions to the associated initial value problems.
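As a quick sanity check, the formula $y=25+15e^{-2t}$ can be compared with a numerical solution of the initial value problem. A minimal sketch using SciPy (illustrative only):

import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, y: 2 * (25 - y), (0, 3), [40],
                dense_output=True, rtol=1e-8, atol=1e-8)

t = np.linspace(0, 3, 7)
exact = 25 + 15 * np.exp(-2 * t)
print(np.max(np.abs(sol.sol(t)[0] - exact)))   # tiny: the two solutions agree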
Why could we solve this problem? Our solution depended on rewriting the equation so that all instances of $y$ were on one side of the equation and all instances of $t$ were on the other; of course, in this case the only $t$ was originally hidden, since we didn't write $dy/dt$ in the original equation. This is not required, however.
Example 19.1.7 Solve the differential equation $\ds\dot y = 2t(25-y)$. This is almost identical to the previous example. As before, $y(t)=25$ is a solution. If $y\not=25$, $$\eqalign{ \int {1\over 25-y}\,dy &= \int 2t\,dt\cr (-1)\ln|25-y| &= t^2+C_0\cr \ln|25-y| &= -t^2 - C_0 = -t^2 + C\cr |25-y| &= e^{-t^2+C}=e^{-t^2} e^C\cr y-25 & = \pm\, e^C e^{-t^2} \cr y &= 25 \pm e^C e^{-t^2} =25+Ae^{-t^2}.\cr}$$ As before, all solutions are represented by $\ds y=25+Ae^{-t^2}$, allowing $A$ to be zero.
Definition 19.1.8 A first order differential equation is separable if it can be written in the form $\dot{y} = f(t) g(y)$.
As in the examples, we can attempt to solve a separable equation by converting to the form $$\int {1\over g(y)}\,dy=\int f(t)\,dt.$$ This technique is called separation of variables. The simplest (in principle) sort of separable equation is one in which $g(y)=1$, in which case we attempt to solve $$\int 1\,dy=\int f(t)\,dt.$$ We can do this if we can find an anti-derivative of $f(t)$.
Also as we have seen so far, a differential equation typically has an infinite number of solutions. Ideally, but certainly not always, a corresponding initial value problem will have just one solution. A solution in which there are no unknown constants remaining is called a particular solution.
The general approach to separable equations is this: Suppose we wish to solve $\dot{y} = f(t) g(y) $ where $f$ and $g$ are continuous functions. If $g(a)=0$ for some $a$ then $y(t)=a$ is a constant solution of the equation, since in this case $\dot y = 0 = f(t)g(a)$. For example, $\dot{y} =y^2 -1$ has constant solutions $y(t)=1$ and $y(t)=-1$.
To find the nonconstant solutions, we note that the function $1/g(y)$ is continuous where $g\not=0$, so $1/g$ has an antiderivative $G$. Let $F$ be an antiderivative of $f$. Now we write $$G(y) = \int {1\over g(y)}\,dy = \int f(t)\,dt=F(t)+C,$$ so $G(y)=F(t)+C$. Now we solve this equation for $y$.
Of course, there are a few places this ideal description could go wrong: we need to be able to find the antiderivatives $G$ and $F$, and we need to solve the final equation for $y$. The upshot is that the solutions to the original differential equation are the constant solutions, if any, and all functions $y$ that satisfy $G(y)=F(t)+C$.
Example 19.1.9 Consider the differential equation $\dot y=ky$. When $k>0$, this describes certain simple cases of population growth: it says that the change in the population $y$ is proportional to the population. The underlying assumption is that each organism in the current population reproduces at a fixed rate, so the larger the population the more new organisms are produced. While this is too simple to model most real populations, it is useful in some cases over a limited time. When $k< 0$, the differential equation describes a quantity that decreases in proportion to the current value; this can be used to model radioactive decay.
The constant solution is $y(t)=0$; of course this will not be the solution to any interesting initial value problem. For the non-constant solutions, we proceed much as before: $$\eqalign{ \int {1\over y}\,dy&=\int k\,dt\cr \ln|y| &= kt+C\cr |y| &= e^{kt} e^C\cr y &= \pm \,e^C e^{kt} \cr y&= Ae^{kt}.\cr}$$ Again, if we allow $A=0$ this includes the constant solution, and we can simply say that $\ds y=Ae^{kt}$ is the general solution. With an initial value we can easily solve for $A$ to get the solution of the initial value problem. In particular, if the initial value is given for time $t=0$, $y(0)=y_0$, then $A=y_0$ and the solution is $\ds y= y_0 e^{kt}$.
Exercises 19.1
Ex 19.1.1 Which of the following equations are separable?
a. $\ds \dot{y} = \sin (ty)$
b. $\ds \dot{y} = e^t e^y $
c. $\ds y\dot{y} = t $
d. $\ds \dot{y} = (t^3 -t) \arcsin(y)$
e. $\ds \dot{y} = t^2 \ln y + 4t^3 \ln y $
Ex 19.1.2 Solve $\ds\dot{y} = 1/(1+t^2)$. (answer)
Ex 19.1.3 Solve the initial value problem $\dot{y} = t^n$ with $y(0)=1$ and $n\ge 0$. (answer)
Ex 19.1.4 Solve $\dot{y} = \ln t$. (answer)
Ex 19.1.5 Identify the constant solutions (if any) of $\dot{y} = t\sin y$. (answer)
Ex 19.1.6 Identify the constant solutions (if any) of $\ds\dot{y}=te^y$. (answer)
Ex 19.1.7 Solve $\dot{y} = t/y$. (answer)
Ex 19.1.8 Solve $\ds\dot{y} = y^2 -1$. (answer)
Ex 19.1.9 Solve $\ds\dot{y} = t/(y^3 - 5)$. You may leave your solution in implicit form: that is, you may stop once you have done the integration, without solving for $y$. (answer)
Ex 19.1.10 Find a non-constant solution of the initial value problem $\dot{y} = y^{1/3}$, $y(0)=0$, using separation of variables. Note that the constant function $y(t)=0$ also solves the initial value problem. This shows that an initial value problem can have more than one solution. (answer)
Ex 19.1.11 Solve the equation for Newton's law of cooling leaving $M$ and $k$ unknown. (answer)
Ex 19.1.12 After 10 minutes in Jean-Luc's room, his tea has cooled to $40^\circ$ Celsius from $100^\circ$ Celsius. The room temperature is $25^\circ$ Celsius. How much longer will it take to cool to $35^\circ$? (answer)
Ex 19.1.13 Solve the logistic equation $\dot{y} = ky(M-y)$. (This is a somewhat more reasonable population model in most cases than the simpler $\dot y=ky$.) Sketch the graph of the solution to this equation when $M=1000$, $k=0.002$, $y(0)=1$. (answer)
Ex 19.1.14 Suppose that $\dot{y} = ky$, $y(0)=2$, and $\dot{y}(0)=3$. What is $y$? (answer)
Ex 19.1.15 A radioactive substance obeys the equation $\dot{y} = ky$ where $k< 0$ and $y$ is the mass of the substance at time $t$. Suppose that initially, the mass of the substance is $y(0)=M>0$. At what time does half of the mass remain? (This is known as the half life. Note that the half life depends on $k$ but not on $M$.) (answer)
Ex 19.1.16 Bismuth-210 has a half life of five days. If there is initially 600 milligrams, how much is left after 6 days? When will there be only 2 milligrams left? (answer)
Ex 19.1.17 The half life of carbon-14 is 5730 years. If one starts with 100 milligrams of carbon-14, how much is left after 6000 years? How long do we have to wait before there is less than 2 milligrams? (answer)
Ex 19.1.18 A certain species of bacteria doubles its population (or its mass) every hour in the lab. The differential equation that models this phenomenon is $\dot{y} = ky$, where $k>0$ and $y$ is the population of bacteria at time $t$. What is $y$? (answer)
Ex 19.1.19 If a certain microbe doubles its population every 4 hours and after 5 hours the total population has mass 500 grams, what was the initial mass? (answer)
|
The question Should chat have TeX support was first asked 3½ years ago, and it has been asked at intervals since then. I am not asking that question. I know that chat should have TeX support and so does everyone else. My questions are, why hasn't this been done and when will it be done? The lack of MathJax support in chat is a serious problem with the site, and it should be fixed as soon as possible, preferably today.
I am aware of the existing solutions such as greasemonkey scripts. I use them myself. They do not solve the problem because while they allow one to receive formatted mathematics, they don't allow one to send it, unless the recipient also has a script installed. But in general I can't expect the person I am talking to will have one installed. Then I think I am sending “No, you need to consider $\int_0^\infty \def\O{\mathcal O} \O\left(\operatorname{erf}(\frac x2)\right)\;dx$” but the hapless loser I am trying to help only sees “No, you need to consider \int_0^\infty \def\O{\mathcal O} \O\left(\operatorname{erf}(\frac x2)\right)\;dx”.
When a discussion goes on in comments, SE posts a notice telling the people involved to take the discussion to chat:
That sounds great at first, but it is completely unhelpful, because if the discussion is taken to chat, you will no longer be able to actually discuss mathematics. And the prior discussion, imported automatically, will look like this to the person you are trying to help:
It seems to me completely obvious that this feature is crucial, and its lack should have been addressed years ago, and yet it has not been. So I repeat:
What is causing the delay on this? When will this be fixed? And if it's been decided for some reason that it will never happen, then what is going to be done instead? What's the point of having chat at all, when it doesn't work? The purpose of the site is to ask and answer questions about mathematics, and the only purpose of chat is to support that. Other features of the site assume that it will. But in its present form, it does not and it cannot.
|
If $E \subset \mathbb{R}^n$ and $x \in E$, we have that $ \operatorname{dist}(x,\partial E) = \sup \{ \gamma \geq 0 : B(x, \gamma) \subset E \}$. What characteristics of $\mathbb{R}^n$ make this true? In other words, what restrictions would we have to impose on a general metric space (or topological space) for this to be true?
Let $X$ be a metric space and $E\subseteq X$ a subset with non-empty boundary. Let $x\in E$. Then by definition, $\operatorname{dist}(x,\partial E)=\inf\{\,\operatorname{dist}(x,y)\mid y\in\partial E\,\}$. So for any $\gamma>\operatorname{dist}(x,\partial E)$, there exists $y\in\partial E$ with $y\in B(x,\gamma)$. By definition of boundary, $B(y,\gamma-\operatorname{dist}(x,\partial E))$ contains exterior points and is $\subseteq B(x,\gamma)$. Hence $B(x,\gamma)\not\subseteq E$. However, for $\gamma<\operatorname{dist}(x,\partial E)$, we can only conclude that $B(x,\gamma)\cap \partial E=\emptyset$, while what we'd need is $B(x,\gamma)\subseteq E$. It is possible that $B(x,\gamma)$ intersects both the interior and the exterior of $E$, but not its boundary. In that case, $B(x,\gamma)$ is not connected (as witnessed by the interior and exterior). Thus the important property that $\Bbb R^n$ has is that
every open ball is connected.
Remark. This may be somewhat reminiscent of "locally connected" - but that is not enough. Consider $X=\Bbb R^2\setminus ([0,\infty)\times\{0\})$, which has all the connectivity you want, but for $E=\{\,(x,y)\in X\mid y>0\,\}$ and $a=(4,3)$, we have $\operatorname{dist}(a,\partial E)=5$ and $B(a,4)\not\subseteq E$. As $X$ is homeomorphic to $\Bbb R^2$, this is really a property of the metric, not just of the topology (as should be expected from the very definition).
|
For reasons that I find very mysterious, Unicode has the full range of Greek in sans serif bold, upright and italic, but it doesn't cover sans serif Greek in medium weight; to wit, there are
MATHEMATICAL SANS-SERIF BOLD CAPITAL GAMMA (U+1D758)
MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL GAMMA (U+1D792)
and the other Greek letters (upper and lower case), but no medium weight ones.
So it's not surprising that \symsf{\Gamma} (or \mathsf{\Gamma} in an older version of unicode-math) gives no different symbol: there's none that satisfies the request.

You can get through with \textsf:
\documentclass{article}
\usepackage{amsmath}
\usepackage{unicode-math}
\begin{document}
$\Gamma\ne\textsf{\upshape Γ}\ne\symbfsf{\Gamma}$
$Γ\ne\textsf{\upshape Γ}\ne\symbfsf{Γ}$
\end{document}
A kludge, I know, but I can't offer any better. Of course the sans serif font defined in the document should support Greek.
|
Rayleigh Scattering of Isolated Species
( Species == ions, atoms, molecules )
See Polarization for the low wavenumber (frequency / speed of light ) approximation used for Rayleigh scattering.
Scattering is due to the polarization of species. The polarization can be summed from the behavior of individual resonances and damping factors (related to natural linewidth and spontaneous emission rates), which I am still learning about.
The scattering cross section from a single resonator at frequency \omega is derived from Feynman Lectures on Physics, volume one, chapter 32, equation 32.15, and is proportional to:

\int \frac{\omega^4}{(\omega^2 - \omega_0^2)^2 + \gamma^2 \omega^2} \, d\omega
That can be integrated using Wolfram Alpha, but the result is a frightening number of radicals containing negative values, given that \omega_0 > \gamma > 0. And I don't have a list of resonances (\omega_0) and resonance bandwidths (\gamma) anyway; the integration would be highly sensitive to resonance bandwidth if the resonances occur in the middle of the Rayleigh scattering maximum of the solar spectrum.
So, until I find accurate information, I'll just assume the low frequency polarization.
Since the solar spectrum drops off exponentially in the UV, Rayleigh scattering seems to peak in the near-UV, around 300 nm wavelength.
|
As mentioned in the comments by Roland, this term is not common and was first used by the authors of the mentioned paper (and also in the package mentioned by Roland in the comments - STING).
From this link you can find the definition of the Last Heavy Atom which is possibly the most distal non-hydrogen (N, C, O, S) atom in the amino-acid side chain.
$$\begin{array}{|c|c|c|c|}
\hline
Ala\ :\ C_\beta & Asp\ :\ O_{\delta^2} & Trp\ :\ C_{\eta^2} & Asn\ :\ N_{\delta^2} \\
\hline
Lys\ :\ N_\zeta & Glu\ :\ O_{\epsilon^2} & Ser\ :\ O_\gamma & Gln\ :\ N_{\epsilon^2} \\
\hline
Cys\ :\ S_\gamma & His\ :\ N_{\epsilon^2} & Tyr\ :\ O_\eta & Val\ :\ C_{\gamma^2} \\
\hline
Gly\ :\ C_\alpha & Leu\ :\ C_{\delta^2} & Met\ :\ C_\epsilon & Ile\ :\ C_{\delta^1} \\
\hline
Arg\ :\ N_{\eta^2} & Phe\ :\ C_\zeta & Pro\ :\ C_\delta & Thr\ :\ C_{\gamma^2} \\
\hline
\end{array}$$
They have mentioned in the same link that:
If in any of the PDB files (to be analyzed by the BLUE STAR STING components) that specific atom (LHA) is missing in the record, then our algorithms will search for the next closer atom in the side chain that would be considered as the LHA.
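For convenience, the table above can be written as a small lookup keyed by residue name. The PDB-style atom names below are my own translation of the Greek labels (for example $N_\zeta \to$ NZ); double-check them against the BLUE STAR STING documentation before relying on them.

# Last Heavy Atom per residue, in standard PDB atom naming (my translation of
# the Greek subscripts in the table above -- verify before use).
LHA = {
    "ALA": "CB",  "ARG": "NH2", "ASN": "ND2", "ASP": "OD2", "CYS": "SG",
    "GLN": "NE2", "GLU": "OE2", "GLY": "CA",  "HIS": "NE2", "ILE": "CD1",
    "LEU": "CD2", "LYS": "NZ",  "MET": "CE",  "PHE": "CZ",  "PRO": "CD",
    "SER": "OG",  "THR": "CG2", "TRP": "CH2", "TYR": "OH",  "VAL": "CG2",
}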
The exact location of some of the abovementioned atoms.
Some of the locations might be confusing, so I am just indicating what these are for a few of the amino acids (especially those with branched and cyclic side chains).
[Images reproduced from here.]

Arginine
The terminal nitrogens $\eta^1$ and $\eta^2$ are symmetric and the atom that is actually (stereochemically) the most distal is considered the LHA, as inferred from the protein structure. The second choice, obviously is the $\eta^1$-Nitrogen.
Aspartic acid
The two terminal oxygens ($\delta^1$ and $\delta^2$) in the ionized form are equivalent (also otherwise, because of resonance). The actual distal atom would be labelled as $\delta^2$.
You can extrapolate the same for glutamic acid.

Histidine
$\epsilon^2$ refers to the pyrrolic nitrogen.
Leucine
The last carbons ($\delta^1$ and $\delta^2$) are equivalent and the choice is based on the actual distance.
Extrapolate for valine.

Phenylalanine
$\zeta$ refers to the carbon in the para-position of the benzene ring. For tyrosine, the LHA is the oxygen of the para-OH on the benzene ring.

Tryptophan
Proline
For asparagine and glutamine, the LHA is the amide nitrogen of the side chain.
|
There are reasons that any modern example is likely to resemble the status of Legendre's constant. Most (but not all) interesting numbers admit a polynomial-time algorithm to compute their digits. In fact, there is an interesting semi-review by Borwein and Borwein that shows that most of the usual numbers in calculus (for example, $\exp(\sqrt{2}+\pi)$) have a quasilinear time algorithm on a RAM machine, meaning $\tilde{O}(n) = O(n(\log n)^\alpha)$ time to compute $n$ digits. Once you have $n$ digits, you can use the continued fraction algorithm to find the best rational approximation with at most $n/2-O(1)$ digits in the denominator. The continued fraction algorithm is equivalent to the Euclidean algorithm, which also has a quasilinear time version according to Wikipedia.
Euler's constant has been to computed almost 30 billion digits, using a quasilinear time algorithm due to Brent and McMillan.
As a result, for any such number it's difficult to be surprised. You would need a mathematical coincidence that the number is rational, but with a denominator that is out of reach for modern computers. (This was Brent and McMillan's stated motivation in the case of Euler's constant.) I think that it would be fairly newsworthy if it happened. On the other hand, if you can only compute the digits very slowly, then your situation resembles Legendre's.
I got e-mail asking for a reference to the paper of Borwein and Borwein. The paper is On the complexity of familiar functions and numbers. To summarize the relevant part of this survey paper, any value or inverse value of an elementary function in the sense of calculus, including also hypergeometric functions as primitives, can be computed in quasilinear time. So can the gamma or zeta function evaluated at a rational number.
|
What does the number in brackets mean in these two examples?
$$21(1)\ \mathrm{cm^{-1}}$$
and
$$1.0(3)\times10^{-7}$$
The results of measurements and other numerical values of quantities are often given with an associated standard uncertainty. A numerical value and the associated uncertainty may be expressed as shown in the question:
$$\begin{align} y&=21(1)\ \mathrm{cm^{-1}}\\[6pt] &=a(b)\ \mathrm{cm^{-1}} \end{align}$$
where
$y$ is the estimate of the measurand (e.g. the result of a measurement) expressed in the unit $\mathrm{cm^{-1}}$, $a$ is the numerical value, and $b$ denotes a standard uncertainty expressed in terms of the least significant digit(s) in $a$.
It is important to note that the given uncertainty refers to the least significant digits of the given numerical value. For example, in the expression
$$l=23.4782(32)\ \mathrm m$$
the $(32)$ represents a standard uncertainty equal to
$$u(l)=0.0032\ \mathrm m$$
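This convention is simple to decode programmatically: the parenthesised digits are scaled by the place value of the last digit of the numerical value. A small sketch (my own, not taken from any standard):

import re

def parse_concise(s):
    # Parse, e.g., "23.4782(32)" into (23.4782, 0.0032).
    m = re.fullmatch(r"\s*(-?\d+(?:\.(\d+))?)\((\d+)\)\s*", s)
    value, decimals, unc_digits = m.group(1), m.group(2) or "", m.group(3)
    return float(value), int(unc_digits) * 10.0 ** (-len(decimals))

print(parse_concise("23.4782(32)"))     # (23.4782, 0.0032)
print(parse_concise("8.3144598(48)"))   # (8.3144598, 4.8e-06)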
Many physical constants are also given in this form. For example, the previously recommended value for the molar gas constant $R$ as given by NIST from 25 June 2015 until the new value became available on 20 May 2019:
$$R=8.3144598(48)\ \mathrm{J\ mol^{-1}\ K^{-1}}$$
where the $(48)$ represents a standard uncertainty of
$$u(R)=0.0000048\ \mathrm{J\ mol^{-1}\ K^{-1}}$$
This form is in accordance with various current standards, in particular
The Guide to the Expression of Uncertainty in Measurement (GUM) also shows other permissible forms:
7.2.2 When the measure of uncertainty is $u_\mathrm c(y)$, it is preferable to state the numerical result of the measurement in one of the following four ways in order to prevent misunderstanding. (The quantity whose value is being reported is assumed to be a nominally 100 g standard of mass $m_\mathrm S$; the words in parentheses may be omitted for brevity if $u_\mathrm c$ is defined elsewhere in the document reporting the result.)
1) “$m_\mathrm S=100{,}021\,47\ \mathrm g$ with (a combined standard uncertainty) $u_\mathrm c = 0{,}35\ \mathrm{mg}$.”
2) “$m_\mathrm S=100{,}021\,47(35)\ \mathrm g$, where the number in parentheses is the numerical value of (the combined standard uncertainty) $u_\mathrm c$ referred to the corresponding last digits of the quoted result.”
3) “$m_\mathrm S=100{,}021\,47(0{,}000\,35)\ \mathrm g$, where the number in parentheses is the numerical value of (the combined standard uncertainty) $u_\mathrm c$ expressed in the unit of the quoted result.”
4) “$m_\mathrm S=(100{,}021\,47\pm0{,}000\,35)\ \mathrm g$, where the number following the symbol $\pm$ is the numerical value of (the combined standard uncertainty) $u_\mathrm c$ and not a confidence interval.”
Note that item 2) corresponds to the form given in the question.
Concerning item 4), however, the GUM notes
The ± format should be avoided whenever possible because it has traditionally been used to indicate an interval corresponding to a high level of confidence and thus may be confused with expanded uncertainty (…).
Furthermore, ISO 80000 notes
Uncertainties are often expressed in the following manner: $(23{,}478\,2\pm0{,}003\,2)\ \mathrm m$. This is, however, wrong from a mathematical point of view. $23{,}478\,2\pm0{,}003\,2$ means $23{,}481\,4$ or $23{,}475\,0$, but not all values between these two values. (…)
From Wikipedia:
In metrology, physics, and engineering, the uncertainty or margin of error of a measurement, when explicitly stated, is given by a range of values likely to enclose the true value. This may be denoted by error bars on a graph, or by the following notations:
measured value ± uncertainty
measured value$^{+\text{uncertainty}}_{-\text{uncertainty}}$
measured value (uncertainty)
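As a rough companion to the notations above, a value and uncertainty can be rendered in the ± form and in the parenthesis form with a small sketch like the following (again my own illustration, not any standard library; it assumes the uncertainty is already quantized to the value's last decimal place):

    from decimal import Decimal

    def format_concise(value: Decimal, uncertainty: Decimal) -> str:
        """Parenthesis form, e.g. Decimal('100.02147'), Decimal('0.00035') -> '100.02147(35)'."""
        exponent = value.as_tuple().exponent              # -5 for 100.02147
        last_digits = int(uncertainty.scaleb(-exponent))  # 0.00035 -> 35
        return f"{value}({last_digits})"

    m_s, u_c = Decimal("100.02147"), Decimal("0.00035")
    print(f"({m_s} ± {u_c}) g")              # ± form (GUM item 4)
    print(f"{format_concise(m_s, u_c)} g")   # parenthesis form (GUM item 2)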
|
An original contribution in this work was the proposition of a contrast gain control mechanism (Section 2.3) via the differential equation
$$\frac{\displaystyle {dV}}{\displaystyle {dt}}(x,y,t)=I_\mathrm{OPL}(x,y,t) -g_\mathrm{A}(x,y,t) V(x,y,t)$$
(22)
with
$$g_\mathrm{A}(x,y,t) = G_{\sigma_\mathrm{A}} \stackrel{x,y}{*} E_{\tau_\mathrm{A}} \stackrel{t}{*} Q (V) \;\;(x,y,t),$$
(23)
$$Q(V) = g^0_\mathrm{A} + \lambda_\mathrm{A} V^2.$$
(24)
Mathematically, this dynamical system is difficult to study due to its high dimensionality (two variables $V$ and $g_\mathrm{A}$, expressed on spatial maps). Thus, in Wohrer (2007), we studied the simplified dynamical system
$$\frac{\displaystyle {dV}}{\displaystyle {dt}}(t)= {A} \cos (\omega t) - V(t)Q(V(t)),$$
(25)
for which we can prove contrast gain control properties. System (25) derives from (22) considering the following assumptions:
We considered a sinusoidal stimulation: $I_\mathrm{OPL}(t) = A \cos(\omega t)$. This is the simplest way to control both amplitude $A$ (i.e., contrast) and speed of temporal variation $\omega$ in the input. Furthermore, sinusoidal stimulation enables direct comparison of our system with linear ones, for which Fourier analysis can be done.
We assumed that $\sigma_\mathrm{A} = 0$, so that (25) depends only on time $t$, and not on any spatial structure $(x, y)$. This choice does not appear too restrictive, especially since contrast gain control is experimentally settled as a temporal property only (see Section 4.2).
We considered the asymptotic limit of Eq. (22), when parameter $\tau_\mathrm{A}$ in (23) tends to zero, yielding $g_\mathrm{A}(t) = Q(V_\mathrm{Bip}(t))$; the reduction is sketched just after this list. As a consequence, (25) is now a one-dimensional dynamical system, easier to study. The assumption $\tau_\mathrm{A} \simeq 0$ is justified in the scope of our simulations, for which we chose a small constant $\tau_\mathrm{A} = 5$ ms, as detailed in Sections 3.2.2 and 4.2.
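For readability, here is a brief sketch of that reduction under the three assumptions above (a restatement of Eqs. (22)–(25) rather than an additional result): with $\sigma_\mathrm{A}=0$ the spatial convolution in (23) disappears, and with $\tau_\mathrm{A}\to 0$ the temporal kernel acts as a delta, so (23)–(24) give $g_\mathrm{A}(t) = Q(V(t)) = g_\mathrm{A}^0 + \lambda_\mathrm{A} V(t)^2$. Substituting this and $I_\mathrm{OPL}(t) = A\cos(\omega t)$ into (22) yields
$$\frac{dV}{dt}(t) = A\cos(\omega t) - V(t)\,Q(V(t)),$$
which is exactly the one-dimensional system (25).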
In Wohrer (2007), we proved general properties of system (25).
First, (25) is a very stable system. Similarly to a simple linear exponential filter, the system (25) forgets its initial condition exponentially fast (Lohmiller and Slotine 1998): all trajectories converge asymptotically to a unique solution $V(t)$ which is \(\frac{2\pi}{\omega}\)-periodic (like the input current).
Furthermore, over one cycle, $V(t)$ reaches a single maximum $V_\mathrm{max}$ at time $t_\mathrm{max}$, and a single minimum $V_\mathrm{min} = -V_\mathrm{max}$ at time $t_\mathrm{min} = t_\mathrm{max} - \pi/\omega$. In Wohrer (2007), we studied $V_\mathrm{max}$ as a measure of the strength of the system’s response to the input current, and $\omega t_\mathrm{max}$ as a measure of the phase of the system’s response to the input current.
By studying $V_\mathrm{max}$ and $t_\mathrm{max}$, we thus provided a description of the system’s behavior according to input frequency and amplitude. This is presented in the following theorem, which shows that (25) acts as a low-pass, gain control system on its input, given suitable assumptions on $Q$.
Theorem 1 Let $V$ be a solution of (25), with $Q$ an even, convex and strictly positive function. First, we show how $V_\mathrm{max}$ and $\omega t_\mathrm{max}$ depend on the frequency $\omega$:
(i) Low-pass setting
$$\partial_{\omega}{V_\mathrm{max}}< 0\,\;and\,\;\displaystyle \lim_{\omega \to +\infty} V_\mathrm{max} = 0.$$
(ii) Phase delay
$$\partial_{\omega}{(\omega{t_\mathrm{max}})}> 0\,\;and\,\;\displaystyle \lim_{\omega \to +\infty} \omega{t_\mathrm{max}} = \frac{\pi}{2}\;(\mathrm{mod}\ 2\pi).$$
Second, we show how $V_\mathrm{max}$ and $\omega t_\mathrm{max}$ depend on the amplitude $A$:
(iii) Growth of $V_\mathrm{max}$
$$\partial_{{A}}{V_\mathrm{max}} > 0\,\;and\,\;\displaystyle \lim_{{A} \to +\infty} V_\mathrm{max} = +\infty.$$
(iv) Phase advance
$$\partial_{{A}}{t_\mathrm{max}} < 0\,\;and\,\;\displaystyle \lim_{{A} \to +\infty} \omega{t_\mathrm{max}} = 0 \; \textrm{(mod $2\pi$)}.$$
(v) Under-linearity
$$\begin{array}{rl} \forall\; (\omega, Q),\;& \text{if } A \text{ is high enough, } \partial_{{A}}{\frac{V_\mathrm{max}}{{A}}} < 0.\\ & \text{Also, } \displaystyle \lim_{{A} \to +\infty} \frac{V_\mathrm{max}}{{A}}=0. \end{array}$$
Let us comment on these results.
Properties (i) and (ii) show that system (25) acts as a low-pass temporal filter (using classical linear systems terminology). To understand this, suppose that \(Q(V)=g_\mathrm{A}^0\), i.e., just a constant function. This means having a null feedback strength $\lambda_\mathrm{A}$ in Eq. (24). Then, Eq. (25) becomes a simple exponential low-pass filter
$$\dot{V}(t)={A} \cos(\omega t) - g_\mathrm{A}^0 V(t),$$
(26)
whose behavior is well known:
$$V_\mathrm{max} =|\widetilde{H}(\omega)|{A}$$
(27)
$$\omega t_\mathrm{max} =\arg(\widetilde{H}(\omega)),$$
(28)
where \(\widetilde{H}(\omega)=1/(g_\mathrm{A}^0+\textbf{j}\omega)\) is the Fourier transform of the system. One verifies easily that Eqs. (27)–(28) imply that $V_\mathrm{max}$ tends to 0 (with $V_\mathrm{max} > 0$) and $\omega t_\mathrm{max}$ tends to \(\frac{\pi}{2}\) (with \(\omega{t_\mathrm{max}}<\frac{\pi}{2}\)), as $\omega \to +\infty$. Properties (i)–(ii) extend these properties to the nonlinear case of (25).
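Spelling that verification out (reading $\omega t_\mathrm{max}$ as a phase lag, so that it is positive; this sign convention is my own, chosen to match the stated limits):
$$|\widetilde{H}(\omega)| = \frac{1}{\sqrt{(g_\mathrm{A}^0)^2 + \omega^2}}, \qquad \omega t_\mathrm{max} = \arctan\!\left(\frac{\omega}{g_\mathrm{A}^0}\right),$$
so that $|\widetilde{H}(\omega)|$ decreases to 0 and $\omega t_\mathrm{max}$ increases to $\frac{\pi}{2}$ (while staying strictly below it) as $\omega \to +\infty$.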
Properties (iii), (iv) and (v) show the emergence of gain control in system (25), as opposed to the linear case (26). In the linear case, Eqs. (27)–(28) give a linear dependence of $V_\mathrm{max}$ and $t_\mathrm{max}$ with respect to amplitude $A$. One has indeed \(\partial_{{A}}{V_\mathrm{max}}=|\widetilde{H}(\omega)|>0\), \(\partial_{{A}}{(V_\mathrm{max}/{A})}=0\) and \(\partial_{{A}}{(\omega t_\mathrm{max})}=0\). In the nonlinear case (25), property (iii) shows that $V_\mathrm{max}$ is still a growing function of $A$. However, this growth is now under-linear (property (v)).
Finally, property (iv) shows the phase advance effect with input amplitude $A$. This second nonlinear effect corresponds to the definition of contrast gain control in real retinas, as detailed in Section 3.2.2.
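As a purely illustrative companion to Theorem 1 (not part of the original analysis), the following Python sketch integrates system (25) with a forward-Euler scheme, using $Q(V) = g_\mathrm{A}^0 + \lambda_\mathrm{A} V^2$ from (24), and measures $V_\mathrm{max}$ and $\omega t_\mathrm{max}$ over the last cycle; the parameter values are arbitrary choices meant only to exhibit the under-linearity (v) and phase advance (iv):

    import numpy as np

    def simulate(A, omega, g0=1.0, lam=5.0, dt=1e-3, n_cycles=20):
        """Integrate dV/dt = A*cos(omega*t) - V*Q(V) with Q(V) = g0 + lam*V**2."""
        T = 2 * np.pi / omega
        t = np.arange(0.0, n_cycles * T, dt)
        V = np.zeros_like(t)
        for k in range(len(t) - 1):
            Q = g0 + lam * V[k] ** 2
            V[k + 1] = V[k] + dt * (A * np.cos(omega * t[k]) - V[k] * Q)
        last = t >= (n_cycles - 1) * T                    # keep only the last cycle
        k_max = np.argmax(V[last])
        V_max = V[last][k_max]
        phase = (omega * t[last][k_max]) % (2 * np.pi)    # omega * t_max (mod 2 pi)
        return V_max, phase

    for A in (0.5, 2.0, 8.0, 32.0):
        V_max, phase = simulate(A, omega=5.0)
        print(f"A={A:5.1f}  V_max/A={V_max / A:.3f}  omega*t_max={phase:.3f} rad")
    # Expected trend: V_max/A decreases with A (under-linearity, property (v))
    # and omega*t_max decreases towards 0 (phase advance, property (iv)).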
|
This "decay" of later values is a direct consequence of the episodic formula for the objective function for REINFORCE:
$$J(\theta) = v_{\pi_\theta}(s_0)$$
That is, the expected return from the first state of the episode. This is equation 13.4 in the book edition that you linked in the question.
In other words, if there is any discounting, we care less about rewards seen later in the episode. We mainly care about how well the agent will do from its starting position.
This is not true for all formulations of policy gradients. There are other, related choices of objective function. We can formulate the objective function as caring about the returns from any distribution of states, but in order to define it well, we need to describe the weighting/distribution somehow, it should be relevant to the problem, and we need to be able to get approximate samples of $\nabla J(\theta)$ for policy gradient to work. The algorithm you are asking about is specifically for improving the policy on episodic problems. Note that you can set $\gamma = 1$ for these problems, so the decay is not necessarily required.
As an aside (because someone is bound to ask): Defining $J(\theta)$ with respect to all states equally weighted could lead to difficulties, e.g. the objective would take less account of a policy's ability to avoid undesirable states, and it would require a lot of samples from probably irrelevant states in order to estimate it. These difficulties would turn up as a hard-to-calculate (or maybe impossible) expectation for $\nabla J(\theta)$.
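To make the "decay" described above concrete, here is a small framework-free Python sketch (the names are mine) of the per-step weights in the episodic REINFORCE update, where the $\gamma^t$ factor comes from the objective being the discounted return from $s_0$:

    import numpy as np

    def reinforce_weights(rewards, gamma):
        """Per-step weights used in the episodic REINFORCE update
        grad J ~ sum_t gamma^t * G_t * grad log pi(A_t | S_t, theta)."""
        T = len(rewards)
        G = np.zeros(T)
        running = 0.0
        for t in reversed(range(T)):          # returns G_t computed backwards
            running = rewards[t] + gamma * running
            G[t] = running
        decay = gamma ** np.arange(T)         # the gamma^t factor from J = v_pi(s_0)
        return decay * G                      # weight multiplying grad log pi at step t

    print(reinforce_weights([1.0, 1.0, 1.0, 1.0], gamma=0.9))
    # Later steps get both smaller returns and an extra gamma^t discount;
    # with gamma = 1 the extra decay disappears.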
|
Apart from the formal result about #P-hardness, there's something worth touching on, about the nature of strong simulation itself. I'll comment first on strong simulation, and then specifically on the quantum case.
1. Strong simulation even of classical randomised computation is hard
Strong simulation is a very powerful concept — not only in the fact ...
We simply translate the binary result of a qubit measurement to our guess whether it's the first state or the second, calculate the probability of success for every possible measurement of the qubit, and then find the maximum of a function of two variables (on the two-sphere).
First, something that we won't really need, the precise description of the ...
Yes. Remember that you require several properties of a projective measurement including $P_i^2=P_i$ for each projector, and
$$\sum_iP_i=\mathbb{I}.$$
The first of these shows you that the $P_i$ have eigenvalues 0 and 1. Now take a $|\phi\rangle$ that is an eigenvector of eigenvalue 1 of a particular projector $P_i$. Use this in the identity relation:$$\...
So, Bob is given the following state (also called the maximally-mixed state):$\rho = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1| = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix}$As you noticed, one nice feature of density matrices is they enable us to capture the uncertainty of an outcome of a measurement and ...
An observable only needs to be Hermitian, and can have any real eigenvalues. They don't even need to be distinct eigenvalues: if there are repeated eigenvalues, we say that the eigenspace for that eigenvalue is degenerate.(In the case of observables on a qubit, having a repeated eigenvalue makes the observable rather uninteresting, because absolutely all ...
Given only one copy of such a state, it is not possible to determine it with any good probability. Reason being there is no way, in principle, to extract information from the system without making a measurement on the system. And when we go for a measurement, we project it to a basis element $i,j$ if we turn out to pick these by chance from the set $\{1,2,......
Any set of commuting observables in any quantum state can be characterized by a joint classical distribution function describing the probabilities of its measurement outputs in that quantum state. Since you needa single observable and it is of course self commuting, the above is valid in your case.The obsevable in your case is:$$\sigma = \cos \phi \...
So Alice sends Bob a qubit with the density matrix
$$\rho = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1| = \begin{bmatrix} .5 & 0 \\ 0 & .5 \end{bmatrix}$$
as you said. (I've fixed the notation to make it a density matrix; what you wrote was in the structure of a state, but with non-normalized coefficients. It is important to ...
Cirq uses numpy's pseudo random number generator to pick measurement results, e.g. here is code from XmonStepper.simulate_measurement:
    def simulate_measurement(self, index: int) -> bool:
        [...]
        prob_one = np.sum(self._pool.map(_one_prob_per_shard, args))
        result = bool(np.random.random() <= prob_one)
        [...]
Cirq ...
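The sampling step quoted above boils down to drawing a Bernoulli outcome from the computed probability; a standalone toy version (my own illustration, not Cirq code) looks like:

    import numpy as np

    def sample_measurement(prob_one: float, rng=np.random.default_rng()) -> bool:
        """Return True (the '1' outcome) with probability prob_one, mirroring the
        np.random.random() <= prob_one comparison in the quoted Cirq code."""
        return bool(rng.random() <= prob_one)

    # e.g. measuring |+> in the computational basis: prob_one = 0.5
    counts = sum(sample_measurement(0.5) for _ in range(10_000))
    print(counts / 10_000)   # ~0.5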
No, no trick. You can’t prove it.One way to think about this is what if you had either the 1 state, or something that is arbitrarily close to the 1 state with a tiny amount of 0. You’re essentially asking if it’s possible to perfectly distinguish them. But if you could, you could copy them, and you’d have perfect cloning of a pair of non-orthogonal states, ...
You just need to do a bit more algebra: Note that$$ \sum_{i=0}^n (\overline{x_i+y_i})(x_i+y_i)=\langle x+y|x+y\rangle$$and then you can distribute the right-hand side to get$$\langle x|x\rangle+\langle x|y\rangle+\langle y|x\rangle+\langle y|y\rangle.$$Since $| x\rangle$ and $| y\rangle$ are normalized, we know that $\langle x|x\rangle=\langle y|y\...
If $P$ is an (ortho)projector, that is $P^2=P=P^\dagger$, then we can define the unitary $U = I - 2P$. You can verify
$$UU^\dagger = U^2 = (I-2P)(I-2P) = I-4P+4P = I$$
Now we can express $P=\frac{1}{2}(I-U)$, $Q=\frac{1}{2}(I+U)$ and calculate
$$P\rho P + Q\rho Q = \frac{1}{4}(I-U)\rho(I-U) + \frac{1}{4}(I+U)\rho(I+U) = \frac{1}{4}(\rho - U\rho - \rho U +...
$\text{Tr}(AB)$ is always real and non-negative if $A,B$ are positive semi-definite hermitian matrices.To see this note that $A = UDU^\dagger$, for some unitary $U$ and diagonal matrix $D$ with $d_{ii} \ge 0$.Then $\text{Tr}(AB) = \text{Tr}(UDU^\dagger B) = \text{Tr}(DU^\dagger B U).$But $B^\prime = U^\dagger B U$ is also positive semi-definite and ...
It is important to emphasise that a density matrix may not be absolute; it represents the state of somebody's knowledge of the system.To see this, consider 3 parties: Alice, Bob and Charlie. Alice prepares a qubit in either $|0\rangle$ or $|1\rangle$. Now, Alice knows which state she prepared (let's assume it's $|0\rangle$), so Alice's description of the ...
Denote $| \psi_{i,j} \rangle = \frac{1}{\sqrt{2}} ( | i \rangle + | j \rangle) $ for fixed $i$ and $j$. Alice gives a randomly drawn one of this form.That is for each $0 \leq i<j \leq N$ she assigns a probability $p_{ij}$ and sends the corresponding state $| \psi_{ij} \rangle$ with that probability. When you say unknown of that form, I'll assume $p_{ij}=...
Let's start from the state$$|\Psi\rangle=\frac12\left(|0\rangle(|\psi\rangle|\phi\rangle+|\phi\rangle|\psi\rangle)+|1\rangle(|\psi\rangle|\phi\rangle-|\phi\rangle|\psi\rangle)\right).$$There are a couple of ways to do the calculation. If you want to be formal, which typically leads to fewer mistakes, you identify the measurement operators on a single ...
Suppose you have a state $\rho$, and a random process that changes this to a state $\rho_j$ with probability $p_j$. If you know what the value of $j$ is, your knowledge of the resulting state will be described by the corresponding $\rho_j$. If you have no information regarding $j$, your knowledge will be described by$$\sum_j ~ p_j ~ \rho_j$$This is a ...
measurements in every circuit can be postponed or never performed in a circuit while achieving the same functionality of the circuitThat's correct. But if the circuit involves two parties, this process will introduce quantum operations between the two parties. It will require a quantum communication channel, so that the qubits can be shuttled back and ...
This phenomenon is sometimes known as a discretization of errors. It is a property of certain error correcting codes that allows it to work. It is described (somewhat briefly) in Section 10.2 of Nielsen and Chuang.Suppose that we have an arbitrary error that affects just one qubit, and suppose that we represent this error by a channel $\Phi$ mapping one ...
Note that measuring an observable is equivalent to projecting the quantum state into a particular eigenspace of the operator, and the measurement result tells you which eigenspace. So, in the case of measuring an observable on an eigenstate of that observable, you just project the state onto itself, and the outcome tells you the eigenvalue. So, (2)is ...
This question is actually entirely about the basics of measurement on a quantum system, and nothing to do with secret sharing.Let's state the measurement postulate of quantum mechanics as it applies to projective measurements:A measurement is described by a set of projectors $P_i$ satisfying$\sum_iP_i=1$. If a state $|\psi\rangle$ is being measured, ...
Without additional assumptions or context, there is no fundamental difference between an "$2^n$-dimensional qudit" and "$n$ qubits". Any "qudit system" over $2^n$ modes for some integer $n$ can be thought of as a system of $n$ qubits. Equivalently, an $n$-qubit system is nothing but a $2^n$-dimensional qudit system.The difference is in the fact that if you ...
The Hadamard gate is:
$$\frac{1}{\sqrt 2} \left(|0\rangle \langle 0 | + |0\rangle\langle 1| + |1\rangle \langle 0| - |1\rangle \langle 1|\right)$$
And since $|+\rangle$ is $\frac{1}{\sqrt 2}\left(|0\rangle + |1\rangle \right)$, you can work out that $H(|+\rangle) = |0\rangle$. So,
$$CNOT(H|+\rangle \otimes |+\rangle) = CNOT(|0\rangle \otimes |+\...
Yes, it is possible, same way as it is possible to measure a state with complex amplitudes in a basis with real amplitudes (say, a $|i\rangle$ state in the $[|0\rangle, |1\rangle]$ basis). Either way, the probability of measuring state $|\psi\rangle$ and getting measurement result corresponding to basis state $|a_i\rangle$ is defined as $P_i = |\langle a_i| \...
Write down the two reduced density matrices of the single qubits that you have access to. Apply the Helstrom measurement (there are several descriptions of this on the site already).The problem is that, in this case, the two reduced density matrices are the same. That means that you cannot tell them apart. More explicitly,$$|\varphi_2\rangle=(I\otimes X)|...
The starting state is$$2 | 1 0 \rangle + | 1 1 \rangle$$After the $H \otimes 1$$$\sqrt{2} | 0 0 \rangle - \sqrt{2} | 1 0 \rangle + 1/\sqrt{2} | 0 1 \rangle - 1/\sqrt{2} | 1 1 \rangle$$where the first 2 terms come from the first term above and the last two from the last term.Then the CNOT$$\sqrt{2} | 0 0 \rangle - \sqrt{2} | 1 1 \rangle + 1/\...
Here is another way to see this.A projection $P$ is an operator such that $P^2=P$.This directly implies that we can attach to each projector $P$ a set of orthonormal states that represent it, by choosing any orthonormal base for its range. More precisely, if $P_i$ has trace $\operatorname{tr}(P_i)=n$, then we can represent $P_i$ as a set of orthonormal ...
An assumption in general measurements: The measuring device itself has no degrees of freedom and it does not couple with the qudit in any form of interaction, which is not true.1) A projective measurement is ideal and non-realistic because it is always assumed that there is no extension of this Projector to a bigger Hilbert space or more degrees of freedom ...
|
Bach, Marc A and Parameswaran, Pattiyil and Jemmis, Eluvathingal D and Rosenthal, Uwe (2007)
Bimetallic Complexes of Metallacyclopentynes: cis versus trans and Planarity versus Nonplanarity. In: Organometallics, 26 (9). pp. 2149-2156.
Abstract
Density functional theory calculations show that the $Cp_2M$ in cis-dimetallabicycles of metallacyclopentynes, $Cp_2M[\mu-(\eta^4:\eta^2-H_2C_4H_2)]M'L_2$ ($M$ = Ti, Zr and $M'$ = Ti, Zr), deviates from the C4 plane. Both the metal fragments deviate from the C4 plane in the nickel complexes of metallacyclopentynes (3Ti-Ni and 3Zr-Ni). The nonplanarity of $Ni(PH_3)_2$ from the C4 plane reduces the antibonding interaction between nickel orbitals and the $\pi$-MO at the C2-C3 bond, whereas that of the $Cp_2M$ acts mainly to reduce the antibonding interaction between C1 and C2. The energetics of the isodesmic equations show that the nickel complexes 3Ti-Ni and 3Zr-Ni are more stable than the homodimetallabicycles, 3Zr-Zr and 3Ti-Ti. The electron deficiency on the cis-homodimetallabicycles due to the vacant d-orbital on $\eta^2-M'$ can be decreased by accepting electrons from a Lewis base or by flipping into the trans geometry. This is reflected in the experimental realization of cis-$Cp_2Zr[\mu-(\eta^4:\eta^2-H_2C_4H_2)]ZrCp_2(PMe_3)$ and the trans geometry for $Cp_2Ti[\mu-(\eta^3:\eta^3-H_2C_4H_2)]TiCp_2$ and $Cp_2Zr[\mu-(\eta^3:\eta^3-H_2C_4H_2)]ZrCp_2$.
Item Type: Journal Article
Additional Information: Copyright of this article belongs to American Chemical Society.
Department/Centre: Division of Chemical Sciences > Inorganic & Physical Chemistry
Depositing User: Anka Setty
Date Deposited: 30 Jul 2007
Last Modified: 19 Sep 2010 04:38
URI: http://eprints.iisc.ac.in/id/eprint/11625
|