J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP). Such an exotic state of strongly interacting ...
|
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Centrality dependence of particle production in p-Pb collisions at $\sqrt{s_{\rm NN} }$= 5.02 TeV
(American Physical Society, 2015-06)
We report measurements of the primary charged particle pseudorapidity density and transverse momentum distributions in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, and investigate their correlation with experimental ...
|
It is common knowledge that chemical reactions occur more rapidly at higher temperatures. Milk turns sour much more rapidly if stored at room temperature rather than in a refrigerator; butter goes rancid more quickly in the summer than in the winter; and eggs hard-boil more quickly at sea level than in the mountains. For the same reason, cold-blooded animals such as reptiles and insects tend to be more lethargic on cold days.
The reason for this is not hard to understand. Thermal energy is directly related to motion at the molecular level. As the temperature rises, molecules move faster and collide more vigorously, greatly increasing the likelihood of bond cleavages and rearrangements. Whether it is through collision theory, transition state theory, or just common sense, chemical reactions are typically expected to proceed faster at higher temperatures and slower at lower temperatures.
By the late 1880s it was common knowledge that higher temperatures speed up reactions, often doubling the rate for a 10-degree rise, but the reasons for this were not clear. Finally, in 1889, the Swedish chemist Svante Arrhenius (1859-1927) combined the concepts of activation energy and the Boltzmann distribution law into one of the most important relationships in physical chemistry:
\[k = A e^{-E_a/RT}\]
Take a moment to focus on the meaning of this equation, neglecting the \(A\) factor for the time being.
First, note that this is another form of the exponential decay law discussed in the previous section of this series. What is "decaying" here is not the concentration of a reactant as a function of time, but the magnitude of the rate constant as a function of the exponent \(-E_a/RT\). And what is the significance of this quantity? Recalling that \(RT\) is the average kinetic energy, it becomes apparent that the exponent is just the ratio of the activation energy \(E_a\) to the average kinetic energy. The larger this ratio, the smaller the rate (hence the negative sign). This means that high temperature and low activation energy favor larger rate constants, and thus speed up the reaction. Because these terms occur in an exponent, their effects on the rate are quite substantial.
The two plots below show the effects of the activation energy (denoted here by \(E^\ddagger\)) on the rate constant. Even a modest activation energy of 50 kJ/mol reduces the rate by a factor of 10⁸.
Figure 1: Arrhenius plots. The logarithmic scale in the right-hand plot leads to nice straight lines.
Looking at the role of temperature, a similar effect is observed. (If the x-axis were in "kilodegrees", the slopes would be more comparable in magnitude with those of the kilojoule plot at the above right.)
Figure 2: Determining the activation energy
The Arrhenius equation,
\[k = A e^{-E_a/RT} \tag{1}\]
can be written in a non-exponential form that is often more convenient to use and to interpret graphically. Taking the logarithms of both sides and separating the exponential and pre-exponential terms yields
\[ \ln k = \ln \left(Ae^{-E_a/RT} \right) = \ln A + \ln \left(e^{-E_a/RT}\right) \tag{2}\]
\[\ln k = \ln A + \dfrac{-E_a}{RT} = \left(\dfrac{-E_a}{R}\right) \left(\dfrac{1}{T}\right) + \ln A \tag{3}\]
which is the equation of a straight line whose slope is \(-E_a/R\). This affords a simple way of determining the activation energy from values of \(k\) observed at different temperatures, by plotting \(\ln k\) as a function of \(1/T\).
Example 1: Isomerization of Cyclopropane
For the isomerization of cyclopropane to propene, the following data were obtained (the 1/T and ln k values are calculated from the measured data):

| T, °C | 477 | 523 | 577 | 623 |
|---|---|---|---|---|
| 1/T, K⁻¹ × 10³ | 1.33 | 1.25 | 1.18 | 1.11 |
| k, s⁻¹ | 0.00018 | 0.0027 | 0.030 | 0.26 |
| ln k | −8.62 | −5.92 | −3.51 | −1.35 |
From the calculated slope, we have
\[-\left(\dfrac{E_a}{R}\right) = -3.27 \times 10^4\ \text{K}\]
\[E_a = -(8.314\ \text{J mol}^{-1}\text{K}^{-1})(-3.27 \times 10^4\ \text{K}) = 273\ \text{kJ mol}^{-1}\]
Comment: This activation energy is high, which is not surprising because a carbon-carbon bond must be broken in order to open the cyclopropane ring. (C–C bond energies are typically around 350 kJ/mol.) This is why the reaction must be carried out at high temperature.
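To make the graphical method concrete, here is a minimal Python sketch (assuming numpy is available; the variable names are ours) that reproduces the fit for the cyclopropane data above:

```python
import numpy as np

# Data from the cyclopropane isomerization example above.
T_C = np.array([477.0, 523.0, 577.0, 623.0])   # temperature, deg C
k = np.array([0.00018, 0.0027, 0.030, 0.26])   # rate constant, 1/s

inv_T = 1.0 / (T_C + 273.15)                   # 1/T in K^-1
ln_k = np.log(k)

# ln k = ln A - (Ea/R)(1/T) is linear in 1/T; fit a straight line.
slope, intercept = np.polyfit(inv_T, ln_k, 1)

R = 8.314  # J mol^-1 K^-1
Ea = -slope * R
print(f"slope = {slope:.3e} K, Ea = {Ea / 1000:.0f} kJ/mol")
# slope ~ -3.3e4 K, Ea ~ 275 kJ/mol, consistent with the 273 kJ/mol above
```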
Calculating \(E_a\) without a plot
Because the \(\ln k\)-vs.-\(1/T\) plot yields a straight line, it is often convenient to estimate the activation energy from experiments at only two temperatures. To see how this is done, consider that
\[ \ln k_2 -\ln k_1 =\left(\ln A - \frac{E_a}{RT_2} \right) - \left(\ln A - \frac{E_a}{RT_1} \right) = \frac{E_a}{R}\left( \frac{1}{T_1}-\frac{1}{T_2} \right) \]
(The \(\ln A\) term is eliminated by subtracting the expressions for the two \(\ln k\) terms.) Solving the expression on the right for the activation energy yields
\[ E_a = \dfrac{R \ln \dfrac{k_2}{k_1}}{\dfrac{1}{T_1}-\dfrac{1}{T_2}}\]
Example 2
A widely used rule-of-thumb for the temperature dependence of a reaction rate is that a ten degree rise in the temperature approximately doubles the rate. This is not generally true, especially when a strong covalent bond must be broken. For a reaction that does show this behavior, what would the activation energy be?
Solution
Center the ten degree interval at 300 K. Substituting into the above expression yields
\[E_a = \dfrac{(8.314\text{ J mol}^{-1}\text{K}^{-1})(\ln 2)}{\dfrac{1}{295\text{ K}} - \dfrac{1}{305\text{ K}}} = \dfrac{(8.314\text{ J mol}^{-1}\text{K}^{-1})(0.693)}{0.00339\,\text{K}^{-1} - 0.00328\,\text{K}^{-1}}\]
\[= \dfrac{5.76\text{ J mol}^{-1}\text{K}^{-1}}{0.00011\,\text{K}^{-1}} = 52{,}400\text{ J mol}^{-1} = 52.4\text{ kJ mol}^{-1}\]
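The same two-temperature estimate is easy to package as a small helper; a minimal Python sketch (the function name is ours):

```python
import math

def activation_energy(k1, T1, k2, T2, R=8.314):
    """Ea in J/mol from rate constants k1, k2 at temperatures T1, T2 (K)."""
    return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# Rate doubles over a ten-degree interval centred at 300 K:
Ea = activation_energy(k1=1.0, T1=295.0, k2=2.0, T2=305.0)
print(f"Ea = {Ea / 1000:.1f} kJ/mol")  # ~52 kJ/mol (52.4 with the rounded values above)
```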
Example 3
It takes about 3.0 minutes to cook a hard-boiled egg in Los Angeles, but at the higher altitude of Denver, where water boils at 92°C, the cooking time is 4.5 minutes. Use this information to estimate the activation energy for the coagulation of egg albumin protein.
SOLUTION
The ratio of the rate constants at the elevations of Los Angeles and Denver is 4.5/3.0 = 1.5, and the respective temperatures are \(373 \; \rm{K }\) and \(365\; \rm{K}\). With the subscripts 2 and 1 referring to Los Angeles and Denver respectively:
\[E_a = \dfrac{(8.314)(\ln 1.5)}{\dfrac{1}{365\; \rm{K}} – \dfrac{1}{373 \; \rm{K}}} = \dfrac{(8.314)(0.405)}{0.00274 \; \rm{K^{-1}} – 0.00268 \; \rm{K^{-1}}}\]
\[= \dfrac{3.37\; \rm{J\; mol^{–1} K^{–1}}}{5.87 \times 10^{-5}\; \rm{K^{–1}}} = 5.74 \times 10^4\; \rm{J\; mol^{–1}} = 57.4 \; \rm{kJ \;mol^{–1}}\]
Comment: This relatively low value seems reasonable because thermal denaturation of proteins primarily involves the disruption of relatively weak hydrogen bonds; no covalent bonds are broken (although disulfide bonds can interfere with this interpretation).
The pre-exponential factor
Up to this point, the pre-exponential term \(A\) in the Arrhenius equation has been ignored because it is not directly involved in relating temperature and activation energy, which is the main practical use of the equation.
However, because \(A\) multiplies the exponential term, its value clearly contributes to the value of the rate constant and thus of the rate. Recall that the exponential part of the Arrhenius equation expresses the fraction of reactant molecules that possess enough kinetic energy to react, as governed by the Maxwell-Boltzmann law. This fraction can run from zero to nearly unity, depending on the magnitudes of \(E_a\) and of the temperature.
If this fraction were unity, the Arrhenius law would reduce to
\[k = A\]
In other words, \(A\) is the fraction of molecules that would react if either the activation energy were zero, or if the kinetic energy of all molecules exceeded \(E_a\) — admittedly, an uncommon scenario (although barrierless reactions have been characterized).
The role of collisions
What would limit the rate constant if there were no activation energy requirements? The most obvious factor would be the rate at which reactant molecules come into contact. This can be calculated from kinetic molecular theory and is known as the frequency or collision factor, \(Z\).
In some reactions, the relative orientation of the molecules at the point of collision is important, so a geometrical or steric factor (commonly denoted by \(\rho\), Greek lower-case rho) can be defined. In general, we can express \(A\) as the product of these two factors:
\[A = Z\rho\]
Values of \(\rho\) are generally very difficult to assess; they are sometimes estimated by comparing the observed rate constant with the one in which \(A\) is assumed to be the same as \(Z\).
Introduction
The "Arrhenius Equation" was physical justification and interpretation in 1889 by Svante Arrhenius, a Swedish chemist. Arrhenius performed experiments that correlated chemical reaction rate constants with temperature. After observing that many chemical reaction rates depended on the temperature, Arrhenius developed this equation to characterize the temperature-dependent reactions:
\[ \large k=Ae^{-E_{a}/RT} \]
or
\[\large \ln k=\ln A - \frac{E_{a}}{RT} \]
With the following terms:
- \(k\): the rate constant, in units of s⁻¹ (for a 1st-order rate constant) or M⁻¹s⁻¹ (for a 2nd-order rate constant).
- \(A\): the pre-exponential factor or frequency factor. It relates specifically to molecular collision, dealing with the frequency of molecules that collide in the correct orientation and with enough energy to initiate a reaction. It is a factor that is determined experimentally, as it varies with different reactions. Its units are L mol⁻¹s⁻¹ or M⁻¹s⁻¹ (for a 2nd-order rate constant) and s⁻¹ (for a 1st-order rate constant). Because the frequency factor \(A\) is related to molecular collision, it is temperature dependent. It is also hard to extrapolate, because \(\ln k\) is only linear over a narrow range of temperature.
- \(E_a\): the activation energy, the threshold energy that the reactant(s) must acquire before reaching the transition state. Once in the transition state, the reaction can go in the forward direction towards product(s), or in the opposite direction towards reactant(s). A reaction with a large activation energy requires much more energy to reach the transition state; likewise, a reaction with a small activation energy does not require as much energy to reach the transition state. In units of kJ/mol. The term \(-E_a/RT\) resembles the Boltzmann distribution law.
- \(R\): the gas constant. Its value is 8.314 J/mol K.
- \(T\): the absolute temperature at which the reaction takes place, in units of kelvin (K).
Implications
The exponential term in the Arrhenius equation implies that the rate constant of a reaction increases exponentially when the activation energy decreases. Because the rate of a reaction is directly proportional to the rate constant of a reaction, the rate increases exponentially as well. Because a reaction with a small activation energy does not require much energy to reach the transition state, it should proceed faster than a reaction with a larger activation energy.
In addition, the Arrhenius equation implies that the rate of an uncatalyzed reaction is more affected by temperature than the rate of a catalyzed reaction. This is because the activation energy of an uncatalyzed reaction is greater than the activation energy of the corresponding catalyzed reaction. Since the exponential term carries the activation energy in the numerator and the temperature in the denominator, a rate constant with a smaller activation energy is less sensitive to temperature changes than one with a larger activation energy. Hence, the rate of an uncatalyzed reaction is more affected by temperature changes than that of a catalyzed reaction.
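To see the size of this effect numerically, here is a minimal Python sketch (our own helper; the 100 kJ/mol and 40 kJ/mol activation energies are illustrative values, not from the text) comparing the rate increase for a 300 K to 310 K step:

```python
import math

def rate_ratio(Ea, T1=300.0, T2=310.0, R=8.314):
    """Factor by which the rate constant grows when T rises from T1 to T2 (K)."""
    return math.exp(-Ea / (R * T2)) / math.exp(-Ea / (R * T1))

print(round(rate_ratio(100_000), 2))  # "uncatalyzed" Ea = 100 kJ/mol: ~3.6x
print(round(rate_ratio(40_000), 2))   # "catalyzed"  Ea =  40 kJ/mol: ~1.7x
```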
The Math in Eliminating the Constant A
To eliminate the constant \(A\), there must be rate constants measured at two known temperatures. With this knowledge, the following equations can be written:
\[ \ln k_{1}=\ln A - \dfrac{E_{a}}{RT_1} \]
at \(T_1\) and
\[ \ln k_{2}=\ln A - \dfrac{E_{a}}{RT_2} \]
at \(T_2\). Rewriting the second equation gives:
\[ \ln A = \ln k_{2} + \dfrac{E_{a}}{RT_2} \]
and substituting for \(\ln A\) into the first equation gives:
\[ \ln k_{1}= \ln k_{2} + \dfrac{E_{a}}{RT_2} - \dfrac{E_{a}}{RT_1} \]
This simplifies to:
\[ \ln k_{1} - \ln k_{2} = -\dfrac{E_{a}}{RT_1} + \dfrac{E_{a}}{RT_2} \]
\[ \ln \dfrac{k_{1}}{k_{2}} = -\dfrac{E_{a}}{R} \left (\dfrac{1}{T_1}-\dfrac{1}{T_2} \right )\]
Graphically determining the Activation Energy of a Reaction
A closer look at the Arrhenius equation reveals that the natural logarithm form of the Arrhenius equation is in the form of \(y = mx + b\). In other words, it is similar to the equation of a straight line.
\[ \ln k=\ln A - \dfrac{E_{a}}{RT} \]
Problems
1. Find the activation energy (in kJ/mol) of the reaction if the rate constant is 3.4 M⁻¹s⁻¹ at 600 K and 31.0 M⁻¹s⁻¹ at 750 K.
2. Find the rate constant if the temperature is 289 K, the activation energy is 200 kJ/mol, and the pre-exponential factor is 9 M⁻¹s⁻¹.
3. Find the new rate constant at 310 K if the rate constant is 7 M⁻¹s⁻¹ at 370 K and the activation energy is 900 kJ/mol.
4. Calculate the activation energy if the pre-exponential factor is 15 M⁻¹s⁻¹, the rate constant is 12 M⁻¹s⁻¹, and the temperature is 22 K.
5. Find the new temperature if the rate constant at that temperature is 15 M⁻¹s⁻¹, while at 389 K the rate constant is 7 M⁻¹s⁻¹ and the activation energy is 600 kJ/mol.
Solutions
1. \(E_a\) is the factor the question asks to be solved. Therefore it is much simpler to use
\(\large \ln k = -\frac{E_a}{RT} + \ln A\)
To find \(E_a\), subtract \(\ln A\) from both sides and multiply by \(-RT\). This will give us:
\( E_a=(\ln A -\ln k)RT\)
2. Substitute the numbers into the equation:
\(\ln k = \dfrac{-(200 \times 1000\text{ J mol}^{-1})}{(8.314\text{ J mol}^{-1}\text{K}^{-1})(289\text{ K})} + \ln 9\)
\(k = 6.37 \times 10^{-36}\text{ M}^{-1}\text{s}^{-1}\)
3. Use the equation \(\ln\left(\dfrac{k_1}{k_2}\right) = -\dfrac{E_a}{R}\left(\dfrac{1}{T_1} - \dfrac{1}{T_2}\right)\):
\(\ln\left(\dfrac{7}{k_2}\right) = -\left[\dfrac{900 \times 1000}{8.314}\right]\left(\dfrac{1}{370} - \dfrac{1}{310}\right)\)
\(k_2 = 1.788 \times 10^{-24}\text{ M}^{-1}\text{s}^{-1}\)
4. Use the equation \(k = Ae^{-E_a/RT}\):
\(12 = 15e^{-E_a/(8.314)(22)}\)
\(E_a = 40.82\text{ J/mol}\)
5. Use the equation \(\ln\left(\dfrac{k_1}{k_2}\right) = -\dfrac{E_a}{R}\left(\dfrac{1}{T_1} - \dfrac{1}{T_2}\right)\):
\(\ln\left(\dfrac{15}{7}\right) = -\left[\dfrac{600 \times 1000}{8.314}\right]\left(\dfrac{1}{T_1} - \dfrac{1}{389}\right)\)
\(T_1 = 390.6\text{ K}\)
|
Gas Chromatography
Introduction
Chromatography is a technique in the field of analytical chemistry that separates a fluid mixture into its components for analysis. The field of chromatography is very broad and is used in many different ways, ranging from a small strip of paper to large industrial columns for pharmaceutical purification [1]. Types of chromatography include gas chromatography (GC), high performance liquid chromatography (HPLC), size exclusion, ion exchange, hydrophobic interaction, and affinity chromatography. The different types of chromatography use different properties of molecules to separate and identify mixtures [2].
The analytical tool known as gas chromatography was founded by Archer Martin and Richard Synge in 1941. The basis of this new technique, separating it from other forms of chromatography at the time, was the concept that chromatography was not limited to just the liquid phase. It took nearly 10 years after the idea of gas chromatography, until 1951, for the concept to be proven an effective analytical tool [4]. Figure 1 shows a state-of-the-art GC machine that is paired with a mass spectrometer for more streamlined analysis.
How It Works
Gas chromatography can be broken down into a flow sheet diagram like in Figure 2. Gas chromatography uses a gas mobile phase and a liquid stationary phase for separation. The gas mobile phase is made up of a carrier gas, usually an unreactive gas such as helium, nitrogen, or hydrogen, and the sample. The sample is injected into a heated vessel, where it is vaporized so that it can be carried through the machine. Once the sample is ready, it is opened to the carrier gas and sent into the column with the help of pneumatic valves. Once the sample is carried into the column, it is allowed to interact with the stationary phase through different molecular interactions. In the column, the sample is separated into its components to be analyzed. The detector in the system senses the chemical properties of the sample and generates an electronic signal to be read. The electronic signal is then converted into a physical graph called a chromatogram for interpretation [4].
Components in the flow are traditionally eluted in order of descending vapor pressure. Therefore, the molecules that are usually seen in the gaseous phase appear first on the chromatograms, followed by the heavier components of the analyzed mixture. To actively change the elution of gas chromatography, the stationary phase shown in Figure 3 can be modified. Increasing the film thickness will allow more gases to be analyzed but with decreased resolution; decreasing the thickness will increase resolution but at the expense of retention time. The temperature and pressure of the system can also be changed to get different results: increasing the temperature of the column or the pressure of the sample can decrease the retention time, generally at some cost in resolution. By modifying equipment conditions, gas chromatography can be used in a variety of analytic work [6].
Advantages
- Gas chromatography is fast, efficient, and can analyze many components at once.
- Resolution of the chromatogram is very high, giving very clear data and results.
- GC analysis is very accurate and reliable for sample analysis.
- The volume of sample needed to analyze the solution is small.
- Gas chromatography has been around for over 50 years and is commercially available to most laboratories [4].
Shortcomings
- Samples need to be stable in the gas phase.
- The system is run at a high temperature, so samples may break down during the analysis.
- It cannot determine the structure of the sample [4].
Analysis
The analysis of chromatography starts at the molecular scale, looking at the separation of molecules in the mobile and stationary phases. The value that describes this partition between the two phases is called the retention value (Rf) [2].
[math]Rf=\frac{(distance \, traveled \, by \, sample)}{(distance \, traveled \, by \, solvent \, front)}=1-\alpha[/math]
Where the closer Rf gets to one, the faster the separation happens, while the closer it gets to zero, the slower the separation happens. To determine the Rf value, α, the partition coefficient needs to be determined [2].
[math]\alpha=\frac{(Concentration \, in \, stationary \, phase)}{(Concentration \, in \, stationary \, and \, mobile \, phase)}[/math]
Chromatography runs are analyzed through a plot of detector signal versus time they produce called a chromatogram. The different components can be seen as peaks in the graph. The separation between two peaks, or components of a mixture, can be defined as the resolution [6].
[math] Resolution = \frac{0.589\,\Delta t_{r}}{w_{1/2,av}} [/math]
Where Δt_r is the difference in time between the two peaks and w_{1/2,av} is the average width of the peaks at half height, in units of time. The area under a peak can generally tell how much of a compound or molecule is in a sample solution. Figure 4 shows what a typical GC chromatogram would look like. To determine which peak corresponds to which compound, a known sample can be run through the system as a reference, or a standard from literature or industry can be used to best analyze the results [6].
Applications
Medicine
Gas chromatography is a valuable resource to the medical field due to its quick and reliable analysis. By developing sample preparation and validation techniques, biological samples can be tested for drugs and proteins for medical testing. It was found in a 2004 study that gas chromatography, in conjunction with mass spectroscopy, can determine concentrations of lidocaine, prilocaine, and other drugs. This sampling of blood can be done in just one minute, providing quick results to patients and doctors [7].
Pharmaceuticals
Uses for gas chromatography in pharmaceuticals occur not only in development, but also in post-market screening. Due to regulations, drugs need to be extremely pure with consistent properties. Using gas chromatography to analyze drug product solutions when developing an industrial purification process is important due to its quick and accurate analysis. The faster the properties of a drug product can be determined, the quicker a company can bring a drug to market [4].
Pharmaceutical companies also have to research how effective their products are. Testing for efficacy can include urine drug screening of patients on different drugs to see how they are working. Gas chromatography can be used to find drug fragments or different biological indicators to best determine a drug's true effect on patients [4].
Food
Characterization of food molecules is critical for food companies because of the many tight regulations. Analyzing food mixtures for quality and consistency is important for company image. Unilever research and development has looked into oil compositions using gas chromatography. Oils and fats are usually not made up of a single molecule but of a collection of many different variations of similar molecules. It is vital for companies to know how much of the different types of oils and fats a product contains so the contents can be labeled as mandated by the regulatory agencies. Gas chromatography is used as a quick and effective analysis of different types of food [8].
Petroleum
Due to the large variation of chemicals in petroleum and diesel, gas chromatography is commonly used for product validation and characterization. Octane numbers for gasoline are determined through the composition of the gas. Providing close-to-real-time data on the composition of effluent gas streams of a refinery is key to providing reliable products to consumers. Due to its quick characterization, gas chromatography is one of the most popular analysis techniques used in the petroleum industry [4].
References
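As a small numeric illustration of the resolution formula from the Analysis section above, here is a minimal Python sketch (the peak values are hypothetical, not taken from a real chromatogram):

```python
def resolution(t_r1, t_r2, w1_half, w2_half):
    """Chromatographic resolution from retention times and half-height
    peak widths, all in the same time units."""
    w_av = 0.5 * (w1_half + w2_half)
    return 0.589 * abs(t_r2 - t_r1) / w_av

# Hypothetical neighboring peaks: retention times 4.2 and 4.8 min,
# half-height widths 0.15 and 0.18 min.
print(round(resolution(4.2, 4.8, 0.15, 0.18), 2))  # ~2.14
```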
[1] "Principles of Chromatography." Khan Academy. N.p., 2016. Web.
[2] University of Kentucky. Chromatography. Lecture 6 presented in Chemistry 554 at University of Kentucky. 2013.
[3] High Performance Gas Chromatography Mass Spectroscopy. 2016. Technology University of Malaysia, Makmal.
[4] "Theory and Instrumentation of GC." GC's CHROM Academy (n.d.): 1-24. Crawford Scientific. Web.
[5] Schoenmakers, Peter. "Introduction to Capillary GC Injection Techniques." Chromedia, n.d. Web.
[6] University of California Irvine. "Background - Gas Chromatography." UCI.edu. Robert M Corn, 2016. Web
[7] Altun, Z., M. Abdelrehim, and L. Blomberg. "New Trends in Sample Preparation: On-line Microextraction in Packed Syringe (MEPS) for LC and GC Applications. Part III: Determination and Validation of Local Anaesthetics in Human Plasma Samples Using a Cation-exchange Sorbent." Journal of Chromatography B 813.1-2 (2004): 129-35. Elsevier.
[8] Janssen, Hans-Gerd, Herrald Steenbergen, and Sjaak De Koning. "The Role of Comprehensive Chromatography in the Characterization of Edible Oils and Fats." European Journal of Lipid Science and Technology 111.12 (2009): 1171-184. Wiley.
|
The problem is to find the derivative of $f(x) = \frac{3x}{x^2+1}$ at $x = -4$ using the limit definition, $$ f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h} $$
Progress
I plug in $-4$ for $x$ when using the limit definition and I always end up stuck with an unfactorable denominator. I tried it $5$ times already but I always end up with the same denominator.
$$\frac{-45 + 12h}{17 (h^2 - 8h + 17)}$$
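That denominator is expected and nothing further needs to factor: the $h$ has already cancelled from the numerator, so the limit follows by letting $h \to 0$. A quick symbolic check (a sketch using sympy):

```python
import sympy as sp

x, h = sp.symbols('x h')
f = 3 * x / (x**2 + 1)

# Difference quotient at x = -4.
quotient = (f.subs(x, -4 + h) - f.subs(x, -4)) / h

print(sp.simplify(quotient))     # equivalent to (12*h - 45)/(17*(h**2 - 8*h + 17))
print(sp.limit(quotient, h, 0))  # -45/289
```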
|
Optimal Hölder regularity for nonautonomous Kolmogorov equations
1. Dipartimento di Matematica, Università degli Studi di Parma, Viale Parco Area delle Scienze 53/A, I-43124 Parma, Italy
We study a class of nonautonomous elliptic operators $\mathcal{A}$ with unbounded coefficients defined in $[0,T]\times\mathbb{R}^N$ and we prove optimal Schauder estimates for the solution to the parabolic Cauchy problem $D_tu=\mathcal{A}u+g$, $u(0,\cdot)=f$.
Keywords: discontinuous coefficients, optimal Schauder estimates, nonautonomous elliptic and parabolic operators with unbounded coefficients.
Mathematics Subject Classification: Primary: 35B65; Secondary: 35K15, 35R05.
Citation: Luca Lorenzi. Optimal Hölder regularity for nonautonomous Kolmogorov equations. Discrete & Continuous Dynamical Systems - S, 2011, 4 (1): 169-191. doi: 10.3934/dcdss.2011.4.169
|
Suppose you dispense 20 mL of a reagent using the Class A 10-mL pipet whose calibration information is given in Table 4.9. If the volume and uncertainty for one use of the pipet is 9.992 ± 0.006 mL, what is the volume and uncertainty when we use the pipet twice?
As a first guess, we might simply add together the volume and the maximum uncertainty for each delivery; thus
\[\mathrm{(9.992\: mL + 9.992\: mL) ± (0.006\: mL + 0.006\: mL) = 19.984 ± 0.012\: mL}\]
It is easy to appreciate that combining uncertainties in this way overestimates the total uncertainty. Adding the uncertainty for the first delivery to that of the second delivery assumes that with each use the indeterminate error is in the same direction and is as large as possible. At the other extreme, we might assume that the uncertainty for one delivery is positive and the other is negative. If we subtract the maximum uncertainties for each delivery,
\[\mathrm{(9.992\: mL + 9.992\: mL) ± (0.006\: mL - 0.006\: mL) = 19.984 ± 0.000\: mL}\]
we clearly underestimate the total uncertainty.
So what is the total uncertainty? From the previous discussion we know that the total uncertainty is greater than ±0.000 mL and less than ±0.012 mL. To estimate the cumulative effect of multiple uncertainties we use a mathematical technique known as the propagation of uncertainty. Our treatment of the propagation of uncertainty is based on a few simple rules.
Note
Although we will not derive or further justify these rules here, you may consult the additional resources at the end of this chapter for references that discuss the propagation of uncertainty in more detail.
4.3.1 A Few Symbols
A propagation of uncertainty allows us to estimate the uncertainty in a result from the uncertainties in the measurements used to calculate the result. For the equations in this section we represent the result with the symbol \(R\), and the measurements with the symbols \(A\), \(B\), and \(C\). The corresponding uncertainties are \(u_R\), \(u_A\), \(u_B\), and \(u_C\). We can define the uncertainties for \(A\), \(B\), and \(C\) using standard deviations, ranges, or tolerances (or any other measure of uncertainty), as long as we use the same form for all measurements.
Note
The requirement that we express each uncertainty in the same way is a critically important point. Suppose you have a range for one measurement, such as a pipet’s tolerance, and standard deviations for the other measurements. All is not lost. There are ways to convert a range to an estimate of the standard deviation. See Appendix 2 for more details.
4.3.2 Uncertainty When Adding or Subtracting
When adding or subtracting measurements we use their absolute uncertainties for a propagation of uncertainty. For example, if the result is given by the equation
\[R = A + B - C\]
then the absolute uncertainty in R is
\[u_R = \sqrt{u_A^2+u_B^2+u_C^2}\tag{4.6}\]
Example 4.5
When dispensing 20 mL using a 10-mL Class A pipet, what is the total volume dispensed and what is the uncertainty in this volume? First, complete the calculation using the manufacturer’s tolerance of 10.00 mL ± 0.02 mL, and then using the calibration data from Table 4.9.
Solution
To calculate the total volume we simply add the volumes for each use of the pipet. When using the manufacturer’s values, the total volume is
\[V = \mathrm{10.00\: mL + 10.00\: mL = 20.00\: mL}\]
and when using the calibration data, the total volume is
\[V = \mathrm{9.992\: mL + 9.992\: mL = 19.984\: mL}\]
Using the pipet’s tolerance value as an estimate of its uncertainty gives the uncertainty in the total volume as
\[u_\ce{R} = \sqrt{(0.02)^2+(0.02)^2} = \mathrm{0.028\: mL}\]
and using the standard deviation for the data in Table 4.9 gives an uncertainty of
\[u_R = \sqrt{(0.006)^2+(0.006)^2} = \mathrm{0.0085\: mL}\]
Rounding the volumes to four significant figures gives 20.00 mL ± 0.03 mL when using the tolerance values, and 19.98 ± 0.01 mL when using the calibration data.
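Equation 4.6 is straightforward to mirror in code; a minimal Python sketch (the helper name is ours) reproduces both uncertainties from Example 4.5:

```python
import math

def u_add(*uncertainties):
    """Absolute uncertainty for a sum or difference (equation 4.6)."""
    return math.sqrt(sum(u**2 for u in uncertainties))

print(f"{u_add(0.02, 0.02):.3f} mL")    # 0.028 mL, tolerance values
print(f"{u_add(0.006, 0.006):.4f} mL")  # 0.0085 mL, calibration data
```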
4.3.3 Uncertainty When Multiplying or Dividing
When multiplying or dividing measurements we use their relative uncertainties for a propagation of uncertainty. For example, if the result is given by the equation
\[R = \dfrac{A × B}{C}\]
then the relative uncertainty in R is
\[\dfrac{u_R}{R} = \sqrt{\left(\dfrac{u_A}{A}\right)^2 + \left(\dfrac{u_B}{B}\right)^2 + \left(\dfrac{u_C}{C}\right)^2}\tag{4.7}\]
Example 4.6
The quantity of charge, Q, in coulombs passing through an electrical circuit is
\[Q = I × t\]
where \(I\) is the current in amperes and \(t\) is the time in seconds. When a current of 0.15 A ± 0.01 A passes through the circuit for 120 s ± 1 s, what is the total charge passing through the circuit and its uncertainty?
Solution
The total charge is
\[Q = \mathrm{(0.15\: A) × (120\: s) = 18\: C}\]
Since charge is the product of current and time, the relative uncertainty in the charge is
\[\dfrac{u_R}{R} = \sqrt{\left(\dfrac{0.01}{0.15}\right)^2 + \left(\dfrac{1}{120}\right)^2} = 0.0672\]
The absolute uncertainty in the charge is
\[u_R = R × 0.0672 = \mathrm{(18\: C) × (0.0672) = 1.2\: C}\]
Thus, we report the total charge as 18 C ± 1 C.
4.3.4 Uncertainty for Mixed Operations
Many chemical calculations involve a combination of adding and subtracting, and multiply and dividing. As shown in the following example, we can calculate uncertainty by treating each operation separately using equation 4.6 and equation 4.7 as needed.
Example 4.7
For a concentration technique the relationship between the signal and an analyte's concentration is
\[S_\ce{total} = k_\ce{A}C_\ce{A} + S_\ce{mb}\]
What is the analyte's concentration, \(C_\ce{A}\), and its uncertainty if \(S_\ce{total}\) is 24.37 ± 0.02, \(S_\ce{mb}\) is 0.96 ± 0.02, and \(k_\ce{A}\) is 0.186 ± 0.003 ppm\(^{–1}\)?
Solution
Rearranging the equation and solving for \(C_\ce{A}\)
\[C_\ce{A} =\dfrac{S_\ce{total} - S_\ce{mb}}{k_\ce{A}} = \mathrm{\dfrac{24.37-0.96}{0.186\: ppm^{-1}} = 125.9\: ppm}\]
gives the analyte's concentration as 126 ppm. To estimate the uncertainty in \(C_\ce{A}\), we first determine the uncertainty for the numerator using equation 4.6.
\[u_R= \sqrt{(0.02)^2 + (0.02)^2} = 0.028\]
The numerator, therefore, is 23.41 ± 0.028. To complete the calculation we estimate the relative uncertainty in \(C_\ce{A}\) using equation 4.7.
\[\dfrac{u_R}{R} = \sqrt{\left(\dfrac{0.028}{23.41}\right)^2 + \left(\dfrac{0.003}{0.186}\right)^2} = 0.0162\]
The absolute uncertainty in the analyte’s concentration is
\[u_R = \mathrm{(125.9\: ppm) × (0.0162) = 2.0\: ppm}\]
Thus, we report the analyte’s concentration as 126 ppm ± 2 ppm.
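The bookkeeping in Example 4.7 maps directly onto a few lines of code; a minimal Python sketch (variable names are ours):

```python
import math

S_total, u_S_total = 24.37, 0.02
S_mb, u_S_mb = 0.96, 0.02
k_A, u_k_A = 0.186, 0.003   # ppm^-1

num = S_total - S_mb                                  # 23.41
u_num = math.sqrt(u_S_total**2 + u_S_mb**2)           # equation 4.6

C_A = num / k_A                                       # 125.9 ppm
u_C_A = C_A * math.sqrt((u_num / num)**2 + (u_k_A / k_A)**2)  # equation 4.7
print(f"{C_A:.1f} ppm +/- {u_C_A:.1f} ppm")           # 125.9 ppm +/- 2.0 ppm
```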
Practice Exercise 4.2
To prepare a standard solution of Cu²⁺ you obtain a piece of copper from a spool of wire. The spool's initial weight is 74.2991 g and its final weight is 73.3216 g. You place the sample of wire in a 500-mL volumetric flask, dissolve it in 10 mL of HNO₃, and dilute to volume. Next, you pipet a 1 mL portion to a 250-mL volumetric flask and dilute to volume. What is the final concentration of Cu²⁺ in mg/L, and its uncertainty? Assume that the uncertainty in the balance is ±0.1 mg and that you are using Class A glassware.
Table 4.10: Propagation of uncertainty for selected mathematical functions†

| Function | \(u_R\) |
|---|---|
| \(R = kA\) | \(u_R=ku_A\) |
| \(R = A + B\) | \(u_R = \sqrt{u_A^2 + u_B^2}\) |
| \(R = A − B\) | \(u_R = \sqrt{u_A^2 + u_B^2}\) |
| \(R = A × B\) | \(\dfrac{u_R}{R} = \sqrt{\left(\dfrac{u_A}{A}\right)^2 + \left(\dfrac{u_B}{B}\right)^2}\) |
| \(R = \dfrac{A}{B}\) | \(\dfrac{u_R}{R} = \sqrt{\left(\dfrac{u_A}{A}\right)^2 + \left(\dfrac{u_B}{B}\right)^2}\) |
| \(R = \ln(A)\) | \(u_R = \dfrac{u_A}{A}\) |
| \(R = \log(A)\) | \(u_R = 0.4343 × \dfrac{u_A}{A}\) |
| \(R = \ce{e}^A\) | \(\dfrac{u_R}{R} = u_A\) |
| \(R = 10^A\) | \(\dfrac{u_R}{R} = 2.303 × u_A\) |
| \(R = A^k\) | \(\dfrac{u_R}{R} = k × \dfrac{u_A}{A}\) |

† Assumes that the measurements A and B are independent; k is a constant whose value has no uncertainty.
4.3.5 Uncertainty for Other Mathematical Functions
Many other mathematical operations are common in analytical chemistry, including powers, roots, and logarithms. Table 4.10 provides equations for propagating uncertainty for some of these function.
Example 4.8
If the pH of a solution is 3.72 with an absolute uncertainty of ±0.03, what is the [H⁺] and its uncertainty?
Solution
The concentration of H⁺ is
\[\mathrm{[H^+] = 10^{−pH} = 10^{−3.72} = 1.91×10^{−4}\: M}\]
or 1.9 × 10⁻⁴ M to two significant figures. From Table 4.10 the relative uncertainty in [H⁺] is
\[\dfrac{u_R}{R} = 2.303 × u_A = 2.303 × 0.03 = 0.069\]
The uncertainty in the concentration, therefore, is
\[\mathrm{(1.91×10^{-4}\: M) × (0.069) = 1.3×10^{-5}\: M}\]
We report the [H⁺] as 1.9 (±0.1) × 10⁻⁴ M.
(Writing this result as 1.9 (±0.1) × 10⁻⁴ M is equivalent to 1.9 × 10⁻⁴ M ± 0.1 × 10⁻⁴ M.)
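A two-line check of this example, using the \(R = 10^A\) entry from Table 4.10 (a Python sketch with our own variable names):

```python
pH, u_pH = 3.72, 0.03

H = 10**(-pH)            # 1.91e-4 M
u_H = H * 2.303 * u_pH   # rule for R = 10^A: u_R/R = 2.303 * u_A
print(f"{H:.2e} M +/- {u_H:.1e} M")  # 1.91e-04 M +/- 1.3e-05 M
```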
Practice Exercise 4.3
A solution of copper ions is blue because it absorbs yellow and orange light. Absorbance, \(A\), is defined as
\[A = -\log \dfrac{P}{P_\ce{o}}\]
where \(P_\ce{o}\) is the power of radiation from the light source and \(P\) is the power after it passes through the solution. What is the absorbance if \(P_\ce{o}\) is 3.80×10² and \(P\) is 1.50×10²? If the uncertainty in measuring \(P_\ce{o}\) and \(P\) is 15, what is the uncertainty in the absorbance?
4.3.6 Is Calculating Uncertainty Actually Useful?
Given the effort it takes to calculate uncertainty, it is worth asking whether such calculations are useful. The short answer is, yes. Let’s consider three examples of how we can use a propagation of uncertainty to help guide the development of an analytical method.
One reason for completing a propagation of uncertainty is that we can compare our estimate of the uncertainty to that obtained experimentally. For example, to determine the mass of a penny we measure mass twice—once to tare the balance at 0.000 g, and once to measure the penny’s mass. If the uncertainty for measuring mass is ±0.001 g, then we estimate the uncertainty in measuring mass as
\[u_{mass} = \sqrt{(0.001)^2 + (0.001)^2} = \mathrm{0.0014\: g}\]
If we measure a penny’s mass several times and obtain a standard deviation of ±0.050 g, then we have evidence that our measurement process is out of control. Knowing this, we can identify and correct the problem.
We also can use propagation of uncertainty to help us decide how to improve an analytical method’s uncertainty. In Example 4.7, for instance, we calculated an analyte’s concentration as 126 ppm ± 2 ppm, which is a percent uncertainty of 1.6%. (\(\mathrm{\dfrac{2\: ppm}{126\: ppm} × 100 = 1.6\%}\).) Suppose we want to decrease the percent uncertainty to no more than 0.8%. How might we accomplish this? Looking back at the calculation, we see that the concentration’s relative uncertainty is determined by the relative uncertainty in the measured signal (corrected for the reagent blank)
\[\mathrm{\dfrac{0.028}{23.41} = 0.0012\: or\: 0.12\%}\]
and the relative uncertainty in the method's sensitivity, \(k_\ce{A}\),
\[\mathrm{\dfrac{0.003\: ppm^{–1}}{0.186\: ppm^{–1}} = 0.016\: or\: 1.6\%}\]
Of these terms, the uncertainty in the method's sensitivity dominates the overall uncertainty. Improving the signal's uncertainty will not improve the overall uncertainty of the analysis. To achieve an overall uncertainty of 0.8% we must improve the uncertainty in \(k_\ce{A}\) to ±0.0015 ppm⁻¹.
Practice Exercise 4.4
Verify that an uncertainty of ±0.0015 ppm⁻¹ for \(k_\ce{A}\) is the correct result.
Finally, we can use a propagation of uncertainty to determine which of several procedures provides the smallest uncertainty. When diluting a stock solution there are usually several different combinations of volumetric glassware that will give the same final concentration. For instance, we can dilute a stock solution by a factor of 10 using a 10-mL pipet and a 100-mL volumetric flask, or by using a 25-mL pipet and a 250-mL volumetric flask. We also can accomplish the same dilution in two steps using a 50-mL pipet and 100-mL volumetric flask for the first dilution, and a 10-mL pipet and a 50-mL volumetric flask for the second dilution. The overall uncertainty in the final concentration—and, therefore, the best option for the dilution—depends on the uncertainty of the transfer pipets and volumetric flasks. As shown below, we can use the tolerance values for volumetric glassware to determine the optimum dilution strategy.
Example 4.9
Which of the following methods for preparing a 0.0010 M solution from a 1.0 M stock solution provides the smallest overall uncertainty?
(a) A one-step dilution using a 1-mL pipet and a 1000-mL volumetric flask.
(b) A two-step dilution using a 20-mL pipet and a 1000-mL volumetric flask for the first dilution, and a 25-mL pipet and a 500-mL volumetric flask for the second dilution.
Solution
The dilution calculations for case (a) and case (b) are
\[\textrm{case (a): }\mathrm{1.0\: M × \dfrac{1.000\: mL}{1000.0\: mL} = 0.0010\: M}\]
\[\textrm{case (b): }\mathrm{1.0\: M × \dfrac{20.00\: mL}{1000.0\: mL} × \dfrac{25.00\: mL}{500.0\: mL} = 0.0010\: M}\]
Using tolerance values from Table 4.2, the relative uncertainty for case (a) is
\[\dfrac{u_R}{R} = \sqrt{\left(\dfrac{0.006}{1.000}\right)^2 + \left(\dfrac{0.3}{1000.0}\right)^2} = 0.006\]
and for case (b) the relative uncertainty is
\[\dfrac{u_R}{R} = \sqrt{\left(\dfrac{0.03}{20.00}\right)^2 + \left(\dfrac{0.3}{1000.0}\right)^2 + \left(\dfrac{0.03}{25.00}\right)^2+ \left(\dfrac{0.2}{500.0}\right)^2} = 0.002\]
Since the relative uncertainty for case (b) is less than that for case (a), the two-step dilution provides the smallest overall uncertainty. (See Appendix 2 for a more detailed treatment of the propagation of uncertainty.)
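The same comparison can be scripted for any chain of glassware; a minimal Python sketch (the helper name is ours; tolerances as quoted from Table 4.2 above):

```python
import math

def rel_u(*pairs):
    """Relative uncertainty for a chain of multiplications and divisions;
    each pair is (tolerance, volume) for one pipet or flask."""
    return math.sqrt(sum((u / v)**2 for u, v in pairs))

case_a = rel_u((0.006, 1.000), (0.3, 1000.0))
case_b = rel_u((0.03, 20.00), (0.3, 1000.0), (0.03, 25.00), (0.2, 500.0))
print(f"case (a): {case_a:.3f}, case (b): {case_b:.3f}")  # 0.006 vs 0.002
```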
|
A standard Grassmannian $Gr(m,V)$ is the manifold having as its points all possible $m$-dimensional subspaces of a given vectorspace $V$. As an example, $Gr(1,V)$ is the set of lines through the origin in $V$ and therefore is the projective space $\mathbb{P}(V)$. Grassmannians are among the nicest projective varieties, they are smooth and allow a cell decomposition.
A quiver $Q$ is just an oriented graph. Here’s an example
A representation $V$ of a quiver assigns a vector-space to each vertex and a linear map between these vertex-spaces to every arrow. As an example, a representation $V$ of the quiver $Q$ consists of a triple of vector-spaces $(V_1,V_2,V_3)$ together with linear maps $f_a~:~V_2 \rightarrow V_1$ and $f_b,f_c~:~V_2 \rightarrow V_3$.
A sub-representation $W \subset V$ consists of subspaces of the vertex-spaces of $V$ and linear maps between them compatible with the maps of $V$. The dimension-vector of $W$ is the vector with components the dimensions of the vertex-spaces of $W$.
This means in the example that we require $f_a(W_2) \subset W_1$ and $f_b(W_2)$ and $f_c(W_2)$ to be subspaces of $W_3$. If the dimension of $W_i$ is $m_i$ then $m=(m_1,m_2,m_3)$ is the dimension vector of $W$.
The quiver-analogon of the Grassmannian $Gr(m,V)$ is the Quiver Grassmannian $QGr(m,V)$, where $V$ is a quiver-representation and $QGr(m,V)$ is the collection of all possible sub-representations $W \subset V$ with fixed dimension-vector $m$. One might expect these quiver Grassmannians to be rather nice projective varieties, but in fact every projective variety arises this way.
Let's illustrate the argument by finding a quiver Grassmannian $QGr(m,V)$ isomorphic to the elliptic curve in $\mathbb{P}^2$ with homogeneous equation $Y^2Z=X^3+Z^3$.
Consider the Veronese embedding $\mathbb{P}^2 \rightarrow \mathbb{P}^9$ obtained by sending a point $(x:y:z)$ to the point
\[ (x^3:x^2y:x^2z:xy^2:xyz:xz^2:y^3:y^2z:yz^2:z^3) \]
The upshot being that the elliptic curve is now realized as the intersection of the image of $\mathbb{P}^2$ with the hyper-plane $\mathbb{V}(X_0-X_7+X_9)$ in the standard projective coordinates $(x_0:x_1:\cdots:x_9)$ for $\mathbb{P}^9$.
To describe the equations of the image of $\mathbb{P}^2$ in $\mathbb{P}^9$ consider the $6 \times 3$ matrix with the rows corresponding to $(x^2,xy,xz,y^2,yz,z^2)$ and the columns to $(x,y,z)$ and the entries being the multiplications, that is
$$\begin{bmatrix} x^3 & x^2y & x^2z \\ x^2y & xy^2 & xyz \\ x^2z & xyz & xz^2 \\ xy^2 & y^3 & y^2z \\ xyz & y^2z & yz^2 \\ xz^2 & yz^2 & z^3 \end{bmatrix} = \begin{bmatrix} x_0 & x_1 & x_2 \\ x_1 & x_3 & x_4 \\ x_2 & x_4 & x_5 \\ x_3 & x_6 & x_7 \\ x_4 & x_7 & x_8 \\ x_5 & x_8 & x_9 \end{bmatrix}$$
But then, a point $(x_0:x_1: \cdots : x_9)$ belongs to the image of $\mathbb{P}^2$ if (and only if) the matrix on the right-hand side has rank $1$ (that is, all its $2 \times 2$ minors vanish). Next, consider the quiver
and consider the representation $V=(V_1,V_2,V_3)$ with vertex-spaces $V_1=\mathbb{C}$, $V_2 = \mathbb{C}^{10}$ and $V_3 = \mathbb{C}^6$. The linear maps $x,y$ and $z$ correspond to the columns of the matrix above, that is
$$(x_0,x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8,x_9) \begin{cases} \rightarrow^x~(x_0,x_1,x_2,x_3,x_4,x_5) \\ \rightarrow^y~(x_1,x_3,x_4,x_6,x_7,x_8) \\ \rightarrow^z~(x_2,x_4,x_5,x_7,x_8,x_9) \end{cases}$$
The linear map $h~:~\mathbb{C}^{10} \rightarrow \mathbb{C}$ encodes the equation of the hyper-plane, that is $h=x_0-x_7+x_9$.
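Before using these maps, it is worth verifying the two facts the construction relies on. A short sympy sketch (ours, not from the original post) checks that the matrix above is an outer product, hence of rank one on the Veronese image, and that $x_0-x_7+x_9$ recovers the cubic:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# The ten Veronese coordinates x0..x9, in the order used above.
v = [x**3, x**2*y, x**2*z, x*y**2, x*y*z, x*z**2, y**3, y**2*z, y*z**2, z**3]

# The 6x3 matrix with rows indexed by (x^2, xy, xz, y^2, yz, z^2).
M = sp.Matrix([[v[0], v[1], v[2]],
               [v[1], v[3], v[4]],
               [v[2], v[4], v[5]],
               [v[3], v[6], v[7]],
               [v[4], v[7], v[8]],
               [v[5], v[8], v[9]]])

# On the Veronese image, M is the outer product of (x^2,...,z^2) and (x,y,z),
# so all 2x2 minors vanish and M has rank 1.
outer = sp.Matrix([x**2, x*y, x*z, y**2, y*z, z**2]) * sp.Matrix([[x, y, z]])
print(sp.expand(M - outer) == sp.zeros(6, 3))   # True

# The hyper-plane coordinate x0 - x7 + x9 recovers Y^2 Z = X^3 + Z^3.
print(sp.expand(v[0] - v[7] + v[9]))            # x**3 - y**2*z + z**3
```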
Now consider the quiver Grassmannian $QGr(m,V)$ for the dimension vector $m=(0,1,1)$. A base-vector $p=(x_0,\cdots,x_9)$ of $W_2 = \mathbb{C}p$ of a subrepresentation $W=(0,W_2,W_3) \subset V$ must be such that $h(p)=0$, that is, $p$ determines a point of the hyper-plane.
Likewise the vectors $x(p),y(p)$ and $z(p)$ must all lie in the one-dimensional space $W_3 = \mathbb{C}$, that is, the right-hand side matrix above must have rank one and hence $p$ is a point in the image of $\mathbb{P}^2$ under the Veronese.
That is, $QGr(m,V)$ is isomorphic to the intersection of this image with the hyper-plane and hence is isomorphic to the elliptic curve.
The general case is similar as one can view any projective subvariety $X \rightarrow \mathbb{P}^n$ as isomorphic to the intersection of the image of a specific $d$-uple Veronese embedding $\mathbb{P}^n \rightarrow \mathbb{P}^N$ with a number of hyper-planes in $\mathbb{P}^N$.
ADDED For those desperate to read the original comments-section, here’s the link.
|
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at √s = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
|
Geometry and Topology Seminar
Fall 2016
| date | speaker | title | host(s) |
|---|---|---|---|
| September 9 | Bing Wang (UW Madison) | "The extension problem of the mean curvature flow" | (Local) |
| September 16 | Ben Weinkove (Northwestern University) | "Gauduchon metrics with prescribed volume form" | Lu Wang |
| September 23 | Jiyuan Han (UW Madison) | "Deformation theory of scalar-flat ALE Kahler surfaces" | (Local) |
| September 30 | | | |
| October 7 | Yu Li (UW Madison) | "Ricci flow on asymptotically Euclidean manifolds" | (Local) |
| October 14 | Sean Howe (University of Chicago) | "Representation stability and hypersurface sections" | Melanie Matchett Wood |
| October 21 | Nan Li (CUNY) | "Quantitative estimates on the singular sets of Alexandrov spaces" | Lu Wang |
| October 28 | Ronan Conlon (Florida International University) | "New examples of gradient expanding K\"ahler-Ricci solitons" | Bing Wang |
| November 4 | Jonathan Zhu (Harvard University) | "Entropy and self-shrinkers of the mean curvature flow" | Lu Wang |
| November 11 | Richard Kent (Wisconsin) | "Analytic functions from hyperbolic manifolds" | (local) |
| November 18 | Caglar Uyanik (Illinois) | "TBA" | Kent |
| Thanksgiving Recess | | | |
| December 2 | Peyman Morteza (UW Madison) | "TBA" | (Local) |
| December 9 | Yu Zeng (University of Rochester) | "TBA" | |
| December 16 | | | |
date | speaker | title | host(s)
Jan 20 | | |
Jan 27 | | |
Feb 3 | | |
Feb 10 | | |
Feb 17 | | |
Feb 24 | | |
March 3 | | |
March 10 | | |
March 17 | | |
March 24 | | |
Spring Break
March 31 | | |
April 7 | | |
April 14 | | |
April 21 | | |
April 28 | Bena Tshishiku (Harvard) | "TBA" | Dymarz
Fall Abstracts
Ronan Conlon New examples of gradient expanding K\"ahler-Ricci solitons
A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud).
Jiyuan Han Deformation theory of scalar-flat ALE Kahler surfaces
We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kahler surfaces. This is used to prove that for any scalar-flat Kahler ALE surface, all small deformations of complex structure also admit scalar-flat Kahler ALE metrics. A local moduli space of scalar-flat Kahler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kahler ALE surface which deforms to a minimal resolution of \mathbb{C}^2/\Gamma, where \Gamma is a finite subgroup of U(2) without complex reflections. This is joint work with Jeff Viaclovsky.
Sean Howe Representation stability and hypersurface sections
We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to \infty. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups. The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs -- for example, we show that the "average" smooth hypersurface in \mathbb{P}^n is \mathbb{P}^{n-1}!
Nan Li Quantitative estimates on the singular sets of Alexandrov spaces
The definition of quantitative singular sets was initiated by Cheeger and Naber. They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the (k,\epsilon)-singular sets are k-rectifiable and such structure is sharp in some sense. This is a joint work with Aaron Naber.
Yu Li Ricci flow on asymptotically Euclidean manifolds
In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature admits a long-time solution of the Ricci flow, it converges to the Euclidean space in the strong sense. As a consequence of the convergence, the mass drops to zero as time tends to infinity. Moreover, in the three-dimensional case, we use Ricci flow with surgery to give an independent proof of the positive mass theorem. A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature.
Gaven Marin TBA
Peyman Morteza TBA
Richard Kent Analytic functions from hyperbolic manifolds
Thurston's Geometrization Conjecture, now a celebrated theorem of Perelman, tells us that most 3-manifolds are naturally geometric in nature. In fact, most 3-manifolds admit hyperbolic metrics. In the 1970s, Thurston proved the Geometrization conjecture in the case of Haken manifolds, and the proof revolutionized 3-dimensional topology, hyperbolic geometry, Teichmüller theory, and dynamics. Thurston's proof is by induction, constructing a hyperbolic structure from simpler pieces. At the heart of the proof is an analytic function called the
skinning map that one must understand in order to glue hyperbolic structures together. A better understanding of this map would more brightly illuminate the interaction between topology and geometry in dimension three. I will discuss what is currently known about this map.
Caglar Uyanik TBA
Bing Wang The extension problem of the mean curvature flow
We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in R^3. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is a joint work with Haozhao Li.
Ben Weinkove Gauduchon metrics with prescribed volume form
Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is a joint work with Gabor Szekelyhidi and Valentino Tosatti.
Jonathan Zhu Entropy and self-shrinkers of the mean curvature flow
The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Colding and Minicozzi, together with Ilmanen and White, conjectured that the round sphere minimises entropy amongst all closed hypersurfaces. We will review the basics of MCF and the Colding-Minicozzi theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set.
Spring Abstracts
Bena Tshishiku
"TBA"
Archive of past Geometry seminars
2015-2016: Geometry_and_Topology_Seminar_2015-2016
2014-2015: Geometry_and_Topology_Seminar_2014-2015
2013-2014: Geometry_and_Topology_Seminar_2013-2014
2012-2013: Geometry_and_Topology_Seminar_2012-2013
2011-2012: Geometry_and_Topology_Seminar_2011-2012
2010: Fall-2010-Geometry-Topology
|
This quantum statistical mechanical system encodes the arithmetic properties of cyclotomic extensions of $\mathbb{Q}$.
The corresponding Bost-Connes algebra encodes the action by the power-maps on the roots of unity.
It has generators $e_n$ and $e_n^*$ for every natural number $n$ and additional generators $e(\frac{g}{h})$ for every element in the additive group $\mathbb{Q}/\mathbb{Z}$ (which is of course isomorphic to the multiplicative group of roots of unity).
The defining equations are
\[ \begin{cases} e_n.e(\frac{g}{h}).e_n^* = \rho_n(e(\frac{g}{h})) \\ e_n^*.e(\frac{g}{h}) = \Psi^n(e(\frac{g}{h})).e_n^* \\ e(\frac{g}{h}).e_n = e_n.\Psi^n(e(\frac{g}{h})) \\ e_n.e_m=e_{nm} \\ e_n^*.e_m^* = e_{nm}^* \\ e_n.e_m^* = e_m^*.e_n~\quad~\text{if $(m,n)=1$} \end{cases} \]
Here $\Psi^n$ are the power-maps, that is $\Psi^n(e(\frac{g}{h})) = e(\frac{ng}{h}~mod~1)$, and the maps $\rho_n$ are given by
\[ \rho_n(e(\frac{g}{h})) = \sum e(\frac{i}{j}) \] where the sum is taken over all $\frac{i}{j} \in \mathbb{Q}/\mathbb{Z}$ such that $n.\frac{i}{j}=\frac{g}{h}$.
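To make the two maps concrete, here is a small Python sketch (my own illustration, not from the original text; psi and rho are hypothetical helper names) that models an element $e(\frac{g}{h})$ of $\mathbb{Q}/\mathbb{Z}$ as a Fraction mod 1:

from fractions import Fraction

def psi(n, q):
    """Power map Psi^n: e(g/h) -> e(ng/h mod 1)."""
    return (n * q) % 1

def rho(n, q):
    """rho_n: e(g/h) -> the n summands e(i/j) with n.(i/j) = g/h in Q/Z."""
    g, h = q.numerator, q.denominator
    return [Fraction(g + k * h, n * h) % 1 for k in range(n)]

q = Fraction(1, 3)
print(psi(2, q))  # 2/3
print(rho(2, q))  # [Fraction(1, 6), Fraction(2, 3)]

Applying psi(n, -) to each summand returned by rho(n, q) gives back q, mirroring the first defining relation above.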
Conway’s Big Picture has as its vertices the (equivalence classes of) lattices $M,\frac{g}{h}$ with $M \in \mathbb{Q}_+$ and $\frac{g}{h} \in \mathbb{Q}/\mathbb{Z}$.
The Bost-Connes algebra acts on the vector-space with basis the vertices of the Big Picture. The action is given by:
\[ \begin{cases} e_n \ast \frac{c}{d},\frac{g}{h} = \frac{nc}{d},\rho_m(\frac{g}{h})~\quad~\text{with $m=(n,d)$} \\ e_n^* \ast \frac{c}{d},\frac{g}{h} = (n,c) \times \frac{c}{nd},\Psi^{\frac{n}{m}}(\frac{g}{h})~\quad~\text{with $m=(n,c)$} \\ e(\frac{a}{b}) \ast \frac{c}{d},\frac{g}{h} = \frac{c}{d},\Psi^c(\frac{a}{b}) \frac{g}{h} \end{cases} \]
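Continuing the sketch from above, the first of these rules can be transcribed directly (again a hypothetical illustration, implementing the formula exactly as stated):

from fractions import Fraction
from math import gcd

def rho(n, q):  # as in the previous sketch
    g, h = q.numerator, q.denominator
    return [Fraction(g + k * h, n * h) % 1 for k in range(n)]

def e_action(n, M, q):
    """e_n * (c/d, g/h) = (nc/d, rho_m(g/h)) with m = (n, d) and M = c/d."""
    m = gcd(n, M.denominator)
    return [(n * M, r) for r in rho(m, q)]

print(e_action(2, Fraction(1, 2), Fraction(1, 3)))
# m = gcd(2, 2) = 2: the vertex (1/2, 1/3) maps to (1, 1/6) + (1, 2/3)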
This connection makes one wonder whether non-commutative geometry can shed new light on monstrous moonshine.
This question is taken up by Jorge Plazas in his paper Non-commutative geometry of groups like $\Gamma_0(N)$
Plazas shows that the bigger Connes-Marcolli $GL_2$-system also acts on the Big Picture. An intriguing quote:
“Our interest in the $GL_2$-system comes from the fact that its thermodynamic properties encode the arithmetic theory of modular functions to an extent which makes it possible for us to capture aspects of moonshine theory.”
Looks like the right kind of paper to take along when I disappear next week for some time in the French mountains…
Similar Posts: A forgotten type and roots of unity (again) A tetrahedral snake the moonshine picture – at last the Bost-Connes Hecke algebra The Big Picture is non-commutative The Langlands program and non-commutative geometry Roots of unity and the Big Picture Bost-Connes for ringtheorists The defining property of 24 Snakes, spines, threads and all that
|
It seems that, contrary to some other answers, a continuous solution can be constructed.
First of all, we interpolate the flow of the function $\cos(\cos(z))$ with a Newton series:
$$\phi_{1/2}(x,z)=\begin{cases} \arccos^{[x]}(z), & \text{if } x < 0 \\ \cos^{[x]}(z), & \text{if } x \ge 0 \end{cases}$$
$$\phi_{1}(x,z)=\sum_{m=0}^\infty \binom{x/2+1}{m} \sum_{k=0}^m (-1)^{k-m} \binom{m}{k} \phi_{1/2}(k-1,z)$$
We interpolate from the first integer point where the value is real, i.e. from x=-1.
We now obtain the approximation of the other half-flow of $\cos x$ by taking arccos of the above function:
$$\phi_{2}(x,z)=\arccos(\phi_{1}(x+1,z))$$
We know that the flow of cos(x) should coincide with the first function at even integers and with the second function at odd integers.
So we make a stub of the flow following this knowledge (we also want its absolute value to be monotonic).
$$\phi(x,z)=\frac{1}{2} \left((-1)^{x}+1\right) (\phi_{1}(x,z)-\text{FP})+\frac{1}{2} \left((-1)^{x+1}+1\right) (\phi_{2}(x,z)-\text{FP})+\text{FP}$$
where FP is the cosine fixed point.
This function coincides with the flow at integer points but still disagrees in between. To get a real flow we have to take a limit of repeated arccosines of our stub:
$$\Phi(x,z)=\lim_{n\to\infty} \arccos^{[n]} (\phi(x+n,z))$$
Numerically this limit converges quite fast. If the limit exists then, by definition, it satisfies the equation
$$ \cos(\Phi(x,z))=\Phi(x+1,z)$$
so it is the true flow.
The above can be illustrated by the graphic:
Here the upper semi-flow (the flow of cos(cos z)) is blue, the lower semi-flow is red, the real part of the flow is yellow, and the imaginary part of the flow is green. All flows are taken at the point z=1.
Following this we can build a graph of the half-iterate of cosine $\Phi(1/2,z)$:
Here blue is the real part and red is the imaginary part.
We can verify that the half-iterate repeated twice, $\Phi(1/2,\Phi(1/2,z))$ (blue), follows cosine (red) quite well on positive half-periods, and wherever the cosine is positive (that is, on the imaginary axis as well):
I think this coincides with the answer by Gerald Edgar above. A modified function, iterated twice, gives cosine on the whole real axis:
This is a true half-iterate of cosine, which works on the whole real axis, producing exactly cosine:
But as has been noted by Joel David Hamkins above, there are infinitely many such solutions, none of which works on the whole complex plane.
This function can nevertheless be considered the true solution on the complex plane if interpreted as a multi-valued function. To do this, take the function on each interval and analytically extend it to the whole complex plane.
A mathematica notebook that produces the above is as follows:
$PlotTheme = None;
(* Integer flow of cos(cos(z)): 2x nested cosines for x >= 0, arccosines for x < 0 *)
f[x_, z_] := If[x >= 0, Nest[Cos, z, 2*x], Nest[ArcCos, z, -2*x]]
n := 30   (* truncation order of the Newton series *)
s := 15   (* unused *)
(* Newton-series interpolation of f between the integer points *)
Ni[x_, z_] :=
 Sum[Binomial[x + 1, m]*
   Sum[(-1)^(k - m)*Binomial[m, k]*f[k - 1, z], {k, 0, m}], {m, 0, n}]
Semi2[x_, z_] := Ni[x/2, z]              (* upper semi-flow phi_1 *)
Semi1[x_, z_] := ArcCos[Semi2[x + 1, z]] (* lower semi-flow phi_2 *)
FP := Evaluate[N[FixedPoint[Cos, 1.]]]   (* cosine fixed point, ~0.739085 *)
a := 21   (* number of arccos refinements approximating the limit *)
(* Stub flow: alternates between the semi-flows so the integer points match *)
Flow2[x_, z_] :=
 FP + (Semi2[x, z] - FP)*(((-1)^x + 1)/2) + (Semi1[x, z] -
     FP)*(((-1)^(x + 1) + 1)/2)
FL[x_, z_] := Nest[ArcCos, Flow2[x + a, z], a] (* approximation to the true flow *)
Plot[{Semi1[x, 1], Semi2[x, 1], Re[FL[x, 1]], Im[FL[x, 1]]}, {x, -5,
5}, AspectRatio -> Automatic, PlotRange -> 3]
Plot[{Re[FL[0.5, x]], Im[FL[0.5, x]]}, {x, -5, 5},
AspectRatio -> Automatic, PlotRange -> 3]
Plot[{Re[FL[0.5, FL[0.5, x]]], Cos[x]}, {x, -5, 5},
AspectRatio -> Automatic, PlotRange -> 3]
HalfCos[z_] :=
If[Im[z] == 0, Sign[Re[Cos[z]]]*FL[0.5, z], Sign[Re[z]]*FL[0.5, z]]
Plot[{Re[HalfCos[x]], Im[HalfCos[x]]}, {x, -5, 5},
AspectRatio -> Automatic, PlotRange -> 3]
Plot[{Re[HalfCos[HalfCos[x]]], Cos[x]}, {x, -5, 5},
AspectRatio -> Automatic, PlotRange -> 3]
|
So from this page, I know that there is a relation between Chern-Simons Theory and Yang-Mills Theory, but I have difficulty proving the identities in the document.
I was going to prove $$\partial_\mu(\epsilon^{\mu\alpha\beta\gamma}(A_\alpha^a\partial_\beta A^a_\gamma+\dfrac13f^{abc}A^a_\alpha A^b_\beta A^c_\gamma)) = \dfrac14\epsilon^{\mu\alpha\beta\gamma}F^a_{\mu\alpha}F^a_{\beta\gamma}$$ where $F^a_{\mu\alpha}=\partial_\mu A_\alpha^a - \partial_\alpha A_\mu^a+f^{abc}A^b_\mu A^c_\alpha$.
My attempt:$$\partial_\mu(\epsilon^{\mu\alpha\beta\gamma}A_\alpha^a\partial_\beta A^a_\gamma) = \epsilon^{\mu\alpha\beta\gamma}\partial_\mu(A_\alpha^a\partial_\beta A^a_\gamma) = \epsilon^{\mu\alpha\beta\gamma}(\partial_\mu A_\alpha^a)(\partial_\beta A^a_\gamma)+\epsilon^{\mu\alpha\beta\gamma} A_\alpha^a(\partial_\mu\partial_\beta A^a_\gamma)$$
but note that $\partial_\beta\partial_\mu = \partial_\mu\partial_\beta$, so we have
$$\epsilon^{\mu\alpha\beta\gamma} A_\alpha^a(\partial_\mu\partial_\beta A^a_\gamma) = \epsilon^{\mu\alpha\beta\gamma} A_\alpha^a(\partial_\beta\partial_\mu A^a_\gamma) = \epsilon^{\beta\alpha\mu\gamma} A_\alpha^a(\partial_\beta\partial_\mu A^a_\gamma) = -\epsilon^{\mu\alpha\beta\gamma} A_\alpha^a(\partial_\beta\partial_\mu A^a_\gamma) = 0$$ so we are left with the product of two derivatives. We can rewrite that term as
$$\epsilon^{\mu\alpha\beta\gamma}(\partial_\mu A_\alpha^a)(\partial_\beta A^a_\gamma) = \dfrac 14 \epsilon^{\mu\alpha\beta\gamma}\left((\partial_\mu A_\alpha^a)(\partial_\beta A^a_\gamma) - (\partial_\alpha A_\mu^a)(\partial_\beta A^a_\gamma)-(\partial_\mu A_\alpha^a)(\partial_\gamma A^a_\beta)+(\partial_\alpha A_\mu^a)(\partial_\gamma A^a_\beta)\right)=\dfrac 14 \epsilon^{\mu\alpha\beta\gamma}(\partial_\mu A_\alpha^a - \partial_\alpha A_\mu^a)(\partial_\beta A^a_\gamma-\partial_\gamma A^a_\beta)$$
Now, onto the product of three $A$'s. $$\partial_\mu(\epsilon^{\mu\alpha\beta\gamma}\dfrac13f^{abc}A^a_\alpha A^b_\beta A^c_\gamma) = \epsilon^{\mu\alpha\beta\gamma}\dfrac13 f^{abc} \partial_\mu(A^a_\alpha A^b_\beta A^c_\gamma)= \epsilon^{\mu\alpha\beta\gamma}\dfrac13 f^{abc}((\partial_\mu A^a_\alpha)A^b_\beta A^c_\gamma + A^a_\alpha(\partial_\mu A^b_\beta) A^c_\gamma + A^a_\alpha A^b_\beta(\partial_\mu A^c_\gamma)) = \dfrac13(\partial_\mu A^a_\alpha)A^b_\beta A^c_\gamma (\epsilon^{\mu\alpha\beta\gamma} f^{abc}+ \epsilon^{\mu\beta\alpha\gamma} f^{bac}+\epsilon^{\mu\gamma\alpha\beta} f^{cab}) $$ Since the structure constants are completely antisymmetric, we have $$\partial_\mu(\epsilon^{\mu\alpha\beta\gamma}\dfrac13f^{abc}A^a_\alpha A^b_\beta A^c_\gamma)=(\partial_\mu A^a_\alpha)A^b_\beta A^c_\gamma \epsilon^{\mu\alpha\beta\gamma} f^{abc}=\dfrac 12(\partial_\mu A^a_\alpha-\partial_\alpha A^a_\mu)A^b_\beta A^c_\gamma \epsilon^{\mu\alpha\beta\gamma} f^{abc}$$ That's difficulty #$1$: the prefactor is $\dfrac 12$, not $\dfrac 14$.
There is another difficulty #$2$: we are missing the product of $4$ $A$'s, i.e. the term $$\epsilon^{\mu\alpha\beta\gamma}f^{abc}A^b_\mu A^c_\alpha f^{ade}A^d_\beta A^e_\gamma$$ which has to equal $0$, but I am not able to show that it does.
Edit (1): On second thought, I can solve difficulty #$1$: it was a mistake in which the term $$ (\partial_\beta A^c_\gamma-\partial_\gamma A^c_\beta)A^a_\mu A^b_\alpha \epsilon^{\mu\alpha\beta\gamma} f^{abc}$$ was left out; after index rearrangement it is the same as the term with the prefactor $\dfrac 12$ above, so the prefactor becomes $\dfrac 14$. So I am left with difficulty #$2$ only. Edit (2): Using the Jacobi identity, we have $$f^{bac}f^{dea} = -f^{abc}f^{ade}= - f^{dac}f^{eba}-f^{eac}f^{bda}= - f^{acd}f^{aeb}-f^{ace}f^{abd}$$
The term becomes $$\epsilon^{\mu\alpha\beta\gamma}(f^{acd}f^{aeb}A^b_\mu A^c_\alpha A^d_\beta A^e_\gamma + f^{ace}f^{abd}A^b_\mu A^c_\alpha A^d_\beta A^e_\gamma) =\epsilon^{\gamma\mu\alpha\beta}f^{acd}f^{aeb}A^b_\gamma A^c_\mu A^d_\alpha A^e_\beta + \epsilon^{\beta\mu\gamma\alpha}f^{ace}f^{abd}A^b_\beta A^c_\mu A^d_\gamma A^e_\alpha =f^{abc}f^{ade}A^b_\mu A^c_\alpha A^d_\beta A^e_\gamma(-\epsilon^{\mu\alpha\beta\gamma}-\epsilon^{\mu\alpha\beta\gamma})$$ The quantity is the same as $-2$ times itself, i.e. $X = -2X$, so we have either $3=0$ or $X=0$. Since the field $\mathbb R$ does not have characteristic $3$, we are left with the product of the $4$ $A$'s being $0$, so we have solved difficulty #$2$.
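As a numerical sanity check of Edit (2), here is a small sketch (my own illustration, not part of the original question) that contracts $\epsilon^{\mu\alpha\beta\gamma}f^{abc}f^{ade}A^b_\mu A^c_\alpha A^d_\beta A^e_\gamma$ for random fields, taking the su(2) structure constants $f^{abc}=\epsilon_{abc}$:

import numpy as np
from itertools import permutations

def levi_civita(n):
    """Dense rank-n Levi-Civita tensor."""
    eps = np.zeros((n,) * n)
    for perm in permutations(range(n)):
        sign, p = 1, list(perm)
        for i in range(n):  # parity via selection-sort swaps
            while p[i] != i:
                j = p[i]
                p[i], p[j] = p[j], p[i]
                sign = -sign
        eps[perm] = sign
    return eps

eps4 = levi_civita(4)  # epsilon^{mu alpha beta gamma}
f = levi_civita(3)     # su(2): f^{abc} = epsilon_{abc}

rng = np.random.default_rng(0)
for _ in range(3):
    A = rng.normal(size=(3, 4))  # A[a, mu]: colour index a, spacetime index mu
    quartic = np.einsum('mnpq,xyz,xvw,ym,zn,vp,wq->', eps4, f, f, A, A, A, A)
    print(quartic)  # ~1e-15, zero up to rounding

The contraction vanishes to machine precision, consistent with the Jacobi-identity argument above.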
|
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session with 100% certainty if it crashes).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. Serious logical difficulties were found in it, and a new theory was developed based on limits. In modern times, using some quite deep ideas from logic, a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have a problem showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equals $1$ as $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, so how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates, it's not 0. Does anyone know?
|
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
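A one-line numerical illustration of that conversion (my own sketch; the wavelength and path-difference values are made-up examples):

import math

wavelength = 500e-9         # metres (assumed example value)
path_diff = 125e-9          # metres (assumed example value)
k = 2 * math.pi / wavelength
phase_diff = k * path_diff  # phase difference = k * path difference
print(phase_diff)           # 1.5707... rad, i.e. pi/2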
|
I would not use an aligned approach in an inline setting. Here is something you can work with:
\documentclass{memoir}
\usepackage{amsmath}
\begin{document}
Let us consider the function
\begin{align*}
f\colon A &\to A,\\
x &\mapsto \big(f_1(x), f_2(x)\big).
\end{align*}
Saepe at quas accusamus molestiae possimus consequatur vitae.
For an inline attempt, I would go with
$f\colon A \to A,\, x \mapsto \big(f_1(x), f_2(x)\big)$.
Saepe at quas accusamus molestiae possimus consequatur vitae.
\end{document}
Seriously, you should consider some basic training in TeX, for example https://ctan.org/pkg/lshort-english, https://ctan.org/pkg/short-math-guide or https://ctan.org/pkg/latex-amsmath.
|
't Hooft's anomaly matching condition states that the (chiral) anomalous structure of a given theory is the same independent of the scale. This means that if we have one particle content $\{\psi\}$ for scales $> \Lambda$ with non-zero anomaly, then another particle content $\{\phi\}$ for scales $<\Lambda$ must reproduce the anomaly coefficients $d_{abc}$.
Weinberg (in his QFT Vol. 2, Sec. 22.5) states that in a Lorentz invariant theory the states $\{\phi\}$ can be recognized as massless helicity $\frac{1}{2}$ fermions or Goldstone bosons. As an example of the second case, he proposes to discuss QCD, where below the spontaneous symmetry breaking scale the anomalous structure of the underlying theory with quarks is reproduced by pseudoscalar mesons.
But I don't understand one thing. These pseudoscalar mesons are actually pseudo-Goldstone bosons with non-zero mass. Therefore it seems that they can't reproduce the anomaly, or some argument referring to the fact that they are true Goldstone bosons in some approximation must be used in order to explain why the anomaly can be reproduced by them.
Do you know this argument?
|
==Definition==
===Symbol-free definition===
An '''abelian group''' is a [[group]] where any two elements commute.
===Definition with symbols===
A [[group]] <math>G</math> is termed '''abelian''' if for any elements <math>x</math> and <math>y</math> in <math>G</math>, <math>xy = yx</math> (here <math>xy</math> denotes the product of <math>x</math> and <math>y</math> in <math>G</math>).
<section end=beginner/>
===Equivalent formulations===
* A group is abelian if its [[defining ingredient::center]] is the whole group.
* A group is abelian if its [[defining ingredient::commutator subgroup]] is trivial.
<section begin=beginner/>
==Notation==
===Non-examples===
Not every group is abelian. The smallest non-abelian group is [[symmetric group:S3|the symmetric group on three letters]]: the group of all permutations on three letters, under composition. Its being non-abelian hinges on the fact that the order in which permutations are performed matters.
<section end=revisit/>
===Occurrence as quotients===
The maximal abelian quotient of any group is termed its [[abelianization]], and this is the quotient by the [[commutator subgroup]]. A subgroup is an [[abelian-quotient subgroup]] (i.e., normal with abelian quotient group) if and only if the subgroup contains the commutator subgroup.
==Formalisms==
{{obtainedbyapplyingthe|diagonal-in-square operator|normal subgroup}}
A group <math>G</math> is an abelian group if and only if, in the [[external direct product]] <math>G \times G</math>, the diagonal subgroup <math>\{ (g,g) \mid g \in G \}</math> is a [[normal subgroup]].
==Metaproperties==
{{varietal}}
Abelian groups form a [[variety of algebras]]. The defining equations for this variety are the equations for a [[group]] along with the commutativity equation.
{{S-closed}}
Any [[subgroup]] of an abelian group is abelian -- viz., the property of being abelian is [[subgroup-closed group property|subgroup-closed]]. This follows as a direct consequence of abelianness being varietal. {{proofat|[[Abelianness is subgroup-closed]]}}
{{Q-closed}}
Any [[quotient]] of an abelian group is abelian -- viz., the property of being abelian is [[quotient-closed group property|quotient-closed]]. This again follows as a direct consequence of abelianness being varietal. {{proofat|[[Abelianness is quotient-closed]]}}
{{DP-closed}}
A [[direct product]] of abelian groups is abelian -- viz., the property of being abelian is [[direct product-closed group property|direct product-closed]]. This again follows as a direct consequence of abelianness being varietal. {{proofat|[[Abelianness is direct product-closed]]}}
==Testing==
{{further|[[Abelianness testing problem]]}}
{{GAP command for gp|
To test whether a group is abelian, the GAP syntax is:
<pre>IsAbelian(group)</pre>
where <pre>group</pre> either defines the group or gives the name of a group previously defined.
==Study of this notion==
==References==
===Textbook references===
* {{booklink-defined|DummitFoote}}, Page 17 (definition as Point (2) in general definition of a group)
* {{booklink-defined|AlperinBell}}, Page 2 (definition introduced in paragraph)
* {{booklink-defined|Artin}}, Page 42 (defined immediately after the definition of group, as a group where the composition is commutative)
* {{booklink-defined|Herstein}}, Page 28 (formal definition)
* {{booklink-defined|RobinsonGT}}, Page 2 (formal definition)
* {{booklink-defined|FGTAsch}}, Page 1 (definition introduced in paragraph)
==External links==
|
[LON-CAPA-users] Hints for first LON-CAPA question
Joseph Mingrone jrm at mathstat.dal.ca
Thu Jun 13 15:04:44 EDT 2013
Joseph Mingrone <jrm at mathstat.dal.ca> writes:
Hello all;
I'm creating a problem that I've included below. My questions are:
1. How can a student submit each part of the question separately?
2. In the table, I'm building the header with $row1 .= "<td>$i+1</td>",
but the $i+1 isn't evaluated and 0+1, 1+1, etc. is shown. How can I get
the intended output?
3. How can \bar{x} be displayed?
4. In the last radiobuttonresponse question, only one foil is showing up
even though I've specified max=3. How can I make all three show up?
5. This is my first attempt at a question and I have no sample code
other than what's in the manual. If you have any other suggestions
about style or best practices, please pass them along.
Thanks,
Joseph
<problem>
<parameter name="maxtries" id="11" type="int_pos" default="1" description="Maximum Number of Tries" />
<script type="loncapa/perl">
$n = &random(8,16,1);
$mu_a = &random(4,16,.01);
$sd_a = &random(1,5,.01);
@a=&random_normal($n,55,$mu_a,$sd_a);
$mu_b = &random(4,16,.01);
$sd_b = &random(1,5,.01);
@b=&random_normal($n,55,$mu_b,$sd_b);
$table = '<table border=1>';
$row1 = '<tr><td></td>';
$row2 = '<tr><td>A</td>';
$row3 = '<tr><td>B</td>';
for ($i=0; $i<$n; $i++)
{
$row1 .= "<td>$i+1</td>";
$row2 .= "<td>&roundto($a[$i],2)</td>";
$row3 .= "<td>&roundto($b[$i],2)</td>";
}
$row1 .= '</tr>';
$row2 .= '</tr>';
$row3 .= '</tr>';
$table .= $row1;
$table .= $row2;
$table .= "$row3</table>";
</script>
<startouttext />
<p>
Super Sneaker Company is evaluating two different materials, A and B, to be used to construct the soles of their new active shoe targeted to city high school students in Canada. While material B costs less than material A, the company suspects that mean wear for material B is greater than mean wear for material A. Two study designs were initially developed to test this suspicion. In both designs, Halifax was chosen as a representative city of the targeted market. In Study Design 1, 10 high school students were drawn at random from the Halifax School District database. After obtaining their shoe sizes, the company manufactured 10 pairs of shoes, each pair with one shoe having a sole constructed from material A and the other shoe, a sole constructed from material B.
</p>
The researcher in charge of this design asked the company to randomly assign material A to the right shoe or to the left shoe, for each of the ten pairs. Why?
<endouttext />
<radiobuttonresponse direction="vertical" max="3" id="12" randomize="yes">
<foilgroup>
<foil location="random" value='true' name="foil1">
<startouttext />Right shoe wear is generally different than left shoe wear.<endouttext />
</foil>
<foil location="random" value='false' name="foil2">
<startouttext />Right shoe wear is always less than left shoe wear.<endouttext />
</foil>
<foil location="random" value='false' name="foil3">
<startouttext />Right shoe wear is always greater than left shoe wear.<endouttext />
</foil>
<foil location="random" value='false' name="foil4">
<startouttext />Left shoe wear is always less than right shoe wear.<endouttext />
</foil>
<foil location="random" value='false' name="foil5">
<startouttext />Left shoe wear is always greater than right shoe wear.<endouttext />
</foil>
</foilgroup>
</radiobuttonresponse>
<parameter name="maxtries" type="int_pos" description="Maximum Number of Tries" default="1" />
<startouttext />
<p>
After 3 months, the amount of wear in each shoe was recorded in standardized units as follows:
<parse>$table</parse>
</p>
The null hypothesis for the test is:
<endouttext />
<radiobuttonresponse max="4" randomize="yes" direction="vertical">
<foilgroup>
<foil location="random" value='true' name="foil1">
<startouttext /><m>$\mu_a - \mu_b = 0$</m><endouttext />
</foil>
<foil location="random" value='true' name="foil2">
<startouttext /><m>$\mu_b - \mu_a = 0$</m><endouttext />
</foil>
<foil location="random" value='false' name="foil3">
<startouttext /><m>$\bar{x}_a < \bar{X}_b$</m><endouttext />
</foil>
<foil location="random" value='false' name="foil4">
<startouttext /><m>$\bar{x}_a - \bar{x}_b = 0$</m><endouttext />
</foil>
<foil location="random" value='false' name="foil5">
<startouttext /><m>$\bar{x}_b - \bar{x}_a = 0$</m><endouttext />
</foil>
</foilgroup>
</radiobuttonresponse>
<parameter name="maxtries" type="int_pos" description="Maximum Number of Tries" default="1" />
<startouttext />
The alternative hypothesis is:
<endouttext />
<radiobuttonresponse max="4" randomize="yes" direction="vertical">
<foilgroup>
<foil location="random" value='true' name="foil1">
<startouttext /><m>$\mu_a - \mu_b \ne 0$</m><endouttext />
</foil>
<foil location="random" value='true' name="foil2">
<startouttext /><m>$\mu_b - \mu_a > 0$</m><endouttext />
</foil>
<foil location="random" value='false' name="foil3">
<startouttext /><m>$\mu_b - \mu_a > 0$</m><endouttext />
</foil>
<foil location="random" value='false' name="foil4">
<startouttext /><m>$\mu_a - \mu_b > 0$</m><endouttext />
</foil>
<foil location="random" value='false' name="foil5">
<startouttext /><m>$\bar{x} < $</m><endouttext />
</foil>
<foil location="random" value='false' name="foil6">
<startouttext /><m>$\bar{x}_a - \bar{x}_b < 0$</m><endouttext />
</foil>
<foil location="random" value='false' name="foil7">
<startouttext /><m>$\bar{x}_b - \bar{x}_a < 0$</m><endouttext />
</foil>
</foilgroup>
</radiobuttonresponse>
<parameter name="maxtries" type="int_pos" description="Maximum Number of Tries" default="3" />
<startouttext />To test the hypothesis that mean wear for material B is greater than mean wear for material A, calculate the test statistic.<endouttext />
<numericalresponse answer="">
<responseparam type="tolerance" default="5%" name="tol" description="Numerical Tolerance" />
<responseparam name="sig" type="int_range,0-16" default="0,15" description="Significant Figures" />
<textline readonly="no" />
</numericalresponse>
<startouttext />Which of the statistical tables should you use? You only have one try!<endouttext />
<radiobuttonresponse max="3" randomize="yes" direction="vertical">
<foilgroup>
<foil name="t" value="true" location="random">
<startouttext />t-distribution<endouttext />
</foil>
<foil name="z" value="unused" location="random">
<startouttext />z-distribution<endouttext />
</foil>
<foil name="chi" value="unused" location="random">
<startouttext /><m>$\chi^2$</m><endouttext />
</foil>
</foilgroup>
</radiobuttonresponse>
</problem>
More information about the LON-CAPA-users mailing list
|
Expected Number of Happy Passengers Problem
Solution 1
Assuming the plane seats $n$ passengers, let $F(n)$ be the expectation of the number of unhappy passengers and let $F^*(n)$ be the similar expectation, conditioned on the first passenger taking a wrong seat, so that the two are related as shown below:
$\displaystyle F(n)=\frac{1}{n}\cdot 0+\frac{n-1}{n}\cdot F^*(n).$
$F^*(n)$ is thus the expected number of unhappy passengers in an $n$-seat plane with an a priori misplaced seat.
Let's now start from the beginning, but counting passengers from the end of the line. When the $n^{th}$ passenger (the one without the boarding pass) enters the plane, he chooses any of the $n$ available seats - in particular, his own - with probability $\displaystyle \frac{1}{n}.$ With the same probability he lands in the seat of the $k^{th}$ passenger (from the end of the line). If this happens, all the passengers entering before the $k^{th}$ seat owner will be happy. The latter will have a choice of $k$ seats, of which one is that of the first passenger. With probability $\displaystyle \frac{1}{k}$ he'll choose the latter, making a total of $2$ unhappy passengers, and with probability $\displaystyle \frac{k-1}{k}$ he'll choose a seat of one of the passengers yet in line, thus adding $1$ to the count of unhappy passengers and reducing the problem to a $k$-seat plane (with one seat a priori misplaced). This may be described as adding to the total expectation of unhappy passengers the quantity
$\displaystyle\begin{align} \frac{1}{k}\cdot 2+\frac{k-1}{k}\left(1+F^*(k)\right)&=\frac{k+1}{k}+\frac{k-1}{k}F^*(k)\\ &=\frac{k+1}{k}+F(k). \end{align}$
Summing up for all possible choices of $k,$
$\displaystyle F(n)=\frac{1}{n}\cdot 0+\frac{1}{n}\sum_{k=1}^{n-1}\left(\frac{k+1}{k}+F(k)\right).$
or, $\displaystyle n\cdot F(n)=\sum_{k=1}^{n-1}\left(\frac{k+1}{k}+F(k)\right).$ For $F(n-1)$ we have a similar expression:
$\displaystyle (n-1)\cdot F(n-1)=\sum_{k=1}^{n-2}\left(\frac{k+1}{k}+F(k)\right).$
Substitution of the latter into the former produces $nF(n)=(n-1)F(n-1)+\frac{n}{n-1}+F(n-1),$ or $\displaystyle F(n)=F(n-1)+\frac{1}{n-1}.$ With $F(1)=0$ we are able to solve the recurrence:
$\displaystyle F(n)=\sum_{k=1}^{n-1}\frac{1}{k},$
so that $\displaystyle F(100)\approx 5.177.$
Solution 2
Passenger number $n$, counting from the end, gets his/her own seat with probability $\displaystyle \frac{n}{n+1}.$ (This is because the previous passenger was facing $n+1$ choices, and it does not matter whether the $n^{th}$ passenger's seat was among them - another previous passenger could have taken it. See also Solution 4 in the predecessor problem.) Define the random variable $X(n)$ that takes value $1$ with probability $\displaystyle \frac{n}{n+1}$ and value $0$ with probability $\displaystyle \frac{1}{n+1}.$ Then the expected value $\displaystyle E(X(n))=\frac{n}{n+1}.$
The first passenger (to board the plane) has probability $\displaystyle \frac{1}{100}$ of being happy. Then (using the linearity of the expectation) the expected value $E(100)$ of happy passengers is $\displaystyle E(100)=\frac{1}{100}+\sum_{n=1}^{99}E(X(n))=\sum_{n=1}^{99}\frac{n}{n+1}+\frac{1}{100}\approx 94.823.$
The number of unhappy passengers is then about $100-94.823=5.177.$
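As a quick cross-check of both solutions, here is a Monte Carlo sketch of the boarding process (my own illustration; unhappy_count is a hypothetical helper and the trial count is arbitrary):

import random

def unhappy_count(n=100):
    """One boarding: passenger 0 lost the pass and sits at random; everyone
    else takes their own seat if free, otherwise a random free seat.
    Returns how many passengers end up away from their own seat."""
    free = set(range(n))
    unhappy = 0
    for p in range(n):
        if p == 0 or p not in free:
            seat = random.choice(tuple(free))
        else:
            seat = p
        free.remove(seat)
        unhappy += (seat != p)
    return unhappy

trials = 200_000
estimate = sum(unhappy_count() for _ in range(trials)) / trials
print(estimate)                           # about 5.18
print(sum(1 / k for k in range(1, 100)))  # H_99 = 5.177...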
Acknowledgment
This is a follow-up on the previous problem; the problem above was suggested to me by Konstantin Knop and to him by Alexander Pipersky. Konstantin also supplied his solution to the problem (Solution 1).
|
Index 1 fixed points of orientation reversing planar homeomorphisms
DOI: http://dx.doi.org/10.12775/TMNA.2015.044
Abstract
Let \(U \subset {\mathbb R}^2\) be an open subset, \(f\colon U \rightarrow f(U) \subset {\mathbb R}^2\) be an orientation reversing homeomorphism and let \(0 \in U\) be an isolated, as a~periodic orbit, fixed point. The main theorem of this paper says that if the fixed point indices \(i_{{\mathbb R}^2}(f,0)=i_{{\mathbb R}^2}(f^2,0)=1\) then there exists an orientation preserving dissipative homeomorphism \(\varphi\colon {\mathbb R}^2 \rightarrow {\mathbb R}^2\) such that \(f^2=\varphi\) in a~small neighbourhood of \(0\) and \(\{0\}\) is a~global attractor for \(\varphi\). As a corollary we have that for orientation reversing planar homeomorphisms a~fixed point, which is an isolated fixed point for \(f^2\), is asymptotically stable if and only if it is stable. We also present an application to periodic differential equations with symmetries where orientation reversing homeomorphisms appear naturally.
Keywords
Fixed point index, Conley index, orientation reversing homeomorphisms, attractors, stability
|
Hi, can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge for such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration; that might help heavily reduce the parameters needed to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I always can talk about things in a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream has still yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder: is the space of all coordinate choices larger than that of all possible moves in Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame drag? (one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge yet.
So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes GWs would interfere just like light wave.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like with water waves and light, will we see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, space-time would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like.
if**
Pardon, I just spend some naive-philosophy time here with these discussions**
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying.
My opinion is that I need you kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. though back in high school, regardless of language, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-spacebar indentation convention
@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write matlab code online (for free)? Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's pcs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the pcs in the computer room, but if I connect to the server of the university - which means running another environment remotely - I found an older version of matlab). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
|
NTSGrad Spring 2018/Abstracts
This page contains the titles and abstracts for talks scheduled in the Spring 2018 semester. To go back to the main NTSGrad page, click here.
Jan 23
Solly Parenti Rankin-Selberg L-functions
What do you get when you cross an Eisenstein series with a cuspform? An L-function! Since there's no modular forms course this semester, I will try to squeeze in an entire semester's course on modular forms during the first part of this talk, and then I'll explain the Rankin-Selberg method of establishing analytic continuation of certain L-functions.
Jan 30
Wanlin Li Intersection Theory on Modular Curves
My talk is based on the paper by François Charles with title "FROBENIUS DISTRIBUTION FOR PAIRS OF ELLIPTIC CURVES AND EXCEPTIONAL ISOGENIES". I will talk about the main theorem and give some intuition and heuristic behind it. I will also give a sketch of the proof.
Feb 6
Dongxi Ye Modular Forms, Borcherds Lifting and Gross-Zagier Type CM Value Formulas
During the course of past decades, modular forms and Borcherds lifting have been playing an increasingly central role in number theory. In this talk, I will partially justify these by discussing some recent progress on some topics in number theory, such as representations by quadratic forms and Gross-Zagier type CM value formulas.
Feb 20
Ewan Dalby The Cuspidal Rational Torsion Subgroup of J_0(p)
I will define the cuspidal rational torsion subgroup for the Jacobian J_0(N) of the modular curve X_0(N) and try to convince you that in the case of J_0(p) it is cyclic of order (p-1)/gcd(p-1,12).
Feb 27
Brandon Alberts A Brief Introduction to Iwasawa Theory
A bare bones introduction to the subject of Iwasawa theory, its main results, and some of the tools used to prove them. This talk will serve as both a small taste of the subject and a prep talk for the upcoming Arizona Winter School.
Mar 13
Solly Parenti Do You Even Lift?
Theta series are generating functions of the number of ways integers can be represented by quadratic forms. Using theta series, we will construct the theta lift as a way to transfer modular(ish) forms between groups.
Mar 20
Soumya Sankar Finite Hypergeometric Functions: An Introduction
Finite hypergeometric functions are finite field analogues of classical hypergeometric functions that come up in analysis. I will define these and talk about some ways in which they are useful in studying important number theoretic questions.
Apr 3
Brandon Alberts Certain Unramified Metabelian Extensions Using Lemmermeyer Factorizations
We use conditions on the discriminant of an abelian extension [math]K/\mathbb{Q}[/math] to classify unramified extensions [math]L/K[/math] normal over [math]\mathbb{Q}[/math] where the (nontrivial) commutator subgroup of [math]\text{Gal}(L/\mathbb{Q})[/math] is contained in its center. This generalizes a result due to Lemmermeyer stating that the quadratic field of discriminant [math]d[/math], [math]\mathbb{Q}( \sqrt{d})[/math], has an unramified extension [math]M/\mathbb{Q}( \sqrt{d})[/math] normal over [math]\mathbb{Q}[/math] with [math]\text{Gal}(M/\mathbb{Q}( \sqrt{d})) = H_8[/math] (the quaternion group) if and only if the discriminant factors [math]d = d_1 d_2 d_3[/math] into a product of three coprime discriminants, at most one of which is negative, satisfying [math]\left(\frac{d_i d_j}{p_k}\right) = 1[/math] for each choice of [math]\{i, j, k\} = \{1, 2, 3\}[/math] and prime [math]p_k | d_k[/math].
Apr 10
Niudun Wang Nodal Domains of Maass Forms
Hecke-Maass cusp forms on modular surfaces produce nodal lines that divide the surface into disjoint nodal domains. I will briefly talk about this process and estimate the number of nodal domains as the eigenvalues vary.
Apr 17
Qiao He An Introduction to Automorphic Representations
Automorphic representation is a powerful tool to study L-functions. For me, Tate's marvelous thesis is the real beginning of the whole theory. So I will start with Tate's thesis, which is really the automorphic representation of [math]GL_1[/math]. Then I will talk about how to generalize Tate's idea to higher dimensions and explain some ideas behind Langlands program. If there is still time left, I will also mention the trace formula and use it to prove the classical Poisson summation formula.
Apr 23
Iván Ongay Valverde Definability of Frobenius Orbits and a Result on Rational Distance Sets
In this talk I will present a paper by Héctor Pastén. We will talk about the meaning of definability in a ring and how having a formula that identifies Frobenius orbits can help you show an analogous case of Hilbert's tenth problem (the one asking for an algorithm that tells you if a diophantine equation is solvable or not). Finally, if time permits, we will do an application that solves the existence of a dense set in the plane with rational distances, assuming some form of the ABC conjecture. This last question was proposed by Erdös and Ulam.
Apr 24
Brandon Boggess Moving from Local to Global
What do problems over local fields tell us about global problems?
May 1
Qiao He An Introduction to Automorphic Representations - Part II
Last time I talked about Tate's thesis, which is actually the theory of automorphic representation of GL_1. This time I will continue. First, I will give the definition of automorphic representation, and use Hecke characters and modular forms to motivate the definition. Then I will explain some classical results about automorphic representation, and discuss how automorphic representations are related to L-functions.
May 8
Sun Woo Park Parametrization of elliptic curves by Shimura curves
Let f be a weight-2 newform on [math]\Gamma_0(N)[/math]. Given a fixed isogeny class of semistable elliptic curves over [math]\mathbb{Q}[/math], for some [math]N[/math] there exists a distinguished element [math]A[/math] of the isogeny class such that [math]A[/math] is the strong modular curve attached to f. In fact, [math]A[/math] is a quotient of [math]J_0(N)[/math] by an abelian variety, from which we can obtain a covering map [math]\pi: X_0(N) \rightarrow A [/math]. Based on Ribet and Takahashi’s paper, I will discuss the properties of the covering map as well as its generalization to Shimura curves.
|
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the '$\overline F$ is algebraic over $F$' condition from the definition of algebraic closure, do we get a different notion?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition, or is it obtained from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, so $a_n z^n$ is absolutely summable, so $\sum a_n z^n$ is summable
Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\,ds$ for all $n \geq 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$ for all $t \in [0,\frac{1}{2}]$.
Can you give a hint?
My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$.
If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
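For completeness, a standard bound that avoids the ratio test (a sketch; here $M=\sup_{[0,1/2]}|g|$):

By induction from $g_{n+1}(t)=\int_0^t g_n(s)\,ds$:
$$ |g_n(t)| \le \frac{M\,t^{n-1}}{(n-1)!} \quad\Longrightarrow\quad |n!\,g_n(t)| \le M\,n\,t^{n-1} \to 0 \quad (0 \le t \le \tfrac12), $$
since $n\,t^{n-1}\to 0$ for $0\le t<1$.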
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of linearly independent functions from the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients equal to zero), I get a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve these equations for the coefficients, I look at the secular determinant, which must be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values of the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question, but it strikes me that a direct solution of the equations can be circumvented, and the values of the functional obtained directly, by using the condition that the determinant is zero. I wonder if there is something deeper in the background, a more general principle so to speak.
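What is described looks like the Rayleigh-Ritz method: for a quadratic functional, the stationarity conditions over a linear ansatz reduce to a generalized matrix eigenvalue problem, and the vanishing secular determinant det(H - lambda S) = 0 is exactly its characteristic equation, whose roots are the permissible values of the functional. A minimal numeric sketch (the matrices below are invented for illustration):

import numpy as np
from scipy.linalg import eigh

# Toy data: H is the bilinear form of the functional evaluated on the
# basis functions; S is their overlap matrix (identity if orthonormal).
H = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 3.0]])
S = np.eye(3)

# Stationarity of <c,Hc>/<c,Sc> gives H c = lambda S c; eigh returns all
# roots of det(H - lambda S) = 0 without solving for coefficients first.
vals, vecs = eigh(H, S)
print("stationary values of the functional:", vals)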
If $x$ is a prime number and there exists a number $y$ which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies digitsum(z) = digitsum(x). For example, $x=13$, $y=31$ gives $z=22$, a palindrome with digit sum $4=$ digitsum$(13)$.
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!
|
Search
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Quadratic programming (QP) involves minimizing or maximizing an objective function subject to bounds, linear equality, and inequality constraints. Example problems include portfolio optimization in finance, power generation optimization for electrical utilities, and design optimization in engineering.
Quadratic programming is the mathematical problem of finding a vector \(x\) that minimizes a quadratic function:
\[\min_{x} \left\{\frac{1}{2}x^{\mathsf{T}}Hx + f^{\mathsf{T}}x\right\}\]
Subject to the constraints:
\[\begin{eqnarray} Ax \leq b & \quad & \text{(inequality constraint)} \\ A_{eq}x = b_{eq} & \quad & \text{(equality constraint)} \\ lb \leq x \leq ub & \quad & \text{(bound constraint)} \end{eqnarray}\]
The following algorithms are commonly used to solve quadratic programming problems:
Interior-point-convex: solves convex problems with any combination of constraints.
Trust-region-reflective: solves bound-constrained or linear-equality-constrained problems.
For more information about quadratic programming, see Optimization Toolbox™.
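For illustration (not tied to any particular toolbox), the equality-constrained case can be solved in closed form from its KKT system; the problem data below are made up:

# Solve min 0.5 x'Hx + f'x  s.t.  Aeq x = beq  via the KKT system
#   [ H    Aeq' ] [ x      ]   [ -f  ]
#   [ Aeq  0    ] [ lambda ] = [ beq ]
import numpy as np

H   = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite here
f   = np.array([-2.0, -5.0])
Aeq = np.array([[1.0, 1.0]])               # x1 + x2 = 1
beq = np.array([1.0])

n, m = H.shape[0], Aeq.shape[0]
KKT = np.block([[H, Aeq.T], [Aeq, np.zeros((m, m))]])
rhs = np.concatenate([-f, beq])
sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:n], sol[n:]
print("x* =", x, " multiplier =", lam)   # x* = [-0.25, 1.25]

Inequality and bound constraints are what the interior-point and trust-region algorithms listed above handle on top of this linear-algebra core.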
|
I knew since I was ten (we had quite a comprehensive curriculum at school) that the shortest day of the year falls on December 22nd. What I didn’t ponder until very recently was whether it was also the day of the latest sunrise (and, consequently, the earliest sunset).
While it may seem like a natural consequence of December 22nd being the shortest day, it doesn't necessarily have to be true. If we model the times of sunrise and sunset as two sine waves, \(a(t) = A \sin(t+\alpha)+K\) and \(b(t) = B \sin(t+\beta)+L\), then the shortest day is the \(t_0\) that minimizes the difference (we can drop the constants):
\[B \sin(t+\beta) - A \sin(t+\alpha)\]
This means that
\[\frac{d(b(t)-a(t))}{dt} = 0 \text{ at } t_0 \;\Rightarrow\; B \cos(t_0+\beta) - A \cos(t_0+\alpha) = 0\]
We need to show that this equality may hold (for some values of \(α\), \(β\), \(A\) and \(B\)) even if one of the waves is not minimized at \(t_0\). Let
\[\begin{align}\frac{da(t)}{dt} = P \neq 0 \text{ at } t_0 &\Rightarrow A \cos(t_0+\alpha) = P\\ B \cos(t_0+\beta) = A \cos(t_0+\alpha) &= P\\ \cos(t_0+\beta) &= \frac{P}{B}, \text{ where } -\frac{A}{B} \leq \frac{P}{B} \leq \frac{A}{B}\end{align}\]
We can always find some values of \(A\) and \(B\) such that \(\frac{P}{B}\) is between -1 and 1, and hence the equation will be satisfied for some values of \(t_0\) and \(\beta\).
We can also take a short route and recall that a linear combination of two sine waves of the same frequency, but not necessarily the same phase, is still a sine wave. Its phase is a function of the difference in phases of the two waves. The value of \(t_0\) that minimizes the resulting wave will not necessarily minimize either of the two input waves, because the three phases are different (and do not differ by a multiple of π).
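A quick numeric illustration of this point (all wave parameters below are invented):

import numpy as np

# Sunrise/sunset modelled as sine waves of equal period but different
# amplitude and phase; day length is their difference.
t = np.linspace(0.0, 2.0 * np.pi, 100001)
A, alpha = 1.0, 0.3      # illustrative wave a(t)
B, beta  = 1.3, -0.4     # illustrative wave b(t)

a = A * np.sin(t + alpha)
b = B * np.sin(t + beta)

# The three extremizers differ, as the phase argument predicts.
print("argmin of b - a:", t[np.argmin(b - a)])
print("argmin of a    :", t[np.argmin(a)])
print("argmax of b    :", t[np.argmax(b)])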
In fact, if you look at the sunrise and sunset times in Connecticut around this December, the shortest day, unsurprisingly, falls on December 22nd, but the latest sunrise is on January 4th 2010 and the earliest sunset is on December 8th.
This is great news: it means that starting on December 8th (and not the 22nd), it will finally start getting darker later and later!
|
Presentation of the problem :
We have a uniform, homogeneous, isotropic dielectric sphere in an electrostatic field.
To solve this problem, we remark that we have an azimuthal symmetry. So the potential of the problem is $V(r, \theta)$.
Because we are in a homogeneous, isotropic dielectric medium, we have a Laplace equation for the potential inside and outside of this sphere:
$$ \Delta V=0$$
By assuming $V=f(r)g(\theta)$, we end up with a potential of the following form:
$$ V(r,\theta)=\sum_{l=0}^{\infty} (A_l r^l+B_l r^{-(l+1)} )P_l (\cos(\theta)) $$
This solution is given in Plasmonics: Fundamentals and Applications by Stefan Alexander Maier.
My problem :
The step I don't totally understand is where they decompose the potential into two contributions: $V_{in}$ inside the sphere and $V_{out}$ outside.
Then they say that, as the potential can't diverge at $r=0$, we have:
$$ V_{in}=\sum_{l=0}^{\infty} (A_l r^l)P_l (\cos(\theta)) $$
And
$$ V_{out}=\sum_{l=0}^{\infty} (C_l r^l+D_l r^{-(l+1)})P_l (\cos(\theta)) $$
What I don't understand is: how do we know we have to write two contributions of the potential?
If I had to solve the problem by myself, I would only write one potential $V$ as $$ V(r,\theta)=\sum_{l=0}^{\infty} (A_l r^l+B_l r^{-(l+1)} )P_l (\cos(\theta)) $$
And then, saying that the potential can't diverge at $r=0$, I would set $B_l=0$.
But of course that would be wrong (not enough unknowns to satisfy all the conditions).
But I don't see, from a math point of view, why I should write two different potentials. Indeed, the potential $V$ satisfies the same equation $\Delta V=0$ inside and outside of the sphere, so how can I know that I have to write it in two different ways, $V_{in}$ and $V_{out}$?
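A hedged way to see it: the dielectric constant jumps at the sphere surface $r=R$, so Laplace's equation holds on two disjoint domains (inside and outside), and the general solution on each domain carries its own coefficients. They are tied together by the matching conditions at the interface:

$$ V_{in}(R,\theta) = V_{out}(R,\theta), \qquad \varepsilon_{in}\,\partial_r V_{in}\big|_{r=R} = \varepsilon_{out}\,\partial_r V_{out}\big|_{r=R}, $$

together with regularity at $r=0$ (killing the $r^{-(l+1)}$ terms inside) and the prescribed field as $r\to\infty$ (constraining the $r^l$ growth outside). A single expansion valid everywhere cannot encode the jump in $\partial_r V$ across the surface.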
|
Many texts say that an observable should be represented by a hermitian operator. That is sufficient, but not necessary. More generally, we can use any operator that can be expressed as a linear combination of mutually commuting projection operators. Such an operator is called a normal operator. A normal operator $N$ is most simply characterized by the fact that it commutes with its own adjoint: $N^*N = NN^*$. Examples of normal operators include hermitian operators, individual projection operators, and unitary operators.
Here's an example to illustrate this idea. If $P_1,P_2,...$ are mutually orthogonal projection operators, then the operator$$ A = a_1 P_1 + a_2 P_2 + \cdots$$is a normal operator for any choice of (possibly complex) coefficients $a_k$. The eigenvectors of this operator are mutually orthogonal because the projection operators $P_k$ are. If the coefficients are real, then $A$ is self-adjoint (hermitian). But what really matters in quantum theory is the projection operators $P_k$. These are what determine the various possible outcomes of the measurement and the relative frequencies of those outcomes. The coefficients $a_k$ are just convenient labels for the outcomes, making it possible to define statistics like mean values and standard deviations.
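A small numeric check of this example (the vectors and labels below are arbitrary):

import numpy as np

# Two mutually orthogonal projectors built from orthonormal vectors.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
P1 = np.outer(v1, v1)
P2 = np.outer(v2, v2)

# Complex "labels" for the outcomes make A normal but not hermitian.
A = (2 + 1j) * P1 + (0.5 - 3j) * P2

print(np.allclose(A @ A.conj().T, A.conj().T @ A))  # True: A is normal
print(np.allclose(A, A.conj().T))                   # False: not hermitian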
Using only self-adjoint operators is sufficient, because allowing complex coefficients is only allowing a more general way of labelling the various implied projection operators. Nature doesn't care how we label things.
In the first paragraph, I said "mutually commuting projection operators", which is more general than "mutually orthogonal projection operators." The latter implies the former, but not conversely. The former is needed in order to include observables like the position operator in non-relativistic quantum mechanics, which does not have (normalizable) eigenvectors. However, it still implicitly defines projection operators like $$ P\psi(x)=\begin{cases} \psi(x)&\text{ if }x\in R\\0 &\text{otherwise},\end{cases}$$where $R$ is some region of space. We can think of the usual position operator $X$ as a convenient single-operator representation of this whole algebra of mutually commuting projection operators. It's the projection operators that we use in the measurement postulates. The fact that any normal operator implicitly defines such a set of mutually commuting projection operators is the subject of the spectral decomposition theorem.
Given any observable $A$, if $P$ is one of the projection operators that it implicitly defines (through the spectral decomposition theorem), then a measurement of $A$ will result in a state $|\psi'\rangle$ that satisfies either $P|\psi'\rangle = |\psi'\rangle$ or $(1-P)|\psi'\rangle=|\psi'\rangle$. (I'm not trying to advocate any particular interpretation of quantum theory here; I'm just trying to be concise.) In terms of the state $|\psi\rangle$ prior to the measurement, the relative frequencies of these two possible outcomes are $\psi(P)$ and $\psi(1-P)$, respectively, using the abbreviation $$ \psi(\cdots)\equiv\frac{\langle\psi|\cdots|\psi\rangle}{\langle\psi|\psi\rangle}.$$The point here is that we don't need to worry if $A$ doesn't have a complete set of (normalizable) eigenstates. As long as $A$ is a normal operator, we can still use the corresponding projection operators to make useful predictions, because each of the projection operators has (normalizable) eigenvectors. As long as they all commute with each other, we can think of this whole set of projection operators as a bunch of mutually compatible observables, each of which has only two possible outcomes.
|
The full picture is attained by solving for the modes of the whole waveguide (in theory for the refractive index profile right out to and beyond the jacket) by the methods described, for example, in Chapter 12 through 15 of:
A. W. Snyder and J. D. Love, "Optical Waveguide Theory", Chapman and Hall, 1983.
and the spectrum of effective indices ($c$ divided by the axial propagation speed of the mode in question) for the bound modes lies between the maximum and minimum index of the refractive index profile. This agrees with George Smith's answer, for example, which gives a good intuitive description of the range in terms of a ray picture.
For many fibres, though, the range of bound mode effective indices is much narrower than the other answers would imply. This is because the modes in question are confined very tightly near the core, so only this region is relevant in setting the effective index and, especially for single mode fibres, the difference between core and cladding indices is miniscule. For a step index profile fibre, the fibre $V$ parameter:
$$V = \frac{2\,\pi\,\rho}{\lambda}\sqrt{n_{core}^2-n_{clad}^2}$$
where $\rho$ is the core radius, $\lambda$ the free-space light wavelength and $n_{core},\,n_{clad}$ the core and cladding refractive indices, must be less than the first zero of the first-kind, zeroth-order Bessel function $J_0$, about 2.405, if the fibre is to be single-moded. For core radii that are readily manufactured (to wit, more than 1 micron), this means that $n_{core}$ and $n_{clad}$ typically differ by less than one percent. Let's plug in $\lambda = 1550\mathrm{nm}$ and $\rho=1\mu\mathrm{m}$ with $n_{clad} = 1.48$ (pure silica); then we find that the maximum $n_{core}$ we can have for $V\leq2.405$ is 1.59. This is an extreme example. More typically, for this wavelength, we would have $\rho = 5\mu\mathrm{m}$, when $n_{core}\leq1.4847$, a difference between $n_{core}$ and $n_{clad}$ of $0.3\%$.
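A short sketch reproducing these two numbers (the function name and arguments are mine):

import numpy as np

def max_core_index(rho_um, lam_um, n_clad, V_c=2.405):
    """Largest n_core keeping V = (2 pi rho/lambda) sqrt(n_core^2 - n_clad^2) <= V_c."""
    na_max = V_c * lam_um / (2.0 * np.pi * rho_um)  # max numerical aperture
    return np.sqrt(n_clad**2 + na_max**2)

print(max_core_index(1.0, 1.55, 1.48))  # ~1.59, the extreme 1-micron core
print(max_core_index(5.0, 1.55, 1.48))  # ~1.4847, the typical 5-micron core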
So, for a single mode optical fibre, you can almost always say that the propagation speed is $c$ divided by the cladding index, and your error in assuming so will typically be less than half a percent.
|
Electronic Journal of Probability, Volume 8 (2003), paper no. 18, 26 pp. Approximation at First and Second Order of $m$-order Integrals of the Fractional Brownian Motion and of Certain Semimartingales. Abstract:
Let $X$ be the fractional Brownian motion of any Hurst index $H\in (0,1)$ (resp. a semimartingale) and set $\alpha=H$ (resp. $\alpha=\frac{1}{2}$). If $Y$ is a continuous process and if $m$ is a positive integer, we study the existence of the limit, as $\varepsilon\rightarrow 0$, of the approximations $$ I_{\varepsilon}(Y,X) :=\left\{\int_{0}^{t}Y_{s}\left(\frac{X_{s+\varepsilon}-X_{s}}{\varepsilon^{\alpha}}\right)^{m}ds,\,t\geq 0\right\} $$ of the $m$-order integral of $Y$ with respect to $X$. For these two choices of $X$, we prove that the limits are almost sure, uniformly on each compact interval, and are in terms of the $m$-th moment of the standard Gaussian random variable. In particular, if $m$ is an odd integer, the limit equals zero. In this case, the convergence in distribution, as $\varepsilon\rightarrow 0$, of $\varepsilon^{-\frac{1}{2}} I_{\varepsilon}(1,X)$ is studied. We prove that the limit is a Brownian motion when $X$ is the fractional Brownian motion of index $H\in (0,\frac{1}{2}]$, and it is in terms of a two-dimensional standard Brownian motion when $X$ is a semimartingale.
Article information: Electron. J. Probab., Volume 8 (2003), paper no. 18, 26 pp. First available in Project Euclid: 23 May 2016. Permanent link: https://projecteuclid.org/euclid.ejp/1464037591. DOI: 10.1214/EJP.v8-166. MR2041819. Zbl 1063.60079.
Gradinaru, Mihai; Nourdin, Ivan. Approximation at First and Second Order of $m$-order Integrals of the Fractional Brownian Motion and of Certain Semimartingales. Electron. J. Probab. 8 (2003), paper no. 18, 26 p. doi:10.1214/EJP.v8-166. https://projecteuclid.org/euclid.ejp/1464037591
|
As shown in the MWE below, \sum (especially when the index and lower/upper bounds are defined) causes the slanted part of the \sqrt sign to become vertical. Is there a way to preserve the slant?
\documentclass{article}
\begin{document}
\[\sqrt{{x^i}}\]
\[\sqrt{\sum_{i = 1}{x^i}}\]
\[\sqrt{\sum^{n}{x^i}}\]
\[\sqrt{\sum_{i = 1}^{n}{x^i}}\]
\end{document}
|
Suppose we have a domain $\Omega\subset \mathbb{R}^n$ which is homeomorphic to the unit ball $B(0,1)\subset \mathbb{R}^n$ and such that $\partial \Omega$ is of class $C^1$ (technically, this means that every point of the boundary has a neighbourhood admitting a $C^1$-diffeomorphism to the half-space in dimension $n$).
A map $f\colon (X,d_X)\to (Y,d_Y)$ between metric spaces is said to be bi-Lipschitz if there is a constant $K>0$ such that $$ \frac{1}{K} d_X(x_1,x_2) \leq d_Y(f(x_1), f(x_2)) \leq K d_X(x_1,x_2)$$ for all $x_1,x_2\in X$. The spaces $X$ and $Y$ are said to be Lipschitz equivalent if there is a surjective bi-Lipschitz map between them (any bi-Lipschitz map is necessarily injective).
I wonder whether in this case the domain $\Omega\subset \mathbb{R}^n$ is Lipschitz equivalent to the unit ball $B(0,1)\subset \mathbb{R}^n$. If true, is the $C^1$ condition necessary, or merely sufficient?
EDIT: My original question did not assume that the domain was homeomorphic to the ball. However, in this case it is clearly false, so I added this condition.
|
Here is the question $f:X \rightarrow Y$ and $g: Y \rightarrow Z$ are functions and $g \circ f$ is surjective, is $g$ surjective?
My proof: If $g \circ f$ is surjective, then $\forall z \in Z \; \exists x \in X \; \mid (g \circ f)(x)=z$. Suppose $f$ is surjective; then $\forall y \in Y \; \exists x \in X\; \mid f(x)=y$. By def. of $(g\circ f)(x)=z$ we have $g(f(x))=z$, and since $f(x)=y$ we have $g(y)=z$, which implies surjectivity; therefore $g$ is surjective.
Is this the correct way to prove that $g$ is surjective? Or can I not assume surjectivity of $f$?
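For comparison, here is a sketch of an argument that avoids any assumption on $f$ (no such assumption is given, and none is needed):

$$ \forall z \in Z\ \exists x \in X:\ (g\circ f)(x) = z \;\Longrightarrow\; g\bigl(\underbrace{f(x)}_{=:\,y}\bigr) = z, $$

so every $z$ has a preimage $y = f(x) \in Y$ under $g$, and $g$ is surjective.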
|
You wrote, "I've managed to get this using the tabular environment, but it's obviously not the right way to do this." You were actually quite close! The main change I'd recommend you make is switching from a tabular environment to an array environment. The following screenshot shows the effect of this change. The third "take" involves applying further ...
The question asks how to write equations, so this is not an answer in that sense. Here, I suggest a method on how to draw them. Since + is an operation of coalescence, in general, and = indicates identity (such that C is A and B combined together), and since everything algebraic can be represented visually, the TikZ solution is presented by: MWE\...
The systeme package allows you to do this. The command \sysdelim.. is used here to remove the braces that are placed by default.\documentclass{article}\usepackage{systeme}\begin{document}\sysdelim..\systeme{4x+7=7x+2,7=3x+2,5=3x,\frac{5}{3}=x}\end{document}
You can use TABstacks. Shown here in 3 ways, depending on the desired equation-number vertical alignment.\documentclass{article}\usepackage{tabstackengine}\stackMath\setstackgap{L}{14pt}\begin{document}\begin{equation}\alignCenterstack{a+b+c+d+e+&f+g+h \\=& j + k + l + m +n\\=& j' + k' + l'+ m' +n'}\end{equation}\begin{...
Here are three possibilities. In the first, you align the = signs with another symbol of the first line. The second uses the optional argument of the \MoveEqLeft command from mathtools, and the third nests the aligned environment in a gathered environment (to fine-tune the placement of w= w.r.t. the first line, you can add to the latter some \hspace)....
The fact that the steepness of the surds increases with the overall size of the square-root symbols is not a flaw in the design of the math font. Instead, it embodies a long-standing typographic tradition that has held up pretty well over the decades (and probably even centuries).If you can't stand the "vertical look" of the taller surds, do contemplate ...
With the help of cases and aligned:\documentclass{article}\usepackage{amsmath}\begin{document}\begin{equation}\nonumberA =\begin{cases}A_{1} & m=1, n=1 \\A_{2} & 2 \leq m \leq M, n=1 \\A_{3} & m=1, 2 \leq n \leq N \\A_{4} & \begin{aligned} &2 \leq m \leq M, \\ &2 \leq n \leq N \end{aligned}\...
Just change the definition of \theequation when the supplementary material starts.\documentclass{article}\usepackage{mathtools}\numberwithin{equation}{section}\begin{document}\section{Main material}We have an equation\begin{equation}\label{main}0=0\end{equation}that will be used in~\eqref{suppl}.\section{Supplementary material}\renewcommand{...
A simple solution with \newtagform and \usetagform, from mathtools. However, note that cross-referencing will have to be done by hand:\documentclass{article}\usepackage[utf8]{inputenc}\usepackage{mathtools}\newtagform{supplementary}[S.]()\counterwithin*{equation}{section}\begin{document}\begin{equation} \label{eq}\bar\gamma_M:=\frac{1}{\alpha+\...
While LaTeX and MathJax use similar syntax, their underlying engines are very different. Whereas it's possible to force a line break using the approach shown in your posting, that approach is not syntactically valid in a LaTeX document.If all you need to achieve is show two displayed equations, one below the other, in a LaTeX document, I suggest you (a) ...
This error was created by the lineno package in the style file. I had to simply go into the style file and un-comment \RequirePackage{lineno} and it was fixed. Note: while not a general fix, if you are working on a NeurIPS submission, this could be one of the reasons for this error. This is what I did finally (didn't need the line numbers - check Output PDF on ...
You can put the combined contents in a tcolorbox and draw the strikeout on top using the finish key. This answer is based on the answer to Strike a paragraph of text by Ulrike Fischer. I added the pattern fill, a custom pattern with wider lines, and a new tcolorbox environment to make it easier to use the box multiple times. Note that it is not possible to ...
The commands only work in display math mode. Try:$$ E = m c^2 \eqno (1) $$$$ E = m c^2 \leqno (2) $$Your examples cannot work as the definition of \endequation already uses \eqno:% from latex.ltx\def\equation{$$\refstepcounter{equation}}\def\endequation{\eqno \hbox{\@eqnnum}$$\@ignoretrue}So you can define your own environment (full MWE):\...
\leqno is a primitive and not used in latex. The option in latex redefines \@eqnnum:\documentclass{article}\makeatletter\newcommand\useleqno{\renewcommand\@eqnnum{\hb@xt@.01\p@{}%\rlap{\normalfont\normalcolor\hskip -\displaywidth(\theequation)}}}\begin{document}\begin{equation}a + b = c % ...
What about the following. The equation numbers were achieved using \leqnomode from here (Red lines indicate text width:\documentclass{article}\usepackage{mathtools}\usepackage{amsmath}\makeatletter\newcommand{\leqnomode}{\tagsleft@true}\newcommand{\reqnomode}{\tagsleft@false}\makeatother\begin{document}\leqnomode\renewcommand\theequation{\alph{...
You could use a top-aligned aligned (pun intended) environment:Observe that aligned, unlike flalign*, does not initiate and terminate math mode by itself; instead,aligned must be embedded in a math-mode group (here: the group that begins and ends with $ symbols).\documentclass{article}\usepackage{amsmath}\begin{document}\begin{itemize}\item[(a)]...
A solution with alignat, and some improvements (in particular, fractional coefficients in medium size look better, in my opinion):\documentclass[11pt]{book}\usepackage{mathtools, amssymb, nccmath}\begin{document}\begin{alignat*}{2}& \mathcal{L}_{a2c} & &= \mathbb{E}_{s_t,a_t\sim\pi_{\theta}}\Bigl[\mathcal{L}_{a2c_\text{policy}} + \mfrac{...
One option is to place each of the left-hand sides of te equations in equally-sized boxes, and <align> each element to the left. \eqmathbox[LHS][l]{<lhs>}, as defined below, will help with that:\documentclass{article}\usepackage{eqparbox,xparse,amsmath,amsfonts}% https://tex.stackexchange.com/a/34412/5764\makeatletter% \eqmathbox[<tag&...
You need to capture the widths of the widest elements in each of your equations. Then you can use those widths to impose alignment between elements that aren't as wide.Below I use a slight modification to eqparbox via \eqmathbox[<tag>][<align>]{<math>} which stores the maximum width of each <tag>ged box with varying <math> ...
I propose this other solution, with the Bmatrix environment from mathtools and various small improvements:\documentclass[]{article}\usepackage{amsmath,mathtools, nccmath}\usepackage{siunitx}\begin{document}\section{Equations}\begin{fleqn}\sisetup{exponent-product =\,}\begin{equation}\begin{aligned}& PMV =0.303\,e^{-0.036M}+0.028 \times{}\\&...
I squeezed the operator spacing slightly in the bracketed termI think this does what you ask, although the layout seems a bit confusing to me, if I understand its meaning correctly it would be clearer not to use the large brackets and let the terms wrap over several lines at the outer level.\documentclass[]{article}\usepackage{amsmath}\begin{document}...
You can use the mini! environment, from the dedicated package optidef, which defines several possible layouts. By default, the objective function part is the first subequation, but you can easily have the parent equation number using \tag{some number}:\documentclass[conference]{IEEEtran}\usepackage{amsmath}\usepackage[short]{optidef}\begin{document}\...
The second column in cases is typeset in math mode and is typically a short condition; when text is involved, \text can be used, but this doesn't split copy across lines.You can do it with a \parbox. On the other hand, the result is not pretty. I suggest using a shorthand that can be explained just below the equation.\documentclass[a4paper, 10pt, ...
As you've (re)discovered, the cases environment doesn't automatically line-wrap long explanations. As a remedy, I suggest you encase the contents of the cases environment in a custom array environment which allows line-wrapping. Note that it's not necessary to employ \text directives for the explanatory textual material.[For some reason, I can't upload a ...
Thank you very much to @David Carlisle and @Marijn that with your comments I have had another possibility:\documentclass{book}\usepackage[top=2.5cm,bottom=2.2cm,left=3.2cm,right=1.5cm,headsep=10pt,a4paper]{geometry}\usepackage{mathtools,amssymb}\usepackage[svgnames, dvipsnames, table, x11names]{xcolor}\usepackage{pifont}\...
I wouldn't really expect these to be aligned as they have different sizes, however you can get a bit closer if you use m rather than p columns. I use tabular here as tabularx isn't really helping as you know in advance given XX that there will be two equal columns, so you may as well specify that directly.\documentclass{book}\usepackage[top=2.5cm,bottom=2....
You have two solutions to use every now and then \abovedisplayshortskip in the place of \abovedisplayskip: the \useshortskip command from nccmath just before entering the amsmath environment, or \SwapAboveDisplaySkip from mathtools after entering the environment. Note however that if the line above is too long, the result can be ugly.Demo:\...
The reason the three equations are centered horizontally in your code is because you're using a matrix environment, which is programmed to center the contents of each entry, using inline math mode.Switching to a cases environment takes care of left-aligning the cell contents. However, the equations continue to be typeset in inline math mode and the ...
You can use the cases environment from amsmath.\documentclass{article}\usepackage{amsmath}\begin{document}\begin{equation}\begin{cases}u_{t}=\nabla\cdot D(u_{t})\nabla u_{t}+\mu_{1}\frac{c_1}{k_1+c_1}u_1+\mu_{2}\frac{c_2}{k_2+k_{12}c_1+c_2}u_1\\c_{1,t}=\Delta c_1-\frac{\mu_{1}}{y_1}u_{1\infty} \frac{c_1}{k_1+c_1}u_{1}\\c_{2,t}...
With use of the numcases environment from the cases package:\documentclass{article}\usepackage{amsmath}\usepackage{cases}\begin{document}\begin{numcases}{a=}b(x) & for $t<t_1$ \label{eq:subeq1},\\c(x) & for $t\geq t_1$ \label{eq:subeq2}\end{numcases}From \eqref{eq:subeq1} follows \dots\end{document}
You can have this, based on empheq (which loads mathtools), but you'll lose the possibility to refer to the definition as a whole:\documentclass{article}\usepackage{empheq}\begin{document}\begin{empheq}[left={a=\empheqlbrace}]{alignat=2}b(x) \quad &\text{for } t<t_1 \label{eq:subeq1}\\c(x) \quad &\text{for } t\geq t_1 \label{eq:subeq2}\...
If it is ok to have a shared number for the two column part you can use\documentclass[a4paper]{article}\usepackage{amsmath,amssymb}\begin{document}\begin{gather}a=A\\\begin{aligned}b&=B & c&=C\\d&=D & e&=E\end{aligned}\end{gather}\end{document}Note also this is the preferred method for giving ...
Here are two possibilities: one with the aligned environment, nested in equation, and using \MoveEqLeft from mathtools; the other uses flalign and an ad hoc alignment point. Both use the medium-size fractions (\mfrac) from nccmath for the numerical fractions, as I think it looks better:\documentclass{article}\usepackage{nccmath}\usepackage{mathtools}\begin{...
Displayed equations always use the text width, whereas \begin{center} ... \end{center} uses the width of the environment it is in. You can obtain a similar feature for a displayed equation by enclosing it in a minipage with the width of the surrounding environment; this is done by using \linewidth for the width of the minipage. Edit: Use of minipage for equation may lead to ...
You can redefine the equation environment so that it would automatically adjust to the current \linewidth (thanks to @Zarko's comment) using the etoolbox package.\documentclass[12pt]{amsart}\usepackage{geometry, amsthm}\geometry{letterpaper, margin=1in}\usepackage{etoolbox}\BeforeBeginEnvironment{equation*}{\begin{minipage}{\linewidth}}\...
You can get the desired centering by using array; however, this doesn't seem the best way to lay out the equation.For the second equation, I'm not seeing why using l instead of {u,v}:\documentclass[journal]{IEEEtran}%\usepackage{cite} % is it compatible with IEEEtran?\usepackage{graphicx} % no pdftex option%\usepackage{epstopdf} % not required\...
Are you looking to achieve the following result?I must confess to having no idea what "note2" is supposed to represent in the structure of the equation.If you inspect the code below, you will notice that I replaced the \left\{ and \right. instructions and the aligned environment with a single cases environment.\documentclass{article}\usepackage{...
\text is when the subscript is textual.You should also prefer \textcolor and brace the whole \underbrace construct.\documentclass{article}\usepackage{amsmath}\usepackage{xcolor}\begin{document}\begin{equation}{% make the \underbrace an ordinary atom\underbrace{\textcolor{red}{I_j (x)} \ast \Delta_j(x,t_l,\textcolor{red}{p})}%_{I_j(x)(x-...
I see essentially two sources of trouble:LaTeX allows “Display math mode”, which is the math mode impliedby the \[...\] syntax, only inside the so-called “paragraph mode”(the name was chosen by Leslie Lamport himself), which is, roughlyspeaking, the mode LaTeX is in when it is typesetting ordinary paragraphs.Unfortunately, the main argument of a \...
Since you did not provide an MWE (Minimal Working Example, a small complete document which we can test as it is), we have no information about your document layout, the packages relevant to your equation, or any newly defined commands or math operators. Also it is not clear why you require that part of the equation use \tt fonts (btw, correct is \ttfamily) ...
Your MWE has some errors, e.g., \k, \kt, etc. I just removed the \ for compiling purposes, and the modified code is given below:\documentclass{book}\usepackage{mathtools}\begin{document}\begin{align}\nonumber(A_{1}^{5}+A_{8}^2)\tt_{m}^{2}k^{2}P_{m}^{5}&-\left(2(A_{1}^{2}+A_{2}^2)(\Delta_{a}-\omega_{m}\chi^{2}C)\omega_{m}\chi^{2}-2J^{2}\...
I propose a solution based on \DeclarePairedDelimiterXPP from mathtools:\documentclass[]{article}\usepackage[T1]{fontenc}\usepackage[utf8]{inputenc}\usepackage{mathtools}\DeclarePairedDelimiterXPP{\PR}[1]{\operatorname{Pr}}{[} {]}{}{\begin{array}{@{}l@{}}#1 \end{array}}\begin{document}\[ \PR*{ (\mathsf{pp}, \mathcal{T}) \xleftarrow{\$} \textsf{...
|
Muon Reconstruction and Identification
Part of the Springer Theses book series (Springer Theses)
Abstract
The chapter discusses the reconstruction and identification of muons with the ATLAS detector.
|
Help on this interesting integral problem ??
Note by Ritvik Choudhary 6 years, 2 months ago
Ya, it's cool. You take the square of the quantity, with one variable x and the other y; then you integrate them simultaneously, as a double integral. It's easy to evaluate once you transform it to polar coordinates. Nice one. Answer is root pi.
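Spelling out the squaring-and-polar-coordinates argument sketched above (a standard derivation):

$$ \Bigl(\int_{-\infty}^{\infty} e^{-x^2}\,dx\Bigr)^{2} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy = \int_{0}^{2\pi}\int_{0}^{\infty} e^{-r^2}\,r\,dr\,d\theta = 2\pi\cdot\tfrac12 = \pi, $$

so the integral itself is $\sqrt{\pi}$.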
lower limit is negative infinity..
is it 0
You can also look up this pdf www.stankova.net/statistics2012/doubleintegration.pdf.
Another not-so-cool way is to use the gamma function: use the substitution x = root u.
Square root of pi? (Error function)
As our function is an even function therefore split the limit of integration from -infinity to +infinity as 2 times 0 to infinity and use Gamma function by making a suitable substitution. It can also be done by converting the problem into polar form.
This function is not integrable using the methods we learn till Undergraduate college level. I don't know about what we learn in college...
Although $\int e^{-x^2}\,dx$ can't be expressed in terms of elementary functions, we can evaluate $\int_{-\infty}^{+\infty} e^{-x^2}\,dx$. Doing so yields $\sqrt{\pi}$ (for justification see http://en.wikipedia.org/wiki/Gaussian_integral#Computation).
It can be solved by the gamma function: $\Gamma(n)=\int_0^\infty e^{-x} x^{n-1}\,dx$, for $n \in \mathbb{Z}^+$!
I think the answer is 0... I think we can solve it using integration by parts.
|
Colloquia/Fall18: Mathematics Colloquium
All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated.

Spring 2018
January 29 (Monday): Li Chao (Columbia), Elliptic curves and Goldfeld's conjecture. Host: Jordan Ellenberg
February 2 (Room 911): Thomas Fai (Harvard), The Lubricated Immersed Boundary Method. Hosts: Spagnolie, Smith
February 5 (Monday, Room 911): Alex Lubotzky (Hebrew University), High dimensional expanders: From Ramanujan graphs to Ramanujan complexes. Hosts: Ellenberg, Gurevitch
February 6 (Tuesday 2 pm, Room 911): Alex Lubotzky (Hebrew University), Groups' approximation, stability and high dimensional expanders. Hosts: Ellenberg, Gurevitch
February 9: Wes Pegden (CMU), The fractal nature of the Abelian Sandpile. Host: Roch
March 2: Aaron Bertram (University of Utah), Stability in Algebraic Geometry. Host: Caldararu
March 16: Anne Gelb (Dartmouth), Reducing the effects of bad data measurements using variance based weighted joint sparsity. Host: WIMAW
April 4 (Wednesday): John Baez (UC Riverside), TBA. Host: Craciun
April 6: Edray Goins (Purdue), Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups. Host: Melanie
April 13: Jill Pipher (Brown), TBA. Host: WIMAW
April 16 (Monday): Christine Berkesch Zamaere (University of Minnesota), TBA. Hosts: Erman, Sam
April 20: Xiuxiong Chen (Stony Brook University), TBA. Host: Bing Wang
April 25 (Wednesday): Hitoshi Ishii (Waseda University), Wasow lecture, TBA. Host: Tran

Spring Abstracts

January 29 Li Chao (Columbia)
Title: Elliptic curves and Goldfeld's conjecture
Abstract: An elliptic curve is a plane curve defined by a cubic equation. Determining whether such an equation has infinitely many rational solutions has been a central problem in number theory for centuries, which led to the celebrated conjecture of Birch and Swinnerton-Dyer. Within a family of elliptic curves (such as the Mordell curve family y^2=x^3-d), a conjecture of Goldfeld further predicts that there should be infinitely many rational solutions exactly half of the time. We will start with a history of this problem, discuss our recent work (with D. Kriz) towards Goldfeld's conjecture, and illustrate the key ideas and ingredients behind this new progress.
February 2 Thomas Fai (Harvard)
Title: The Lubricated Immersed Boundary Method
Abstract: Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics.
February 5 Alex Lubotzky (Hebrew University)
Title: High dimensional expanders: From Ramanujan graphs to Ramanujan complexes
Abstract:
Expander graphs in general, and Ramanujan graphs in particular, have played a major role in computer science in the last five decades, and more recently also in pure math. The first explicit construction of bounded-degree expanding graphs was given by Margulis in the early 70s. In the mid-80s, Margulis and Lubotzky-Phillips-Sarnak provided Ramanujan graphs, which are optimal such expanders.
In recent years a high dimensional theory of expanders is emerging. A notion of topological expanders was defined by Gromov in 2010, who proved that the complete d-dimensional simplicial complexes are such. He raised the basic question of the existence of such bounded-degree complexes of dimension d>1.
This question was answered recently affirmatively (by T. Kaufman, D. Kazhdan and A. Lubotzky for d=2, and by S. Evra and T. Kaufman for general d) by showing that the d-skeleton of (d+1)-dimensional Ramanujan complexes provides such topological expanders. We will describe these developments and the general area of high dimensional expanders.
February 6 Alex Lubotzky (Hebrew University)
Title: Groups' approximation, stability and high dimensional expanders
Abstract:
Several well-known open questions, such as "are all groups sofic or hyperlinear?", have a common form: can all groups be approximated by asymptotic homomorphisms into the symmetric groups Sym(n) (in the sofic case) or the unitary groups U(n) (in the hyperlinear case)? In the case of U(n), the question can be asked with respect to different metrics and norms. We answer, for the first time, one of these versions, showing that there exist finitely presented groups which are not approximated by U(n) with respect to the Frobenius (=L_2) norm.
The strategy is via the notion of "stability": some higher-dimensional cohomology-vanishing phenomenon is proven to imply stability, and, using high dimensional expanders, it is shown that some non-residually-finite groups (central extensions of some lattices in p-adic Lie groups) are Frobenius stable and hence cannot be Frobenius approximated.
All notions will be explained. Joint work with M. De Chiffre, L. Glebsky and A. Thom.
February 9 Wes Pegden (CMU)
Title: The fractal nature of the Abelian Sandpile
Abstract: The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor.
Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation). We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area, and discuss avenues of current and future research.
March 2 Aaron Bertram (Utah)
Title: Stability in Algebraic Geometry
Abstract: Stability was originally introduced in algebraic geometry in the context of finding a projective quotient space for the action of an algebraic group on a projective manifold. This, in turn, led in the 1960s to a notion of slope-stability for vector bundles on a Riemann surface, which was an important tool in the classification of vector bundles. In the 1990s, mirror symmetry considerations led Michael Douglas to notions of stability for "D-branes" (on a higher-dimensional manifold) that corresponded to no previously known mathematical definition. We now understand each of these notions of stability as a distinct point of a complex "stability manifold" that is an important invariant of the (derived) category of complexes of vector bundles of a projective manifold. In this talk I want to give some examples to illustrate the various stabilities, and also to describe some current work in the area.
April 6 Edray Goins (Purdue)
Title: Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups
Abstract: A Belyĭ map [math] \beta: \mathbb P^1(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] is a rational function with at most three critical values; we may assume these values are [math] \{ 0, \, 1, \, \infty \}. [/math] A Dessin d'Enfant is a planar bipartite graph obtained by considering the preimage of a path between two of these critical values, usually taken to be the line segment from 0 to 1. Such graphs can be drawn on the sphere by composing with stereographic projection: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq \mathbb P^1(\mathbb C) \simeq S^2(\mathbb R). [/math] Replacing [math] \mathbb P^1 [/math] with an elliptic curve [math]E [/math], there is a similar definition of a Belyĭ map [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C). [/math] Since [math] E(\mathbb C) \simeq \mathbb T^2(\mathbb R) [/math] is a torus, we call [math] (E, \beta) [/math] a toroidal Belyĭ pair. The corresponding Dessin d'Enfant can be drawn on the torus by composing with an elliptic logarithm: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq E(\mathbb C) \simeq \mathbb T^2(\mathbb R). [/math]
This project seeks to create a database of such Belyĭ pairs, their corresponding Dessins d'Enfant, and their monodromy groups. For each positive integer [math] N [/math], there are only finitely many toroidal Belyĭ pairs [math] (E, \beta) [/math] with [math] \deg \, \beta = N. [/math] Using the Hurwitz Genus formula, we can begin this database by considering all possible degree sequences [math] \mathcal D [/math] on the ramification indices as multisets on three partitions of N. For each degree sequence, we compute all possible monodromy groups [math] G = \text{im} \, \bigl[ \pi_1 \bigl( \mathbb P^1(\mathbb C) - \{ 0, \, 1, \, \infty \} \bigr) \to S_N \bigr]; [/math] they are the "Galois closure" of the group of automorphisms of the graph. Finally, for each possible monodromy group, we compute explicit formulas for Belyĭ maps [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] associated to some elliptic curve [math] E: \ y^2 = x^3 + A \, x + B. [/math] We will discuss some of the challenges of determining the structure of these groups, and present visualizations of group actions on the torus.
This work is part of PRiME (Purdue Research in Mathematics Experience) with Chineze Christopher, Robert Dicks, Gina Ferolito, Joseph Sauder, and Danika Van Niel with assistance by Edray Goins and Abhishek Parab.
|
Introduction:
Define a "Bit Map" to be a matrix whose entries can only be $0$ or $1$. Then numbers above and beside each column and row indicates how many entries are "filled" with a one.
For example consider,
$$ \begin{array}{c|lcr} & 2 & 2 & 2 \\ \hline 2 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 3 & 1 & 1 & 1 \end{array} $$
The $3$ by $3$ matrix depicted above, consists of zeros and ones, and the numbers outside tell you how many ones are in each row/column.
The real problem is finding the entries when all that is given is the numbers above, i.e., the number of ones in each column and row, and none of the binary entries. Keep in mind that the inverse solution is NOT unique, although there are some interesting symmetries.
Conjecture:
The method to solve for the binary entries, given the number of ones in each row and column, is laid out as follows. Each entry is assigned a value according to $$g(r,c,i)={{x_c \cdot y_r} \over {o_c \cdot o_r}}$$
where
$r$ is the row number
$c$ is the column number
$i$ is the iteration number which starts at 0
$g(r,c,i)$ is the entry in the $r$th row & the $c$th column at iteration $i$
$x_c(0)$ is the number of entries in the column that can be filled.
$x_c(i)$ is the number of entries in the column that can still be filled.
$y_r(0)$ is the number of entries in the row that can be filled.
$y_r(i)$ is the number of entries in the row that can still be filled.
$o_r$ & $o_c$ represent the number of rows and columns in the matrix.
To create the solution to this inverse problem, these steps must be followed. The iteration starts at $i=1$ and all entries in the matrix are undetermined.
(1) Find the maximum $g(r,c,i)$ in the matrix, denote it by $M_i$, and set its entry to $1$. Denote the row and column it was found in by $m_r$ and $m_c$. Since there may be more than one maximum, pick one at random to proceed. (2) Take $m_r$ and $m_c$ to use in the next equations.
$$o_r(i+1)=o_r(i)-1$$ $$o_c(i+1)=o_c(i)-1$$ $$x_c(i+1)=x_c(i)-1$$ $$y_r(i+1)=y_r(i)-1$$
(3) If $o_r(i+1)$ or $o_c(i+1)$ is zero, all $g(r,c)$ must be recalculated. If not, then calculate $g(m_r,m_c,i+1)$ and for the other entries set $g(r,c,i+1)=g(r,c,i)$. (4) If $\sum_c x_c(i)$ or $\sum_r y_r(i)$ equals $\sum_c x_c$ or $\sum_r y_r$, then we are done: fill the remaining entries with $0$. Otherwise, the iteration is now at $i+1$; move on to $i+2$.

Interpretation and Proof
How does one interpret $g$? And how does one prove that this method indeed provides a solution to the problem?
I've noticed that the sum of the $g$ is of the form.... $$\sum g = {{(\sum_c x_c) \cdot (\sum_r y_r)} \over {o_c \cdot o_r}}$$ which may or may not have an interpretation.
As far as the proof is concerned, there is some rough multiplication of probabilities going on here, which may indicate some kind of convolution of discrete measures, perhaps?
I know this is a long post, so there may be typographical errors, feel free to point them out. Also feel free to provide an example of the algorithm failing.
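To make the algorithm concrete, here is a minimal sketch of one simplified reading of the greedy rule (the names are mine, and the recalculation bookkeeping of steps (2)-(3) is collapsed into a full rescore each iteration): repeatedly fill the still-empty cell whose remaining row and column quotas have the largest product. Notably, with naive first-maximum tie-breaking this simplified version can get stuck even on the worked example above, which illustrates why the tie-breaking details, and a proof, matter.

import numpy as np

def greedy_bitmap(row_sums, col_sums):
    """Greedy 0/1-matrix reconstruction from row/column sums (a sketch)."""
    y = np.array(row_sums, dtype=float)   # remaining quota per row
    x = np.array(col_sums, dtype=float)   # remaining quota per column
    M = np.zeros((len(y), len(x)), dtype=int)
    target = int(y.sum())
    for placed in range(target):
        g = np.outer(y, x) * (M == 0)     # scores; filled cells excluded
        r, c = np.unravel_index(np.argmax(g), g.shape)
        if g[r, c] == 0:                  # stuck: no consistent cell left
            return M, placed
        M[r, c] = 1
        y[r] -= 1
        x[c] -= 1
    return M, target

M, placed = greedy_bitmap([2, 1, 3], [2, 2, 2])
print(M)
print("placed", placed, "of 6 ones")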
|
Molecular absorption in the ultraviolet and visible region depends on the electronic structure of the absorbing molecule. Light energy is absorbed in quanta, elevating electrons from filled orbitals in the ground state to empty orbitals. Excited molecules return to the ground state most often by radiationless transitions, in which the absorbed energy appears in the system as heat. Since the frequency (or wavelength) of light absorbed is characteristic of the energy levels in a molecule, the UV-visible spectrum is often used as a method of qualitative analysis. However, quantitative analysis can also be performed using this technique, as is demonstrated in this experiment.
Introduction
According to Beer's law, the decrease in light intensity observed when monochromatic light passes through an absorbing medium is related to the concentration of the absorbing species by the equation
\[\log \dfrac{I_o}{I} = A = \epsilon bc \label{1}\]
where \(I_o\) is the incident light intensity, \(I\) the emerging light intensity, \(A\) the absorbance, \(\epsilon\) the molar absorptivity, \(b\) the path length in cm, and \(c\) the concentration of the absorbing species in moles/L. The transmittance \(T\) is defined to be the ratio \(I/I_o\), so that
\[- \log T = A \label{2}\]
(Note that log refers to logarithms to the base 10). The molar absorptivity, \(\epsilon\), is not only characteristic of the absorbing molecule so that it depends on the wavelength of the incident light, but it also depends on the medium in which this molecule is dissolved, that is, on the nature of the solvent. Having established the value of \(\epsilon\) at a given wavelength for a given medium, an unknown concentration of the absorbing molecule can be determined by measuring A in a cell of fixed geometry.
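As a minimal sketch of this calculation (all numbers below are illustrative, not measured values):

# Beer's law: A = eps * b * c, so c = A / (eps * b).
A_measured = 0.42      # measured absorbance (dimensionless)
eps = 5600.0           # molar absorptivity in M^-1 cm^-1 (assumed known)
b = 1.0                # path length in cm

c = A_measured / (eps * b)
print(f"concentration = {c:.2e} M")   # ~7.5e-05 M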
A typical UV-visible spectrophotometer consists of a tungsten or deuterium lamp as the source of visible or UV radiation, respectively, a monochromator, a detector (normally a photomultiplier tube), and amplifying and readout electronics. In a double-beam instrument, the intensity transmitted through the solution of interest is compared with that passing through an identical cell containing only the solvent. In this way compensation is made for loss in intensity due to absorption by the solvent and glass, and for loss by reflectance. You will use a single-beam instrument, in which the same compensation is made by collecting a spectrum of the cell containing only the solvent, storing this spectrum in memory, and then subtracting this "blank" spectrum from the spectrum of the solvent plus sample. Then \(I_o\) in Equation \(\ref{1}\) refers to the intensity after passing through the cell containing the solvent.
Just as you used visible spectroscopy to identify and quantify metal ions in solution in general chemistry lab, you can use UV-visible spectroscopy to measure protein concentrations in solution. Proteins are extremely important molecules in biochemistry. They make up a large portion of the human body, playing structural roles and acting as important catalysts for biochemical reactions. Understanding protein function is a key to understanding life itself and the molecular basis of disease. Proteins are large polymers of the 20 amino acids. Figure 1 shows the chemical structure of each of the 20 amino acids; the pKa values for the ionizable side chains are also given in the figure.
Figure 1: A guide to the twenty common Amino Acids. Image used with permission (CC BY-SA-NC, Compoundschem.com).
The amino acids are connected by amide linkages, with a free amino group and a free carboxylic acid group at each end of the chain. Several methods are used to measure protein concentrations; some methods are direct and others are indirect. One can measure the UV absorbance of the solution at 250-300 nm, a direct method. The indirect assay, usually called a colorimetric assay, takes advantage of a color change of a compound upon binding to the protein.
Using the absorbance at 280 nm is the simplest method. The aromatic side chains of phenylalanine, tyrosine and tryptophan absorb UV radiation in the range of 250-300 nm; Figure 2 shows the UV absorbance spectra of phenylalanine, tyrosine and tryptophan. The extinction coefficient of a pure protein can be calculated from the number of tyrosine and tryptophan residues in the amino acid sequence. The extinction coefficients at 280 nm for the isolated amino acid side chains are 1200 M\(^{-1}\) cm\(^{-1}\) for tyrosine and 5600 M\(^{-1}\) cm\(^{-1}\) for tryptophan. If you know the sequence of the protein of interest and know that it is pure, you can use the absorbance at 280 nm to determine its concentration. If you have a mixture of proteins and want to know the total concentration of protein in the solution, in milk for example, this method will not work because you cannot calculate the extinction coefficient for a mixture of proteins. You can still get a relative measure of the amount of protein present: the solution with the highest absorbance at 280 nm has the greatest protein concentration. This is a rough generalization, however, because a dilute solution of a protein that contains many aromatic residues can have a greater absorbance than a more concentrated solution of a protein with few aromatic amino acids.
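A small sketch of this bookkeeping, using the side-chain values quoted above (the protein composition below is hypothetical):

def eps280(n_tyr, n_trp):
    """Estimated extinction coefficient at 280 nm, in M^-1 cm^-1."""
    return 1200.0 * n_tyr + 5600.0 * n_trp

A280, path_cm = 0.80, 1.0            # illustrative measurement
eps = eps280(n_tyr=4, n_trp=2)       # hypothetical pure protein
print("eps =", eps, "M^-1 cm^-1")    # 16000
print("c   =", A280 / (eps * path_cm), "M")  # Beer's law: 5e-05 M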
Colorimetric assays for protein have been developed to get around this problem. These assays use a compound that changes color in the presence of protein.
\[\text{Yellow compound} + \text{protein} \rightarrow \text{purple-protein complex}\]
The amount of purple color produced is proportional to the amount of protein present. The protein concentration of an unknown is determined by comparing the color produced by the “unknown” solution with the color produced by solutions of known protein, the standard curve.
Many different compounds are used for such assays. The two most important properties for the compound are: 1) that it interacts with all proteins similarly, and 2) that the absorbance spectrum of the unbound compound does not significantly overlap with the spectrum of the complex. The reagent coomassie brilliant blue is a blue dye that binds tightly to most proteins. There are three forms of coomassie blue that are in equilibrium with each other at low pH. In very acidic solutions the positively charged red form of the dye is most stable; above pH 2 the negatively charged blue form is more stable. Figure 3 shows the structure of the red and blue forms of coomassie brilliant blue. The intermediate neutral form is stable over a very narrow pH range and is difficult to stabilize, except in the presence of detergents.
By looking at the pKa values for the amino acid side chains, you can see that at pH < 4 all proteins carry a net positive charge. The negatively charged blue form of coomassie blue interacts with the proteins and is stabilized. The dye binds to the protein until all of the positive charges are neutralized or all of the dye is bound to the proteins. If the dye is in great excess (there is much more dye than protein in solution), then the amount of blue color produced is proportional to the protein concentration in solution. If the amount of dye is not much greater than the amount of protein, all of the dye will be bound and the amount of color produced will no longer be a measure of the protein concentration: the dye solution becomes saturated.
To run a colorimetric protein assay with any color-producing compound, you must first create a standard curve. Due to differences in the reagent solution from batch to batch, you must create a new standard curve every time you run the experiment. To make a standard curve, you make a series of solutions of known protein concentration. You may have to repeat the process until you find a range of concentrations for which the absorbance increases linearly, or almost linearly, with protein concentration. If the amount of protein in solution is close to the dye concentration you will not get a linear relationship between concentration and absorbance. Once you have made your standard curve you can compare the absorbance of your unknown to the standard curve and thereby determine the amount of protein in solution. It may be necessary to dilute your unknown to get an absorbance that is within the linear range of your standard curve. If you do this you must keep track of how much you dilute the sample and then use the dilution factor to calculate the final concentration of your undiluted sample.
The main caveat for this type of assay is that the compound may interact with different proteins to different extents. A 1 mg/mL solution of BSA (bovine serum albumin) might give a different reading than a 1 mg/mL solution of collagen, a common structural protein.
In this laboratory exercise you will measure the concentration of a BSA solution by both methods. Keep in mind that the two methods are completely unrelated and there is no reason to expect the same absorbance readings for both assays on the same sample.
Experiment

Record absorbance spectra of the red and blue forms of coomassie blue. Do this on day one of the experiment.

The red form at low pH: Prepare 10 mL of a 0.02 mM solution of coomassie blue by diluting the appropriate amount of the stock solution provided (the concentration is given on the bottle) with 2.5 M HCl. What volume of 0.1 mM coomassie blue would you use? What is the pH of the resulting solution?

The blue form at pH 7: Prepare 10 mL of a 0.02 mM dye solution by diluting the appropriate volume of the stock solution with pH 7 buffer.

Record the absorbance spectrum of the blue form of the dye from 390 to 800 nm, and record the absorbance at the wavelength of maximum absorbance (\(\lambda_{max}\)). Then record the spectrum of the red form over the same wavelength range. Record \(\lambda_{max}\) of the red form and the absorbance of the red form at the \(\lambda_{max}\) of the blue form.

Simple Instructions for Operation of the HP-8452A
The Agilent (HP) 8453 UV-Vis unit in room 3480 is pictured below.
Turn on the instrument using the power button and wait 5 minutes for it to warm up. When the indicator on the front turns green, launch the software.

*** Edit March 2018: instructions from this point on are out of date, ask the TA for advice! ***

1. Click the function key (top of keyboard) F4: Acquisition, and set the wavelength range to 390-800 nm.
2. Put blank solution C into a cuvette and put the cuvette in the sample holder. Make sure the clear sides face the instrument and that you depress the lever on the side of the cell holder.
3. Click function key F2 to measure the blank. This takes a spectrum of the solvent and cuvette and stores it in memory; it is then subtracted from all subsequent scans.
4. Place the cuvette containing the sample in the cell holder. If you are using the same cuvette repeatedly, rinse the cuvette with the new solution before taking the spectrum.
5. Click F1 (Measure sample). The spectrum will appear on the screen.
6. Click F2 (Cursor control). Use the left and right arrows to find \(\lambda_{max}\) and any shoulders in your spectrum. Click F1 (Mark) at each wavelength you want to record with the cursor position.
7. Click F10 to return to the main menu. Clicking F9 (Hardcopy) will print your spectrum with cursor marks and wavelengths indicated.

Measuring protein concentration
To determine the concentration of a protein solution you must first prepare a series of protein solutions of known concentration and construct a standard absorbance curve. Using the 2.00 mg/mL stock solution, prepare 10 mL of each of the following solutions, using deionized water to make the solutions to volume. Make the standard solutions on day one; they are stable. You will do the assay on day two of the experiment.
Before coming to class, calculate the volume of the 2.0 mg/mL stock solution you need for each solution and complete this table:
| Solution | Volume of stock solution to add (mL) | Concentration (mg/mL) |
|---|---|---|
| 1 | | 0.1 |
| 2 | | 0.2 |
| 3 | | 0.4 |
| 4 | | 0.6 |
| 5 | | 0.8 |
| 6 | | 1.0 |

Protein Assay: This will be done on day two of the experiment. Using the table below as a guide, prepare the protein assay.

1. Label 1 test tube "blank". Label 2 sets of 8 test tubes 1-8.
2. Using the 5 mL pipette in your drawer, pipette 5 mL of the protein assay reagent into each of the 17 test tubes.
3. Use the automatic pipettor to add the protein solution (or the water) to each tube and vortex to mix thoroughly after adding the protein.
4. Wait 10 minutes and then read the absorbance of each solution at 596 nm. Get 17 cuvettes from the stockroom.

| Tube | Addition |
|---|---|
| blank | 100 µL water |
| 1 | 100 µL 0.1 mg/mL |
| 2 | 100 µL 0.2 mg/mL |
| 3 | 100 µL 0.4 mg/mL |
| 4 | 100 µL 0.6 mg/mL |
| 5 | 100 µL 0.8 mg/mL |
| 6 | 100 µL 1.0 mg/mL |
| 7 | 100 µL 2.0 mg/mL |
| 8 | 100 µL unknown |

Remember: you will do all measurements (except the blank) in duplicate.
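The volumes in the standards table follow from the dilution relation \(C_1V_1 = C_2V_2\). A quick check of the arithmetic (this snippet is illustrative, not part of the procedure):

```python
# Dilution volumes via C1*V1 = C2*V2, so V_stock = C_target * V_final / C_stock.
C_STOCK = 2.0    # mg/mL, stock BSA concentration
V_FINAL = 10.0   # mL, final volume of each standard

for target in [0.1, 0.2, 0.4, 0.6, 0.8, 1.0]:   # mg/mL
    v_stock = target * V_FINAL / C_STOCK
    print(f"{target:.1f} mg/mL standard: {v_stock:.2f} mL stock + "
          f"{V_FINAL - v_stock:.2f} mL water")
```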
Now you will measure the absorbance at 596 nm of the standards and the unknown.

1. Press F10 (Return) to get back to the acquisition window.
2. Click F3 (Functions) and use the down arrow to highlight "Fn 1:" and press ENTER. Select L1. When prompted, enter the wavelength 596. Press ESC to return to the menu.
3. Click F5 (Options) and use the down arrow to highlight "wavelength list mode" and press ENTER.
4. Place your reagent blank in the sample holder and press F2 to measure the blank.
5. Measure the absorbance of solutions 1-8. You will see a list of absorbance measurements at the selected wavelength on the screen.
6. Press F9 (Hardcopy) to get a printout of the data. Make sure it prints before quitting the program.

UV spectrum of the unknown: Do this on day two of the experiment.
Collect a UV spectrum of the unknown protein solution from 240 nm to 340 nm, using water as the blank. The molar absorptivity for bovine serum albumin (BSA) at 280 nm is 43,600 M\(^{-1}\)cm\(^{-1}\), and the molecular weight of BSA is 66,300 g/mol. Determine the absorbance at 280 nm, calculate the concentration, and compare it to the concentration you measure by the dye-binding assay.
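For the comparison, the Beer's-law concentration can be converted from molarity to mg/mL using the molecular weight. A small sketch (the absorbance value here is a made-up placeholder):

```python
EPSILON_BSA = 43600.0   # M^-1 cm^-1 at 280 nm
MW_BSA = 66300.0        # g/mol
PATH = 1.0              # cm, cuvette path length

a280 = 0.55                          # hypothetical measured absorbance
molar = a280 / (EPSILON_BSA * PATH)  # mol/L, from A = e*l*c
mg_per_ml = molar * MW_BSA           # (mol/L)*(g/mol) = g/L = mg/mL
print(f"{molar:.2e} M  =  {mg_per_ml:.3f} mg/mL")
```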
Analysis of the Data

1. Calculate the molar absorptivities for the red and blue forms of coomassie blue, each at its own \(\lambda_{max}\). Also calculate the molar absorptivity for the red form at the \(\lambda_{max}\) of the blue form.
2. Make a table of the protein concentrations and absorbances at 596 nm. Include both readings, and plot the average value for each standard. The plot should be linear and should go through the origin.
3. Use the standard curve to determine the concentration of protein in your unknown. If you are using Excel or any other graphing/analysis package, you must plot the standard curve as a full page and use a ruler to determine the protein concentration. If the absorbance of your unknown falls in the linear range of the standard curve, calculate a molar absorptivity for the BSA-coomassie complex using Excel to calculate the best-fit line, and use this to determine the concentration of your unknown. If your graph is not linear, use the standard curve to determine the concentration graphically.
4. Calculate the concentration of the unknown using the absorbance at 280 nm. How well does it agree with the concentration found at 596 nm? Which do you consider to be more accurate, and why?

Your report should include the spectra of the red and blue forms of coomassie blue and the UV spectrum of your unknown protein solution, with all spectra annotated with the required absorbance maxima, along with all the absorbance data from the standard curve and unknown.
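If you do fit the standard curve numerically, the read-off of the unknown can be scripted as below. The absorbance values are invented placeholders; a real analysis would use your duplicate readings and your own dilution factor.

```python
import numpy as np

conc = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])       # mg/mL standards
a596 = np.array([0.08, 0.15, 0.31, 0.45, 0.60, 0.74])  # hypothetical averages

slope, intercept = np.polyfit(conc, a596, 1)  # best-fit line A = m*c + b
a_unknown = 0.52                              # hypothetical unknown reading
dilution_factor = 1.0                         # adjust if the unknown was diluted

c_unknown = (a_unknown - intercept) / slope * dilution_factor
print(f"slope = {slope:.3f} mL/mg, intercept = {intercept:.3f}")
print(f"unknown ~ {c_unknown:.2f} mg/mL")
```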
|
In the photoelectric effect, light incident on the surface of a metal causes electrons to be ejected. The number of emitted electrons and their kinetic energy can be measured as a function of the intensity and frequency of the light. One might expect, as did the physicists at the beginning of the twentieth century, that the energy in the light wave (its intensity, in \(J\,m^{-2}s^{-1}\)) should be transferred to the kinetic energy of the emitted electrons. Also, the number of electrons that break away from the metal should change with the frequency of the light wave. This dependence on frequency was expected because the oscillating electric field of the light wave causes the electrons in the metal to oscillate back and forth, and the electrons in the metal respond at different frequencies. In other words, it was expected that the number of emitted electrons should depend upon the frequency, and their kinetic energy should depend upon the intensity of the light wave.
The classical expectation of the photoelectric effect was that the number of emitted electrons would depend upon the frequency, and their kinetic energy should depend upon the intensity of the light wave.
As shown in Figure \(\PageIndex{1}\), just the opposite behavior is observed in the photoelectric effect. The intensity affects the number of electrons, and the frequency affects the kinetic energy of the emitted electrons. From these sketches, we see that
1. The kinetic energy of the electrons is linearly proportional to the frequency of the incident radiation above a threshold value of \(ν_0\) (no current is observed below \(ν_0\)), and the kinetic energy is independent of the intensity of the radiation.
2. The number of electrons (i.e. the electric current) is proportional to the intensity and independent of the frequency of the incident radiation above the threshold value of \(ν_0\) (no current is observed below \(ν_0\)).
Figure \(\PageIndex{1}\): Schematic drawings showing the characteristics of the photoelectric effect. (a) The kinetic energy of any single emitted electron increases linearly with frequency above some threshold value and is independent of the light intensity. (b) The number of electrons emitted per second (i.e. the electric current) is independent of frequency and increases linearly with the light intensity.
In 1905, Albert Einstein explained the observations shown in Figure \(\PageIndex{1}\) with the bold hypothesis that energy carried by light existed in packets of an amount \(h\nu\). Each packet or photon could cause one electron to be ejected, which is like having a moving particle collide with and transfer energy to a stationary particle. The number of electrons ejected therefore depends upon the number of photons, i.e. the intensity of the light. Some of the energy in the packet is used to overcome the binding energy of the electron in the metal. This binding energy is called the work function, \(\Phi\). The remaining energy appears as the kinetic energy, \(\frac {1}{2} mv^2\), of the emitted electron.
Equations \(\ref{2-3}\) and \(\ref{2-4}\) express the conservation of energy for the photoelectric process
\[E_{\text{photon}} = KE_{\text{electron}} + W_{\text{electron}} \label{2-3}\]
\[h \nu = \frac {1}{2} mv^2 + \Phi \label {2-4}\]
Rearranging this equation reveals the linear dependence of kinetic energy on frequency as shown in Figure \(\PageIndex{1}\).
\[ \frac {1}{2} mv^2 = h \nu - \Phi \label {2-5}\]
The slope of the straight line obtained by plotting the kinetic energy as a function of frequency above the threshold frequency is just Planck's constant, and the x-intercept, where \(\frac {1}{2} mv^2 = 0\), is the threshold frequency \(ν_0\); the work function of the metal is then obtained from \(\Phi = hν_0\).
Example \(\PageIndex{1}\)
Sodium metal has a threshold frequency of \(4.40 × 10^{14}\) Hz. What is the kinetic energy of a photoelectron ejected from the surface of a piece of sodium when the ejecting photon is \(6.20 × 10^{14}\) Hz? What is the velocity of this photoelectron? From which region of the electromagnetic spectrum is this photon?
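The exercise leaves the solution to the reader; a worked solution along the following lines (our addition, taking \(h = 6.626 \times 10^{-34}\text{ J s}\) and \(m_e = 9.109 \times 10^{-31}\text{ kg}\)) uses Equation \(\ref{2-5}\):

\[ \frac{1}{2} mv^2 = h(\nu - \nu_0) = (6.626 \times 10^{-34}\text{ J s})(6.20 - 4.40) \times 10^{14}\text{ s}^{-1} \approx 1.19 \times 10^{-19}\text{ J} \]

\[ v = \sqrt{\frac{2(1.19 \times 10^{-19}\text{ J})}{9.109 \times 10^{-31}\text{ kg}}} \approx 5.1 \times 10^{5}\text{ m/s} \]

The ejecting photon has wavelength \( \lambda = c/\nu = (2.998 \times 10^{8}\text{ m/s})/(6.20 \times 10^{14}\text{ s}^{-1}) \approx 484\text{ nm} \), which lies in the visible (blue) region of the electromagnetic spectrum.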
With such an analysis Einstein obtained a value for \(h\) in agreement with the value Planck deduced from the spectral distribution of black-body radiation. The fact that the same quantization constant could be derived from two very different experimental observations was very impressive and made the concept of energy quantization for both matter and light credible. In the next sections we will see that wavelength and momentum are properties that also are related for both matter and light.
Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
|
Fine Guidance Sensor, FGS
JWST's Fine Guidance Sensor (FGS) provides data for science attitude determination, fine pointing, and attitude stabilization using guide stars in the JWST focal plane. Predictions of absolute pointing and image motion performance are given on the JWST Pointing Performance page.
JWST's Fine Guidance Sensor (FGS) is a near-infrared (NIR) camera residing in the Integrated Science Instrument Module (ISIM). It has a passband from ~0.6 to 5.0 μm and operates at a temperature of ~37 K, similar to near-infrared science instruments. The FGS has two channels, each with 2.3′ × 2.3′ field of view (FOV).
The FGS functions are:
to identify and acquire a guide star, measure its position in one of the two guider channels, and provide this data to the JWST attitude control subsystem (ACS) for attitude determination.
to provide fine pointing data to the ACS for attitude stabilization. The FGS can provide this data for both fixed target pointings and for moving target observations.
Guide star position data is used by the ACS for absolute (right ascension and declination) pointing knowledge and pointing control in the plane of the sky (pitch and yaw). ACS uses the data from off-axis star trackers to control the spacecraft’s roll orientation.
In addition to its critical role in executing observations, the FGS also serves as an integral part in the commissioning of the JWST Observatory, and in observation planning. FGS pointing data are archived for every science observation and may be valuable for post-observation data analysis.
Unlike on HST, the Fine Guidance Sensors on JWST are expected to be used for guiding and calibration exclusively. Thus, at this time they are not available for science proposals by general observers.

Observational capabilities
The FGS has an unfiltered passband from ~0.6 to 5.0 μm. Each focal plane array is a 2048 × 2048 HgCdTe sensor chip assembly that has a 2.3’ × 2.3’ FOV after correcting for internal field distortions. The central 2040 × 2040 pixels are light sensitive; the four outermost rows and columns are reference pixels for bias measurements. However, the usable FOV for guide star identification and guiding is 2.15' × 2.15' in order to provide sufficient light-sensitive pixels for flat field corrections for potential guide stars near the edge of the FOV.
The FGS has neither a shutter nor a filter wheel; therefore, its detectors are always exposed to the sky.
The JWST proposal planning system currently uses the Guide Star Catalog (GSC) version 2.4.1, which was updated in the fall of 2017. The Guide Star Selection System has been updated to use this new catalog, with improvements to astrometry, photometry, and number and distribution of stars that are available—for additional information, please refer to the JWST Guide Stars article.
FGS optical design
The optical assembly of the FGS is shown in Figure 2. Light from the telescope is focused onto the pick-off mirror (POM), collimated by the three-mirror assembly (TMA), and focused by an adjustable fold mirror (fine focus mechanism) onto the two focal plane arrays. The fine focus mechanism allows tuning of FGS focus.
FGS operations
FGS has three operating modes: "OFF", "STANDBY"¹, and "OPERATE". In operational mode, it has five software functions: calibration, identification, acquisition, track, and fine guide. The calibration function allows the FGS to obtain the data needed for calibration by the ground system, while the remaining functions enable the identification, acquisition, and tracking of a guide star. These flight software functions are briefly described below.

Calibration
In order to be able to calibrate the FGS, the ground system requires data collected with the "calibration" function. In this mode, the FGS acts like a camera, obtaining full-frame or subarray images with one guider while the other tracks a guide star. These data are then used to measure and correct for geometric distortion, intra-pixel non-uniformity, flat field response, bias, bad pixels, and other performance characteristics. The "calibration" function is only available for commissioning and calibration.
Identification
At the conclusion of a spacecraft slew, the telescope is pointing at the sky such that the selected guide star is near the center of one of the FGS detectors and the science target is in the desired science instrument, though not yet at the precise attitude for the scientific observation. To assure that the correct guide star is acquired, the FGS obtains an image of the sky and compares the observed positions of stars (and any other luminous objects) to a catalog of objects using a pattern-matching algorithm. To minimize smearing, the "identification" images are obtained in a sequence of "strips": 36 subarrays of 2048 × 64 pixels with an effective integration time of 0.3367 s each.
Acquisition
The approximate location of a guide star on the FGS detector is measured using the flight software "identification" function, or is determined at the end of a small angle maneuver that offsets the guide star from a previously known location in the FGS FOV. This is followed by executing the "acquisition" function. A 128 × 128 pixel (8.6” × 8.6”) subarray is centered at the expected position of the guide star. Images of the guide star within this subarray are obtained and autonomously analyzed by the FGS to locate the star. A second set of measurements using a 32 × 32 pixel (2.2" × 2.2") subarray, centered on the guide star position, is obtained. The FGS reports the position and intensity of the guide star to the ACS; this information is used by the ACS to update its knowledge of the spacecraft’s current attitude, and to bring the pointing of the telescope to within 0.45" (1-σ radial) of its commanded position.
Track
Following the successful completion of the "acquisition" function, and ACS’s corrective maneuver of the observatory pointing, the FGS executes the "track" function. The FGS places a 32 × 32 pixel (2.2" × 2.2") subarray on the expected location of the guide star. High cadence subarray images are obtained from which the guide star’s position centroid is determined and reported to ACS every 64 ms. Once the guide star is within ~0.06" of its desired location, the FGS can transition to "fine guide" mode.
In "track" mode the FGS will adjust the position of the 32 × 32 pixel subarray on the detector to remain centered on the guide star if the guide star moves. Thus, "track" mode is used for moving target observations.
Fine Guide
When the FGS transitions from "track" to "fine guide," a fixed 8 × 8 pixel (0.5" × 0.5") subarray is centered on the guide star position. The guide star centroid is computed from each subarray image and sent to the ACS every 64 ms, controlling the observatory pointing in a closed loop. In "fine guide" mode, the subarray location is fixed and cannot be changed without transitioning through the operating mode "STANDBY"¹, which requires exiting fine guidance control and starting over in "track" mode.
Once in fine guide control, the absolute pointing accuracy of JWST with respect to the celestial coordinate system will be determined by the astrometric accuracy of the Guide Star Catalog and the calibration of the JWST focal plane model.
¹ In "STANDBY," the operations scripts subsystem (OSS) software is running and the guider is waiting, ready to transition to the operating mode "OPERATE" and execute a commandable function such as "identification." The FGS flight software (FSW) controls the physical and electrical conditions to which the guider's performance is sensitive. In "STANDBY," the FGS flight software is capable of sending and receiving commands, data, and software updates.

Subarrays
Each of the operational modes uses a different sized subarray and readout pattern. The frame readout time for each subarray can be calculated using the following equation:
\[ t_{frame} = \Bigg(\frac{N_{columns}}{N_{outputs}} + C_{overhead}\Bigg) \times (N_{rows} + 1) \times 10\,\mu s \]
where \(N_{outputs}\) is the number of amplifiers used in the subarray (4 for CAL and ID, 1 for the other functions), \(C_{overhead}\) is a constant that accounts for electronic overhead (12 for ACQ1, 6 for all other functions), and the numbers of rows and columns, \(N_{rows}\) and \(N_{columns}\), are as specified in the table below.
FGS data utilizes correlated double sampling (CDS) to correct for detector effects within integrations, a method in which the 0th read is subtracted from the 1st read. The time between reads, or CDS time, is a function of the readout pattern and the frame readout time:
\[ t_{CDS} = (N_{DROP} + 1) \times t_{readout} \]
where \(N_{DROP}\) is the number of dropped frames between reads and \(t_{readout}\) is the frame readout time.
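As an unofficial cross-check (our own illustration, not project code), both timing formulas reproduce the tabulated values:

```python
def t_frame(n_columns, n_rows, n_outputs, c_overhead):
    """Frame readout time in seconds: (Ncol/Nout + C) * (Nrow + 1) * 10 us."""
    return (n_columns / n_outputs + c_overhead) * (n_rows + 1) * 10e-6

def t_cds(n_drop, t_readout):
    """CDS time in seconds: (Ndrop + 1) * frame readout time."""
    return (n_drop + 1) * t_readout

print(t_frame(2048, 64, 4, 6))          # ID strip:        0.3367 s
print(t_frame(128, 128, 1, 12))         # ACQ1:            0.1806 s
print(t_frame(32, 32, 1, 6))            # ACQ2 / TRK:      0.01254 s
print(t_frame(8, 8, 1, 6))              # FG:              0.00126 s
print(t_frame(2048, 2048, 4, 6))        # CAL full frame:  10.6138 s
print(t_cds(1, t_frame(32, 32, 1, 6)))  # TRK CDS time:    0.02508 s
```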
Table 1. Subarray and readout definitions for each function
| Name | Abbreviation | Subarray | Integration readout pattern | Frame readout time (s) | CDS time (s) |
|---|---|---|---|---|---|
| Calibration | CAL | Variable² | RESET READ DROP × n READ² | Variable² | Variable² |
| Identification | ID | 2048 × 64 × 36 (strips)³ | RESET READ READ READ READ | 0.3367 | 0.3367 |
| Acquisition 1 | ACQ1 | 128 × 128 | RESET DROP READ DROP READ | 0.1806 | 0.3612 |
| Acquisition 2 | ACQ2 | 32 × 32 | RESET DROP READ DROP × 3 READ | 0.01254 | 0.05016 |
| Track | TRK | 32 × 32 | RESET READ DROP READ | 0.01254 | 0.02508 |
| Fine Guide | FG | 8 × 8 | RESET READ × 4 DROP × 39 READ × 4 | 0.00126 | 0.05418 |

² In CAL, images can be taken either as full-frame (2048 × 2048) or as certain subarrays (128 × 128, 32 × 32, or 8 × 8) at fixed positions on the detector. Furthermore, the readout pattern in CAL can be modified to produce images with different integration times. The frame readout time is determined by the subarray size; for full-frame images it is 10.6138 s, and for all other subarrays the readout times are as listed in the table. As explained above, the CDS time depends on both the readout pattern and the subarray readout time.

³ The strips are read out as 36 subarrays with 64 rows by 2048 columns, with an overlap between rows of 8 pixels. This configuration means that the bottom 12 pixels and top 12 pixels of the detector are not read out during ID.

Acknowledgements
The Canadian Space Agency (CSA) has contributed the FGS to the JWST Observatory. Honeywell (formerly COM DEV Space Systems) of Ottawa, Canada, is CSA’s prime contractor for the FGS.
|
Gambling in a Company Problem
Answers
The expected number of rounds is $\displaystyle\sum_{i\lt j}a_ia_j.$
The probability that the $i^{th}$ gambler ends up with all the money is $\displaystyle \frac{a_i}{a_1+a_2+\ldots+a_n}.$
For $n=3,$ the expected number of rounds till the first loser quits the game is $\displaystyle \frac{3a_1a_2a_3}{a_1+a_2+a_3}.$
The $n=2$ Analogue
The problem is an obvious extension of the game of two players. The latter is usually modeled by the well-known one-dimensional random walk in which a point on an axis moves one step at a time - left or right - with probabilities $\displaystyle p=q=\frac{1}{2}.$ The two players start with capitals of, say, $u$ and $v$ dollars and lose or win one dollar from the other player with probability $\displaystyle\frac{1}{2}.$ On average, the game lasts $uv$ rounds, and the probabilities of winning are $\displaystyle\frac{u}{u+v}$ and $\displaystyle\frac{v}{u+v},$ respectively.
Definitions
Let $\mathbf{u}=(u_1,u_2,\ldots, u_n)$ denote the state of the game where the $i^{th}$ player owns $u_i\ge 0$ dollars. We'll write $|\mathbf{u}|=u_1+u_2+ \ldots+u_n.$ The quantity $a=|\mathbf{u}|$ remains unchanged during the game.
Let $N(\mathbf{u})$ be the set of all states that can be attained from $\mathbf{u}$ in one round. If $\mathbf{u}$ has $k$ non-zero components then $|N(\mathbf{u})|=k(k-1).$ Define $V(\mathbf{u})$ as the set of all states that could be attained starting with $\mathbf{u}$:
$V(\mathbf{u})=\left\{\mathbf{v}:\, \forall i\,v_i\ge 0, |\mathbf{u}|=|\mathbf{v}|, (u_i=0)\,\Rightarrow \,(v_i=0)\right\}.$
Solution 1, Part 1
Let $R(\mathbf{u})$ be the expected number of rounds, starting with $\mathbf{u},$ till one of the players has all the money. (We are to prove that $\displaystyle R(\mathbf{a})=\sum_{i\lt j}a_ia_j.)$ There is a recurrence: $R(\mathbf{u})$ equals the average of $R(\mathbf{x})$ over all $\mathbf{x}\in N(\mathbf{u}),$ plus $1.$ We also have boundary conditions: $R(\mathbf{u})=0$ if all but one component of $\mathbf{u}$ are zero.
The function $\displaystyle R(\mathbf{u})=\sum_{i\lt j}u_iu_j$ satisfies the recurrence and the boundary condition. Indeed, if components $u$ and $v$ exchange a dollar, the two equally likely outcomes contribute $\displaystyle\frac{(u+1)(v-1)+(u-1)(v+1)}{2}=uv-1$ on average, while all products not involving the exchanging pair are unchanged on average (the linear terms cancel out); averaging over the neighborhood $N(\mathbf{u})$ therefore gives $R(\mathbf{u})-1,$ as required. We only need to prove that no other function satisfies the recurrence along with the boundary conditions.
If there are two such solutions then their difference at any point is the average of its values over the point's neighborhood and, therefore, can attain neither a maximum nor a minimum (in the neighborhood), unless it is constant. The difference is then constant over the whole play field $V(\mathbf{u})$ and, being $0$ on the boundary, is $0$ on all of $V(\mathbf{u}),$ meaning that the two solutions coincide.
Solution 1, Part 2
Let $p_i(\mathbf{u})$ be the probability that the $i^{th}$ player ends up with all the money, starting with the state $\mathbf{u}.$ (We have to prove that $\displaystyle p_i(\mathbf{u})=\frac{u_i}{|\mathbf{u}|}.)$ $p_i(\mathbf{u})$ is the average of $p_i(\mathbf{x})$ over all $\mathbf{x}\in N(\mathbf{u}),$ with the following boundary conditions: $p_i(\mathbf{u})=0$ if $u_i=0$ and $p_i(\mathbf{u})=1$ if $u_i=a$ (in which case all other components vanish automatically).
As before, all it takes is to prove uniqueness.
Solution 1, Part 3
Here the recurrence is exactly the same as for the first question, but the boundary condition is different: the game stops when one of the three components vanishes.
Solution 2, Part 1
Let $\displaystyle a=\sum_{i=1}^na_i$ be the total capital at play. Let $R_i$ be the number of rounds played by the $i^{th}$ gambler. From the $i^{th}$ gambler's standpoint, the remaining $(n-1)$ gamblers are indistinguishable from a single gambler with $(a-a_i)$ dollars. Hence we only need to consider two gamblers, for which $E(R_i)=a_i(a-a_i).$ Since each round is played by exactly two gamblers, the total number of rounds is $\displaystyle R=\frac{1}{2}\sum_{i=1}^nR_i,$ hence
$\displaystyle E(R)=\frac{1}{2}\sum_{i=1}^na_i(a-a_i)=\sum_{i\lt j}a_ia_j.$
Solution 2, Part 2
Consider $a$ players with $1$ dollar each. Before the first round, the probability of winning the game is uniform among the players: $\displaystyle\frac{1}{a}.$ Partition the players into $n$ teams, so that the $i^{th}$ team has $a_i$ members/dollars. Then the probability of the $i^{th}$ team (i.e., gambler) winning the game is $\displaystyle p_i=a_i\cdot\frac{1}{a}=\frac{a_i}{a}.$
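Neither solution needs it, but the formulas are easy to spot-check numerically. A quick Monte Carlo sketch (our own illustration, not part of the original solutions):

```python
import random
from itertools import combinations

def simulate(capitals, trials=20000):
    """Monte Carlo check of the expected game length and win probabilities."""
    n = len(capitals)
    total_rounds = 0
    wins = [0] * n
    for _ in range(trials):
        c = list(capitals)
        alive = [i for i in range(n) if c[i] > 0]
        rounds = 0
        while len(alive) > 1:
            i, j = random.sample(alive, 2)   # two distinct players bet $1
            if random.random() < 0.5:        # pick the round's winner at random
                i, j = j, i
            c[i] += 1
            c[j] -= 1
            rounds += 1
            if c[j] == 0:                    # the loser of the round may go broke
                alive.remove(j)
        total_rounds += rounds
        wins[alive[0]] += 1
    return total_rounds / trials, [w / trials for w in wins]

a = [2, 3, 4]
mean_rounds, probs = simulate(a)
print("simulated E[rounds]:", mean_rounds)
print("predicted E[rounds]:", sum(x * y for x, y in combinations(a, 2)))  # 26
print("simulated win probabilities:", probs)
print("predicted:", [x / sum(a) for x in a])
```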
Acknowledgment
This is problem 53 from B. Bollobás,
The Art of Mathematics: Coffee Time in Memphis, Cambridge University Press, 2006, 149-150.
The second set of solutions is by Hélvio Vairinhos.
|
Search
Now showing items 1-10 of 27
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
|
HALP: High-Accuracy Low-Precision Training

by Chris De Sa, Megan Leszczynski, Jian Zhang, Alana Marzoev, Chris Aberger, Kunle Olukotun, and Chris Ré

Using fewer bits of precision to train machine learning models limits training accuracy—or does it? This post describes cases in which we can get high-accuracy solutions using low-precision computation via a technique called bit centering, and our theory to explain what's going on.
Low-precision computation has been gaining a lot of traction in machine learning. Companies have even started developing new hardware architectures that natively support and accelerate low-precision operations, including Microsoft's Project Brainwave and Google's TPU. Even though using low precision can have a lot of systems benefits, low-precision methods have been used primarily for inference—not for training. Previous low-precision training algorithms suffered from a fundamental tradeoff: when calculations use fewer bits, more round-off error is added, which limits training accuracy. According to conventional wisdom, this tradeoff limits practitioners' ability to deploy low-precision training algorithms in their systems. But is this tradeoff really fundamental? Is it possible to design algorithms that use low precision without it limiting their accuracy?
It turns out that yes, it is sometimes possible to get high-accuracy solutions from low-precision training—and here we'll describe a new variant of stochastic gradient descent (SGD) called high-accuracy low-precision (HALP) that can do it. HALP can do better than previous algorithms because it reduces the two sources of noise that limit the accuracy of low-precision SGD: gradient variance and round-off error.

To reduce noise from gradient variance, HALP uses a known technique called stochastic variance-reduced gradient (SVRG). SVRG periodically uses full gradients to decrease the variance of the gradient samples used in SGD.

To reduce noise from quantizing numbers into a low-precision representation, HALP uses a new technique we call bit centering. The intuition behind bit centering is that as we get closer to the optimum, the gradient gets smaller in magnitude and in some sense carries less information, so we should be able to compress it. By dynamically re-centering and re-scaling our low-precision numbers, we can lower the quantization noise as the algorithm converges.

HALP is provably able to produce arbitrarily accurate solutions at the same linear convergence rate as full-precision SVRG, while using low-precision iterates with a fixed number of bits. This result upends the conventional wisdom about what low-precision training algorithms can accomplish.

Why was low-precision SGD limited?
First, to set the stage: we want to solve training problems of the form

\[ \text{minimize } f(w) = \frac{1}{N} \sum_{i=1}^N f_i(w) \text{ over } w \in \mathbb{R}^d. \]

This is the classic empirical risk minimization problem used to train many machine learning models, including deep neural networks. One standard way of solving it is with stochastic gradient descent (SGD), an iterative algorithm that approaches the optimum by running

\[ w_{t+1} = w_t - \alpha \nabla f_{i_t}(w_t) \]

where \( i_t \) is an index randomly chosen from \( \{1, \ldots, N\} \) at each iteration. We want to run an algorithm like this, but make the iterates \( w_t \) low-precision. That is, we want them to use fixed-point arithmetic with a small number of bits, typically 8 or 16 (small compared with the 32-bit or 64-bit floating-point numbers that are standard for these algorithms). But when this is done directly to the SGD update rule, we run into a representation problem: the solution \( w^* \) may not be representable in the chosen fixed-point representation. For example, if we use an 8-bit fixed-point representation that can store the integers \( \{ -128, -127, \ldots, 127 \} \), and the true solution is \( w^* = 100.5 \), then we can't get any closer than a distance of \( 0.5 \) to the solution, since we can't even represent non-integers. Beyond this, the round-off error that results from converting the gradients to fixed point can slow down convergence. These effects together limit the accuracy of low-precision SGD.

Bit Centering
When we are running SGD, in some sense what we are actually doing is averaging (or summing up) a bunch of gradient samples. The key idea behind bit centering is that as the gradients become smaller, we can average them with less error using the same number of bits. To see why, compare averaging a bunch of numbers in \([-100, 100]\) with averaging a bunch of numbers in \([-1, 1]\). In the former case, we need a fixed-point representation that covers the entire range \([-100, 100]\) (for example, \( \{ -128, -127, \ldots, 126, 127 \} \)), while in the latter case we can choose one that covers \([-1, 1]\) (for example, \( \{ -\frac{128}{127}, -\frac{127}{127}, \ldots, \frac{126}{127}, \frac{127}{127} \} \)). This means that with a fixed number of bits, the delta (the difference between adjacent representable numbers) is smaller in the latter case than in the former; as a consequence, the round-off error will also be lower.
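A tiny numerical illustration of this point (our own sketch, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, scale):
    """Round x to the nearest representable 8-bit fixed-point value.

    Representable values are k * scale for integers k in [-128, 127],
    so `scale` is the gap (delta) between adjacent representable numbers.
    """
    k = np.clip(np.round(x / scale), -128, 127)
    return k * scale

for lo, hi in [(-100.0, 100.0), (-1.0, 1.0)]:
    xs = rng.uniform(lo, hi, size=10_000)
    scale = hi / 127.0   # choose the delta so the grid covers [lo, hi]
    err = np.abs(quantize(xs, scale) - xs).mean()
    print(f"range [{lo}, {hi}]: delta = {scale:.6f}, mean round-off error = {err:.6f}")
```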
This observation gives us a useful insight: to average the numbers in \([-1, 1]\) with less error than the ones in \([-100, 100]\), we needed to use a different fixed-point representation. This suggests that we should dynamically update the low-precision representation: as the gradients get smaller, we should use fixed-point numbers that have a smaller delta and cover a smaller range.
But how do we know how to update our representation? What range do we need to cover? Well, if our objective is strongly convex with parameter \( \mu \), then whenever we take a full gradient at some point \( w \), we can bound the location of the optimum with

\[ \| w - w^* \| \le \frac{1}{\mu} \| \nabla f(w) \|. \]

This inequality gives us a range of values in which the solution must be located, and so whenever we compute a full gradient, we can re-center and re-scale the low-precision representation to cover this range. This process is illustrated in the following figure.

We call this operation bit centering. Note that even if our objective is not strongly convex, we can still perform bit centering: the parameter \( \mu \) simply becomes a hyperparameter of the algorithm. With periodic bit centering, as the algorithm converges, the quantization error decreases—and it turns out that this can let it converge to arbitrarily accurate solutions.

HALP
HALP is our algorithm: it runs SVRG and uses bit centering, with a full gradient at every epoch, to update the low-precision representation. The full details and algorithm statement are in the paper; here we'll just present an overview of the results. First, we showed that for strongly convex, Lipschitz-smooth functions (the standard setting under which the convergence rate of SVRG was originally analyzed), as long as the number of bits \( b \) we use satisfies

\[ 2^b > O\left(\kappa \sqrt{d} \right), \]

where \( \kappa \) is the condition number of the problem, then for an appropriate setting of the step size and epoch length (details for how to set these are in the paper), HALP will converge at a linear rate to arbitrarily accurate solutions. More explicitly, for some \( 0 < \gamma < 1 \),

\[ \mathbf{E}\left[ f(\tilde w_{K+1}) - f(w^*) \right] \le \gamma^K \left( f(\tilde w_1) - f(w^*) \right), \]

where \( \tilde w_{K+1} \) denotes the value of the iterate after the \(K\)-th epoch. We can see this happening in the following figure.
This figure evaluates HALP on linear regression on a synthetic dataset with 100 features and 1000 examples. It compares HALP with full-precision SGD and SVRG, low-precision SGD (LP-SGD), and a low-precision version of SVRG without bit centering (LP-SVRG). Notice that HALP converges to very high-accuracy solutions even with only 8 bits (although it is eventually limited by floating-point error). In this case HALP converges to an even higher-accuracy solution than full-precision SVRG because HALP uses less floating-point arithmetic and is therefore less sensitive to floating-point inaccuracy.
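To make the structure concrete, here is a minimal HALP-style sketch for a least-squares problem. This is our own illustration of the idea, not the authors' implementation; the step size, epoch length, and quantization scheme are simplified assumptions.

```python
import numpy as np

def quantize(v, scale, bits=8):
    """Nearest-value quantization onto the grid scale * {-2^(b-1), ..., 2^(b-1)-1}."""
    lim = 2 ** (bits - 1)
    return scale * np.clip(np.round(v / scale), -lim, lim - 1)

def halp_sketch(A, b, mu, alpha=0.05, bits=8, epochs=20, T=1000, seed=0):
    """HALP-style SVRG with bit centering for f(w) = (1/2N) ||Aw - b||^2.

    Each epoch: take a full gradient at the center w_tilde, re-center and
    re-scale the low-precision grid to a box guaranteed to contain w*
    (via ||w_tilde - w*|| <= ||grad|| / mu), then run low-precision
    variance-reduced steps on the offset z = w - w_tilde.
    """
    rng = np.random.default_rng(seed)
    N, d = A.shape
    w_tilde = np.zeros(d)
    for _ in range(epochs):
        g_full = A.T @ (A @ w_tilde - b) / N          # full-precision gradient
        radius = np.linalg.norm(g_full) / mu          # bound on ||w_tilde - w*||
        scale = max(radius, 1e-12) / (2 ** (bits - 1) - 1)  # delta of the new grid
        z = np.zeros(d)                               # low-precision offset
        for _ in range(T):
            i = rng.integers(N)
            a_i = A[i]
            # variance-reduced stochastic gradient at w = w_tilde + z
            g = a_i * (a_i @ (w_tilde + z) - b[i]) - a_i * (a_i @ w_tilde - b[i]) + g_full
            z = quantize(z - alpha * g, scale, bits)
        w_tilde = w_tilde + z                         # fold the offset into the center
    return w_tilde

# Example: noise-free random least squares, so w* equals w_true.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
b = A @ w_true
mu = np.linalg.eigvalsh(A.T @ A / 200).min()   # strong-convexity parameter
print("error:", np.linalg.norm(halp_sketch(A, b, mu) - w_true))
```

The point to notice is that the grid spacing `scale` shrinks with \( \| \nabla f(\tilde w) \| \), so the quantization error decays as the iterate approaches \( w^* \).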
...and there's more!
This was only a selection of results: there's a lot more in the paper. We showed that HALP matches SVRG's convergence trajectory, even for deep learning models. We implemented HALP efficiently and showed that it can run up to \( 4 \times \) faster than full-precision SVRG on the CPU. We also implemented HALP in TensorQuant, a deep learning library, and showed that it can exceed the validation performance of plain low-precision SGD on some deep learning tasks.
The obvious but exciting next step is to implement HALP efficiently on low-precision hardware, following up on our work for the next generation of compute architectures (at ISCA 2017).
|
My research activities focus on the study of interaction effects in low-dimensional quantum systems.
Presently, I am interested in the so-called Dirac materials with a special focus on
planar systems such as graphene and graphene-like materials.
Graphene is a one-atom thick layer of graphite characterized by gapless bands, strong electron-electron interactions and emergent Lorentz
invariance deep in the infra-red. This system is subject to a number of challenging issues at the boundary
between condensed matter and high energy physics raised by recent experiments: for example, the influence of interactions on the transport and spectral properties of the system, and their potential ability to dynamically generate a mass (or gap).
The originality of my approach consists in studying interaction effects in this system starting from the infra-red
Lorentz-invariant fixed point where low-energy properties are captured by \((2 + 1)\)-dimensional effective (relativistic) field theories such as
the so-called reduced QED and QED\(_3\). These models constitute a nice playground where interaction effects
may be studied beyond the common leading order calculations. Such an ambitious task may be achieved
via the application of powerful multi-loop techniques originally developed in particle physics and statistical mechanics.
Interestingly, the odd dimensionality of space-time together with the (related) presence of Feynman diagrams with
non-integer indices brings a lot of novelties (as well as some additional complications) with respect to what is usually
known from the study of \((3 + 1)\)-dimensional theories. The study of the fixed point also offers a robust base from which the physics
away from the fixed point (which is closer to the experimental situation) may be explored. A striking feature of the
results obtained so far is that there seems to be a
quantitative agreement between the fixed point physics
(relativistic limit with fully retarded interactions) and physics far away from it (non-relativistic limit with instantaneous interactions).
This project involves a very nice collaboration with Anatoly Kotikov from Dubna who is a world-leading expert in the computation of (massless and massive) Feynman diagrams.
Due to its peculiar honeycomb lattice, the band structure of (intrinsic) graphene consists of
linearly dispersing bands (up to energies of the order of \(1\)eV) crossing at 2 Fermi (or Dirac) points.
As was probably first realized long ago by Semenoff (in the case of free fermions)
[Phys. Rev. Lett.
53 (1984) 2449], an effective low-energy
description then emerges in terms of a simple continuous \(U(1)\) QED-like gauge-field theory of massless Dirac fermions.
Upon adding interactions, Gonzáles, Guinea and
Vozmediano
[Nucl. Phys. B424 (1994) 595]
proved the existence of an infra-red (IR) Lorentz invariant fixed point due to the running
of the Fermi velocity, \(v\). The latter flows to the velocity of light deep in the IR, \(v \rightarrow c\), with a corresponding flow of the coupling constant
of graphene to the QED coupling constant: \(\alpha_g = e^2/v \rightarrow \alpha =1/137\). Because \(v \approx c/300 \ll 1\), studies of interaction effects in graphene
generally focus on the experimentally relevant non-relativistic limit, \(v/c \rightarrow 0\), where the Coulomb interaction is instantaneous and the physics at the fixed point
has been largely overlooked.
Our work [Phys. Rev. D
86 (2012) 025005] focuses
on the study of interaction effects starting from the IR fixed point where \(v=c\) and the interaction is fully retarded.
Though a priori mainly of academic interest, the general motivation comes from the fact that interaction effects may be studied in a
rigorous and systematic way in this ultra-relativistic limit. Moreover, a full understanding of this limit may allow us to extend the developed
techniques to experimentally accessible scales. It was then realized in [Phys. Rev. D 86 (2012) 025005]
that the effective field theory at the fixed point corresponds to the so-called massless reduced
[Gorbar,
Gusynin
and Miransky,
Phys. Rev. D 64 (2001) 105028] or pseudo
[Marino,
Nucl. Phys. B408 (1993) 551] QED. Reduced QED\(_{d_\gamma,d_e}\)
is a quantum field theory describing the interaction of an abelian \(U(1)\) gauge field living in a \(d_\gamma\)-dimensional space-time with a
fermionic field living in a reduced space-time of \(d_e\) dimensions (\(d_e \leq d_\gamma\)).
The interacting system may therefore be thought of as a physical realization of a "brane"-like
universe such as those which are often evoked in particle physics for a larger (and unphysical)
number of dimensions.
In the case where \(d_\gamma=d_e\), reduced QEDs correspond to usual QEDs.
The peculiar case of QED\(_{4,3}\) describes graphene at its fixed point.
In [Phys. Rev. D
86 (2012) 025005],
multi-loop techniques, originally developed in particle physics and statistical mechanics,
were applied to massless QED\(_{d_\gamma,d_e}\). This led to the exact computation of the polarization operator up to 2 loops.
In the specific case of QED\(_{4,3}\), the polarization operator is related to the optical conductivity of graphene at the fixed point (a quantity which was subject to some debate
in the non-relativistic limit, see next item). From the two-loop computation, the first order interaction correction coefficient
to the optical conductivity of graphene at the fixed point could be derived: \(\mathcal{C}^* = (92-9\pi^2)/(18\pi)\).
Surprisingly, the value of this coefficient, \(\mathcal{C}^* = 0.056\), agrees
quantitatively well with the one obtained in the non-relativistic limit (away from the fixed point), \(\mathcal{C} = 0.013\).
The value of \(\mathcal{C}^*\) was derived independently in [Phys. Rev. D 87 (2013) no.8, 087701]
on the basis of the method of uniqueness, a powerful method
for multi-loop computations in higher dimensional theories with conformal symmetry. Following up on these papers,
the computation of the two-loop fermion self-energy in reduced QED was performed in [
Phys. Rev. D 89 (2014) no.6, 065038].
The optical conductivity of graphene is an important observable that has been studied by
different experimental groups around 2007. Surprisingly, despite the fact that graphene
is supposed to be in a strongly coupled regime at experimentally accessible scales,
experimental results show very weak deviations of this conductivity with respect to the
free fermion result. This has generated extensive theoretical works since 2008.
It turns out that contradictory results were obtained:
some in agreement with weak deviations such as
Mishchenko's result
[
Europhys. Lett.
83, 17005 (2008)], while other works found larger deviations.
Our work attempts to clarify this situation. We do obtain weak deviations, in accordance with Mishchenko's analysis. Moreover, we find that the origin of the disagreement lies in subtle renormalisation effects which need to be taken into account whatever regularization method is used. Such clarification is important because, based on the simple example of the optical conductivity, it allows us to understand how interaction effects can be systematically taken into account in computing other observables.
Let us note that our results are valid in the limit of instantaneous Coulomb interaction,
\(v/c \rightarrow 0\) (in the non-relativistic limit, away from the Lorentz-invariant fixed point).
Technically, we could adapt powerful multi-loop techniques (for semi-massive Feynman diagrams)
to this non-relativistic limit. Moreover, from the physics point of view, the value of the first
interaction correction coefficient in this limit, \(\mathcal{C} = (19-6\pi)/12 \approx 0.013\), agrees
quantitatively well with the one obtained
in the ultra-relativistic limit, \(v/c \rightarrow 1\) (with fully retarded interactions), \(\mathcal{C}^* = 0.056\) (see item above).
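For reference, the two closed-form coefficients quoted above evaluate numerically as follows (a trivial arithmetic check, added here for convenience):

```python
import math

c_star = (92 - 9 * math.pi ** 2) / (18 * math.pi)  # fixed point (fully retarded)
c = (19 - 6 * math.pi) / 12                        # non-relativistic limit
print(f"C* = {c_star:.3f}, C = {c:.3f}")           # C* ~ 0.056, C ~ 0.013
```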
The understanding of dynamical chiral symmetry breaking (D\(\chi\)SB) in QED\(_3\) is a long-standing problem with now three decades of extensive research. The question of the existence and stability of the critical point (separating the massless and massive phases) is central to the vast majority of works. However, very few works address the accurate study of next-to-leading order (NLO) corrections. The reason is simply that such a task is of tremendous difficulty. It is nevertheless of utmost importance, especially since the value of the critical fermion flavour number (\(N_c\)) at leading order (LO) is not large.
In order to appreciate this, let us recall the 2 previous important works
on the subject. The first one is from
Appelquist,
Nash and
Wijewardhana,
"Critical Behavior in \((2+1\))-dimensional QED,"
[
Phys. Rev. Lett. 60 2575 (1988)]
where the authors provide the LO estimate \( N_c = 32/\pi^2 \approx 3.24 \) (in the Landau gauge).
The second one is from Nash,
"Higher-order corrections in \((2+1)\)-dimensional QED,"
[
Phys. Rev. Lett. 62 3024 (1989)]
where the author attempts to estimate NLO corrections to \(N_c\). Nash worked in a non-linear gauge and
performed a resummation leading to the suppression of the gauge dependence of \(N_c\) at LO yielding:
\(N_c = (4/3)(32/\pi^2) \approx 4.32\). His full NLO calculation is however only approximate and has been
carried out in the Feynman gauge with no possible discussion of the gauge dependence of \(N_c\) at NLO.
In the last 27 years there has been no further substantial progress in
understanding NLO corrections in QED\(_3\).
Following up on these papers, our work
[
Phys. Rev. D
94 (2016) no.5, 056009]
partially fills this gap by providing an exact computation of all NLO corrections in the Landau gauge.
Our second work
[
Phys. Rev. D 94 (2016) no.11, 114011]
extends these results in two very non-trivial ways. First, all (exact) calculations are
carried out for an arbitrary non-local gauge.
Second, a Nash-like resummation is performed. We could then confirm the absence of
gauge dependence at LO and we could also explicitly prove the strong suppression of the gauge dependence
of \( N_c \) at NLO. In our third work:
[
arXiv:1902.03790 [hep-th]]
we prove the complete cancellation of the gauge dependence of the critical fermion flavour number, resulting in \(N_c = 2.8469\) at NLO.
This result is in full agreement with that of Gusynin and Pyatkovskiy [
Phys. Rev. D 94 (2016) no.12, 125009] who used a different method.
Thirty years after the seminal work of Nash, these results bring a definite and complete solution to NLO
computations in QED\(_3\). They provide order-by-order fully gauge-invariant methods to compute \(N_c\) and give
increasing support for the stability of the critical point.
They suggest that D\(\chi\)SB takes place for integer values of \(N\) smaller or equal to 2 (\(N \leq 2\)).
The study of dynamical gap generation (excitonic instability) in graphene is a very active field of
research for more than a decade now. The problem is very challenging because
the effect is non-perturbative and therefore beyond the reach of conventional perturbation theory.
Actually, to start with, graphene is a strongly coupled system with bare coupling constant
\(\alpha \approx 2.2\). A priori, this strong value should favour the instability.
It turns out that there is no experimental evidence for a gap. Theoretically, a very important
issue is to compute with high precision the critical coupling constant \(\alpha_c\) which is such that for
\(\alpha \gt \alpha_c\) a dynamical gap is generated. Alternatively, the instability manifests for
\(N \lt N_c\) where the critical fermion flavour number \(N_c\) is such that \(\alpha_c \rightarrow \infty\)
and of importance is also to compute \( N_c \) with high precision (for graphene \( N=2 \)).
Starting from the early work of
Khveshchenko
[
Phys. Rev. Lett.
87 246802 (2001)],
this has been done in a number of theoretical papers over the years (in the non-relativistic limit, \(v/c \rightarrow 0\)). The general agreement is that
\( \alpha_c = O(1) \) but the precise value of \(\alpha_c\) is still subject to some controversy
(some groups finding values smaller than \(2.2\) and others finding values larger than \(2.2\)).
Often, the approximations involved are criticised.
Our work revisits the problem from a completely different angle. We consider the ultra-relativistic limit, \(v/c \rightarrow 1\), which corresponds to graphene at its Lorentz-invariant fixed point. The corresponding effective field theory is called reduced QED\(_{4,3}\). The advantage of considering this limit is that \(\alpha_c\) and \(N_c\) may be derived fully analytically with unprecedented precision (up to NLO). A remarkable feature of our work is a very nice mapping between large-\(N\) QED\(_3\) and reduced QED\(_{4,3}\), which originates from the fact that the photon propagators in both models have the same form. This mapping allowed us to study the critical behaviour of reduced QED\(_{4,3}\) on the basis of our recent exact analysis of D\(\chi\)SB in QED\(_3\) (see the item above). So a first important development brought by our work is that it gives more weight to QED\(_3\) as a physical model by directly relating it to a model describing one of the most challenging condensed matter systems presently under study.
From the technical point of view: all calculations are exact and carried out for an arbitrary gauge fixing parameter; a (Nash-like) resummation of the wave-function renormalization is performed which strongly suppresses the gauge dependence of the critical coupling constant, \(\alpha_c\), and critical fermion number, \(N_c\); an additional RPA resummation was performed which is crucial beyond leading order to get non-trivial results. From these results, we could obtain high precision (gauge-invariant) estimates of \(\alpha_c\) and \(N_c\) which are compatible with the semi-metallic behaviour observed experimentally (at the fixed point \(\alpha \approx 1/137 \ll \alpha_c\)).
A third striking feature of our work is that the value obtained for \(\alpha_c\)
at the fixed point is of \(O(1)\) and therefore in good
quantitative
agreement with the results obtained in the non-relativistic limit,
i.e., away from the fixed point, including very good agreement with lattice simulations.
This suggests that the study of the fixed point is not only of academic interest and
that our model may be an efficient effective field theory model in describing some
of the features of actual planar condensed matter physics systems.
2017: Habilitation (HDR)
2007 - present: Associate Professor (Maître de Conférences) at Sorbonne Université (Université Pierre et Marie Curie), Laboratoire de Physique Théorique et Hautes Energies (LPTHE)
2007: Post-doc at Institut NEEL, Grenoble, France
2004 - 2007: Post-doc at the International Center for Theoretical Physics (ICTP), Trieste, Italy
2002 - 2004: Post-doc at the William I. Fine Theoretical Physics Institute, University of Minnesota, Minneapolis, United States
1999 - 2002: PhD in Theoretical Solid State Physics at Université Paris XI (Orsay), Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS)
1998 - 1999: Masters in Solid State Physics (Doctoral School of Paris)
Sorbonne Université (campus Pierre et Marie Curie)
Laboratoire de Physique Théorique et Hautes Energies (LPTHE)
4 place Jussieu, Tour 13-14, 4ème étage, BP 126, 75252 Paris Cedex 05
Office (bureau): 507, tower 13-14, 5th floor (tour 13-14, 5ème étage)
Tel: +33 (0) 1 44 27 28 52
Fax: +33 (0) 1 44 27 73 93
Email: teber [at] lpthe [dot] jussieu [dot] fr
Access plan to LPTHE on Jussieu Campus: [map]
|
In Chapter 4 we considered the basic mathematical details of a propagation of uncertainty, limiting our treatment to the propagation of measurement error. This treatment is incomplete because it omits other sources of uncertainty that influence the overall uncertainty in our results. Consider, for example, Practice Exercise 4.2, in which we determined the uncertainty in a standard solution of $\ce{Cu^2+}$ prepared by dissolving a known mass of Cu wire with $\ce{HNO3}$, diluting to volume in a 500-mL volumetric flask, and then diluting a 1-mL portion of this stock solution to volume in a 250-mL volumetric flask. To calculate the overall uncertainty we included the uncertainty in the sample's mass and the uncertainty of the volumetric glassware. We did not consider other sources of uncertainty, including the purity of the Cu wire, the effect of temperature on the volumetric glassware, and the repeatability of our measurements. In this appendix we take a more detailed look at the propagation of uncertainty, using the standardization of NaOH as an example.
Standardizing a Solution of NaOH 1
Because solid NaOH is an impure material, we cannot directly prepare a stock solution by weighing a sample of NaOH and diluting to volume. Instead, we determine the solution's concentration through a process called a standardization. 2 A fairly typical procedure is to use the NaOH solution to titrate a carefully weighed sample of previously dried potassium hydrogen phthalate, $\ce{C8H5O4K}$, which we will write here, in shorthand notation, as KHP. For example, after preparing a nominally 0.1 M solution of NaOH, we place an accurately weighed 0.4-g sample of dried KHP in the reaction vessel of an automated titrator and dissolve it in approximately 50 mL of water (the exact amount of water is not important). The automated titrator adds the NaOH to the KHP solution and records the pH as a function of the volume of NaOH. The resulting titration curve provides us with the volume of NaOH needed to reach the titration's end point. 3
The end point of the titration is the volume of NaOH corresponding to a stoichiometric reaction between NaOH and KHP.
\[\ce{NaOH}(aq) + \ce{C8H5O4K}(aq) → \ce{C8H4O4^2-}(aq) + \ce{K+}(aq) + \ce{Na+}(aq) + \ce{H2O}(l)\]
Knowing the mass of KHP and the volume of NaOH needed to reach the endpoint, we use the following equation to calculate the molarity of the NaOH solution.
\[\mathrm{C_{NaOH}}= \dfrac{1000 × m_\ce{KHP} × P_\ce{KHP}}{M_\ce{KHP} × V_\ce{NaOH}}\]
where $C_\textrm{NaOH}$ is the concentration of NaOH (in mol NaOH/L), $m_\textrm{KHP}$ is the mass of KHP taken (in g), $P_\textrm{KHP}$ is the purity of the KHP (where $P_\textrm{KHP} = 1$ means that the KHP is pure and has no impurities), $M_\textrm{KHP}$ is the molar mass of KHP (in g KHP/mol KHP), and $V_\textrm{NaOH}$ is the volume of NaOH (in mL). The factor of 1000 simply converts the volume in mL to L.
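To make the arithmetic concrete, here is a minimal sketch of this formula in Python (not part of the original procedure; the numbers are the ones worked out later in this appendix):

# molarity of NaOH from the standardization equation above
m_KHP = 0.3888        # mass of KHP, in g
P_KHP = 1.0           # purity of KHP (1.0 = no impurities)
M_KHP = 204.2212      # molar mass of KHP, in g/mol
V_NaOH = 18.64        # volume of NaOH at the end point, in mL
C_NaOH = (1000 * m_KHP * P_KHP) / (M_KHP * V_NaOH)  # the 1000 converts mL to L
print(round(C_NaOH, 4))   # -> 0.1021 (M)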
Identifying and Analyzing Sources of Uncertainty
Although it seems straightforward, identifying sources of uncertainty requires care, as it is easy to overlook important sources of uncertainty. One approach is to use a cause-and-effect diagram, also known as an Ishikawa diagram—named for its inventor, Kaoru Ishikawa—or a fish bone diagram. To construct a cause-and-effect diagram, we first draw an arrow pointing to the desired result; this is the diagram's trunk. We then add five main branch lines to the trunk, one for each of the four parameters that determine the concentration of NaOH and one for the method's repeatability. Next we add additional branches to the main branches for each of these five factors, continuing until we account for all potential sources of uncertainty. Figure A2.1 shows the complete cause-and-effect diagram for this analysis.
Figure A2.1 Cause-and-effect diagram for the standardization of NaOH by titration against KHP. The trunk, shown in black, represents the concentration of NaOH. The remaining arrows represent the sources of uncertainty that affect $C_\textrm{NaOH}$. Light blue arrows, for example, represent the primary sources of uncertainty affecting $C_\textrm{NaOH}$, and green arrows represent secondary sources of uncertainty that affect the primary sources of uncertainty. See the text for additional details.
Before we continue, let's take a closer look at Figure A2.1 to be sure we understand each branch of the diagram. To determine the mass of KHP we make two measurements: taring the balance and weighing the gross sample. Each measurement of mass is subject to a calibration uncertainty. When we calibrate a balance, we are essentially creating a calibration curve of the balance's signal as a function of mass. Any calibration curve is subject to a systematic uncertainty in the y-intercept (bias) and an uncertainty in the slope (linearity). We can ignore the calibration bias because it contributes equally to both $m_\textrm{KHP(gross)}$ and $m_\textrm{KHP(tare)}$, and because we determine the mass of KHP by difference.
\[m_\textrm{KHP} = m_\textrm{KHP(gross)} - m_\textrm{KHP(tare)}\]
The volume of NaOH at the end point has three sources of uncertainty. First, an automated titrator uses a piston to deliver the NaOH to the reaction vessel, which means the volume of NaOH is subject to an uncertainty in the piston's calibration. Second, because a solution's volume varies with temperature, there is an additional source of uncertainty due to any fluctuation in the ambient temperature during the analysis. Finally, there is a bias in the titration's end point if the NaOH reacts with any species other than the KHP.
Repeatability, R, is a measure of how consistently we can repeat the analysis. Each instrument we use—the balance and the automatic titrator—contributes to this uncertainty. In addition, our ability to consistently detect the end point also contributes to repeatability. Finally, there are no additional factors that affect the uncertainty of the KHP's purity or molar mass.
Estimating the Standard Deviation for Measurements
To complete a propagation of uncertainty we must express each measurement’s uncertainty in the same way, usually as a standard deviation. Measuring the standard deviation for each measurement requires time and may not be practical. Fortunately, most manufacturers provide a tolerance range for glassware and instruments. A 100-mL volumetric flask, for example, has a tolerance of ±0.1 mL at a temperature of 20 °C. We can convert a tolerance range to a standard deviation using one of the following three approaches.
Assume a Uniform Distribution. Figure A2.2a shows a uniform distribution between the limits of ±x, in which each result between the limits is equally likely. A uniform distribution is the choice when the manufacturer provides a tolerance range without specifying a level of confidence and when there is no reason to believe that results near the center of the range are more likely than results at the ends of the range. For a uniform distribution the estimated standard deviation, s, is
\[s = \dfrac{x}{\sqrt{3}}\]
This is the most conservative estimate of uncertainty as it gives the largest estimate for the standard deviation.
Assume a Triangular Distribution. Figure A2.2b shows a triangular distribution between the limits of ±x, in which the most likely result is at the center of the distribution, decreasing linearly toward each limit. A triangular distribution is the choice when the manufacturer provides a tolerance range without specifying a level of confidence and when there is a good reason to believe that results near the center of the range are more likely than results at the ends of the range. For a triangular distribution the estimated standard deviation, s, is
\[s = \dfrac{x}{\sqrt 6}\]
This is a less conservative estimate of uncertainty as, for any value of x, the standard deviation is smaller than that for a uniform distribution.
Assume a Normal Distribution. Figure A2.2c shows a normal distribution that extends, as it must, beyond the limits of ±x, and which is centered at the mid-point between −x and +x. A normal distribution is the choice when we know the confidence interval for the range. For a normal distribution the estimated standard deviation, s, is
\[s = \dfrac{x}{z}\]
where z is 1.96 for a 95% confidence interval and 3.00 for a 99.7% confidence interval.
Figure A2.2 Three possible distributions for estimating the standard deviation from a range: (a) a uniform distribution; (b) a triangular distribution; and (c) a normal distribution.
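A quick sketch of these three conversions in Python (the example tolerances are the ones used below; nothing here is specific to this titration):

import math

def s_uniform(x):         # tolerance ±x, no stated confidence, flat distribution
    return x / math.sqrt(3)

def s_triangular(x):      # tolerance ±x, centre of the range more likely
    return x / math.sqrt(6)

def s_normal(x, z=1.96):  # ±x quoted at a stated confidence; z = 1.96 for 95%
    return x / z

print(s_uniform(0.15))    # -> 0.0866..., cf. the 0.09 mg used below
print(s_triangular(0.03)) # -> 0.0122..., cf. the 0.012 mL used below
print(s_normal(0.012))    # -> 0.0061..., cf. the 0.006 mL used below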
Completing the Propagation of Uncertainty
Now we are ready to return to our example and determine the uncertainty for the standardization of NaOH. First we establish the uncertainty for each of the five primary sources—the mass of KHP, the volume of NaOH at the end point, the purity of the KHP, the molar mass for KHP, and the titration’s repeatability. Having established these, we can combine them to arrive at the final uncertainty.
Uncertainty in the Mass of KHP. After drying the KHP, we store it in a sealed container to prevent it from readsorbing moisture. To find the mass of KHP we first weigh the container, obtaining a value of 60.5450 g, and then weigh the container after removing a portion of KHP, obtaining a value of 60.1562 g. The mass of KHP, therefore, is 0.3888 g, or 388.8 mg.
To find the uncertainty in this mass we examine the balance’s calibration certificate, which indicates that its tolerance for linearity is ±0.15 mg. We will assume a uniform distribution because there is no reason to believe that any result within this range is more likely than any other result. Our estimate of the uncertainty for any single measurement of mass, $u(m)$, is
\[u(m) = \mathrm{\dfrac{0.15\: mg}{\sqrt 3} = 0.09\: mg}\]
Because we determine the mass of KHP by subtracting the container’s final mass from its initial mass, the uncertainty in the mass of KHP, $u(m_\textrm{KHP})$, is given by the following propagation of uncertainty.
\[u(m_\ce{KHP}) = \mathrm{\sqrt{(0.09\: mg)^2 + (0.09\: mg)^2} = 0.13\: mg}\]
Uncertainty in the Volume of NaOH. After placing the sample of KHP in the automatic titrator’s reaction vessel and dissolving it with water, we complete the titration and find that it takes 18.64 mL of NaOH to reach the end point. To find the uncertainty in this volume we need to consider, as shown in Figure A2.1, three sources of uncertainty: the automatic titrator’s calibration, the ambient temperature, and any bias in determining the end point.
To find the uncertainty resulting from the titrator’s calibration we examine the instrument’s certificate, which indicates a range of ±0.03 mL for a 20-mL piston. Because we expect that an effective manufacturing process is more likely to produce a piston that operates near the center of this range than at the extremes, we will assume a triangular distribution. Our estimate of the uncertainty due to the calibration, $u(V_\textrm{cal})$, is
\[u(V_\ce{cal}) = \mathrm{\dfrac{0.03\: mL}{\sqrt 6} = 0.012\: mL}\]
To determine the uncertainty due to the lack of temperature control, we draw on our prior work in the lab, which has established a temperature variation of ±3 °C with a confidence level of 95%. To find the uncertainty, we convert the temperature range to a range of volumes using water’s coefficient of expansion
\[\mathrm{(2.1×10^{−4}{^\circ C}^{−1}) × (±3^\circ C) × 18.64\: mL = ±0.012\: mL}\]
and then estimate the uncertainty due to temperature, $u(V_\textrm{temp})$, as
\[u(V_\ce{temp}) = \mathrm{\dfrac{0.012\: mL}{1.96} = 0.006\: mL}\]
Titrations using NaOH are subject to a bias due to the adsorption of $\ce{CO2}$, which can react with $\ce{OH-}$, as shown here.
\[\ce{CO2}(aq) + \ce{2OH-}(aq) → \ce{CO3^2-}(aq) + \ce{H2O}(l)\]
If $\ce{CO2}$ is present, the volume of NaOH at the end point includes both the NaOH reacting with the KHP and the NaOH reacting with $\ce{CO2}$. Rather than trying to estimate this bias, it is easier to bathe the reaction vessel in a stream of argon, which excludes $\ce{CO2}$ from the titrator’s reaction vessel.
Combining, in quadrature, the uncertainties for the piston’s calibration and the lab’s temperature fluctuation gives the uncertainty in the volume of NaOH, $u(V_\textrm{NaOH})$, as
\[u(V_\ce{NaOH}) = \mathrm{\sqrt{(0.012\: mL)^2 + (0.006\: mL)^2} = 0.013\: mL}\]
Uncertainty in the Purity of KHP. According to the manufacturer, the purity of KHP is 100% ± 0.05%, or 1.0 ± 0.0005. Assuming a rectangular (uniform) distribution, we report the uncertainty, $u(P_\textrm{KHP})$, as
\[u(P_\ce{KHP}) = \dfrac{0.0005}{\sqrt 3} = 0.00029\]
Uncertainty in the Molar Mass of KHP. The molar mass of $\ce{C8H5O4K}$ is 204.2212 g/mol, based on the following atomic weights: 12.0107 for carbon, 1.00794 for hydrogen, 15.9994 for oxygen, and 39.0983 for potassium. Each of these atomic weights has a quoted uncertainty that we can convert to a standard uncertainty assuming a rectangular distribution, as shown here (the details of the calculations are left to you).
element | quoted uncertainty | standard uncertainty
carbon | ±0.0008 | ±0.00046
hydrogen | ±0.00007 | ±0.000040
oxygen | ±0.0003 | ±0.00017
potassium | ±0.0001 | ±0.000058
Combining these standard uncertainties in quadrature, weighted by the number of atoms of each element, gives the uncertainty in the molar mass, $u(M_\textrm{KHP})$, as
\[u(M_\ce{KHP}) = \mathrm{\sqrt{(8 × 0.00046)^2 + (5 × 0.000040)^2 + (4 × 0.00017)^2 + (1 × 0.000058)^2} = 0.0038\: g/mol}\]
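A short Python sketch of this calculation (the quoted atomic-weight tolerances are converted with the rectangular-distribution rule and weighted by the atom counts in C8H5O4K):

import math

# element: (atoms in C8H5O4K, quoted tolerance of the atomic weight)
atoms = {'C': (8, 0.0008), 'H': (5, 0.00007), 'O': (4, 0.0003), 'K': (1, 0.0001)}
u_M = math.sqrt(sum((n * tol / math.sqrt(3))**2 for n, tol in atoms.values()))
print(round(u_M, 4))   # -> 0.0038 (g/mol)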
Uncertainty in the Titration’s Repeatability. To estimate the uncertainty due to repeatability we complete five titrations, obtaining results for the concentration of NaOH of 0.1021 M, 0.1022 M, 0.1022 M, 0.1021 M, and 0.1021 M. The relative standard deviation, $s_\textrm{r}$, for these titrations is
\[s_\ce{r} = \dfrac{5.477×10^{-5}}{0.1021} = 0.0005\]
If we treat the ideal repeatability as 1.0, then the uncertainty due to repeatability, u(R), is equal to the relative standard deviation, or, in this case, 0.0005. Table A2.1 summarizes the five primary sources of uncertainty.
Combining the Uncertainties. As described earlier, we calculate the concentration of NaOH using the following equation, which is slightly modified to include a term for the titration’s repeatability, which, as described above, has a value of 1.0.
Table A2.1
source | value, x | uncertainty, u(x)
$m_\textrm{KHP}$, mass of KHP | 0.3888 g | 0.00013 g
$V_\textrm{NaOH}$, volume of NaOH at end point | 18.64 mL | 0.013 mL
$P_\textrm{KHP}$, purity of KHP | 1.0 | 0.00029
$M_\textrm{KHP}$, molar mass of KHP | 204.2212 g/mol | 0.0038 g/mol
R, repeatability | 1.0 | 0.0005
\[\mathrm{C_{NaOH}} = \dfrac{1000 × m_\ce{KHP}× P_\ce{KHP}}{M_\ce{KHP}× V_\ce{NaOH}} × R\]
Using the values from Table A2.1, we find that the concentration of NaOH is
\[C_\ce{NaOH} = \dfrac{1000 × 0.3888 × 1.0}{204.2212 × 18.64} × 1.0 = \mathrm{0.1021\: M}\]
Because the calculation of $C_\textrm{NaOH}$ includes only multiplication and division, the uncertainty in the concentration, $u(C_\textrm{NaOH})$, is given by the following propagation of uncertainty.
\[\dfrac{u(C_\ce{NaOH})}{C_\ce{NaOH}}= \dfrac{u(C_\ce{NaOH})}{0.1021\: \ce M} = \sqrt{\dfrac{(0.00013)^2}{(0.3888)^2} + \dfrac{(0.00029)^2}{(1.0)^2} + \dfrac{(0.0038)^2}{(204.2212)^2} + \dfrac{(0.013)^2}{(18.64)^2} + \dfrac{(0.0005)^2}{(1.0)^2}}\]
Solving for $u(C_\textrm{NaOH})$ gives its value as ±0.00010 M, which is the final uncertainty for the analysis.
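The whole propagation can be reproduced with a few lines of Python (a sketch using the values from Table A2.1; relative uncertainties are combined in quadrature because the model involves only multiplication and division):

import math

# (value, standard uncertainty) for each source in Table A2.1
sources = {'m_KHP': (0.3888, 0.00013), 'P_KHP': (1.0, 0.00029),
           'M_KHP': (204.2212, 0.0038), 'V_NaOH': (18.64, 0.013),
           'R': (1.0, 0.0005)}

C_NaOH = 1000 * 0.3888 * 1.0 / (204.2212 * 18.64) * 1.0
rel = math.sqrt(sum((u / x)**2 for x, u in sources.values()))
print(round(C_NaOH, 4), round(C_NaOH * rel, 5))   # -> 0.1021 M, 0.0001 M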
Evaluating the Sources of Uncertainty
Figure A2.3 shows the relative uncertainty in the concentration of NaOH and the relative uncertainties for each of the five contributions to the total uncertainty. Of these contributions, the most important is the volume of NaOH, and it is here that we should focus our attention if we wish to improve the overall uncertainty for the standardization.
Figure A2.3 Bar graph showing the relative uncertainty in $C_\textrm{NaOH}$, and the relative uncertainty in each of the main factors affecting the overall uncertainty.
References
1. This example is adapted from Ellison, S. L. R.; Rosslein, M.; Williams, A. EURACHEM/CITAC Guide: Quantifying Uncertainty in Analytical Measurement, 2nd Edition, 2000 (available at http://www.measurementuncertainty.org/).
2. See Chapter 5 for further details about standardizations.
3. For further details about titrations, see Chapter 9.
|
This question arose from the recent one, roots of a polynomial linked to mock theta function?. Let $$ g(x):=\sum_{k=0}^\infty x^k\prod_{j=1}^{k-1}(1 + x^j)^2\\=1+x+x^2+3 x^3+4 x^4+6 x^5+10 x^6+15 x^7+21 x^8+30 x^9+43 x^{10}+59 x^{11}+...; $$ the sequence $1,1,1,3,4,6,10,15,21,30,43,59,...$ with the generating function $g(x)$ is A059618 on OEIS, it is the sequence of numbers of strongly unimodal partitions.
Now let $$ f(q):=g(q)\prod_{n=1}^\infty(1-q^n), $$ and let $a_k$ be the $k$th coefficient in the Maclaurin series for $f$, $$ f(x)=\sum_{k=0}^\infty a_kx^k\\=1-x^2+x^3+x^6+x^7-x^9+x^{10}-x^{14}+x^{18}-x^{20}+x^{21}+x^{25}+x^{26}-x^{27}\\+x^{28}-x^{30}+x^{33}-x^{35}+x^{36}-x^{39}-x^{40}+x^{42}-x^{44}+2x^{45}-x^{49}+x^{52}-x^{54}\\+x^{55}+x^{56}+x^{57}-x^{60}-x^{65}+... $$ The sequence of $a_k$, starting with
1,0,-1,1,0,0,1,1,0,-1,1,0,0,0,-1,0,0,0,1,0,-1,1,0,0,0,1,1,-1,1,0,-1,0,0,1,0,-1,1,0,0,-1,-1,0,1,0,-1,2,0,0,...
is not on OEIS. Among the first 1000 terms of the sequence, there are 609 zeroes, 182 ones and 161 −1s; 19 of them are 2 ($a_{45},a_{150},a_{210},a_{221},a_{273},a_{300},...$), 22 are −2 ($a_{77},a_{90},a_{165},a_{225},...$), and two of them ($a_{525}$ and $a_{825}$) are 3. It seems that $a_k$ is zero for $k=2^j$ ($j>0$); for $k=p$ or $k=2p$, with $p$ prime $>7$; for $k=3p$ and $k=4p$ with $p$ prime $\geqslant23$; for $k=5p$ with $p$ prime $>31$; $6p$ for $p>37$; $7p$ and $8p$ for $p>43$; $9p$ for $p>47$; $10p$ for $p>61$; $11p$ for $p>67$;...
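For anyone who wants to check or extend these observations, here is a short Python sketch (assuming SymPy is available) that reproduces the quoted coefficients by truncating both $q$-products at degree $N$:

from functools import reduce
from operator import mul
from sympy import symbols, Poly

N = 47
x = symbols('x')
# g(x) = sum_k x^k * prod_{j=1}^{k-1} (1 + x^j)^2, truncated at degree N
g = sum(x**k * reduce(mul, ((1 + x**j)**2 for j in range(1, k)), 1)
        for k in range(N + 1))
eta = reduce(mul, (1 - x**n for n in range(1, N + 1)), 1)  # truncated Euler product
f = Poly((g * eta).expand(), x)
print([f.coeff_monomial(x**k) for k in range(N + 1)])
# -> [1, 0, -1, 1, 0, 0, 1, 1, 0, -1, 1, ...]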
What may (or may not) be relevant is another sequence obtained by introducing a new variable in the way I learned from a paper by Rhoades linked to from the above OEIS page for $g$.
Let $$ g_t(q):=\sum_{k=0}^\infty q^k\prod_{j=1}^{k-1}(1 + q^jt)(1+q^j/t), $$ and let $$ f_t(q)=g_t(q)\prod_{n=1}^\infty(1-q^n), $$ so that $g_1(q)=g(q)$ and $f_1(q)=f(q)$. Then $$ f_t(q)=1-q^2+\frac{1+t^3}{(1+t)t}q^3+\frac{1+t^5}{(1+t)t^2}q^6+q^7-\frac{1+t^3}{(1+t)t}q^9+\frac{1+t^7}{(1+t)t^3}q^{10}+...; $$ most coefficients have form $\pm\frac{1+t^{2j+1}}{(1+t)t^j}$, except that I cannot figure out how $j$ depends on the number of the coefficient. Exceptions here start from the $15$th coefficient, which is $\frac{1+t^9}{(1+t)t^4}-1$ and the $45$th one which is $\frac{1+t^{17}}{(1+t)t^8}+\frac{1+t^3}{(1+t)t}$.
Despite all these clues, to my shame I've given up searching for an explicit formula for $a_k$. Is there one? I am pretty sure there is, but what is it?
|
I want to determine the credible interval of a quantity $\theta_1$. I want to make this estimate using observed data by assuming a certain model which depends on $\theta_1$ as well as about n=15 nuisance parameters $\theta_2, \ldots, \theta_n$.
I have a likelihood function $\mathcal L(x|\boldsymbol\theta)$ where $x$ are my observations which I can calculate for any $\boldsymbol \theta$.
I also have priors $\pi(\boldsymbol \theta)$ which I can similarly calculate for any $\boldsymbol \theta$.
My plan is as follows.
Draw a large number of samples of $\boldsymbol \theta$ from a uniform random distribution. (Monte Carlo step)
Calculate the posterior $p = \mathcal{L}(x|\theta)\cdot\pi(\theta)$ for each sample. (Bayes step)
Calculate the weighted mean and standard deviation of $\theta_1$, where the weights are $p$. (Integration step)
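A minimal sketch of these three steps (the likelihood and prior here are toy stand-ins for the expensive black boxes described above):

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_params = 100_000, 16   # theta_1 plus 15 nuisance parameters
lo, hi = -5.0, 5.0

theta = rng.uniform(lo, hi, size=(n_samples, n_params))    # Monte Carlo step

def log_likelihood(t):              # toy stand-in for L(x|theta)
    return -0.5 * np.sum((t - 1.0)**2, axis=1)

def log_prior(t):                   # toy stand-in for pi(theta)
    return np.zeros(len(t))

logp = log_likelihood(theta) + log_prior(theta)            # Bayes step
w = np.exp(logp - logp.max())       # unnormalised posterior weights

mean = np.average(theta[:, 0], weights=w)                  # Integration step
std = np.sqrt(np.average((theta[:, 0] - mean)**2, weights=w))
ess = w.sum()**2 / (w**2).sum()     # effective sample size: a rough answer to
print(mean, std, ess)               # "how many samples do I need?"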
Does this make sense? If not, can someone please point me in the right direction? If so, is there a way to estimate how many samples I will need?
(In practice I think I will actually use quasi-Monte Carlo sampling by making use of Sobol numbers, but I think this detail is not important for the question.)
I know how to solve this problem using MCMC, but my likelihood function is very expensive to calculate and so I prefer a simpler way to make this estimate.
|
I am trying to compile a LaTeX document that is as compact as possible, because I am allowed to bring a one-page sheet into an exam I am writing (this may sound kind of funny, but I think it is applicable in many cases), so I really want to make my document as compact as possible. I don't really care if it is in a microscopic font as long as I can read it up close. Currently I have the following,
\documentclass{article}
\usepackage[english]{babel}
\usepackage[T1]{fontenc}
\usepackage[latin9]{inputenc}
\usepackage[margin=0in,top=0in,bottom=0in]{geometry}
\usepackage{amsmath}
\begin{document}
\fontsize{5pt}{6pt}\selectfont % \fontsize takes a size and a baseline skip, and needs \selectfont
Some theorem will go here. Some math equation $\text{taylor series} = \sum_{k=0}^{n}\frac{f^{(k)}(a)(x-a)^{k}}{k!}$ and some more theorems. But I think the general idea is conveyed that I want this to be as compact as possible.
\end{document}
Does anyone have any more suggestions for this ? I would greatly appreciate it if I could make it even more compact as I am trying to fit a lot of stuff on that 1 page.
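For reference, a commonly tried skeleton (a sketch only; it assumes the standard multicol package, and the 4-column count and 5pt size are arbitrary choices, not part of the original question):

\documentclass{article}
\usepackage[margin=0.2in]{geometry} % tiny but nonzero margins tend to print more reliably
\usepackage{amsmath}
\usepackage{multicol}
\setlength{\parindent}{0pt}
\begin{document}
\fontsize{5pt}{6pt}\selectfont
\begin{multicols}{4}
Theorem 1 goes here. Taylor series: $\sum_{k=0}^{n}\frac{f^{(k)}(a)(x-a)^{k}}{k!}$, and so on.
\end{multicols}
\end{document}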
Thanks
|
The answer is no, and a counterexample is the following plateau potential:
$V(x) = x^2 \ \ \ \ \; \mathrm{for}\ \ \ \ x\ge -A$
$V(x) = A^2 \ \ \ \ \mathrm{for}\ \ \ \ -A-k \le x < -A$
$V(x) = \infty\ \ \; \ \ \mathrm{for}\ \ \ \ x <-A-k$
A is imagined to be a huge constant, and k is a large constant, but not anywhere near as huge as A. The potential has a plateau between −A−k and −A, but is continuous and increasing on either side of the origin. Its loss of uncertainty happens when the energy reaches the plateau value of $A^2$, and it happens semiclassically, so it happens for large quantum numbers.
Semiclassically, in the Bohr-Sommerfeld (WKB) approximation, the particle has the same eigenfunctions as the harmonic oscillator, until the energy equals $A^2$. At this point, the next eigenfunction oscillates around the minimum, then crawls at a very very slow speed along the plateau, reflects off the wall, and comes back very very slowly to the oscillator.
The time spent on the plateau is much longer than the time spent oscillating (for appropriate choice of A and k) because the classical velocity on the plateau is so close to zero. This means that the position and momentum uncertainty is dominated by the uncertainty on the plateau, and the value of the position uncertainty is much less than the uncertainty for the oscillation if k is much smaller than A, and the value of the momentum uncertainty is nearly zero, because the momentum on the plateau is next to zero.
WKB expectation values are classical orbit averages
This argument uses the WKB expression for the expectation values of functions of x, which follows from the WKB wavefunction
$$\psi(x) = {1\over \sqrt{2T}} {1\over \sqrt{v}} e^{i\int^x p\, dx},$$
where v(x) is the classical velocity a particle would have at position x, and T is just a constant, a perverse way to parametrize the normalization constant of the WKB wavefunction. The expected value of any function of the X operator is equal to
$$\langle f(x)\rangle = \int |\psi(x)|^2 f(x)\, dx = {1\over 2 T} \int {1\over v(x)} f(x)\, dx = {1\over T}\oint f(x(t))\, dt$$
where the last integral is taken around the full classical orbit. The last expression obviously works for functions of P (it works for any operator, using the corresponding classical function on phase space). So the expectation value is just the average value of the quantity along the orbit; the factor of 2 disappears because you go over every x value twice along the orbit, and the strangely named normalization factor "T" is revealed to be the period of the classical orbit, because the average value of the unit operator is 1.
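As a sanity check of this orbit-average formula, here is a small Python sketch (with units chosen so that v(x) = sqrt(E − V(x)), an assumption not made explicit above) verifying the virial result <x²> = E/2 for V(x) = x²:

import numpy as np

E = 1.0
a = np.sqrt(E)                        # classical turning point
# substitute x = a sin(theta): then dt = dx / v = d(theta), so time
# averages over half an orbit become plain averages over theta
theta = np.linspace(-np.pi / 2, np.pi / 2, 200_001)
x = a * np.sin(theta)
print(np.trapz(x**2, theta) / np.pi)  # -> 0.5 = E/2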
|
Definition:Basis (Topology)
Definition
Let $\left({S, \tau}\right)$ be a topological space.
An analytic basis for $\tau$ is a subset $\mathcal B \subseteq \tau$ such that: $\displaystyle \forall U \in \tau: \exists \mathcal A \subseteq \mathcal B: U = \bigcup \mathcal A$ That is, such that for all $U \in \tau$, $U$ is a union of sets from $\mathcal B$.
Let $S$ be a set.
A synthetic basis on $S$ is a subset $\mathcal B \subseteq \mathcal P \left({S}\right)$ of the power set of $S$ such that:
\((B1)\) $:$ $\mathcal B$ is a cover for $S$
\((B2)\) $:$ \(\displaystyle \forall U, V \in \mathcal B:\) $\exists \mathcal A \subseteq \mathcal B: U \cap V = \bigcup \mathcal A$
An equivalent formulation of condition $(B2)$ is:
$\forall U, V \in \mathcal B: \forall x \in U \cap V: \exists W \in \mathcal B: x \in W \subseteq U \cap V$
Also known as
A basis can also be seen referred to as a base.
The plural of basis is bases.
This is properly pronounced bay-seez, not bay-siz.
Also see
Results about bases can be found here.
|
Theorem (4.32) of "Lectures on Modules and Rings" by T. Y. Lam says that a module $P_R$ is flat iff any $R$-homomorphism $\lambda:M\to P$, where $M$ is any finitely presented $R$-module, can be factored through a finitely generated free module: there exist $\nu:M\to R^m$ and $\mu:R^m\to P$ (for some finite $m$) with $\lambda=\mu\circ\nu$. My question concerns the "if" part, if one wants to use Theorem (4.24)(3). I could not see how to obtain a finitely presented module $M$ and a $\lambda$ from the hypothesis of Theorem (4.24)(3). Thanks for any help!
Assume you have relations $ \sum_j a_j r_{j,l} = 0 $ with $a_j \in P$, $r_{j,l} \in R$, $1 \le j \le n$, and $1 \le l \le p$, as in Theorem 4.24(3).
Let $e_j$ denote the $j$-th standard unit vector of $R^n$. Now let $K$ be the submodule of $R^n$ generated by $\{ \sum_{j} e_j r_{j,l} \mid 1 \le l \le p\}$. Set $M=R^n/K$ and let $\lambda \colon M \to P$ be the homomorphism induced from the homomorphism $R^n \to P$, $e_j \mapsto a_j$. ($K$ is indeed in the kernel of this homomorphism due to the given relations.)
Can you take it from there?
|
Search
Now showing items 1-10 of 182
Highlights of experimental results from ALICE
(Elsevier, 2017-11)
Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
|
I'm following this document on how to prove strong normalization for the simply typed lambda calculus. I understand how this proof works. The if-case of the proof is left to the reader as an exercise. I tried to solve this exercise, but I'm quite confused about how to continue.
This is what I've found by now:
Case:
$$\cfrac{ \Gamma\vdash e:\mathtt{bool}\qquad \Gamma\vdash e_1:\tau\qquad\Gamma\vdash e_2:\tau} {\Gamma\vdash \mathtt{if}\ e\ \mathtt{then}\ e_1\ \mathtt{else}\ e_2:\tau}$$
We know that $\gamma \vDash \Gamma$. Now we have to prove that $SN_\tau(\gamma(\mathtt{if}\ e\ \mathtt{then}\ e_1\ \mathtt{else}\ e_2))$ holds. By the definition of substitution we know that this is equivalent to proving that $SN_\tau(\mathtt{if}\ \gamma(e)\ \mathtt{then}\ \gamma(e_1)\ \mathtt{else}\ \gamma(e_2))$ holds.
The induction hypothesis tells us that $SN_{bool}(\gamma(e))$, $SN_\tau(\gamma(e_1))$ and $SN_\tau(\gamma(e_2))$ hold.
But how can I continue from here?
PS: These are some of the definitions I've used.
$SN_\tau$ is a logical predicate. It is defined as follows:
$SN_{bool}(e) \equiv \bullet \vdash e : bool \space \land \space e \Downarrow$
$SN_{\tau_1\to\tau_2}(e) \equiv \bullet \vdash e : \tau_1\to\tau_2 \space \land \space e \Downarrow \space \land \space (\forall e')(SN_{\tau_1}(e') \implies SN_{\tau_2}(e \space e')) $
$e \Downarrow \space \equiv (\exists v)(e \space \Downarrow \space v)$
$e \Downarrow v \equiv e \mapsto^\star v$
|
Now, since the sum $$ \sum_{n=0}^\infty \frac{x^n}{n!},\quad x\in\Bbb R, $$ does have some relatively nice properties, is the same true for its analogous integral? If we take the gamma function to be a generalisation of the factorial with $\Gamma(n+1) = n!$, an obvious analogous integral formula would be $$ \int_0^\infty \frac{x^t}{\Gamma(t+1)}\,\text dt,\quad x\ge 0. $$ Does this integral have any similar, nice properties?
The integral can be rephrased, using the transformation $x=e^s$ in the same spirit as the general analogy of Taylor series to Laplace transforms, as$$\int_0^\infty \frac{x^t \,\mathrm dt}{\Gamma(t+1)}=\int_0^\infty \frac{e^{st}\,\mathrm dt}{\Gamma(t+1)}.$$This integral is known as the nu function, denoted $\nu(x)=\nu(e^s)$. Wikipedia and MathWorld have the same definition, but don't offer much detail. For more information, the best place to go is probably Erdelyi et al., Higher Transcendental Functions vol. 3, p. 217, §18.3 (where chapter 18 is just 'miscellaneous functions'). Quoting from there:
[The function $\nu(x)$] was encountered by Volterra in his theory of convolution-logarithms (Volterra 1916 Chapter VI, Volterra and Péres 1924, Chapter X) [...]. These functions also occur in connection with operational calculus, appear in an inversion formula of the Laplace transformation, and are of interest in connection with certain integral equations.
Erdelyi et al. show that $$ \nu(x)=\begin{cases} e^x+O(|x|^{-N}) & |\arg(x)|\leq\pi/2 \\ O(|x|^{-N}) & \pi/2<|\arg(x)|\leq\pi \end{cases} $$ for any integer $N$, and that apart from being a simple(ish) Laplace transform itself, $\nu(x)$ has a simple Laplace transform, $$ \int_0^\infty e^{-st}\nu(t)\mathrm dt=\frac{1}{s\log(s)} \quad\operatorname{Re}(s)>1. $$ Both of these properties are (apparently) relevant for the use of $\nu(x)$ in operational calculus; Erdelyi et al. give references from the 1940s but presumably the field has moved on since then.
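These claims are easy to check numerically; here is a sketch (assuming SciPy) that evaluates $\nu(x)$ by direct quadrature and compares it with $e^x$ for positive $x$:

import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def nu(x):
    # integrand x^t / Gamma(t+1), written via logs for numerical stability
    f = lambda t: np.exp(t * np.log(x) - gammaln(t + 1.0))
    val, _ = quad(f, 0.0, np.inf)
    return val

for x in [1.0, 2.0, 5.0]:
    print(x, nu(x), np.exp(x))   # nu(x) ≈ e^x for positive x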
Regarding the relation to integral equations, the nu function obeys the rather pleasing equation $$ \int_0^\infty \exp\left(-\frac{x^2}{4y}\right)\nu(x)\mathrm dx =\frac{\sqrt{\pi y}}{2}\nu(y), $$ i.e. it is an eigenfunction of the integral kernel $y^{-1/2}\exp\left(-\frac{x^2}{4y}\right)$, solving an integral equation which "gives all characteristic functions which, in a certain sense, are of regular growth", though it is apparently a solution to other interesting equations of similar form.
For further details, see the references in Erdelyi et al.
|
Definition:Strict Lower Closure/Set
Definition
Let $\left({S, \preccurlyeq}\right)$ be an ordered set.
Let $T \subseteq S$.
The strict lower closure of $T$ (in $S$) is defined as: $T^\prec := \displaystyle \bigcup \left\{{t^\prec: t \in T}\right\}$
where $t^\prec$ denotes the strict lower closure of $t$ in $S$.
That is:
$T^\prec := \left\{{u \in S: \exists t \in T: u \prec t}\right\}$
Compare the following closure operators on elements and subsets of $S$:
$a^\preccurlyeq := \left\{{b \in S: b \preccurlyeq a}\right\}$: the lower closure of $a \in S$: everything in $S$ that precedes $a$
$a^\succcurlyeq := \left\{{b \in S: a \preccurlyeq b}\right\}$: the upper closure of $a \in S$: everything in $S$ that succeeds $a$
$a^\prec := \left\{{b \in S: b \preccurlyeq a \land a \ne b}\right\}$: the strict lower closure of $a \in S$: everything in $S$ that strictly precedes $a$
$a^\succ := \left\{{b \in S: a \preccurlyeq b \land a \ne b}\right\}$: the strict upper closure of $a \in S$: everything in $S$ that strictly succeeds $a$
$\displaystyle T^\preccurlyeq := \bigcup \left\{{t^\preccurlyeq: t \in T}\right\}$: the lower closure of $T \subseteq S$: everything in $S$ that precedes some element of $T$
$\displaystyle T^\succcurlyeq := \bigcup \left\{{t^\succcurlyeq: t \in T}\right\}$: the upper closure of $T \subseteq S$: everything in $S$ that succeeds some element of $T$
$\displaystyle T^\prec := \bigcup \left\{{t^\prec: t \in T}\right\}$: the strict lower closure of $T \subseteq S$: everything in $S$ that strictly precedes some element of $T$
$\displaystyle T^\succ := \bigcup \left\{{t^\succ: t \in T}\right\}$: the strict upper closure of $T \subseteq S$: everything in $S$ that strictly succeeds some element of $T$
The astute reader may point out that, for example, $a^\preccurlyeq$ is ambiguous as to whether it means:
The lower closure of $a$ with respect to $\preccurlyeq$
The upper closure of $a$ with respect to the dual ordering $\succcurlyeq$
By Lower Closure is Dual to Upper Closure and Strict Lower Closure is Dual to Strict Upper Closure, the two are seen to be equal.
Also denoted as
Other notations for closure operators include:
${\downarrow} a, {\bar \downarrow} a$ for the lower closure of $a \in S$
${\uparrow} a, {\bar \uparrow} a$ for the upper closure of $a \in S$
${\downarrow} a, {\dot \downarrow} a$ for the strict lower closure of $a \in S$
${\uparrow} a, {\dot \uparrow} a$ for the strict upper closure of $a \in S$
However, as there is considerable inconsistency in the literature as to exactly which of these arrow notations is being used at any one time, its use is not endorsed on $\mathsf{Pr} \infty \mathsf{fWiki}$.
Also see
|
2012, Lettre, ISBN 3837620778, 360
Book
2013, Erste Auflage 2013., ISBN 386525344X, 388
Book
Volume Bd. 13
Conference Proceeding
The European Physical Journal C, ISSN 1434-6044, 12/2018, Volume 78, Issue 12, pp. 1 - 12
A search is presented for a Higgs-like boson with mass in the range 45 to 195$$\,{{\mathrm {GeV/}}c^2}$$ GeV/c2 decaying into a muon and a tau lepton. The...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PHYSICS, PARTICLES & FIELDS | Confidence intervals | Luminosity | Large Hadron Collider | Leptons | Decay | Bosons | Physics - High Energy Physics - Experiment | Regular - Experimental Physics
Journal Article
BMC Medical Imaging, ISSN 1471-2342, 06/2015, Volume 15, Issue 1, pp. 18 - 18
Background: The diagnosis of hip pain after total hip replacement (THR) represents a highly challenging question that is of increasing concern to orthopedic...
Loosening | Hip pain | SPECT/CT | Total hip arthroplasty | THR | REPLACEMENT | DIAGNOSIS | SCINTIGRAPHIC ASSESSMENT | COMPONENTS | BONE | PROSTHESIS | RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Arthralgia - etiology | Reproducibility of Results | Tomography, Emission-Computed, Single-Photon - methods | Bone Cements | Humans | Tomography, X-Ray Computed - methods | Male | Treatment Outcome | Arthralgia - diagnosis | Arthroplasty, Replacement, Hip - adverse effects | Hip Joint - diagnostic imaging | Sensitivity and Specificity | Multimodal Imaging - methods | Female | Aged | Index Medicus
Journal Article
04/2018
JHEP 07(2018)134 The production cross-sections of $\Upsilon(1S)$, $\Upsilon(2S)$ and $\Upsilon(3S)$ mesons in proton-proton collisions at $\sqrt{s}$= 13 TeV...
Physics - High Energy Physics - Experiment
Journal Article
CrossCurrents, ISSN 0011-1953, 9/2009, Volume 59, Issue 3, pp. 268 - 282
Religious life had "a desperate, almost frenzied quality" in the later middle ages because present and eternal health, harmony, and happiness were so...
Tourism | Travel | Motivation | Pragmatism | Pilgrimages | Poetry | Saliency | Pluralist school | Christianity | Clams | THE EXPERIENCES OF PILGRIMAGE | strangeness | Lingis, Alphonso | pluralism | Oliver, Mary | pilgrimage | 1900-1999 | James, William | American literature | poetry | Pilgrims and pilgrimages | Spatial behavior | Analysis | Religion | Universe | Divestiture
Journal Article
Economics Letters, ISSN 0165-1765, 2010, Volume 108, Issue 2, pp. 137 - 140
We investigate whether the costs of job displacement differ between blue and white collar workers. In the short-run earnings and employment losses are...
Matching | Firm specific human capital | Plant closures | ECONOMICS | Firm specific human capital Plant closures Matching
Journal Article
European Physical Journal C, ISSN 1434-6044, 04/2014, Volume 74, Issue 4
Journal Article
10. Study of J/ψ production and cold nuclear matter effects in pPb collisions at $\sqrt{s_{NN}}$ = 5 TeV
ISSN 1029-8479, 04/2014, Volume 1402, Issue 1029-8479, pp. 072 - 072
The production of J/ψ mesons with rapidity 1.5 < y < 4.0 or −5.0 < y < −2.5 and transverse momentum pT < 14 GeV/c is studied with the LHCb detector in...
Particle Physics - Experiment | Nuclear Theory | Experiment | LHCb - Abteilung Hofmann | info:eu-repo/classification/arxiv/Nuclear Theory | Relativistic heavy ion physics | High Energy Physics - Experiment | Physics | Quarkonium | Science & Technology | Phenomenology | Physical Sciences | High Energy Physics | Particle and resonance production | info:eu-repo/classification/arxiv/High Energy Physics::Phenomenology | Nuclear Experiment | Heavy quark production | Heavy Ions | info:eu-repo/classification/arxiv/Nuclear Experiment | info:eu-repo/classification/arxiv/High Energy Physics::Experiment
Journal Article
11. Measurement of the relative rate of prompt $\chi_{c0}$, $\chi_{c1}$ and $\chi_{c2}$ production at $\sqrt{s}=7$ TeV
Journal of High Energy Physics, ISSN 1126-6708, 07/2013, Volume 10, p. 115
Journal Article
ChemMedChem, ISSN 1860-7179, 06/2009, Volume 4, Issue 6, pp. 951 - 956
X-ray crystal structures | Receptors | Drug design | PPAR | Aryl propionic acids | CHEMISTRY, MEDICINAL | receptors | RECOGNITION | drug design | PROLIFERATOR-ACTIVATED RECEPTORS | ALPHA | DELTA AGONIST | METABOLISM | GLUCOSE | PHARMACOLOGY & PHARMACY | COMPLICATIONS | ABSORPTION | THIAZOLIDINEDIONES | SELECTIVITY | aryl propionic acids
Journal Article
Book
14. Population dynamics in wild boar Sus scrofa: ecology, elasticity of growth rate and implications for the management of pulsed resource consumers
Journal of Applied Ecology, ISSN 0021-8901, 12/2005, Volume 42, Issue 6, pp. 1203 - 1213
Journal Article
15. Population Dynamics in Wild Boar Sus scrofa: Ecology, Elasticity of Growth Rate and Implications for the Management of Pulsed Resource Consumers
Journal of Applied Ecology, ISSN 0021-8901, 12/2005, Volume 42, Issue 6, pp. 1203 - 1213
1. In terrestrial ecosystems many species show large population fluctuations caused by pulsed resources, such as mast seeding. A prime example of a mammal...
Population ecology | Harvesting and Management | Population growth | Sustainable agriculture | Wild boars | Age structure | Population dynamics | Survival rates | Applied ecology | Seeding | Population growth rate | stochastic λ | Leslie matrix | continuum | tree masting | survival | pest control | fecundity | Tree masting | r-K continuum | Stochastic λ | Pest control | Fecundity | Survival | stochastic lambda | EVOLUTIONARY | CLASSICAL SWINE-FEVER | RANGELANDS | BIALOWIEZA PRIMEVAL FOREST | RELIABILITY | SEXUAL-ACTIVITY | FAT DORMOUSE | FERAL PIGS | ECOLOGY | ET-AL | REPRODUCTION | Hogs | Models | Animal populations | Terrestrial ecosystems | Environmental conditions
Journal Article
Journal of High Energy Physics, ISSN 1029-8479, 7/2018, Volume 2018, Issue 7, pp. 1 - 27
The production cross-sections of ϒ(1S), ϒ(2S) and ϒ(3S) mesons in proton-proton collisions at $\sqrt{s}=13$ TeV are measured with a data sample...
QCD | Hadron-Hadron scattering (experiments) | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Quarkonium | Elementary Particles, Quantum Field Theory | High Energy Physics | Nuclear Experiment | Experiment
Journal Article
17. Determination of internal volume and volume distribution of lipid vesicles from dynamic light scattering data
BBA - Biomembranes, ISSN 0005-2736, 1989, Volume 985, Issue 1, pp. 1 - 8
Journal Article
18. Study of ψ(2S) production and cold nuclear matter effects in pPb collisions at $\sqrt{s_{NN}}=5$ TeV
Journal of High Energy Physics, ISSN 1029-8479, 3/2016, Volume 2016, Issue 3, pp. 1 - 21
The production of ψ(2S) mesons is studied in dimuon final states using proton-lead (pPb) collision data collected by the LHCb detector. The data sample...
Particle and resonance production | Relativistic heavy ion physics | Quantum Physics | Heavy-ion collision | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Quarkonium | Elementary Particles, Quantum Field Theory | Heavy Ion Experiments
Journal Article
19. Study of $\chi_{\mathrm b}$ meson production in pp collisions at $\sqrt{s}=7$ and $8\,\mathrm{TeV}$ and observation of the decay $\chi_{\mathrm b}\mathrm{(3P)} \rightarrow \Upsilon\mathrm{(3S)}\gamma$
|
@Mathphile I have found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, nor a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases.
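For reference, a sketch of such a search in Python (SymPy's isprime is a strong probable-prime test for numbers this large; the range here is illustrative only, not the one used in the chat):

from sympy import isprime

for n in range(1, 60):
    if isprime(n**(n + 1) + (n + 1)**(n + 2)):
        print(n)   # small n for which n^(n+1) + (n+1)^(n+2) is (probably) prime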
@TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good.
It is in fact difficult; I did not understand all the details either. But the ECM method is analogous to the p−1 method, which works well when there is a factor p such that p−1 is smooth (has only small prime factors).
Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $$n!+1=m^2,$$ where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan.
Brown numbers: Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ that satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that: For distinct, positive integers $a,b$, the only solution to the equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5)$. It is anticipated that there will be far fewer solutions for incr...
|
Definition:Ring of Polynomial Forms
Definition
Let $R$ be a commutative ring with unity.
Let $I$ be a set.
Let $\left\{{X_i: i \in I}\right\}$ be an indexed set.
Let $A = R \left[{\left\{{X_i: i \in I}\right\}}\right]$ be the set of all polynomial forms over $R$ in $\left\{{X_i: i \in I}\right\}$.
The ring of polynomial forms is the ordered triple $\left({A, +, \circ}\right)$.
Also known as
Because the ring of polynomial forms can be used to construct the polynomial ring over $R$, it may be referred to as a polynomial ring.
The elements of the set $\left\{{X_i: i \in I}\right\}$ are called indeterminates.
Notation
Suppose we let $a_k \mathbf X^k$ denote the polynomial that has value $a_k$ on $\mathbf X^k$ and $0_R$ otherwise.
Then every polynomial form $f$ can be written as:
$f = a_1 \mathbf X^{k_1} + \cdots + a_r \mathbf X^{k_r}$
uniquely, if we require that $\forall i = 1, \ldots, r: a_i \ne 0$; or non-uniquely, by relaxing that condition.
This is the notation most frequently used when working with polynomials.
It is also sometimes helpful to include all the zero terms in this sum, in which case: $\displaystyle f = \sum_{k \in Z} a_k \mathbf X^k$
where $Z$ is the set of multiindices indexed by $I$.
Also see
|
The matrix isomorphisms of Clifford algebras are often expressed in terms of Pauli matrices. We will follow the common convention of using \({\left\{ i,j,k\right\} }\) to represent matrix indices that are an even permutation of \({\left\{ 1,2,3\right\} }\); \({i}\) also represents the square root of negative one, but the distinction should be clear from context.
The Pauli matrices
\(\displaystyle \sigma_{1}\equiv\begin{pmatrix}0 & 1\\ 1 & 0 \end{pmatrix}\;\sigma_{2}\equiv\begin{pmatrix}0 & -i\\ i & 0 \end{pmatrix}\;\sigma_{3}\equiv\begin{pmatrix}1 & 0\\ 0 & -1 \end{pmatrix} \)
are traceless, hermitian, unitary, determinant \({-1}\) matrices that satisfy the relations \({\sigma_{i}\sigma_{j}=i\sigma_{k}}\) and \({\sigma_{i}\sigma_{j}\sigma_{k}=i}\). They also all anti-commute and square to the identity \({\sigma_{0}\equiv I}\); therefore, if we take matrix multiplication as Clifford multiplication, they act as an orthonormal basis of the vector space that generates the Clifford algebra \({C(3,0)\cong\mathbb{C}(2)}\). In physics \({C(3,0)}\) is associated with space, and is sometimes called the Pauli algebra (AKA algebra of physical space).
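These quoted relations are easy to verify numerically; a short Python sketch with NumPy:

import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

assert np.allclose(s1 @ s2, 1j * s3)          # sigma_i sigma_j = i sigma_k
assert np.allclose(s1 @ s2 @ s3, 1j * s0)     # sigma_1 sigma_2 sigma_3 = i
for s in (s1, s2, s3):
    assert np.allclose(s @ s, s0)             # squares to the identity
    assert np.isclose(np.trace(s), 0)         # traceless
    assert np.isclose(np.linalg.det(s), -1)   # determinant -1
assert np.allclose(s1 @ s3 + s3 @ s1, 0)      # anti-commutation
print('all Pauli relations verified')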
We introduce the shorthand
\(\displaystyle \sigma_{13}\equiv\sigma_{1}\sigma_{3}=\begin{pmatrix}0 & -1\\ 1 & 0 \end{pmatrix} \)
so that \({\sigma_{2}=i\sigma_{13}}\). Since \({\left(\sigma_{13}\right)^{2}=-I}\), we can use it and \({\sigma_{0}}\) as a basis for \({\mathbb{C}\cong C(0,1)}\), allowing us to express complex numbers as real matrices via the isomorphism
\(\displaystyle a+ib\leftrightarrow a\sigma_{0}+b\sigma_{13}=\begin{pmatrix}a & -b\\ b & a \end{pmatrix}. \)
In physics \({C(3,1)}\) (or \({C(1,3)}\)) is associated with spacetime, but it turns out one is usually more interested in the complexified algebra \({C\mathbb{^{C}}(4)\cong\mathbb{C}(4)}\), which is sometimes called the Dirac algebra. Any four matrices in \({\mathbb{C}(4)}\) that act as an orthonormal basis of the vector space generating \({C(3,1)}\) or \({C(1,3)}\) (and via complexification \({C\mathbb{^{C}}(4)}\)) are called Dirac matrices (AKA gamma matrices), and denoted \({\gamma^{i}}\). A fifth related matrix is usually defined as \({\gamma_{5}\equiv i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}}\). Many choices of Dirac matrices are in common use, a particular one being labeled the Dirac basis (AKA Dirac representation, standard basis). This is traditionally realized as a basis for \({C(1,3)}\):
\(\displaystyle \gamma^{0}=\begin{pmatrix}I & 0\\ 0 & -I \end{pmatrix},\;\gamma^{i}=\begin{pmatrix}0 & \sigma_{i}\\ -\sigma_{i} & 0 \end{pmatrix}\;\Rightarrow\gamma_{5}=\begin{pmatrix}0 & I\\ I & 0 \end{pmatrix} \)
Another common class of Dirac matrices requires \({\gamma_{5}}\) to be diagonal; this is called a chiral basis (AKA Weyl basis or chiral / Weyl representation). The meaning of \({\gamma_{5}}\) and “chiral” will be explained in the next section. A chiral basis for \({C(1,3)}\) is
\(\displaystyle \gamma^{0}=\begin{pmatrix}0 & I\\ I & 0 \end{pmatrix},\;\gamma^{i}=\begin{pmatrix}0 & \sigma_{i}\\ -\sigma_{i} & 0 \end{pmatrix}\;\Rightarrow\gamma_{5}=\begin{pmatrix}-I & 0\\ 0 & I \end{pmatrix}, \)
and a chiral basis for \({C(3,1)}\) is
\(\displaystyle \gamma^{0}=\begin{pmatrix}0 & I\\ -I & 0 \end{pmatrix},\;\gamma^{i}=\begin{pmatrix}0 & \sigma_{i}\\ \sigma_{i} & 0 \end{pmatrix}\;\Rightarrow\gamma_{5}=\begin{pmatrix}-I & 0\\ 0 & I \end{pmatrix}. \)
Finally, a Majorana basis generates the Majorana rep \({C(3,1)\cong\mathbb{R}(4)}\). We can find such a basis by applying the previous isomorphism for complex numbers as real matrices to the Pauli matrices themselves, obtaining anti-commuting matrices in \({\mathbb{R}(4)}\) that square to the identity; if we then include an initial anti-commuting matrix that squares to \({-I}\), we get:
\(\displaystyle \gamma^{0}=\begin{pmatrix}\sigma_{13} & 0\\ 0 & -\sigma_{13} \end{pmatrix},\;\gamma^{1}=\begin{pmatrix}\sigma_{1} & 0\\ 0 & \sigma_{1} \end{pmatrix},\;\gamma^{2}=\begin{pmatrix}0 & -\sigma_{13}\\ \sigma_{13} & 0 \end{pmatrix}, \)
\(\displaystyle \gamma^{3}=\begin{pmatrix}\sigma_{3} & 0\\ 0 & \sigma_{3} \end{pmatrix}\;\Rightarrow\gamma_{5}=\begin{pmatrix}0 & -\sigma_{2}\\ -\sigma_{2} & 0 \end{pmatrix}. \)
We know that these matrices act as a basis due to
Pauli’s fundamental theorem, whose extended form states that for even \({r+s=n}\), any two sets of \({n}\) anti-commuting matrices which square to \({\pm1}\) according to the signature are related by a similarity transformation; this means that any such elements can act as a basis for the vector space generating the Clifford algebra, since one of them must. This theorem also holds for \({C\mathbb{^{C}}(n)}\) for even \({n}\).
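As a concrete check of the Majorana construction (a sketch in NumPy, assuming the block layout displayed above), one can verify that the four matrices are real, pairwise anti-commute, square to the right signs, and reproduce the stated \({\gamma_{5}}\):

    import numpy as np

    s1 = np.array([[0., 1.], [1., 0.]])
    s3 = np.array([[1., 0.], [0., -1.]])
    s13 = s1 @ s3                      # real stand-in for -i*sigma_2

    def blk(a, b, c, d):
        """Assemble a 4x4 matrix from four 2x2 blocks."""
        return np.block([[a, b], [c, d]])

    z = np.zeros((2, 2))
    g0 = blk(s13, z, z, -s13)
    g1 = blk(s1, z, z, s1)
    g2 = blk(z, -s13, s13, z)
    g3 = blk(s3, z, z, s3)
    gammas = [g0, g1, g2, g3]

    # all entries real, pairwise anti-commuting, initial matrix squares to -I
    assert all(np.isrealobj(g) for g in gammas)
    for i in range(4):
        for j in range(i + 1, 4):
            assert np.allclose(gammas[i] @ gammas[j] + gammas[j] @ gammas[i], 0)
    assert np.allclose(g0 @ g0, -np.eye(4))
    assert all(np.allclose(g @ g, np.eye(4)) for g in (g1, g2, g3))

    g5 = 1j * g0 @ g1 @ g2 @ g3
    s2 = 1j * s13                      # sigma_2 = i * sigma_13
    assert np.allclose(g5, blk(z, -s2, -s2, z))
    print("Majorana basis verified")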
Note that all the above matrices are unitary, and those representing positive signature basis vectors are Hermitian, while those representing negative signature basis vectors are anti-Hermitian; these properties are sometimes required when (more restrictively) defining Dirac matrices. Dirac or gamma matrices can also be generalized to other dimensions and signatures; in this light the Pauli matrices are gamma matrices for \({C(3,0)}\). If the dimension is greater than 5, \({\gamma_{5}}\) can be confused with \({\gamma^{5}}\); this is made worse by the fact that one can also define the
covariant Dirac matrices \({\gamma_{i}\equiv\eta_{ij}\gamma^{j}}\).
Δ The Dirac matrices and \({\gamma_{5}}\) are defined in various ways by different authors. Most differ from the above only by a factor of \({±1}\) or \({±i}\); however, there is not much standardization in this area. Sometimes the Clifford algebra definition itself is changed by a sign; in this case the matrices represent a basis with the wrong signature, and according to our definition are not Dirac matrices. This is sometimes done for example when working with Majorana spinors, which only exist in \({C(3,1)}\) spacetime, yet where an author works nevertheless in the \({C(1,3)}\) “mostly minuses” signature.
Δ It is important to remember that the Dirac matrices are matrix representations of an orthonormal basis of the underlying vector space used to generate a Clifford algebra. So the Dirac and chiral bases are different representations of the orthonormal basis which generates the same matrix representation of \({C\mathbb{^{C}}(4)\cong\mathbb{C}(4)}\) acting on vectors (spinors) in \({\mathbb{C}^{4}}\).
The standard basis for the quaternions \({\mathbb{H}\cong C(0,2)}\) can be obtained in terms of Pauli matrices via the association \({\left\{ 1,i,j,k\right\} }\) \({\leftrightarrow\left\{ \sigma_{0},-i\sigma_{1},-i\sigma_{2},-i\sigma_{3}\right\} }\). Thus a quaternion can be expressed as a complex matrix via the isomorphism
\(\displaystyle a+ib+jc+kd\leftrightarrow\begin{pmatrix}a-id & -c-ib\\ c-ib & a+id \end{pmatrix} \),
and composing this with the previous isomorphism for complex numbers as real matrices allows the quaternions to be expressed as a subalgebra of \({\mathbb{R}(4)}\).
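A short numerical check of this quaternion representation (again a NumPy sketch; the helper name quat is ours, not standard):

    import numpy as np

    s0 = np.eye(2, dtype=complex)
    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)

    def quat(a, b, c, d):
        """a + ib + jc + kd as a 2x2 complex matrix."""
        return a * s0 - 1j * b * s1 - 1j * c * s2 - 1j * d * s3

    one = quat(1, 0, 0, 0)
    i, j, k = quat(0, 1, 0, 0), quat(0, 0, 1, 0), quat(0, 0, 0, 1)
    # defining quaternion relations: i^2 = j^2 = k^2 = ijk = -1, and ij = k
    for u in (i, j, k):
        assert np.allclose(u @ u, -one)
    assert np.allclose(i @ j @ k, -one)
    assert np.allclose(i @ j, k)
    print(quat(1, 2, 3, -1))   # matches [[a-id, -c-ib], [c-ib, a+id]]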
The Pauli matrices also form a basis for the vector space of traceless hermitian \({2\times2}\) matrices, which means that the matrices \({i\sigma_{i}}\) form a basis for the vector space of traceless anti-hermitian matrices \({so(3)\cong su(2)}\). Thus any element of \({SU(2)}\) can be written \({\textrm{exp}\left(ia^{j}\sigma_{j}\right)}\) for real numbers \({a^{j}}\). A similar construction uses the eight
Gell-Mann matrices, which form a basis for the vector space of traceless hermitian \({3\times3}\) matrices and so multiplied by \({i}\) form a basis for \({su(3)}\).
Δ Since the Pauli matrices have so many potential roles, it is important to understand what use a particular author is making of them.
|
Commun. Math. Anal.
Volume 12, Number 2 (2012), 26 - 33
The Sharpness of Condition for Solving the Jump Problem
Abstract
Let $\gamma$ be a non-rectifiable closed Jordan curve in $\mathbb{C}$, which is merely assumed to be d-summable ($1<d<2$) in the sense of Harrison and Norton [7]. We are interested in the so-called jump problem over $\gamma$, which is that of finding an analytic function in $\mathbb{C}\setminus\gamma$ having a prescribed jump across the curve. The goal of this note is to show that the sufficient solvability condition of the jump problem given by $\displaystyle \nu > \frac{d}{2}$, where the jump function is defined on $\gamma$ and satisfies a Hölder condition with exponent $0<\nu\leq 1$, cannot be weakened on the whole class of d-summable curves.
|
An atom is the smallest unit of an element that can exist. Every atom is made up of protons, neutrons, and electrons. These particles define a nuclide and its chemical properties; they were discovered in the early 20th century and are described by modern atomic theory.
Nuclide
Nuclides are specific types of atoms or nuclei. Every nuclide has a chemical element symbol (E) as well as an atomic number (Z), the number of protons in the nucleus, and a mass number (A), the total number of protons and neutrons in the nucleus. The symbol for the nuclide is written as shown below:
\[^A_{Z}E\]
An example is neon, which has the element symbol Ne, atomic number 10 and mass number 20.
\[^{20}_{10}Ne\]
A nuclide has a measurable amount of energy and lasts for a measurable amount of time. Stable nuclides can exist in the same state indefinitely, but unstable nuclides are radioactive and decay over time. Some unstable nuclides occur in nature, but others are synthesized artificially through nuclear reactions. They emit energy (\(\alpha\), \(\beta\), or \(\gamma\) emissions) until they reach stability.
Atomic Number
Every element has a defining atomic number, with the symbol "Z". If an atom is neutrally charged, it has the same number of protons and electrons. If it is charged, there may be more protons than electrons or vice versa, but the atomic number remains the same. In the element symbol, the charge is written as a superscript on the right side of the element. For instance, O\(^{2-}\) is an oxygen anion. O\(^{2-}\) still has an atomic number of 8, corresponding to its 8 protons, but it has 10 electrons. Every element has a different atomic number, ranging from 1 to over 100. On the periodic table, the elements are arranged in order of atomic number across a period. The atomic number is usually located above the element symbol. For example, hydrogen has one proton and one electron, so it has an atomic number of 1. Copper has an atomic number of 29 for its 29 protons.
Examples as seen on the Periodic Table:
Atomic Number | 5 | 6 | 7
Element's Symbol | B | C | N
Average Atomic Mass of all the Element's Isotopes | 10.811 | 12.011 | 14.007
Atomic Number and Chemical Properties
The atomic number defines an element's chemical properties. The number of electrons in an atom determines bonding and other chemical properties. In a neutral atom, the atomic number, Z, is also the number of electrons. These electrons are found in a cloud surrounding the nucleus, located by probability in electron shells or orbitals. The shell farthest from the nucleus is the valence shell. The electrons in this valence shell are involved in chemical bonding and show the behavior of the atom. The bonding electrons influence the molecular geometry and structure of the atom. They interact with each other and with other atoms in chemical reactions. The atomic number is unique to each atom and defines its characteristics of bonding or behavior or reactivity. Therefore, every atom, with a different atomic number, acts in a different manner.
Mass Number
The mass of an atom is mostly localized to the nucleus. Because an electron has negligible mass relative to that of a proton or a neutron, the mass number is calculated by the sum of the number of protons and neutrons. Each proton and neutron's mass is approximately one atomic mass unit (AMU). The two added together results in the mass number:
\[A=p^+ + n\]
Elements can also have isotopes with the same atomic number, but different numbers of neutrons. There may be a few more or a few less neutrons, and so the mass is increased or decreased. On the periodic table, the mass number is usually located below the element symbol. The mass number listed is the average mass of all of the element's isotopes. Each isotope has a certain percentage abundance found in nature, and these are added and averaged to obtain the average mass number.
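To make the averaging procedure concrete, here is a small Python sketch (the chlorine isotope values are approximate and for illustration only):

    # Weighted-average atomic mass from isotope masses and natural abundances.
    isotopes = [
        (34.969, 0.7576),   # 35Cl: mass (amu), fractional abundance
        (36.966, 0.2424),   # 37Cl
    ]

    average_mass = sum(mass * abundance for mass, abundance in isotopes)
    print(f"average atomic mass of Cl ~ {average_mass:.3f} amu")  # ~35.45

    # Neutron count of a nuclide: N = A - Z (mass number minus atomic number)
    A, Z = 197, 79          # gold-197
    print(f"gold-197 has {A - Z} neutrons")  # 118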
For example, \(^{4}\)He has a mass number of 4. Its atomic number is 2, which is not always included in the notation because He is defined by the atomic number 2.

References
1. Petrucci, Ralph. General Chemistry: Principles and Modern Applications. New Jersey: Pearson Prentice Hall, 2006.
2. Housecroft, Catherine E. and Alan G. Sharpe. Inorganic Chemistry. Third ed.
3. IUPAC. Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). Compiled by A. D. McNaught and A. Wilkinson. Blackwell Scientific Publications, Oxford (1997). XML on-line corrected version: http://goldbook.iupac.org (2006-) created by M. Nic, J. Jirat, B. Kosata; updates compiled by A. Jenkins. ISBN 0-9678550-9-8. doi:10.1351/goldbook.
4. http://science.uwaterloo.ca/~cchieh/...k/nuclide.html

Problems
1. How many protons, neutrons, and electrons do chlorine atoms have?
2. The mass of gold (Au) is 197; how many neutrons does it have?
3. Carbon has several isotopes. \(^{14}\)C has how many protons, electrons, and neutrons?
4. What is the atomic number of Li\(^{+}\)? How many protons and electrons does Li\(^{+}\) have?
5. What does the mass number on the periodic table represent?

Answers
1. Because chlorine has an atomic number of 17, chlorine has 17 protons, 18 neutrons, and 17 electrons.
2. 118 neutrons.
3. 6 protons, 8 neutrons, and 6 electrons.
4. Z = 3; 3 protons, 2 electrons.
5. The mass number represents the average mass of all of the isotopes of that particular element.
|
My try: Let $x = \sqrt{t}$, then $dx = \frac{1}{2\sqrt{t}}dt$. We get the following integral: $\int_{0}^{\infty}\frac{\sin(t)\sin(\sqrt{t})}{2\sqrt{t}}dt$. Now I tried to use Dirichlet's test: the function $g(t) = \frac{1}{2\sqrt{t}}$ has limit $0$ at infinity and is monotonically decreasing. Now if I could show that the function $F(b) = \int_{0}^{b} \sin(t)\sin(\sqrt{t})\,dt$ is bounded, that would mean the integral converges by Dirichlet's test, but I don't know how to prove it. Suggestions?
As usual, trig identities are the answer:
\begin{align} \int_0^\infty \sin(x^2)\sin(x)dx =& \frac{1}{2}\int_0^\infty\left[ \cos(x^2-x)-\cos(x^2+x)\right]dx \\ =& \frac{1}{2}\int_0^\infty\left( \cos\left[\left(x-\frac{1}{2}\right)^2 -\frac{1}{4}\right]-\cos\left[\left(x+\frac{1}{2}\right)^2 -\frac{1}{4}\right]\right)dx \\ = &\frac{1}{2}\left[\int_{-1/2}^\infty\cos\left(x^2 -\frac{1}{4}\right)dx-\int_{1/2}^\infty\cos\left(x^2 -\frac{1}{4}\right)dx\right] \end{align} and these integrals both converge, since the integrals of $\sin(x^2)$ and $\cos(x^2)$ both converge, and $\cos(x^2-1/4) = \cos(1/4)\cos(x^2)+\sin(1/4)\sin(x^2)$. Thus, the original integral converges as well.
The difference of integrals can be simplified to \begin{align} \int_0^\infty \sin(x^2)\sin(x)dx = &\int_0^{1/2}\cos\left(x^2-\frac{1}{4}\right)dx \\= &\cos\left(\frac{1}{4}\right)\int_0^{1/2}\cos(x^2)dx + \sin\left(\frac{1}{4}\right)\int_0^{1/2}\sin(x^2)dx \\ = &\sqrt{\frac{\pi}{2}}\left[\cos\left(\frac{1}{4}\right)C\left(\frac{1}{\sqrt{2\pi}}\right) + \sin\left(\frac{1}{4}\right)S\left(\frac{1}{\sqrt{2\pi}}\right)\right], \end{align} where $S$ and $C$ are the Fresnel sine and cosine integrals, respectively.
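One can sanity-check this closed form numerically; the sketch below compares it with the finite integral from the last display (assuming SciPy's normalized Fresnel convention S(x) = ∫₀ˣ sin(πt²/2) dt, C(x) = ∫₀ˣ cos(πt²/2) dt):

    import numpy as np
    from scipy import integrate, special

    # Closed form from the answer above
    S, C = special.fresnel(1 / np.sqrt(2 * np.pi))
    closed = np.sqrt(np.pi / 2) * (np.cos(0.25) * C + np.sin(0.25) * S)

    # The reduction says this equals the finite integral of cos(x^2 - 1/4)
    finite, _ = integrate.quad(lambda x: np.cos(x**2 - 0.25), 0.0, 0.5)

    print(closed, finite)   # both ~ 0.49; they should agree to machine precision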
I think I found an answer: We will prove it using the Cauchy criterion. That is, we will prove that for all $\varepsilon > 0$ there exists $N > 0$ such that for all $a>b>N$ we have $|\int_{b}^{a} \sin t \frac{\sin\sqrt{t}}{\sqrt{t}}\,dt| < \varepsilon.$
Take $\varepsilon > 0$. Then since $\lim_{t \to \infty} \frac{\sin\sqrt{t}}{\sqrt{t}} = 0$ we can take $N>0$ such that for all $t > N$, $|\frac{\sin\sqrt{t}}{\sqrt{t}}|<\varepsilon/2$. Then, if $a > b > N$ we have: $\int_{b}^{a} \sin t \frac{\sin\sqrt{t}}{\sqrt{t}}\,dt< \int_{b}^{a} \sin t \cdot \varepsilon/2\,dt \leq 2 \cdot \varepsilon/2=\varepsilon$, and similarly we can get $\int_{b}^{a} \sin t \frac{\sin\sqrt{t}}{\sqrt{t}}\,dt > -\varepsilon$, which is what we wanted.
The Dirichlet test is proved by integration by parts. One can apply it directly to the original integral. The convergence on $[0,+\infty)$ is equivalent to the convergence on $[1,+\infty)$. Now integrate by parts $$ \int_1^{+\infty}\sin x\sin x^2\,dx=\int_1^{+\infty}\frac{\sin x}{2x}\,d(-\cos x^2)=\\=\left[ -\frac{\sin x}{2x}\cos x^2\right]_1^{+\infty}+\int_1^{+\infty}\left(\frac{\cos x\cos x^2}{2x}-\frac{\sin x\cos x^2}{2x^2}\right)\,dx. $$ The only non-obvious part is the integral $$ \int_1^{+\infty}\frac{\cos x\cos x^2}{2x}\,dx=\int_1^{+\infty}\frac{\cos x}{4x^2}\,d\sin(x^2), $$ however, a similar integration by parts will do the job.
|
I want to typeset a piece of code containing line breaks. New lines should indent to a specified point in the previous line. Here is a monospaced example to illustrate what I mean. Notice how 'case' and 'of' line up below, as well as 'let' and 'in':
swap : forall a, b. Tuple a b -> Tuple b a
swap a b x = case x
             of tuple a b y z.
                let x' = tuple b a z y
                in x'
The code I want to typeset is not monospaced, so I can't just use a verbatim environment to get what I want. The actual LaTeX of the first line, for instance, is
\mathsf{swap} : \forall\alpha,\beta.\;\mathsf{Tuple}\;\alpha\;\beta\to\mathsf{Tuple}\;\beta\;\alpha
How can I align some text to a specific point in the previous line?
|
June 15th, 2014, 09:14 PM
# 1
Member
Joined: Jun 2014
From: pennsylvania
Posts: 45
Thanks: 0
Help me understand this problem. Hello, I just wanted some help on how to answer this type of question, I don't understand what I have to do.
The 37 1/2 & 2 2/3 is what I guessed.
I also wanted to know how to solve this one because I was only able to get so far before not being sure what to do, and the answer that I was supposed to choose from, I was unable to extrapolate from the equation.
The last step I got to, before not knowing what to do further, was 15ax - 1 divided by 5.
Last edited by skipjack; June 16th, 2014 at 11:14 PM.
June 15th, 2014, 09:46 PM
# 2
Math Team
Joined: Dec 2013
From: Colombia
Posts: 7,685
Thanks: 2665
Math Focus: Mainly analysis and algebra
For the first one, you have the scale correct, but not the height.
The bridge is 44 feet long, and on the diagram it is 16.5 inches. So we have 16.5 inches represents 44 feet. So how many inches represent 1 foot? Well, if we divide by 44 we get
$\frac{16.5}{44} = 2 \frac{2}{3}$ inches represent $\frac{44}{44} = 1$ foot.
We can do a similar thing in reverse for the height. We have 1 foot is represented by $2 \frac{2}{3} = \frac{8}{3}$ inches. We want 10 feet, so let's multiply by 10. Then 10 feet are represented by $10 \cdot \frac{8}{3}$ inches.
For the second one you have $$3ax = \frac{1}{5}$$
So $x$ is multiplied by something: what? To isolate the $x$, we therefore divide both sides of the equation by that something.
Last edited by skipjack; June 16th, 2014 at 11:11 PM.
June 15th, 2014, 11:03 PM
# 3
Member
Joined: Jun 2014
From: pennsylvania
Posts: 45
Thanks: 0
Thanks again v8 for giving me the time.
First off I'm glad I got the scale right.
But redoing it just now, I'm not sure on something.
When I divide 16.5 by 44, I get 0.304 and if I turn that into a fraction I don't think I can turn it into the 2 2/3.
And the last part I understand now, it was 26 2/3.
I just need to exactly understand how we get that 2 2/3 scale?
I'm sorry if I'm not understanding your answer to the second problem I had, but if you can elaborate a bit more I would appreciate it even more
Last edited by skipjack; June 16th, 2014 at 11:16 PM.
June 16th, 2014, 08:47 PM
# 4
Member
Joined: Jun 2014
From: pennsylvania
Posts: 45
Thanks: 0
Sorry about my past post, everything checks out and you helped me solve the scale type of problem I might encounter again. But I still want to understand the other question so if you could please explain it to me.
Thank you.
Last edited by skipjack; June 16th, 2014 at 11:16 PM.
June 16th, 2014, 11:52 PM
# 5
Global Moderator
Joined: Dec 2006
Posts: 20,978
Thanks: 2229
There are mistakes in the above.
As 16.5 in. represents 44ft, 3/8 in. represents 1 ft, and so 3 3/4 in. represents 10 ft.
Thus 3 3/4 should have been dragged to the height box and 3/8 should have been dragged to the Scale box.
To solve 3ax = 1/5, divide both sides by 3a. That gives x = 1/(15a), which is choice A.
June 17th, 2014, 04:58 AM
# 6
Math Team
Joined: Dec 2013
From: Colombia
Posts: 7,685
Thanks: 2665
Math Focus: Mainly analysis and algebra
There are plenty of mistakes in my first post. I apologise. I'll look at it again later.
June 17th, 2014, 08:43 AM
# 7
Member
Joined: Jun 2014
From: pennsylvania
Posts: 45
Thanks: 0
Thanks skipjack for the correction. I want to make sure I get this 100%. Ok, I really wanna figure this out.
I guess I was trying to solve it differently?
I got stuck on 15ax-1 divide by 5
If I do it the way you mention, can you please show me how I divide 3a into 1/5.
If I divide 3a into 3ax, am I not left with 1x? I'm not sure how to divide the 3a into the 1/5.
Can you guys please explain the first problem to me? I'm not getting 3/8 in my calculations.
When I was explained, it was 2 2/3; I was able to get that calculation.
So please tell me what needs to be corrected, and what to do so I can understand.
Thanks
Last edited by skipjack; June 18th, 2014 at 12:01 PM.
June 18th, 2014, 02:40 AM
# 9
Math Team
Joined: Oct 2011
From: Ottawa Ontario, Canada
Posts: 14,597
Thanks: 1038
RULE: (a/b) / (c/d) = (a/b) * (d/c) = (ad) / (bc) : you knew that, right?
(1/5) / (3a/1) = (1/5) * (1/(3a)) = 1 / (15a) .... slap your forehead !!
June 18th, 2014, 12:21 PM
# 10
Global Moderator
Joined: Dec 2006
Posts: 20,978
Thanks: 2229
In the first problem, 16.5in. represents 44ft, so (16.5/44)in. represents 1ft.
To evaluate 16.5/44, I first doubled the numerator and denominator to get 33/88, then I divided the numerator and denominator by 11 to get 3/8.
Thus (3/8)in. represents 1ft, and so (30/8)in. represents 10ft.
As 30/8 = 3 3/4, that means (3 3/4)in. represents 10ft.
In the second problem, dividing 3ax by 3a gives x. Dividing 1/5 by 3a is done by multiplying 1/5 by 1/(3a).
$$\frac15 / (3a) = \frac15 \times \frac{1}{3a} = \frac{1 \times 1}{5 \times 3a} = \frac{1}{15a}$$
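As a machine check of the arithmetic above (an optional sketch, not part of the original thread; it assumes Python with sympy installed):

    from fractions import Fraction

    # Scale problem: 16.5 inches represents 44 feet.
    scale = Fraction(33, 2) / 44          # inches per foot
    print(scale)                          # 3/8
    print(scale * 10)                     # 15/4 = 3 3/4 inches for 10 feet

    # Equation problem: solve 3*a*x = 1/5 for x by dividing both sides by 3a.
    import sympy as sp
    a, x = sp.symbols('a x', nonzero=True)
    sol = sp.solve(sp.Eq(3 * a * x, sp.Rational(1, 5)), x)
    print(sol)                            # [1/(15*a)]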
|
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:663-695, 2019.
Abstract
We present an approach that improves the sample complexity for a variety of curve fitting problems, including active learning for linear regression, polynomial regression, and continuous sparse Fourier transforms. In the active linear regression problem, one would like to estimate the least squares solution $\beta^*$ minimizing $\|X\beta - y\|_2$ given the entire unlabeled dataset $X \in \mathbb{R}^{n \times d}$ but only observing a small number of labels $y_i$. We show that $O(d)$ labels suffice to find a constant factor approximation $\widetilde{\beta}$: \[ \mathbb{E}[\|{X} \widetilde{\beta} - y \|_2^2] \leq 2 \mathbb{E}[\|X \beta^* - y\|_2^2]. \] This improves on the best previous result of $O(d \log d)$ from leverage score sampling. We also present results for the \emph{inductive} setting, showing when $\widetilde{\beta}$ will generalize to fresh samples; these apply to continuous settings such as polynomial regression. Finally, we show how the techniques yield improved results for the non-linear sparse Fourier transform setting.
Chen, X. & Price, E.. (2019). Active Regression via Linear-Sample Sparsification. Proceedings of the Thirty-Second Conference on Learning Theory, in PMLR 99:663-695
|
When we know an input value and want to determine the corresponding output value for a function, we
evaluate the function. Evaluating will always produce one result because each input value of a function corresponds to exactly one output value.
When we know an output value and want to determine the input values that would produce that output value, we set the output equal to the function’s formula and
solve for the input. Solving can produce more than one solution because different input values can produce the same output value.
Evaluation of Functions in Algebraic Forms
When we have a function in formula form, it is usually a simple matter to evaluate the function. For example, the function [latex]f\left(x\right)=5 - 3{x}^{2}[/latex] can be evaluated by squaring the input value, multiplying by 3, and then subtracting the product from 5.
How To: Given the formula for a function, evaluate.
1. Replace the input variable in the formula with the value provided.
2. Calculate the result.
Example 6: Evaluating Functions
Given the function [latex]h\left(p\right)={p}^{2}+2p[/latex], evaluate [latex]h\left(4\right)[/latex].
Solution
To evaluate [latex]h\left(4\right)[/latex], we substitute the value 4 for the input variable [latex]p[/latex] in the given function: [latex]h\left(4\right)={4}^{2}+2\left(4\right)=16+8=24[/latex].
Therefore, for an input of 4, we have an output of 24.
Example 7: Evaluating Functions at Specific Values
Evaluate [latex]f\left(x\right)={x}^{2}+3x - 4[/latex] at
1. [latex]2[/latex]
2. [latex]a[/latex]
3. [latex]a+h[/latex]
4. [latex]\frac{f\left(a+h\right)-f\left(a\right)}{h}[/latex]
Solution
Replace the [latex]x[/latex] in the function with each specified value.
1. Because the input value is a number, 2, we can use algebra to simplify: [latex]f\left(2\right)={2}^{2}+3\left(2\right)-4=4+6-4=6[/latex].
2. In this case, the input value is a letter, so we cannot simplify the answer any further: [latex]f\left(a\right)={a}^{2}+3a-4[/latex].
3. With an input value of [latex]a+h[/latex], we must use the distributive property: [latex]f\left(a+h\right)={\left(a+h\right)}^{2}+3\left(a+h\right)-4={a}^{2}+2ah+{h}^{2}+3a+3h-4[/latex].
4. In this case, we apply the input values to the function more than once, and then perform algebraic operations on the result. We already found that [latex]f\left(a+h\right)={a}^{2}+2ah+{h}^{2}+3a+3h-4[/latex]
and we know that [latex]f\left(a\right)={a}^{2}+3a-4[/latex].
Now we combine the results and simplify:
[latex]\frac{f\left(a+h\right)-f\left(a\right)}{h}=\frac{\left({a}^{2}+2ah+{h}^{2}+3a+3h-4\right)-\left({a}^{2}+3a-4\right)}{h}=\frac{2ah+{h}^{2}+3h}{h}[/latex]
[latex]=\frac{h\left(2a+h+3\right)}{h}\qquad\text{Factor out }h.[/latex]
[latex]=2a+h+3\qquad\text{Simplify}.[/latex]
Try It 2
Given the function [latex]g\left(m\right)=\sqrt{m - 4}[/latex], evaluate [latex]g\left(5\right)[/latex].
Example 8: Solving Functions
Given the function [latex]h\left(p\right)={p}^{2}+2p[/latex], solve for [latex]h\left(p\right)=3[/latex].
Solution
[latex]h\left(p\right)=3[/latex]
[latex]{p}^{2}+2p=3[/latex]  Substitute the original function [latex]h\left(p\right)={p}^{2}+2p[/latex].
[latex]{p}^{2}+2p-3=0[/latex]  Subtract 3 from each side.
[latex]\left(p+3\right)\left(p-1\right)=0[/latex]  Factor.
If [latex]\left(p+3\right)\left(p - 1\right)=0[/latex], either [latex]\left(p+3\right)=0[/latex] or [latex]\left(p - 1\right)=0[/latex] (or both of them equal 0). We will set each factor equal to 0 and solve for [latex]p[/latex] in each case.
[latex]\begin{cases}\left(p+3\right)=0,\hfill & p=-3\hfill \\ \left(p - 1\right)=0,\hfill & p=1\hfill \end{cases}[/latex]
This gives us two solutions. The output [latex]h\left(p\right)=3[/latex] when the input is either [latex]p=1[/latex] or [latex]p=-3[/latex].
We can also verify by graphing as in Figure 5. The graph verifies that [latex]h\left(1\right)=h\left(-3\right)=3[/latex] and [latex]h\left(4\right)=24[/latex].
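For readers who want to check such computations mechanically, here is a small Python sketch using the sympy library (an illustrative aside, not part of the original lesson):

    import sympy as sp

    p = sp.symbols('p')
    h = p**2 + 2*p

    print(sp.solve(sp.Eq(h, 3), p))   # [-3, 1] -- the two inputs with output 3
    print(h.subs(p, 4))               # 24      -- evaluating h(4)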
Try It 3
Given the function [latex]g\left(m\right)=\sqrt{m - 4}[/latex], solve [latex]g\left(m\right)=2[/latex].
Evaluating Functions Expressed in Formulas
Some functions are defined by mathematical rules or procedures expressed in
equation form. If it is possible to express the function output with a formula involving the input quantity, then we can define a function in algebraic form. For example, the equation [latex]2n+6p=12[/latex] expresses a functional relationship between [latex]n[/latex] and [latex]p[/latex]. We can rewrite it to decide if [latex]p[/latex] is a function of [latex]n[/latex].
How To: Given a function in equation form, write its algebraic formula.
1. Solve the equation to isolate the output variable on one side of the equal sign, with the other side as an expression that involves only the input variable.
2. Use all the usual algebraic methods for solving equations, such as adding or subtracting the same quantity to or from both sides, or multiplying or dividing both sides of the equation by the same quantity.
Example 9: Finding an Equation of a Function
Express the relationship [latex]2n+6p=12[/latex] as a function [latex]p=f\left(n\right)[/latex], if possible.
Solution
To express the relationship in this form, we need to be able to write the relationship where [latex]p[/latex] is a function of [latex]n[/latex], which means writing it as p = expression involving n.
[latex]2n+6p=12[/latex]
[latex]6p=12-2n[/latex]  Subtract [latex]2n[/latex] from both sides.
[latex]p=\frac{12-2n}{6}[/latex]  Divide both sides by 6 and simplify.
[latex]p=\frac{12}{6}-\frac{2n}{6}[/latex]
[latex]p=2-\frac{1}{3}n[/latex]
Therefore, [latex]p[/latex] as a function of [latex]n[/latex] is written as [latex]p=f\left(n\right)=2-\frac{1}{3}n[/latex].
Analysis of the Solution
It is important to note that not every relationship expressed by an equation can also be expressed as a function with a formula.
Example 10: Expressing the Equation of a Circle as a Function
Does the equation [latex]{x}^{2}+{y}^{2}=1[/latex] represent a function with [latex]x[/latex] as input and [latex]y[/latex] as output? If so, express the relationship as a function [latex]y=f\left(x\right)[/latex].
Solution
First we subtract [latex]{x}^{2}[/latex] from both sides: [latex]{y}^{2}=1-{x}^{2}[/latex].
We now try to solve for [latex]y[/latex] in this equation: [latex]y=\pm\sqrt{1-{x}^{2}}[/latex].
We get two outputs corresponding to the same input, so this relationship cannot be represented as a single function [latex]y=f\left(x\right)[/latex].
Try It 4
If [latex]x - 8{y}^{3}=0[/latex], express [latex]y[/latex] as a function of [latex]x[/latex].
Q & A
Are there relationships expressed by an equation that do represent a function but which still cannot be represented by an algebraic formula?
Yes, this can happen. For example, given the equation [latex]x=y+{2}^{y}[/latex], if we want to express [latex]y[/latex] as a function of [latex]x[/latex], there is no simple algebraic formula involving only [latex]x[/latex] that equals [latex]y[/latex]. However, each [latex]x[/latex] does determine a unique value for [latex]y[/latex], and there are mathematical procedures by which [latex]y[/latex] can be found to any desired accuracy. In this case, we say that the equation gives an implicit (implied) rule for [latex]y[/latex] as a function of [latex]x[/latex], even though the formula cannot be written explicitly.
Evaluating a Function Given in Tabular Form
As we saw above, we can represent functions in tables. Conversely, we can use information in tables to write functions, and we can evaluate functions using the tables. For example, how well do our pets recall the fond memories we share with them? There is an urban legend that a goldfish has a memory of 3 seconds, but this is just a myth. Goldfish can remember up to 3 months, while the betta fish has a memory of up to 5 months. And while a puppy’s memory span is no longer than 30 seconds, the adult dog can remember for 5 minutes. This is meager compared to a cat, whose memory span lasts for 16 hours.
The function that relates the type of pet to the duration of its memory span is more easily visualized with the use of a table. See the table below.
Pet | Memory span in hours
Puppy | 0.008
Adult dog | 0.083
Cat | 16
Goldfish | 2160
Betta fish | 3600
At times, evaluating a function in table form may be more useful than using equations. Here let us call the function [latex]P[/latex].
The
domain of the function is the type of pet and the range is a real number representing the number of hours the pet’s memory span lasts. We can evaluate the function [latex]P[/latex] at the input value of “goldfish.” We would write [latex]P\left(\text{goldfish}\right)=2160[/latex]. Notice that, to evaluate the function in table form, we identify the input value and the corresponding output value from the pertinent row of the table. The tabular form for function [latex]P[/latex] seems ideally suited to this function, more so than writing it in paragraph or function form.
How To: Given a function represented by a table, identify specific output and input values.
1. Find the given input in the row (or column) of input values.
2. Identify the corresponding output value paired with that input value.
3. Find the given output values in the row (or column) of output values, noting every time that output value appears.
4. Identify the input value(s) corresponding to the given output value.
Example 11: Evaluating and Solving a Tabular Function
Using the table below,
Evaluate [latex]g\left(3\right)[/latex]. Solve [latex]g\left(n\right)=6[/latex].
n | 1 | 2 | 3 | 4 | 5
g(n) | 8 | 6 | 7 | 6 | 8
Solution
1. Evaluating [latex]g\left(3\right)[/latex] means determining the output value of the function [latex]g[/latex] for the input value of [latex]n=3[/latex]. The table output value corresponding to [latex]n=3[/latex] is 7, so [latex]g\left(3\right)=7[/latex].
2. Solving [latex]g\left(n\right)=6[/latex] means identifying the input values, [latex]n[/latex], that produce an output value of 6. The table below shows two solutions: [latex]n=2[/latex] and [latex]n=4[/latex].
n | 1 | 2 | 3 | 4 | 5
g(n) | 8 | 6 | 7 | 6 | 8
When we input 2 into the function [latex]g[/latex], our output is 6. When we input 4 into the function [latex]g[/latex], our output is also 6.
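The same lookup logic is easy to mirror in code (an illustrative Python sketch; the dict below just reproduces the table above):

    # A table-defined function as a Python dict
    g = {1: 8, 2: 6, 3: 7, 4: 6, 5: 8}

    print(g[3])                                     # evaluate g(3) -> 7
    print([n for n, out in g.items() if out == 6])  # solve g(n) = 6 -> [2, 4]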
Try It 5
Using the table in Example 11, evaluate [latex]g\left(1\right)[/latex] .
Finding Function Values from a Graph
Evaluating a function using a graph also requires finding the corresponding output value for a given input value, only in this case, we find the output value by looking at the graph. Solving a function equation using a graph requires finding all instances of the given output value on the graph and observing the corresponding input value(s).
Example 12: Reading Function Values from a Graph
Given the graph in Figure 6,
1. Evaluate [latex]f\left(2\right)[/latex].
2. Solve [latex]f\left(x\right)=4[/latex].
Solution
1. To evaluate [latex]f\left(2\right)[/latex], locate the point on the curve where [latex]x=2[/latex], then read the y-coordinate of that point. The point has coordinates [latex]\left(2,1\right)[/latex], so [latex]f\left(2\right)=1[/latex]. See Figure 7.
2. To solve [latex]f\left(x\right)=4[/latex], we find the output value [latex]4[/latex] on the vertical axis. Moving horizontally along the line [latex]y=4[/latex], we locate two points of the curve with output value [latex]4[/latex]: [latex]\left(-1,4\right)[/latex] and [latex]\left(3,4\right)[/latex]. These points represent the two solutions to [latex]f\left(x\right)=4[/latex]: [latex]x=-1[/latex] or [latex]x=3[/latex]. This means [latex]f\left(-1\right)=4[/latex] and [latex]f\left(3\right)=4[/latex]; when the input is [latex]-1[/latex] or 3, the output is 4. See Figure 8.
Try It 6
Using Figure 7, solve [latex]f\left(x\right)=1[/latex].
|
When you write
[...] to initialize a time series of random white noise (the errors), and than perform a first fit, to obtain a first model, than calculate the errors compared to the actual data, and than fit it again with the newly obtained errors, compare it to the actual data to obtain new errors and so on.
you actually outline the need for an estimation method that simultaneously handles the estimation of the vector of residuals as well as that of the parameters.
Say one has,
${y}_t = \boldsymbol{x}_t\boldsymbol{\beta} + {\varepsilon}_t$
Where $\boldsymbol{x}_t$ may contain anything you want, why not ${y}_{t-i}$ for $i=1,...,p$.
Putting aside the discussion about the conditions related to $p$.
In the MA(q) case, one assumes that ${\varepsilon}_t = {r}_t + \sum_{i=1}^q \lambda_i {r}_{t-i}$.
Which leads to
${y}_t = \boldsymbol{x}_t\boldsymbol{\beta} + {r}_t + \sum_{i=1}^q \lambda_i {r}_{t-i}$
or, reformulated in matrix terms using the backshift operator,
$\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \left(\boldsymbol{I} + \sum_{i=1}^q \lambda_i \boldsymbol{B}_i\right)\boldsymbol{r}$
Given that playing with MLE actually means playing with distribution-conditioned errors, you have to rearrange the last equation above as
$\left(\boldsymbol{I} + \sum_{i=1}^q \lambda_i \boldsymbol{B}_i\right)^{-1}\left(\boldsymbol{y} - \boldsymbol{X}\boldsymbol{\beta}\right) = \boldsymbol{r}$
In practice, this means playing with distribution-conditioned residuals
$\left(\boldsymbol{I} + \sum_{i=1}^q \widehat{\lambda}_i \boldsymbol{B}_i\right)^{-1}\left(\boldsymbol{y} - \boldsymbol{X}\widehat{\boldsymbol{\beta}}\right) = \widehat{\boldsymbol{r}}$.
So arranged, one can maximize the (knowledge-driven) likelihood that is assumed for our residuals in conjunction with the estimation of our parameters, that is, simultaneously. Hence the frequent use of MLE.
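To make the residual recursion concrete, here is a minimal NumPy sketch (illustrative, with made-up parameter values) that recovers $\widehat{\boldsymbol{r}}$ from $\boldsymbol{y}$ given candidate $\widehat{\boldsymbol{\beta}}$ and $\widehat{\lambda}$ for an MA(1)-type error:

    import numpy as np

    def residuals(y, X, beta, lam):
        """Recursively invert (I + sum_i lam_i B_i) r = y - X beta.

        y: (T,) observations; X: (T, k) regressors
        beta: (k,) coefficients; lam: (q,) MA coefficients
        Residuals before t=0 are set to zero (a common initialization).
        """
        e = y - X @ beta                 # epsilon_t = y_t - x_t beta
        q, T = len(lam), len(y)
        r = np.zeros(T)
        for t in range(T):
            past = sum(lam[i] * r[t - 1 - i] for i in range(q) if t - 1 - i >= 0)
            r[t] = e[t] - past           # r_t = eps_t - sum_i lam_i r_{t-i}
        return r

    # toy check: simulate an MA(1) error and recover the white noise
    rng = np.random.default_rng(0)
    T = 500
    X = np.column_stack([np.ones(T), rng.normal(size=T)])
    beta_true, lam_true = np.array([1.0, 2.0]), np.array([0.5])
    r_true = rng.normal(size=T)
    eps = r_true + np.concatenate([[0.0], lam_true[0] * r_true[:-1]])
    y = X @ beta_true + eps
    print(np.allclose(residuals(y, X, beta_true, lam_true), r_true))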
But iterative approaches like the one you described are also used in practice, for example in GMM when dealing with endogeneity, stopping when a convergence criterion is met.
|
Affiliation:
Oberlin College
Date: Tue, 2018-09-04 09:50 - 10:10
Let $s(\cdot)$ denote the sum-of-proper-divisors function, that is, $s(n) =\sum_{d\mid n,~d<n}d$.
Erdős--Granville--Pomerance--Spiro conjectured that, for any set $\mathcal{A}$ of asymptotic density zero, the preimage set $s^{-1}(\mathcal{A})$ also has density zero. We prove a weak form of this conjecture. In particular, we show that the EGPS conjecture holds for infinite sets with counting function $O(x^{\frac12 + \epsilon(x)})$. We also disprove a hypothesis from the same paper of EGPS by showing that for any positive numbers $\alpha$ and $\epsilon$,
|
NIRISS Filters
JWST NIRISS has 12 medium- and broadband filters that cover the wavelength range between 0.8 and 5.0 μm in support of applications involving aperture masking interferometry, wide field slitless spectroscopy, and imaging.
Main article: NIRISS Observing Modes
NIRISS has a total of 12 filters, which are located in the NIRISS pupil and filter wheels as shown in Figure 1. The pupil wheel contains six short wavelength filters with central wavelengths between 0.9 and 2.0 μm, while the filter wheel has six long wavelength filters with central wavelengths between 2.8 and 4.8 μm. When used in combination with other optical elements, these filters support the NIRISS aperture masking interferometry (AMI), wide field slitless spectroscopy (WFSS), and imaging modes.
Historical context
Except for F158M, the NIRISS filters originated as NIRCam "flight spares." Long wavelength filters that populate the NIRISS filter wheel were transferred directly. However, filters with central wavelengths shorter than 2.5 μm required the addition of a "low pass" filter to eliminate "red leaks" inherent in the design of their optical coatings.
These red leaks (from radiation at wavelengths >2.5 μm) did not affect NIRCam: the short wavelength filters were designed to be used with HgCdTe detectors with a 2.5 μm cut-off, which preclude the detection of photons at longer wavelengths. Since these photons were still detectable by the NIRISS detector (5.2 μm cutoff), the "double-stack" filter design shown in Figure 2 was implemented for all the filters in the pupil wheel except F158M. Properties of the low pass filters are included in all estimates of the throughput of the short wavelength "double-stack" NIRISS filters.
As a result of their common heritage, the properties of the NIRISS filters are very similar to their counterparts in NIRCam, even when allowing for the addition of the "low pass" element for the short wavelength filters.
Properties of the NIRISS filters
Figure 3 illustrates the transmission curves for the entire suite of NIRISS filters. Key properties of the filters are quantified in Table 1.
Table 1. Properties of NIRISS filters
Filter | λ_pivot (μm) (1) | BW (μm) (2) | Effective response (3) | λ− (μm) (4) | λ+ (μm) (4) | Observing mode
Broadband filters
F090W | 0.900 | 0.194 | 0.880 | 0.796 | 1.004 | WFSS, Imaging
F115W | 1.150 | 0.241 | 0.839 | 1.013 | 1.282 | WFSS, Imaging
F150W | 1.498 | 0.333 | 0.936 | 1.331 | 1.670 | WFSS, Imaging
F200W | 1.984 | 0.461 | 0.945 | 1.751 | 2.225 | WFSS, Imaging
F277W | 2.776 | 0.715 | 0.875 | 2.413 | 3.142 | AMI, Imaging
F356W | 3.595 | 0.921 | 0.902 | 3.141 | 4.068 | Imaging
F444W | 4.435 | 1.121 | 0.858 | 3.880 | 5.023 | Imaging
Medium-band filters
F140M | 1.405 | 0.147 | 0.897 | 1.332 | 1.480 | WFSS, Imaging
F158M | 1.587 | 0.196 | 0.876 | 1.488 | 1.688 | WFSS, Imaging
F380M | 3.828 | 0.205 | 0.867 | 3.726 | 3.931 | AMI, Imaging
F430M | 4.286 | 0.202 | 0.794 | 4.182 | 4.395 | AMI, Imaging
F480M | 4.817 | 0.298 | 0.690 | 4.669 | 4.971 | AMI, Imaging
(1) The pivot wavelength, \lambda_{pivot}, relates the flux per unit wavelength (F_\lambda) to the flux per unit frequency (F_\nu). It is defined as: \lambda^2_{pivot} = {\int{ \lambda T(\lambda)\, \rm{d}\lambda}}\,\, / \,\,{\int{ \left(T(\lambda)\,/\,\lambda \right)\, \rm{d}\lambda}}, where T(\lambda) is the filter transmission curve.
(2) The bandwidth BW is the integral of the filter transmission curve normalized by the maximum transmission (T_{max}): BW = \int T(\lambda) \,{\rm d}\lambda\,\, /\,\, {T_{max}}.
(3) The effective response is the mean transmission value over the wavelength range of \lambda_{pivot} \pm BW / 2.
(4) The short (\lambda_{-}) and long (\lambda_{+}) half power wavelengths of a filter are the wavelengths at which its transmission falls to 50% of its peak value.
NIRISS system throughput
Figure 4 shows the net photon-to-electron conversion efficiency (PCE) for each of the NIRISS filters. These values were estimated by combining current understanding of the throughput of the JWST Optical Telescope Element (OTE), the internal optical properties of NIRISS, and the quantum efficiency of the NIRISS detector. At any epoch, these effects are taken into account by the JWST Exposure Time Calculator (ETC). Although Figure 4 is useful for "back-of-the-envelope" calculations, critical applications should use the ETC to derive definitive estimates of performance.
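As an illustration of the footnote definitions in Table 1, the following Python sketch computes a pivot wavelength and bandwidth for a toy Gaussian transmission curve (illustrative values only, not real NIRISS data):

    import numpy as np

    # Toy transmission curve (a Gaussian stand-in, not measured data)
    wl = np.linspace(0.7, 1.1, 2000)                 # wavelength in microns
    T = 0.88 * np.exp(-0.5 * ((wl - 0.90) / 0.04)**2)

    # Pivot wavelength: lambda_pivot^2 = int(lambda T) / int(T / lambda)
    pivot = np.sqrt(np.trapz(wl * T, wl) / np.trapz(T / wl, wl))

    # Bandwidth: integral of T normalized by its maximum
    bw = np.trapz(T, wl) / T.max()

    print(f"pivot = {pivot:.3f} um, BW = {bw:.3f} um")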
Ancillary data
NIRISS transmission and net throughput curves may be downloaded. Please see the accompanying "ReadMe" file for further information about this distribution.
|
It’s been a while, so let’s include a recap : a (transitive) permutation representation of the modular group $\Gamma = PSL_2(\mathbb{Z}) $ is determined by the conjugacy class of a cofinite subgroup $\Lambda \subset \Gamma $, or equivalently, to a dessin d’enfant. We have introduced a
quiver (aka an oriented graph) which comes from a triangulation of the compactification of $\mathbb{H} / \Lambda $ where $\mathbb{H} $ is the hyperbolic upper half-plane. This quiver is independent of the chosen embedding of the dessin in the Dedekind tessellation. (For more on these terms and constructions, please consult the series Modular subgroups and Dessins d’enfants).
Why are quivers useful? To start, any quiver $Q $ defines a noncommutative algebra, the
path algebra $\mathbb{C} Q $, which has as a $\mathbb{C} $-basis all oriented paths in the quiver and multiplication is induced by concatenation of paths (when possible, or zero otherwise). Usually, it is quite hard to make actual computations in noncommutative algebras, but in the case of path algebras you can just see what happens.
Moreover, we can also
see the finite dimensional representations of this algebra $\mathbb{C} Q $. Up to isomorphism they are all of the following form: at each vertex $v_i $ of the quiver one places a finite dimensional vector space $\mathbb{C}^{d_i} $ and any arrow in the quiver [tex]\xymatrix{\vtx{v_i} \ar[r]^a & \vtx{v_j}}[/tex] determines a linear map between these vertex spaces; that is, to $a $ corresponds a matrix in $M_{d_j \times d_i}(\mathbb{C}) $. These matrices determine how the paths of length one act on the representation; longer paths act via multiplication of matrices along the oriented path.
A
necklace in the quiver is a closed oriented path in the quiver up to cyclic permutation of the arrows making up the cycle. That is, we are free to choose the start (and end) point of the cycle. For example, in the one-cycle quiver
[tex]\xymatrix{\vtx{} \ar[rr]^a & & \vtx{} \ar[ld]^b \\ & \vtx{} \ar[lu]^c &}[/tex]
the basic necklace can be represented as $abc $ or $bca $ or $cab $. How does a necklace act on a representation? Well, the matrix-multiplication of the matrices corresponding to the arrows gives a square matrix in each of the vertices in the cycle. Though the dimensions of this matrix may vary from vertex to vertex, what does not change (and hence is a property of the necklace rather than of the particular choice of cycle) is the
trace of this matrix. That is, necklaces give complex-valued functions on representations of $\mathbb{C} Q $ and by a result of Artin and Procesi there are enough of them to distinguish isoclasses of (semi)simple representations! That is, linear combinations of necklaces (aka super-potentials) can be viewed, after taking traces, as complex-valued functions on all representations (similar to character-functions).
In physics, one views these functions as potentials and one is then interested in the points (representations) where this function is extremal (minimal): the
vacua. Clearly, this does not make much sense in the complex-case but is relevant when we look at the real-case (where we look at skew-Hermitian matrices rather than all matrices). A motivating example (the Yang-Mills potential) is given in Example 2.3.2 of Victor Ginzburg’s paper Calabi-Yau algebras.
Let $\Phi $ be a super-potential (again, a linear combination of necklaces) then our commutative intuition tells us that extrema correspond to zeroes of all partial differentials $\frac{\partial \Phi}{\partial a} $ where $a $ runs over all coordinates (in our case, the arrows of the quiver). One can make sense of differentials of necklaces (and super-potentials) as follows : the partial differential with respect to an arrow $a $ occurring in a term of $\Phi $ is defined to be the
path in the quiver one obtains by removing all 1-occurrences of $a $ in the necklaces (defining $\Phi $) and rearranging terms to get a maximal broken necklace (using the cyclic property of necklaces). An example, for the cyclic quiver above let us take as super-potential $abcabc $ (2 cyclic turns), then for example
$\frac{\partial \Phi}{\partial b} = cabca+cabca = 2 cabca $
(the first term corresponds to the first occurrence of $b $, the second to the second). Okay, but then the vacua-representations will be the representations of the quotient-algebra (which I like to call the
vacualgebra)
$\mathcal{U}(Q,\Phi) = \frac{\mathbb{C} Q}{(\partial \Phi/\partial a, \forall a)} $
which in ‘physical relevant settings’ (whatever that means…) turn out to be
Calabi-Yau algebras.
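Before returning to the modular setting, here is a small Python sketch of the necklace differentiation just described, encoding necklaces as words in the arrow labels (an illustrative helper, not from the original post):

    # Cyclic derivative of a superpotential term with respect to an arrow.
    # A necklace is stored as a string of arrow labels, e.g. "abcabc".
    from collections import Counter

    def cyclic_derivative(word, arrow):
        """Sum of broken necklaces: for each occurrence of `arrow`, delete it
        and rotate the word so the deletion point becomes the end."""
        terms = []
        for pos, a in enumerate(word):
            if a == arrow:
                rotated = word[pos + 1:] + word[:pos]  # path after, then before
                terms.append(rotated)
        return Counter(terms)

    print(cyclic_derivative("abcabc", "b"))   # Counter({'cabca': 2})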
But, let us return to the case of subgroups of the modular group and their quivers. Do we have a natural
super-potential in this case? Well yes, the quiver encoded a triangulation of the compactification of $\mathbb{H}/\Lambda $ and if we choose an orientation it turns out that all ‘black’ triangles (with respect to the Dedekind tessellation) have their arrow-sides defining a necklace, whereas for the ‘white’ triangles the reverse orientation makes the arrow-sides into a necklace. Hence, it makes sense to look at the cubic superpotential $\Phi $ being the sum over all triangle-sides-necklaces with a +1-coefficient for the black triangles and a -1-coefficient for the white ones. Let’s consider an index three example from a previous post [tex]\xymatrix{& & \rho \ar[lld]_d \ar[ld]^f \ar[rd]^e & \\ i \ar[rrd]_a & i+1 \ar[rd]^b & & \omega \ar[ld]^c \\ & & 0 \ar[uu]^h \ar@/^/[uu]^g \ar@/_/[uu]_i &}[/tex]
In this case the super-potential coming from the triangulation is
$\Phi = -aid+agd-cge+che-bhf+bif $
and therefore we have a noncommutative algebra $\mathcal{U}(Q,\Phi) $ associated to this index 3 subgroup. Contrary to what I believed at the start of this series, the algebras one obtains in this way from dessins d’enfants are
far from being Calabi-Yau (in whatever definition). For example, using a GAP-program written by Raf Bocklandt, I've checked that the growth rate of the above algebra is similar to that of $\mathbb{C}[x] $, so in this case $\mathcal{U}(Q,\Phi) $ can be viewed as a noncommutative curve (with singularities).
However, this is not the case for all such algebras. For example, the vacualgebra associated to the second index three subgroup (whose fundamental domain and quiver were depicted at the end of this post) has growth rate similar to that of $\mathbb{C} \langle x,y \rangle $…
I have an outlandish conjecture about the growth-behavior of all algebras $\mathcal{U}(Q,\Phi) $ coming from dessins d’enfants :
the algebra sees what the monodromy representation of the dessin sees of the modular group (or of the third braid group). I can make this more precise, but perhaps it is wiser to calculate one or two further examples…
|
For the better part of the 1930s, Ernst Witt (1) hung out with the rest of the ‘Noetherknaben’, the group of young mathematicians around Emmy Noether (3) in Göttingen.
In 1934 Witt became Helmut Hasse‘s assistant in Göttingen, where he qualified as a university lecturer in 1936. By 1938 he had made enough of a name for himself to be offered a lecturer position in Hamburg, and he soon became an associate professor, the down-graded position held by Emil Artin (2) until he was forced to emigrate in 1937.
A former fellow student of his in Göttingen, Erna Bannow (4), had gone earlier to Hamburg to work with Artin. She continued her studies with Witt and finished her Ph.D. in 1939. In 1940 Erna Bannow and Witt married.
So, life was smiling on Ernst Witt that sunday january 28th 1940, both professionally and personally. There was just one cloud on the horizon, and a rather menacing one. He was called up by the Wehrmacht and knew he had to enter service in february. For all he knew, he was spending the last week-end with his future wife… (later in february 1940, Blaschke helped him to defer his military service by one year).
Still, he desperately wanted to finish his paper before entering the army, so he spend most of that week-end going through the final version and submitted it on monday, as the published paper shows.
In the 1970s, Witt suddenly claimed he had discovered the Leech lattice $ {\Lambda} $ that sunday. Last time we saw that the only written evidence for Witt’s claim is one sentence in his 1941-paper Eine Identität zwischen Modulformen zweiten Grades. “Bei dem Versuch, eine Form aus einer solchen Klassen wirklich anzugeben, fand ich mehr als 10 verschiedene Klassen in $ {\Gamma_{24}} $.” (“In the attempt to actually exhibit a form from such a class, I found more than 10 different classes in $ {\Gamma_{24}} $.”)
But then, why didn’t Witt include more details of this sensational lattice in his paper?
Ina Kersten recalls on page 328 of Witt’s collected papers : “In his colloquium talk “Gitter und Mathieu-Gruppen” in Hamburg on January 27, 1970, Witt said that in 1938, he had found nine lattices in $ {\Gamma_{24}} $ and that later on January 28, 1940, while studying the Steiner system $ {S(5,8,24)} $, he had found two additional lattices $ {M} $ and $ {\Lambda} $ in $ {\Gamma_{24}} $. He continued saying that he had then given up the tedious investigation of $ {\Gamma_{24}} $ because of the surprisingly low contribution
$ \displaystyle | Aut(\Lambda) |^{-1} < 10^{-18} $
to the Minkowski density and that he had consented himself with a short note on page 324 in his 1941 paper.”
In the last sentence he refers to the fact that the sum of the inverse orders of the automorphism groups of all even unimodular lattices of a given dimension is a fixed rational number, the Minkowski-Siegel mass constant. In dimension 24 this constant is
$ \displaystyle \sum_{L} \frac{1}{| Aut(L) |} = \frac {1027637932586061520960267}{129477933340026851560636148613120000000} \approx 7.937 \times 10^{-15} $
That is, Witt was disappointed by the low contribution of the Leech lattice to the total constant and concluded that there might be thousands of new even 24-dimensional unimodular lattices out there, and dropped the problem.
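(For the curious, the quoted decimal approximation is a two-line check in Python:)

    from fractions import Fraction

    mass = Fraction(1027637932586061520960267,
                    129477933340026851560636148613120000000)
    print(float(mass))   # ~ 7.937e-15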
If true, the story gets even better: not only does Witt claim to have found the lattices $ {A_1^{24}=M} $ and $ {\Lambda} $, but also enough information on the Leech lattice to compute the order of its automorphism group $ {Aut(\Lambda)} $, aka the Conway group $ {Co_0 = .0} $, the dotto-group!
Is this possible? Well fortunately, the difficulties one encounters when trying to compute the order of the automorphism group of the Leech lattice from scratch, is one of the better documented mathematical stories around.
The books From Error-Correcting Codes through Sphere Packings to Simple Groups by Thomas Thompson, Symmetry and the monster by Mark Ronan, and Finding moonshine by Marcus du Sautoy tell the story in minute detail.
It took John Conway 12 hours on a 1968 saturday in Cambridge to compute the order of the dotto group, using the knowledge of Leech and McKay on the properties of the Leech lattice and with considerable help offered by John Thompson via telephone.
But then, John Conway is one of the fastest mathematicians the world has known. The prologue of his book On numbers and games begins with : “Just over a quarter of a century ago, for seven consecutive days I sat down and typed from 8:30 am until midnight, with just an hour for lunch, and ever since have described this book as “having been written in a week”.”
Conway may have written a book in one week, Ernst Witt did complete his entire Ph.D. in just one week! In a letter of August 1933, his sister told her parents : “He did not have a thesis topic until July 1, and the thesis was to be submitted by July 7. He did not want to have a topic assigned to him, and when he finally had the idea, he started working day and night, and eventually managed to finish in time.”
So, if someone might have beaten John Conway in fast-computing the dottos order, it may very well have been Witt. Sadly enough, there is a lot of circumstantial evidence to make Witt’s claim highly unlikely.
For starters, psychology. Would you spend your last week-end together with your wife to be before going to war performing an horrendous calculation?
Secondly, mathematical breakthroughs often arise from newly found insight. At that time, Witt was also working on his paper on root lattices “Spiegelungsgruppen und Aufzählung halbeinfacher Liescher Ringe”, which he eventually submitted in january 1941. Contained in that paper is what we know as Witt’s lemma, which tells us that for any integral lattice the sublattice generated by vectors of norms 1 and 2 is a direct sum of root lattices.
This leads to the trick of trying to construct unimodular lattices by starting with a direct sum of root lattices and ‘adding glue’. Although this gluing-method was introduced by Kneser as late as 1967, Witt must have been aware of it as his 16-dimensional lattice $ {D_{16}^+} $ is constructed this way.
If Witt wanted to construct new 24-dimensional even unimodular lattices in 1940, it would be natural for him to start off with direct sums of root lattices and trying to add vectors to them until he got what he was after. Now, all of the Niemeier-lattices are constructed this way, except for the Leech lattice!
I’m far from an expert on the Niemeier lattices but I would say that Witt definitely knew of the existence of $ {D_{24}^+} $, $ {E_8^3} $ and $ {A_{24}^+} $ and that it is quite likely he also constructed $ {(D_{16}E_8)^+, (D_{12}^2)^+, (A_{12}^2)^+, (D_8^3)^+} $ and possibly $ {(A_{17}E_7)^+} $ and $ {(A_{15}D_9)^+} $. I’d rate it far more likely Witt constructed another two such lattices on sunday january 28th 1940, rather than discovering the Leech lattice.
Finally, wouldn’t it be natural for him to include a remark, in his 1941 paper on root lattices, that not every even unimodular lattices can be obtained from sums of root lattices by adding glue, the Leech lattice being the minimal counter-example?
If it is true he was playing around with the Steiner systems that sunday, it would still be a pretty good story that he discovered the lattices $ {(A_2^{12})^+} $ and $ {(A_1^{24})^+} $, for this would mean he discovered the Golay codes in the process!
Which brings us to our next question : who discovered the Golay code?
|
Background:Fix a linear algebraic group $G$ over an algebraically closed field $k$ of arbitrary characteristic and let $B \subseteq G$ be a Borel subgroup with unipotent radical $N$. Let $\Delta^+$ denote the positive roots in a root system of a torus of $G$. Then we have the hyperalgebra $\bar U(N)$ of $N$ which is generated as a $k$-algebra by the divided-power elements $E_\beta^{(n)}$ for $\beta \in \Delta^+$ and $n \geq 0$. (I would also be happy to just consider the characteristic 0 case, where $\bar U(N)$ is just the enveloping algebra of Lie($N$) and is also generated by the non-divided power elements $E_\beta^n$).
Fix an ordering $\beta_1, \ldots, \beta_k$ of $\Delta^+$. Then the monomials $E_{\beta_1}^{(n_1)} \cdots E_{\beta_k}^{(n_k)}$, for $n_1, \ldots, n_k \geq 0$, form a vector space basis of $\bar U(N)$. This gives a natural vector space grading on $\bar U(N)$ that I will denote by $\bar U(N)_n$ for $n \geq 0$ (the span of the monomials with $n_1 + \cdots + n_k = n$). It is definitely not the case that $\bar U(N)_n$ is a multiplicative grading on $\bar U(N)$, as is easy to see. The following is, however, a well-known fact (by the PBW theorem): $$\bar U(N)_n \cdot \bar U(N)_m \subseteq \bar U(N)_{ \leq m+n } \,\,\, .$$ This gives an upper bound on the degree of a product of elements.
Question: I am wondering if anyone has considered the question of a lower bound on the degree of a product of elements in $\bar U(N)$. That is, I would like to see a fact of the following kind: $$\bar U(N)_n \cdot \bar U(N)_m \subseteq \bar U(N)_{ \geq f(m,n) } \,\,\, ,$$ where $f(m,n)$ is some reasonable function of $m$ and $n$ that hopefully gives a sharp bound.
(Note that whereas the PBW theorem gives an upper bound on degrees for any hyperalgebra/enveloping algebra, I expect that this lower bound, if it exists, will very much depend on the fact that $N$ is a unipotent algebraic group.)
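As a toy illustration (my own, in characteristic 0 for type $A_2$; the normalization of the root vectors is assumed only up to nonzero scalars), products can genuinely drop degree, so any such $f$ must satisfy $f(1,1) \leq 1 < 2$:

% Type A_2, positive roots \alpha, \beta, \alpha+\beta, with PBW ordering
% E_\alpha, E_{\alpha+\beta}, E_\beta and the grading by total exponent.
% Using [E_\alpha, E_\beta] = E_{\alpha+\beta} (up to a nonzero scalar),
% a product of two degree-1 elements acquires a degree-1 component:
\[
  E_\beta \, E_\alpha
    \;=\; E_\alpha E_\beta \;-\; [E_\alpha, E_\beta]
    \;=\; E_\alpha E_\beta \;-\; E_{\alpha+\beta} ,
\]
% so \bar U(N)_1 \cdot \bar U(N)_1 \not\subseteq \bar U(N)_{\geq 2};
% the best one can hope for here is f(1,1) = 1.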
|
The sun is an extended source. This means that it occupies a definite solid angle in the sky, $\omega = 6.8\times 10^{-5}\ \mathrm{sr}$.
To visualise this (not to scale), let's say that the black area in the following diagram is the angular extent of the sun as seen from the surface of the Earth (ignore the other labels).
What happens when we concentrate sunlight, from the perspective of the white plate in the diagram? Answer: the sun looks bigger! It begins to fill more of the "sky".
When light rays are travelling from all angles towards the plate, we have reached maximum concentration.
So maximum concentration is reached when the angular size of the sun is made to fill a full hemisphere of solid angle. The angular size of the sun is approx. $\theta_s = 0.2666^{\circ}$ (note this is the half-angle); performing the solid angle integration therefore yields,
$$X_{2D} = \frac{\iint_{2\pi}\cos\theta\ d\omega}{\iint_{\omega_s}\cos\theta\ d\omega} = \frac{\int_0^{2\pi}d \phi \int_0^{\pi/2} \cos\theta\sin\theta\ d \theta }{\int_0^{2\pi}d \phi \int_0^{\theta_s} \cos\theta\sin\theta\ d \theta} = \frac{\frac{2\pi}{2}}{\frac{2\pi\sin^2\theta_s}{2}} = \frac{1}{\sin^2\theta_s} = \frac{\pi}{6.8\times 10^{-5}\ \mathrm{sr}} \approx 46200\times$$
This assumes that we can change the angular extent of the light in both $x$ and $y$ and focus to a point. If instead you can only focus to a line, then the limit is much smaller,
$$X_{1D} = 220\times$$
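A quick numerical sanity check of both limits (my own sketch, not part of the original answer; it just evaluates $1/\sin^2\theta_s$ and $1/\sin\theta_s$ for the half-angle used above):

import math

# Half-angle subtended by the sun, as used above (degrees).
theta_s = 0.2666
s = math.sin(math.radians(theta_s))

# Point-focus (2D) limit: hemisphere integral (pi) over solar-disc
# integral (pi * sin^2 theta_s).
X_2d = 1.0 / s**2
# Line-focus (1D) limit: only one angular dimension is compressed.
X_1d = 1.0 / s

print(f"solid angle of the sun: {math.pi * s**2:.2e} sr")  # ~6.8e-5 sr
print(f"X_2D ~ {X_2d:,.0f}x")  # ~46,200x
print(f"X_1D ~ {X_1d:,.0f}x")  # ~215x; the ~220x quoted above reflects
                               # rounding in the value taken for theta_s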
|
I want to assign only one overall subscript that covers both integral symbols in a double integral. I tried:
\begin{equation}T_y=\iint_A \tau_{xy}\,dA=0\end{equation}
but the subscript only attaches to the second integral.
You can use \limits to accomplish the job. The package mathtools provides good enhancements to the basic features of LaTeX typesetting.
\documentclass{article}
\usepackage{mathtools}
\begin{document}
\begin{gather}
\iiint\limits_V \mu(t,u,v,w) \,dt\,du\,dv\,dw \\
T_y=\iint\limits_A \tau_{xy}\,dA=0
\end{gather}
\end{document}
|
What do we really mean when we say that the neutron and proton wavefunctions together form an $\rm SU(2)$ isospin doublet? What is the significance of this? What does this transformation really doing to the wavefunctions (or fields)?
Two particles forming an $SU(2)$ doublet means that they transform into each other under an $SU(2)$ transformation. For example, a proton and neutron (which form such a doublet) transform as, \begin{equation} \left( \begin{array}{c} p \\ n \end{array} \right) \xrightarrow{SU(2)} \exp \left( - \frac{ i }{ 2} \theta_a \sigma_a \right) \left( \begin{array}{c} p \\ n \end{array} \right) \end{equation} where $ \sigma _a $ are the Pauli matrices. It turns out the real world obeys certain symmetry properties. For example, the equations describing the strong interactions of protons and neutrons are approximately invariant under unitary transformations with determinant 1 (the transformation shown above) between the proton and neutron. This didn't have to be the case, but it turns out that it is. Since the strong interaction is invariant under such transformations, each interaction term in the strong interaction Lagrangian is highly restricted. For one thing, this is useful since it allows one to make simple predictions about proton and neutron systems.
In order to get a better understanding of this transformation and why the symmetry holds, consider the QCD Lagrangian for the up and down quarks (which, as for the proton and neutron, also make up an isospin doublet): \begin{equation} {\cal L} _{ QCD} = \bar{\psi} _{u,i}\, i \left( \left( \gamma ^\mu D _\mu \right) _{ ij} - m _u \delta _{ ij} \right) \psi _{u,j} + \bar{\psi} _{ d ,i}\, i \left( \left( \gamma ^\mu D _\mu \right) _{ ij} - m _d \delta _{ ij} \right) \psi _{d ,j} \end{equation} where $ D ^\mu $ is the covariant derivative and the sum over $ i,j $ is a sum over color. Notice that if $ m _{ u} \approx m _d \equiv m $ we can write this Lagrangian in a more convenient form, \begin{equation} {\cal L} _{ QCD} = \bar{\psi} _{i}\, i \left( \left( \gamma ^\mu D _\mu \right) _{ ij} - m \delta _{ ij} \right) \psi _{j} \end{equation} where $ \psi \equiv \left( \psi _u \, \psi _d \right) ^T $. This Lagrangian is now invariant under transformations between up and down quarks ("isospin"), since the color generators commute with the isospin generators. Since the proton and neutron differ only in their ratio of up to down quarks (the more precise statement is that their quantum numbers correspond to those of $uud$ and $udd$ respectively), we would expect these particles to behave very similarly when QED can be neglected (which is often the case, because QED is much weaker than QCD at low energies).
As an explicit example of the use of the symmetry consider the reactions: \begin{align} & 1) \quad p p \rightarrow d \pi ^+ \\ & 2) \quad p n \rightarrow d \pi ^0 \end{align} where $ d $ is deuterium, an isospin singlet, and the pions form an isospin triplet. For the first reaction, the initial isospin state is $ \left| 1/2, 1/2 \right\rangle \otimes \left| 1/2, 1/2 \right\rangle = \left| 1, 1 \right\rangle $. The products have isospin $ \left| 0,0 \right\rangle \otimes \left| 1,1 \right\rangle = \left| 1,1 \right\rangle $. The second reaction has initial isospin state $ \frac{1}{\sqrt{2}} \left( \left| 0,0 \right\rangle + \left| 1,0 \right\rangle \right) $ and final isospin $ \left| 0,0 \right\rangle \otimes \left| 1,0 \right\rangle = \left| 1,0 \right\rangle $.
Since both cases have some overlap between the isospin wavefunctions, both can proceed. However, the second process has a suppression factor of $ 1/ \sqrt{2} $ when contracting the isospin wavefunctions. To get the probabilities this will need to be squared. Thus one can conclude, \begin{equation} \frac{ \mbox{Rate of 1} }{ \mbox{Rate of 2}} \approx 2 \end{equation}
Notice that even without knowing anything about specifics of the system we were able to make a very powerful prediction. All we needed to know is that the process occurs through QCD.
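The bookkeeping above is simple enough to script; here is a minimal sketch (mine, not part of the original answer) of the squared-overlap ratio:

from math import sqrt

# Isospin overlaps from the decomposition above:
# pp -> d pi+ : initial |1,1>,  final |0,0> x |1,1> = |1,1>  -> amplitude 1
# pn -> d pi0 : initial (|1,0> + |0,0>)/sqrt(2),
#               final   |0,0> x |1,0> = |1,0>                -> amplitude 1/sqrt(2)
amp_1 = 1.0
amp_2 = 1.0 / sqrt(2)

# With identical strong-interaction dynamics, rates scale as squared overlaps.
print(f"Rate(1)/Rate(2) = {amp_1**2 / amp_2**2:.1f}")  # -> 2.0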
I don't know what background you bring to the question. So at the risk of sounding patronizing, let me give a down-to-earth answer. I wonder if this helps.
Think of rotations on the (real) 2-dimensional plane $\mathbb{R}^2$. You can rotate the X-axis into the Y-axis, and the Y-axis into the negative X-axis. This group of 2d rotations is called $SO(2)$. Note that here, each axis consists of the set of real numbers. If, instead, each axis corresponded to the set of complex numbers, then we would have the 2d complex plane $\mathbb{C}^2$. Rotations in this plane would correspond to the group $SU(2)$ and you can think of protons and neutrons (rather, their wavefunctions) as the basis elements forming the two axes in this $\mathbb{C}^2$ space. The "doublet" refers to having two axes.
This $\mathbb{C}^2$ does not refer to actual physical dimensions, but just some property of protons and neutrons.
I'm at a loss for explaining the "significance" of this, apart from the fact that this is how nature behaves. One consequence is the fact that the proton and neutron have approximately equal masses, because apart from being different axes corresponding to this property, they are supposed to be pretty similar otherwise.
Usually when you write the wavefunction (of a particle), you concentrate on its spatial profile (in introductory quantum mechanics) and neglect other properties which characterize it. Similarly, every particle also carries a wavefunction corresponding to every other characteristic and the complete description involves writing down all the wavefunctions (you could "multiply" the wavefunctions corresponding to all the different properties, for what it's worth). Just like you might have seen operators acting on the spatial part of a wavefunction, you will also have operators acting on the wavefunction corresponding to every property.
|
It will help if you have studied "The Feynman Lectures on Physics, Vol. 3", because I cannot always tell which words or expressions are peculiar to Feynman, and in order to summarize my questions here I have to quote the contents of the book.
However, one thing I notice is that the "base state" that Feynman describes seems to be an "orthonormal basis state vector"...
With the pair of Hamiltonian matrix equations
$i\hbar \frac{d{C}_{1}}{dt} = {H}_{11}{C}_{1} + {H}_{12}{C}_{2}$
$i\hbar \frac{d{C}_{2}}{dt} = {H}_{21}{C}_{1} + {H}_{22}{C}_{2}$
where ${C}_{i} = \langle i|\psi\rangle$ and $|\psi\rangle$ is an arbitrary state, the book sets the states 1 and 2 as "base states". There are only two base states for this particle. Base states satisfy the condition $\langle i|j\rangle = {\delta}_{ij}$.
I think the "kronecker delta" means that once the particle is in the state of j, we will not be able to find the state i, so if we suppose all the components of hamiltonian are constant, we can say ${H}_{12}$ and ${H}_{21}$ should be zero. ..............(1)
However, the book says that states 1 and 2 are base states while ${H}_{12}$ and ${H}_{21}$ can be nonzero at the same time (if you have the book, refer to Eqs. (9.2) and (9.3) and page 9-3). There can be a probability of transitioning from state 1 to state 2 and vice versa.
So is the relationship (1) that I infer between the Kronecker delta and the components of the Hamiltonian not correct at all? Or is my thought that ${H}_{12}$ and ${H}_{21}$ should be zero right?
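To see concretely that orthogonality of the base states ($\langle i|j\rangle = \delta_{ij}$) is compatible with nonzero $H_{12}$, here is a small sketch of mine (not from the book) that integrates the two equations above with a constant Hamiltonian of the ammonia-molecule form ($H_{12} = H_{21} = -A$) and shows the amplitude flowing from state 1 to state 2:

import numpy as np
from scipy.linalg import expm

hbar = 1.0
# Constant Hamiltonian in the orthonormal base states |1>, |2>:
# equal diagonal energies E0, nonzero off-diagonal coupling -A.
E0, A = 1.0, 0.1
H = np.array([[E0, -A],
              [-A, E0]])

C0 = np.array([1.0, 0.0])  # start entirely in base state |1>

for t in (0.0, np.pi * hbar / (4 * A), np.pi * hbar / (2 * A)):
    C = expm(-1j * H * t / hbar) @ C0  # solves i*hbar dC/dt = H C
    print(f"t={t:6.2f}  |C1|^2={abs(C[0])**2:.3f}  |C2|^2={abs(C[1])**2:.3f}")

# Output: the probability oscillates 1 -> 0.5 -> 0 between the two
# orthogonal base states. <1|2> = 0 constrains the states themselves,
# not the Hamiltonian that couples them.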
|
Search
Now showing items 1-10 of 18
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
|
Use the comparison test to determine whether the following series converge.
1) \(\displaystyle \sum^∞_{n=1}a_n\) where \(\displaystyle a_n=\frac{2}{n(n+1)}\)
2) \(\displaystyle \sum^∞_{n=1}a_n\) where \(\displaystyle a_n=\frac{1}{n(n+1/2)}\)
Solution: Converges by comparison with \(\displaystyle 1/n^2\).
3) \(\displaystyle \sum^∞_{n=1}\frac{1}{2(n+1)}\)
4) \(\displaystyle \sum^∞_{n=1}\frac{1}{2n−1}\)
Solution: Diverges by comparison with the harmonic series, since \(\displaystyle \frac{1}{2n−1}≥\frac{1}{2n}\) and \(\displaystyle \sum\frac{1}{2n}\) diverges.
5) \(\displaystyle \sum^∞_{n=2}\frac{1}{(n\ln n)^2}\)
6) \(\displaystyle \sum^∞_{n=1}\frac{n!}{(n+2)!}\)
Solution: \(\displaystyle a_n=1/((n+1)(n+2))<1/n^2.\) Converges by comparison with p-series, \(\displaystyle p=2\).
7) \(\displaystyle \sum^∞_{n=1}\frac{1}{n!}\)
8) \(\displaystyle \sum^∞_{n=1}\frac{sin(1/n)}{n}\)
Solution: \(\displaystyle sin(1/n)≤1/n,\) so converges by comparison with p-series, \(\displaystyle p=2\).
9) \(\displaystyle \sum_{n=1}^∞\frac{sin^2n}{n^2}\)
10) \(\displaystyle \sum_{n=1}^∞\frac{sin(1/n)}{\sqrt{n}}\)
Solution: \(\displaystyle sin(1/n)≤1/n,\) so \(\displaystyle sin(1/n)/\sqrt{n}≤1/n^{3/2}\); converges by comparison with p-series, \(\displaystyle p=3/2.\)
11) \(\displaystyle \sum^∞_{n=1}\frac{n^{1.2}−1}{n^{2.3}+1}\)
12) \(\displaystyle \sum^∞_{n=1}\frac{\sqrt{n+1}−\sqrt{n}}{n}\)
Solution: Since \(\displaystyle \sqrt{n+1}−\sqrt{n}=1/(\sqrt{n+1}+\sqrt{n})≤1/(2\sqrt{n}),\) the terms are at most \(\displaystyle 1/(2n^{3/2})\), so the series converges by comparison with the p-series for \(\displaystyle p=1.5\).
13) \(\displaystyle \sum^∞_{n=1}\frac{\sqrt[4]{n}}{\sqrt[3]{n^4+n^2}}\)
Use the limit comparison test to determine whether each of the following series converges or diverges.
14) \(\displaystyle \sum^∞_{n=1}(\frac{\ln n}{n})^2\)
Solution: Converges by limit comparison with a p-series for any \(\displaystyle 1<p<2\), since \(\displaystyle (\ln n)^2/n^2\) divided by \(\displaystyle 1/n^p\) tends to \(\displaystyle 0\).
15) \(\displaystyle \sum^∞_{n=1}(\frac{\ln n}{n^{0.6}})^2\)
16) \(\displaystyle \sum^∞_{n=1}\frac{ln(1+\frac{1}{n})}{n}\)
Solution: Converges by limit comparison with p-series, \(\displaystyle p=2.\)
17) \(\displaystyle \sum^∞_{n=1}ln(1+\frac{1}{n^2})\)
18) \(\displaystyle \sum^∞_{n=1}\frac{1}{4^n−3^n}\)
Solution: Converges by limit comparison with \(\displaystyle 4^{−n}\).
19) \(\displaystyle \sum^∞_{n=1}\frac{1}{n^2−n\sin n}\)
20) \(\displaystyle \sum^∞_{n=1}\frac{1}{e^{(1.1)n}−3^n}\)
Solution: Converges by limit comparison with \(\displaystyle 1/e^{1.1n}\).
21) \(\displaystyle \sum^∞_{n=1}\frac{1}{e^{(1.01)n}−3^n}\)
22) \(\displaystyle \sum^∞_{n=1}\frac{1}{n^{1+1/n}}\)
Solution: Diverges by limit comparison with harmonic series.
23) \(\displaystyle \sum^∞_{n=1}\frac{1}{2^{1+1/n}n^{1+1/n}}\)
24) \(\displaystyle \sum^∞_{n=1}(\frac{1}{n}−sin(\frac{1}{n}))\)
Solution: Converges by limit comparison with p-series, \(\displaystyle p=3\).
25) \(\displaystyle \sum^∞_{n=1}(1−cos(\frac{1}{n}))\)
26) \(\displaystyle \sum^∞_{n=1}\frac{1}{n}(tan^{−1}n−\frac{π}{2})\)
Solution: Converges by limit comparison with p-series, \(\displaystyle p=2\), since \(\displaystyle tan^{−1}n−\frac{π}{2}=−tan^{−1}(1/n)≈−1/n\).
27) \(\displaystyle \sum^∞_{n=1}(1−\frac{1}{n})^{n\cdot n}\) (Hint:\(\displaystyle (1−\frac{1}{n})^n→1/e.\))
28) \(\displaystyle \sum^∞_{n=1}(1−e^{−1/n})\) (Hint:\(\displaystyle 1/e≈(1−1/n)^n,\) so \(\displaystyle 1−e^{−1/n}≈1/n.\))
Solution: Diverges by limit comparison with \(\displaystyle 1/n\).
29) Does \(\displaystyle \sum^∞_{n=2}\frac{1}{(lnn)^p}\) converge if \(\displaystyle p\) is large enough? If so, for which \(\displaystyle p?\)
30) Does \(\displaystyle \sum^∞_{n=1}(\frac{(lnn)}{n})^p\) converge if \(\displaystyle p\) is large enough? If so, for which \(\displaystyle p?\)
Solution: Converges for \(\displaystyle p>1\) by comparison with a \(\displaystyle p\) series for slightly smaller \(\displaystyle p\).
31) For which \(\displaystyle p\) does the series \(\displaystyle \sum^∞_{n=1}2^{pn}/3^n\) converge?
32) For which \(\displaystyle p>0\) does the series \(\displaystyle \sum^∞_{n=1}\frac{n^p}{2^n}\) converge?
Solution: Converges for all \(\displaystyle p>0\).
33) For which \(\displaystyle r>0\) does the series \(\displaystyle \sum^∞_{n=1}\frac{r^{n^2}}{2^n}\) converge?
34) For which \(\displaystyle r>0\) does the series \(\displaystyle \sum^∞_{n=1}\frac{2^n}{r^{n^2}}\) converge?
Solution: Converges for all \(\displaystyle r>1\). If \(\displaystyle r>1\) then \(\displaystyle r^n>4\), say, once \(\displaystyle n>\ln(4)/\ln(r)\), and then \(\displaystyle 2^n/r^{n^2}=2^n/(r^n)^n<2^n/4^n=(1/2)^n\), so the series converges by comparison with a geometric series with ratio \(\displaystyle 1/2\).
35) Find all values of \(\displaystyle p\) and \(\displaystyle q\) such that \(\displaystyle \sum^∞_{n=1}\frac{n^p}{(n!)^q}\) converges.
36) Does \(\displaystyle \sum^∞_{n=1}\frac{sin^2(nπ/2)}{n}\) converge or diverge? Explain.
Solution: The numerator is equal to \(\displaystyle 1\) when \(\displaystyle n\) is odd and \(\displaystyle 0\) when \(\displaystyle n\) is even, so the series can be rewritten \(\displaystyle \sum^∞_{k=0}\frac{1}{2k+1},\) which diverges by limit comparison with the harmonic series.
37) Explain why, for each \(\displaystyle n\), at least one of \(\displaystyle \{|\sin n|,|\sin(n+1)|,...,|\sin(n+6)|\}\) is larger than \(\displaystyle 1/2\). Use this relation to test convergence of \(\displaystyle \sum^∞_{n=1}\frac{|\sin n|}{\sqrt{n}}\).
38) Suppose that \(\displaystyle a_n≥0\) and \(\displaystyle b_n≥0\) and that \(\displaystyle \sum_{n=1}^∞a^2_n\) and \(\displaystyle \sum_{n=1}^∞b^2_n\) converge. Prove that \(\displaystyle \sum_{n=1}^∞a_nb_n\) converges and \(\displaystyle \sum_{n=1}^∞a_nb_n≤\frac{1}{2}(\sum_{n=1}^∞a^2_n+\sum_{n=1}^∞b^2_n)\).
Solution: \(\displaystyle (a−b)^2=a^2−2ab+b^2\) or \(\displaystyle a^2+b^2≥2ab\), so convergence follows from comparison of \(\displaystyle 2a_nb_n\) with \(\displaystyle a^2_n+b^2_n.\) Since the partial sums on the left are bounded by those on the right, the inequality holds for the infinite series.
39) Does \(\displaystyle \sum_{n=1}^∞2^{−\ln\ln n}\) converge? (Hint: Write \(\displaystyle 2^{\ln\ln n}\) as a power of \(\displaystyle \ln n\).)
40) Does \(\displaystyle \sum_{n=1}^∞(\ln n)^{−\ln n}\) converge? (Hint: Use \(\displaystyle t=e^{\ln(t)}\) to compare to a \(\displaystyle p\)-series.)
Solution: \(\displaystyle (\ln n)^{−\ln n}=e^{−\ln(n)\ln\ln(n)}.\) If \(\displaystyle n\) is sufficiently large, then \(\displaystyle \ln\ln n>2,\) so \(\displaystyle (\ln n)^{−\ln n}<1/n^2\), and the series converges by comparison to a \(\displaystyle p\)-series.
41) Does \(\displaystyle \sum_{n=2}^∞(\ln n)^{−\ln\ln n}\) converge? (Hint: Compare \(\displaystyle a_n\) to \(\displaystyle 1/n\).)
42) Show that if \(\displaystyle a_n≥0\) and \(\displaystyle \sum_{n=1}^∞a_n\) converges, then \(\displaystyle \sum_{n=1}^∞a^2_n\) converges. If \(\displaystyle \sum_{n=1}^∞a^2_n\) converges, does \(\displaystyle \sum_{n=1}^∞a_n\) necessarily converge?
Solution: \(\displaystyle a_n→0,\) so \(\displaystyle a^2_n≤|a_n|\) for large \(\displaystyle n\). Convergence follows from comparison. \(\displaystyle \sum1/n^2\) converges, but \(\displaystyle \sum1/n\) does not, so the fact that \(\displaystyle \sum_{n=1}^∞a^2_n\) converges does not imply that \(\displaystyle \sum_{n=1}^∞a_n\) converges.
43) Suppose that \(\displaystyle a_n>0\) for all \(\displaystyle n\) and that \(\displaystyle \sum_{n=1}^∞a_n\) converges. Suppose that \(\displaystyle b_n\) is an arbitrary sequence of zeros and ones. Does \(\displaystyle \sum_{n=1}^∞a_nb_n\) necessarily converge?
44) Suppose that \(\displaystyle a_n>0\) for all \(\displaystyle n\) and that \(\displaystyle \sum_{n=1}^∞a_n\) diverges. Suppose that \(\displaystyle b_n\) is an arbitrary sequence of zeros and ones with infinitely many terms equal to one. Does \(\displaystyle \sum_{n=1}^∞a_nb_n\) necessarily diverge?
Solution: No. \(\displaystyle \sum_{n=1}^∞1/n\) diverges. Let \(\displaystyle b_k=0\) unless \(\displaystyle k=n^2\) for some \(\displaystyle n\). Then \(\displaystyle \sum_kb_k/k=\sum1/k^2\) converges.
45) Complete the details of the following argument: If \(\displaystyle \sum_{n=1}^∞\frac{1}{n}\) converges to a finite sum \(\displaystyle s\), then \(\displaystyle \frac{1}{2}s=\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+⋯\) and \(\displaystyle s−\frac{1}{2}s=1+\frac{1}{3}+\frac{1}{5}+⋯.\) Why does this lead to a contradiction?
46) Show that if \(\displaystyle a_n≥0\) and \(\displaystyle \sum_{n=1}^∞a^2_n\) converges, then \(\displaystyle \sum_{n=1}^∞sin^2(a_n)\) converges.
Solution: \(\displaystyle |sint|≤|t|,\) so the result follows from the comparison test.
47) Suppose that \(\displaystyle a_n/b_n→0\) in the comparison test, where \(\displaystyle a_n≥0\) and \(\displaystyle b_n≥0\). Prove that if \(\displaystyle \sum b_n\) converges, then \(\displaystyle \sum a_n\) converges.
48) Let \(\displaystyle b_n\) be an infinite sequence of zeros and ones. What is the largest possible value of \(\displaystyle x=\sum_{n=1}^∞b_n/2^n\)?
Solution: By the comparison test, \(\displaystyle x=\sum_{n=1}^∞b_n/2^n≤\sum_{n=1}^∞1/2^n=1.\)
49) Let \(\displaystyle d_n\) be an infinite sequence of digits, meaning \(\displaystyle d_n\) takes values in \(\displaystyle {0,1,…,9}\). What is the largest possible value of \(\displaystyle x=\sum_{n=1}^∞d_n/10^n\) that converges?
50) Explain why, if \(\displaystyle x>1/2,\) then \(\displaystyle x\) cannot be written \(\displaystyle x=\sum_{n=2}^∞\frac{b_n}{2^n}(b_n=0or1,b_1=0).\)
Solution: If \(\displaystyle b_1=0,\) then, by comparison, \(\displaystyle x≤\sum_{n=2}^∞1/2^n=1/2.\)
51) [T] Evelyn has a perfect balancing scale, an unlimited number of \(\displaystyle 1-kg\) weights, and one each of \(\displaystyle 1/2-kg,1/4-kg,1/8-kg,\) and so on weights. She wishes to weigh a meteorite of unspecified origin to arbitrary precision. Assuming the scale is big enough, can she do it? What does this have to do with infinite series?
52) [T] Robert wants to know his body mass to arbitrary precision. He has a big balancing scale that works perfectly, an unlimited collection of \(\displaystyle 1-kg\) weights, and nine each of \(\displaystyle 0.1-kg, 0.01-kg,0.001-kg,\) and so on weights. Assuming the scale is big enough, can he do this? What does this have to do with infinite series?
Solution: Yes. Keep adding \(\displaystyle 1-kg\) weights until the balance tips to the side with the weights. If it balances perfectly, with Robert standing on the other side, stop. Otherwise, remove one of the \(\displaystyle 1-kg\) weights, and add \(\displaystyle 0.1-kg\) weights one at a time. If it balances after adding some of these, stop. Otherwise, if it tips to the weights, remove the last \(\displaystyle 0.1-kg\) weight. Start adding \(\displaystyle 0.01-kg\) weights. If it balances, stop. If it tips to the side with the weights, remove the last \(\displaystyle 0.01-kg\) weight that was added. Continue in this way for the \(\displaystyle 0.001-kg\) weights, and so on. After a finite number of steps, one has a finite series of the form \(\displaystyle A+\sum_{n=1}^Nd_n/10^n\) where \(\displaystyle A\) is the number of full kg weights and \(\displaystyle d_n\) is the number of \(\displaystyle 1/10^n-kg\) weights that were added. If at some stage this series equals Robert's exact weight, the process will stop. Otherwise it represents the Nth partial sum of an infinite series that gives Robert's exact weight, and the error of this sum is at most \(\displaystyle 1/10^N\).
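A sketch of this procedure in code (my own; the target weight is an arbitrary example, and the balance is replaced by a numeric comparison): greedily add weights at each decimal scale, so after N stages the weights held equal the Nth partial sum of the decimal expansion.

def weigh(target, stages=5):
    """Greedy balance procedure: returns A and digits d_1..d_N with
    A + sum(d_n / 10**n) <= target, with error below 10**(-N)
    (up to floating-point rounding)."""
    total = 0.0
    A = int(target)      # whole 1-kg weights (unlimited supply)
    total += A
    digits = []
    for n in range(1, stages + 1):
        w = 10 ** (-n)   # nine weights available at each scale
        d = 0
        while d < 9 and total + w <= target:
            total += w   # balance has not tipped yet: keep the weight
            d += 1
        digits.append(d)
    return A, digits, total

A, digits, approx = weigh(72.3568)
print(A, digits, approx)  # partial sums approach the exact weight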
53) The series \(\displaystyle \sum_{n=1}^∞\frac{1}{2n}\) is half the harmonic series and hence diverges. It is obtained from the harmonic series by deleting all terms in which \(\displaystyle n\) is odd. Let \(\displaystyle m>1\) be fixed. Show, more generally, that deleting all terms \(\displaystyle 1/n\) where \(\displaystyle n=mk\) for some integer \(\displaystyle k\) also results in a divergent series.
54) In view of the previous exercise, it may be surprising that a subseries of the harmonic series in which about one in every five terms is deleted might converge. A depleted harmonic series is a series obtained from \(\displaystyle \sum_{n=1}^∞\frac{1}{n}\) by removing any term \(\displaystyle 1/n\) if a given digit, say \(\displaystyle 9\), appears in the decimal expansion of \(\displaystyle n\).Argue that this depleted harmonic series converges by answering the following questions.
a. How many whole numbers \(\displaystyle n\) have \(\displaystyle d\) digits?
b. How many \(\displaystyle d-digit\) whole numbers \(\displaystyle h(d)\) do not contain \(\displaystyle 9\) as one or more of their digits?
c. What is the smallest \(\displaystyle d-digit\) number \(\displaystyle m(d)\)?
d. Explain why the depleted harmonic series is bounded by \(\displaystyle \sum_{d=1}^∞\frac{h(d)}{m(d)}\).
e. Show that \(\displaystyle \sum_{d=1}^∞\frac{h(d)}{m(d)}\) converges.
Solution: a. \(\displaystyle 10^d−10^{d−1}<10^d\) b. \(\displaystyle h(d)<9^d\) c. \(\displaystyle m(d)=10^{d−1}\) d. Group the terms in the depleted harmonic series together by number of digits. \(\displaystyle h(d)\) bounds the number of terms, and each term is at most \(\displaystyle 1/m(d)\). Thus \(\displaystyle \sum_{d=1}^∞h(d)/m(d)≤\sum_{d=1}^∞9^d/(10)^{d−1}≤90\). One can use a finer comparison to show that the value is smaller than \(\displaystyle 80\). The actual value is smaller than \(\displaystyle 23\).
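A quick numerical version of parts (d)-(e) (my own sketch; it uses the exact count \(h(d)=8\cdot 9^{d-1}\), since the leading digit can be neither 0 nor 9, which sharpens the bound to 80):

# Group the depleted harmonic series by digit count d: h(d) terms,
# each at most 1/m(d) with m(d) = 10**(d-1).
bound = sum(8 * 9 ** (d - 1) / 10 ** (d - 1) for d in range(1, 400))
print(bound)  # geometric series: 8 * sum((9/10)**k) -> 80.0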
55) Suppose that a sequence of numbers \(\displaystyle a_n>0\) has the property that \(\displaystyle a_1=1\) and \(\displaystyle a_{n+1}=\frac{1}{n+1}S_n\), where \(\displaystyle S_n=a_1+⋯+a_n\). Can you determine whether \(\displaystyle \sum_{n=1}^∞a_n\) converges? (Hint: \(\displaystyle S_n\) is monotone.)
56) Suppose that a sequence of numbers \(\displaystyle a_n>0\) has the property that \(\displaystyle a_1=1\) and \(\displaystyle a_{n+1}=\frac{1}{(n+1)^2}S_n\), where \(\displaystyle S_n=a_1+⋯+a_n\). Can you determine whether \(\displaystyle \sum_{n=1}^∞a_n\) converges? (Hint: \(\displaystyle S_2=a_2+a_1=a_2+S_1=a_2+1=1+1/4=(1+1/4)S_1, S_3=\frac{1}{3^2}S_2+S_2=(1+1/9)S_2=(1+1/9)(1+1/4)S_1\), etc. Look at \(\displaystyle ln(S_n)\), and use \(\displaystyle ln(1+t)≤t, t>0.\))
Solution: Continuing the hint gives \(\displaystyle S_N=(1+1/N^2)(1+1/(N−1)^2)⋯(1+1/4).\) Then \(\displaystyle ln(S_N)=ln(1+1/N^2)+ln(1+1/(N−1)^2)+⋯+ln(1+1/4).\) Since \(\displaystyle ln(1+t)\) is bounded by a constant times \(\displaystyle t\) when \(\displaystyle 0<t<1\), one has \(\displaystyle ln(S_N)≤C\sum_{n=1}^N\frac{1}{n^2}\), which is bounded by comparison to the p-series for \(\displaystyle p=2\). Hence the partial sums \(\displaystyle S_N\) are bounded and increasing, so \(\displaystyle \sum_{n=1}^∞a_n\) converges.
|
Euler Triangle Formula
Theorem
Let $d$ be the distance between the incenter and the circumcenter of a triangle. Then:
$d^2 = R \left({R - 2 \rho}\right)$
where $R$ is the circumradius and $\rho$ is the inradius.
Proof
Lemma: Let $I$ be the incenter of $\triangle ABC$, and let $P$ be the point where the line $CI$ produced meets the circumcircle of $\triangle ABC$.
Then:
$AP = BP = IP$ $\Box$
Let the incenter of $\triangle ABC$ be $I$.
Let the circumcenter of $\triangle ABC$ be $O$.
Let $F$ be the point where the incircle of $\triangle ABC$ meets $BC$.
Let $G$ and $J$ be the points where the line through $O$ and $I$ meets the circumcircle.
We are given that:
the distance between the incenter and the circumcenter is $d$, the inradius is $\rho$, and the circumradius is $R$.
Thus:
$OI = d$
$OG = OJ = R$
Therefore:
$IJ = R + d$
$GI = R - d$
By the Intersecting Chord Theorem:
$GI \cdot IJ = IP \cdot CI$
By the lemma:
$IP = PB$
and so:
$GI \cdot IJ = PB \cdot CI$
Now using the Extension of Law of Sines in $\triangle CPB$:
$\dfrac {PB} {\sin \left({\angle PCB}\right)} = 2 R$
and so:
$GI \cdot IJ = 2 R \sin \left({\angle PCB}\right) \cdot CI$
Since $P$ lies on the line $CI$ and $F$ lies on $BC$:
$\angle PCB = \angle ICF$
and so:
$(1): \quad GI \cdot IJ = 2 R \sin \left({\angle ICF}\right) \cdot CI$
We have that:
$IF = \rho$
and by Radius at Right Angle to Tangent:
$\angle IFC$ is a right angle.
By the definition of sine:
$\sin \left({\angle ICF}\right) = \dfrac {\rho} {CI}$
and so:
$\sin \left({\angle ICF}\right) \cdot CI = \rho$ Substituting in $(1)$:
$GI \cdot IJ = 2 R \rho$
$\implies \left({R + d}\right) \left({R - d}\right) = 2 R \rho$
$\implies R^2 - d^2 = 2 R \rho \quad$ (Difference of Two Squares)
$\implies d^2 = R^2 - 2 R \rho = R \left({R - 2 \rho}\right)$
$\blacksquare$
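As a sanity check (not part of the original proof), one can verify the identity numerically for a randomly generated triangle, computing $R$ and $\rho$ via Heron's formula and the incenter/circumcenter from the standard coordinate formulas:

import math, random

# Random triangle vertices (re-run if nearly degenerate).
A = (random.uniform(-5, 5), random.uniform(-5, 5))
B = (random.uniform(-5, 5), random.uniform(-5, 5))
C = (random.uniform(-5, 5), random.uniform(-5, 5))

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

a, b, c = dist(B, C), dist(C, A), dist(A, B)   # side lengths
s = (a + b + c) / 2                            # semiperimeter
area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
R = a * b * c / (4 * area)                     # circumradius
rho = area / s                                 # inradius

# Incenter: side-length-weighted average of the vertices.
I = ((a * A[0] + b * B[0] + c * C[0]) / (a + b + c),
     (a * A[1] + b * B[1] + c * C[1]) / (a + b + c))

# Circumcenter: standard coordinate formula.
ax, ay = A; bx, by = B; cx, cy = C
D = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
O = (((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
      + (cx**2 + cy**2) * (ay - by)) / D,
     ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
      + (cx**2 + cy**2) * (bx - ax)) / D)

print(dist(I, O) ** 2, R * (R - 2 * rho))  # agree up to rounding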
Source of Name
This entry was named for Leonhard Paul Euler.
|
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1161-1191, 2019.
Abstract
We develop lower bounds for estimation under local privacy constraints—including differential privacy and its relaxations to approximate or Rényi differential privacy—by showing an equivalence between private estimation and communication-restricted estimation problems. Our results apply to arbitrarily interactive privacy mechanisms, and they also give sharp lower bounds for all levels of differential privacy protections, that is, privacy mechanisms with privacy levels $\varepsilon \in [0, \infty)$. As a particular consequence of our results, we show that the minimax mean-squared error for estimating the mean of a bounded or Gaussian random vector in $d$ dimensions scales as $\frac{d}{n} \cdot \frac{d}{ \min\{\varepsilon, \varepsilon^2\}}$.
Duchi, J. & Rogers, R.. (2019). Lower Bounds for Locally Private Estimation via Communication Complexity. Proceedings of the Thirty-Second Conference on Learning Theory, in PMLR 99:1161-1191
|
Definition: Zero Digit
Let $x \in \R$ be a number.
Let $b \in \Z$ such that $b > 1$ be a number base in which $x$ is represented.
By the Basis Representation Theorem, $x$ can be expressed uniquely in the form:
$\displaystyle x = \sum_{j \mathop \le m} r_j b^j$
Any instance of $r_j$ being equal to $0$ is known as a zero (digit) of $x$.
Also known as
The somewhat dated term cipher can on occasion be seen for the zero digit. The word nought can commonly be seen. The word cipher can also be found in its less common spelling: cypher. The word ultimately derives from the Arabic صِفْر (ṣifr), meaning zero or empty.
Sources
1989: Ephraim J. Borowski and Jonathan M. Borwein: Dictionary of Mathematics: Entry: zero: 1a.
2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.): Entry: cipher (cypher): 1.
2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.): Entry: cipher (cypher)
|
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' ("The 'path' comes into existence only through the fact that we observe it.") Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and Red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? Sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
|
Geometry and Topology Seminar
Fall 2016
Spring 2017
date — speaker — title — host(s)
April 28 — Bena Tshishiku (Harvard) — "TBA" — Dymarz
(all other Spring dates, Jan 20 through April 21 and Spring Break, are still open)
Fall Abstracts
Ronan Conlon — New examples of gradient expanding K\"ahler-Ricci solitons
A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud).
Jiyuan Han Deformation theory of scalar-flat ALE Kahler surfaces
We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kahler surfaces. This is used to prove that for any scalar-flat Kahler ALE surface, all small deformations of complex structure also admit scalar-flat Kahler ALE metrics. A local moduli space of scalar-flat Kahler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kahler ALE surface which deforms to a minimal resolution of $\mathbb{C}^2/\Gamma$, where $\Gamma$ is a finite subgroup of $U(2)$ without complex reflections. This is a joint work with Jeff Viaclovsky.
Sean Howe Representation stability and hypersurface sections
We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to $\infty$. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups. The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs -- for example, we show that the "average" smooth hypersurface in $\mathbb{P}^n$ is $\mathbb{P}^{n-1}$!
Nan Li Quantitative estimates on the singular sets of Alexandrov spaces
The definition of quantitative singular sets was initiated by Cheeger and Naber. They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the $(k,\epsilon)$-singular sets are $k$-rectifiable and such structure is sharp in some sense. This is a joint work with Aaron Naber.
Yu Li
In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature has long time existence of Ricci flow, it converges to the Euclidean space in the strong sense. By convergence, the mass will drop to zero as time tends to infinity. Moreover, in three dimensional case, we use Ricci flow with surgery to give an independent proof of positive mass theorem. A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature.
Gaven Marin — TBA
Peyman Morteza — TBA
Richard Kent — Analytic functions from hyperbolic manifolds
Thurston's Geometrization Conjecture, now a celebrated theorem of Perelman, tells us that most 3-manifolds are naturally geometric in nature. In fact, most 3-manifolds admit hyperbolic metrics. In the 1970s, Thurston proved the Geometrization Conjecture in the case of Haken manifolds, and the proof revolutionized 3-dimensional topology, hyperbolic geometry, Teichmüller theory, and dynamics. Thurston's proof is by induction, constructing a hyperbolic structure from simpler pieces. At the heart of the proof is an analytic function called the skinning map that one must understand in order to glue hyperbolic structures together. A better understanding of this map would more brightly illuminate the interaction between topology and geometry in dimension three. I will discuss what is currently known about this map.
Caglar Uyanik — TBA
Bing Wang — The extension problem of the mean curvature flow
We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in R^3. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is a joint work with Haozhao Li.
Ben Weinkove Gauduchon metrics with prescribed volume form
Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is a joint work with Gabor Szekelyhidi and Valentino Tosatti.
Jonathan Zhu Entropy and self-shrinkers of the mean curvature flow
The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Colding and Minicozzi, together with Ilmanen and White, conjectured that the round sphere minimises entropy amongst all closed hypersurfaces. We will review the basics of MCF and the theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set.
Spring Abstracts Bena Tshishiku
"TBA"
Archive of past Geometry seminars
2015-2016: Geometry_and_Topology_Seminar_2015-2016
2014-2015: Geometry_and_Topology_Seminar_2014-2015
2013-2014: Geometry_and_Topology_Seminar_2013-2014
2012-2013: Geometry_and_Topology_Seminar_2012-2013
2011-2012: Geometry_and_Topology_Seminar_2011-2012
2010: Fall-2010-Geometry-Topology
|
Let A be $\ \begin{bmatrix} a & c \\ c & b\end{bmatrix} $ where $\ a,b,c, \in \mathbf R $
Prove that the eigenvalues of $\ A $ are real numbers.
I guess it should be pretty straightforward, so I just need to look at the solutions of the characteristic polynomial, which is $\ |A - \lambda I| = (a-\lambda)(b - \lambda) - c^2 = 0 $, but I'm not sure how to prove that the only possible values are in $\ \mathbf R $.
$\ \lambda^2 - \lambda a - \lambda b + ab - c^2 = 0 $
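One way to finish (completing the argument via the quadratic formula; this step is standard and not part of the original post):

\[
  \lambda = \frac{(a+b) \pm \sqrt{(a+b)^2 - 4(ab - c^2)}}{2}
          = \frac{(a+b) \pm \sqrt{(a-b)^2 + 4c^2}}{2},
\]

% and (a-b)^2 + 4c^2 >= 0 for real a, b, c, so both roots are real.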
|
Answer
$x =2 n \pi \pm \dfrac{\pi}{3}$
Work Step by Step
Re-arrange the given equation as: $2 \cos x=1$, so $\cos x=\dfrac{1}{2}=\cos\dfrac{\pi}{3}$. Therefore, the general solution is: $x =2 n \pi \pm \dfrac{\pi}{3}$
|
This is a heuristic explanation of Witten's statement, without going into the subtleties of axiomatic quantum field theory issues, such as vacuum polarization or renormalization.
A particle is characterized by a definite momentum plus possibly other quantum numbers. Thus, one-particle states are by definition states with a definite eigenvalue of the momentum operator; they can have further quantum numbers. These states should exist even in an interacting field theory, describing a single particle away from any interaction. In a local quantum field theory, these states are associated with local field operators: $$| p, \sigma \rangle = \int e^{ipx} \psi_{\sigma}^{\dagger}(x) |0\rangle\, d^4x$$ where $\psi$ is the field corresponding to the particle and $\sigma$ denotes the set of quantum numbers additional to the momentum. A symmetry generator $Q$, being the integral of a charge density according to Noether's theorem, $$Q = \int j_0(x')\, d^3x'$$ should generate a local field when it acts on a local field: $[Q, \psi_1(x)] = \psi_2(x)$. (In the case of internal symmetries $\psi_2$ depends linearly on the components of $\psi_1(x)$; in the case of space-time symmetries it depends on the derivatives of the components of $\psi_1(x)$.)
Thus in general:
$$[Q, \psi_{\sigma}(x)] = \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\,\psi_{\sigma'}(x)$$
where the dependence of the coefficients $ C_{\sigma\sigma'}$ on the momentum operator $\nabla$ is due to the possibility that $Q$ contains a space-time symmetry. Thus for an operator $Q$ satisfying $Q|0\rangle = 0$, we have
$$ Q | p, \sigma \rangle = \int e^{ipx} Q \psi_{\sigma}^{\dagger}(x) |0\rangle\, d^4x = \int e^{ipx} [Q , \psi_{\sigma}^{\dagger}(x)] |0\rangle\, d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) \int e^{ipx} \psi_{\sigma'}^{\dagger}(x) |0\rangle\, d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) | p, \sigma' \rangle\,. $$
Thus $Q$ acts on the one-particle states as a representation. The fact that $Q$ commutes with the Hamiltonian is responsible for the energy degeneracy of its action, i.e., the states $| p, \sigma \rangle$ and $Q| p, \sigma \rangle$ have the same energy.
This post imported from StackExchange Physics at 2015-06-16 14:50 (UTC), posted by SE-user David Bar Moshe
|
Power function
Latest revision as of 21:34, 10 March 2016

Question

Why are there several notions of power in Haskell, namely (^), (^^), (**)?

<haskell>
-- typically for integers
(^)  :: (Num a, Integral b) => a -> b -> a
-- typically for rationals
(^^) :: (Fractional a, Integral b) => a -> b -> a
-- typically for floating-point numbers
(**) :: Floating a => a -> a -> a
</haskell>
Answer
The reason is that there is no implementation of the power function that can cover all exotic choices for basis and exponent while being both efficient and accurate.
See this StackOverflow question for details.
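For example, in GHCi the three operators behave as follows (outputs as printed by a standard GHC):

<haskell>
Prelude> 2 ^ 10            -- Integral exponent: repeated multiplication
1024
Prelude> 2 ^^ (-2)         -- Fractional base, Integral exponent
0.25
Prelude> 2 ** 0.5          -- Floating base and exponent: via exp and log
1.4142135623730951
Prelude> (-8) ** (1/3)     -- no real logarithm of a negative base
NaN
</haskell>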
Type inference reasons
In mathematical notation, the human reader is clever enough to tell which definition of the power function is applicable in a given context. In Haskell, doing so would drastically complicate type inference. In some other languages such as C++, operator overloading is used to work around this problem, but this approach does not work for Haskell's numeric type classes.
Mathematical reasons
If we assume the most general types for both basis and exponent, namely complex numbers, the result of the power is no longer unique. In particular, the set of all possible solutions of, say, 1^x, where x is irrational, is dense in the complex unit circle.
But even for real numbers there are problems: to calculate <hask>(-1)**(1/3::Double)</hask>, the power implementation would have to decide whether <hask>(1/3::Double)</hask> is close enough to ⅓. If it does so, it returns <hask>(-1)</hask>; otherwise it fails. However, why should <hask>0.333333333333333</hask> represent ⅓? It may really be meant as <hask>333333333333333/10^15</hask>, and a real 10^15th root of −1 does not exist. Fortunately, the Haskell implementation does not try to be too clever here. But it does so at another point:
<haskell>
Prelude> (-1)**2 :: Double
1.0
Prelude> (-1)**(2 + 1e-15 - 1e-15) :: Double
NaN
</haskell>
While both expressions should be evaluated to <hask>1.0</hask>, a reliable check for integers is not possible with floating-point numbers.
Power function in Numeric Prelude
One can refine the set of power functions further, as is done in the Numeric Prelude. In this library, the more general the basis, the less general the exponent, and vice versa:

{|
| basis type || provides || symbol || exponent type || definition
|-
| any ring || <hask>*</hask> || <hask>^</hask> || cardinal || repeated multiplication: <math>a^b = \prod_{i=1}^b a</math>
|-
| any field || <hask>/</hask> || <hask>^-</hask> || integer || multiplication and division: <math>a^b = \begin{cases} a^b & b\ge 0 \\ \frac{1}{a^{-b}} & b<0 \end{cases}</math>
|-
| an algebraic field || <hask>root</hask> || <hask>^/</hask> || rational || list of polynomial zeros (length = denominator of the exponent): <math>a^{\frac{p}{q}} = \{ x : a^p = x^q \}</math>
|-
| positive real || <hask>log</hask> || <hask>^?</hask> || any ring of characteristic zero with inverses for integers and a notion of limit || exponential series and logarithm: <math>a^b = \exp(b \log a)</math>
|}

Examples for rings: polynomials, matrices, residue classes. Examples for fields: fractions of polynomials (rational functions), residue classes with respect to irreducible divisors. In fact we do not need full fields, only division and associativity; thus invertible matrices are fine.
That is, <hask>(^-)</hask> replaces <hask>(^^)</hask>, <hask>(^?)</hask> replaces <hask>(**)</hask>, <hask>(^)</hask> remains, and <hask>(^/)</hask> is new.
See also Haskell-Cafe: Proposal for restructuring Number classes
|
Your question is,
If the average of the first $n$ terms of a sequence tends to a limit, does the sequence itself tend to a limit?
The answer is no in general, as is discussed in the comments. The simplest counterexamples are the sequences which oscillate between two different values $\alpha$ and $\beta$; we would expect that the average of the first $n$ terms of such a sequence will tend to the average of $\alpha$ and $\beta$. As a concrete example let's define
$$x_n = \frac{1+(-1)^n}{2},$$
so that $x_n$ alternates between $0$ (when $n$ is odd) and $1$ (when $n$ is even). Writing $z_n$ for the average of the first $n$ terms, we then have
$$n \, z_n = x_1 + x_2 + \cdots + x_n = \begin{cases}\frac{n}{2} & \text{if } n \text{ is even}, \\\frac{n-1}{2} & \text{if } n \text{ is odd},\end{cases}$$
from which we can deduce that
$$\frac{1}{2} - \frac{1}{2n} \leq z_n \leq \frac{1}{2}$$
and thus
$$\lim_{n \to \infty} z_n = \frac{1}{2}.$$
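A quick numeric check of this limit, as a minimal Haskell sketch:

avg :: Int -> Double
avg n = sum (take n xs) / fromIntegral n
  where
    -- x_k = (1 + (-1)^k) / 2 alternates 0, 1, 0, 1, ...
    xs = [ (1 + (-1) ^ k) / 2 | k <- [1 :: Int ..] ]

-- avg 1000 == 0.5; avg 1001 is about 0.4995, approaching 1/2 from below.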
There are, however, many cases where one can deduce the convergence of the original sequence from the convergence of this average.
For example, if $(x_n)$ is a positive, monotonic sequence, then the convergence of $(z_n)$ implies the convergence of $(x_n)$.
A slightly more difficult (and more useful) example is discussed in this thread.
Results of this general shape are called Tauberian theorems. A nice reference is the book
Divergent Series by G. H. Hardy.
|
How to Implement the Fourier Transformation from Computed Solutions
We previously learned how to calculate the Fourier transform of a rectangular aperture in a Fraunhofer diffraction model in the COMSOL Multiphysics® software. In that example, the aperture was given as an analytical function. The procedure is a bit different if the source data for the Fourier transformation is a computed solution. In this blog post, we will learn how to implement the Fourier transformation for computed solutions with an electromagnetic simulation of a Fresnel lens.
Fourier Transformation with Fourier Optics
Implementing the Fourier transformation in a simulation can be useful in Fourier optics, signal processing (for use in frequency pattern extraction), and noise reduction and filtering via image processing. In Fourier optics, the Fresnel approximation is one of the approximation methods used for calculating the field near the diffracting aperture. Suppose a diffracting aperture is located in the $(x,y)$ plane at $z=0$. The diffracted electric field in the $(u,v)$ plane at the distance $z=f$ from the diffracting aperture is calculated, up to a constant prefactor, as

$E(u,v,f) \propto \iint E(x,y,0)\, \exp\{-i\pi (x^2+y^2)/(\lambda f)\}\, \exp\{i 2\pi (x u + y v)/(\lambda f)\}\, \mathrm{d}x\, \mathrm{d}y,$

where $\lambda$ is the wavelength and $E(x,y,0)$, $E(u,v,f)$ denote the electric field in the $(x,y)$ plane and the $(u,v)$ plane, respectively. (See Ref. 1 for more details.)

In this approximation formula, the diffracted field is calculated by Fourier transforming the incident field multiplied by the quadratic phase function $\exp\{-i\pi (x^2+y^2)/(\lambda f)\}$.

The sign convention of the phase function must follow the sign convention of the time dependence of the fields. In COMSOL Multiphysics, the time dependence of the electromagnetic fields is of the form $\exp(+i\omega t)$. So, the sign of the quadratic phase function is negative.
Fresnel Lenses
Now, let's take a look at an example of a Fresnel lens. A Fresnel lens is a regular plano-convex lens except for its curved surface, which is folded toward the flat side at every multiple of $m\lambda/(n-1)$ along the lens height, where $m$ is an integer and $n$ is the refractive index of the lens material. This is called an $m$th-order Fresnel lens.
The shift of the surface by this particular height along the light propagation direction only changes the phase of the light by $2m\pi$ (roughly speaking and under the paraxial approximation). Because of this, the folded lens fundamentally reproduces the same wavefront in the far field and behaves like the original unfolded lens. The main difference is the diffraction effect. Regular lenses basically don't show any diffraction (if there is no vignetting by a hard aperture), while Fresnel lenses always show small diffraction patterns around the main spot due to the surface discontinuities and internal reflections.
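That $2m\pi$ follows from a one-line optical-path argument (paraxial, at normal incidence): removing a glass step of height $h$ changes the optical path length by $(n-1)h$, so

$\Delta\phi = \frac{2\pi}{\lambda}(n-1)h = \frac{2\pi}{\lambda}(n-1)\,\frac{m\lambda}{n-1} = 2m\pi.$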
When a Fresnel lens is designed digitally, the lens surface is made up of discrete layers, giving it a staircase-like appearance. This is called a multilevel Fresnel lens. Due to the flat part of the steps, the diffraction pattern of a multilevel Fresnel lens typically includes a zeroth-order background in addition to the higher-order diffraction.
Why are we using a Fresnel lens as our example? The reason is similar to why lighthouses use Fresnel lenses in their operations. A Fresnel lens is folded down to a height of $m\lambda/(n-1)$. It can be extremely thin and therefore of less weight and volume, which is beneficial for the optics of lighthouses compared to a large, heavy, and thick lens of the conventional refractive type. Likewise, for our purposes, Fresnel lenses can be easier to simulate in COMSOL Multiphysics and the add-on Wave Optics Module because the number of elements is manageable.
Modeling a Focusing Fresnel Lens in COMSOL Multiphysics®
The figure below depicts the optics layout that we are trying to simulate to demonstrate how we can implement the Fourier transformation, applied to a computed solution solved for by the Wave Optics, Frequency Domain interface.

Focusing 16-level Fresnel lens model.

This is a first-order Fresnel lens with surfaces that are digitized in 16 levels. A plane wave $E_{\rm inc}$ is incident on the incidence plane. At the exit plane at $z=0$, the field is diffracted by the Fresnel lens to be $E(x,y,0)$. This process can be easily modeled and simulated by the Wave Optics, Frequency Domain interface. Then, we calculate the field $E(u,v,f)$ at the focal plane at $z=f$ by applying the Fourier transformation in the Fresnel approximation, as described above.
The figures below are the result of our computation, with the electric field component in the domains (top) and on the boundary corresponding to the exit plane (bottom). Note that the geometry is not drawn to scale in the vertical axis. We can clearly see the positively curved wavefront from the center and from every air gap between the saw teeth. Note that the reflection from the lens surfaces leads to some small interference in the domain field result and ripples in the boundary field result. This is because no antireflective coating is modeled here.

The computed electric field component in the Fresnel lens and surrounding air domains (vertical axis is not to scale). The computed electric field component at the exit plane.

Implementing the Fourier Transformation from a Computed Solution
Let’s move on to the Fourier transformation. In the previous example of an analytical function, we prepared two data sets: one for the source space and one for the Fourier space. The parameter names that were defined in the Settings window of the data set were the spatial coordinates (x,y) in the source plane and the spatial coordinates (u,v) in the image plane.
In today's example, the source space is already created in the computed data set, Study 1/Solution 1 (sol1) {dset1}, with the computed solutions. All we need to do is create a one-dimensional data set, Grid1D {grid1}, with parameters for the Fourier space; i.e., the spatial coordinate u in the focal plane. We then relate it to the source data set, as seen in the figure below. Then, we define an integration operator intop1 on the exit plane.
Settings for the data set for the transformation.
The intop1 operator defined on the exit plane (vertical axis is not to scale).
Finally, we define the Fourier transformation in a 1D plot, shown below. It's important to specify the data set we previously created for the transformation and to let COMSOL Multiphysics know that u is the destination independent variable by using the dest operator.
Settings for the Fourier transformation in a 1D plot.
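Outside COMSOL, the same quadrature can be written down directly. Here is a minimal one-dimensional sketch in Haskell (not COMSOL syntax; the uniform grid and all names are assumptions for illustration):

import Data.Complex

-- E(u,f) ~ sum over x of E(x,0) * exp{-i*pi*x^2/(lambda*f)} * exp{+i*2*pi*x*u/(lambda*f)} * dx,
-- with signs following the exp(+i*omega*t) convention described above.
fresnel1D :: Double                     -- wavelength lambda
          -> Double                     -- focal distance f
          -> Double                     -- grid spacing dx
          -> [(Double, Complex Double)] -- samples (x, E(x,0)) on the exit plane
          -> Double                     -- destination coordinate u
          -> Complex Double
fresnel1D lam f dx samples u =
    sum [ e * quad x * kern x * (dx :+ 0) | (x, e) <- samples ]
  where
    quad x = cis (-pi * x * x / (lam * f))     -- quadratic phase factor
    kern x = cis (2 * pi * x * u / (lam * f))  -- Fourier kernel toward u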
The end result is shown in the following plot. This is a typical image of the focused beam through a multilevel Fresnel lens in the focal plane (see Ref. 2). There is the main spot from the first-order diffraction in the center and a weaker background caused by the zeroth-order (nondiffracted) and higher-order diffractions.

Electric field norm plot of the focused beam through a 16-level Fresnel lens.

Concluding Remarks
In this blog post, we learned how to implement the Fourier transformation for computed solutions. This functionality is useful for long-distance propagation calculation in COMSOL Multiphysics and extends electromagnetic simulation to Fourier optics.
Next Steps
Download the model files for the Fresnel lens example by clicking the button below.
Read More About Simulating Wave Optics
- Simulating Holographic Data Storage in COMSOL Multiphysics
- How to Simulate a Holographic Page Data Storage System
- How to Implement the Fourier Transformation in COMSOL Multiphysics

References
1. J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill.
2. D.C. O'Shea, Diffractive Optics, SPIE Press.
|
Search
Now showing items 1-10 of 24
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
|
I'm trying to make something like this:

But I just can't figure out how. I have tried with empheq and tikz, but I can't get it to work with only some boxes inside align. I have also tried with \boxed and \Aboxed, with no success either.
I have a few example of what I have tried:
\documentclass[12pt]{article}
\usepackage[danish]{babel}
\usepackage[utf8]{inputenc}
\usepackage{moreverb}
\usepackage{listings}
\usepackage{graphicx}
\usepackage{verbatim}
\usepackage{amsmath}
\usepackage{empheq}
\begin{document}
\begin{align*}
& 1 & (p \wedge q) \wedge r && \text{premise} \\
& 2 & p \wedge q && \wedge_{e_1} 1 \\
& 3 & r && \wedge_{e_2} 1 \\
& 4 & p && \wedge_{e_1} 2 \\
& 5 & q && \wedge_{e_2} 2 \\
& 6 & q \wedge r && \wedge_i 5,3 \\
& 7 & p \wedge (q \wedge r) && \wedge_i 4,6
\end{align*}
\end{document}
This gives me something like the format, but no boxes. Then I thought I could ditch the first column by just numbering the equations on the left with \documentclass[12pt, leqno]{article}, but I have no idea how I'm going to box some of the lines.

So I hope one of you experts can help me make something like this in LaTeX.
EDIT: I'm getting a bit closer to my goal; I have now successfully made one box:
$$\begin{tabular}{ccl}
1 & \neg p \rightarrow p & \text{premise} \\
\cline{2-3}
2 & \multicolumn{1}{|c}{\neg p} & \multicolumn{1}{l|}{\text{assumption}} \\
3 & \multicolumn{1}{|c}{p} & \multicolumn{1}{l|}{\rightarrow_e 2,1} \\
4 & \multicolumn{1}{|c}{\bot} & \multicolumn{1}{l|}{\neg_e 3,2} \\
\cline{2-3}
5 & \neg \neg p & \neg_i 2-4 \\
6 & p & \neg \neg_e 5
\end{tabular}$$
So the question is now: how do I make a box inside the box?
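One way to get nested boxes (a sketch, using nested arrays wrapped in \fbox instead of \cline tricks; the labels are illustrative):

\documentclass[12pt]{article}
\usepackage{amsmath}
\begin{document}
\[
\begin{array}{ccl}
1 & \neg p \rightarrow p & \text{premise} \\[2pt]
  & \fbox{$\begin{array}{ccl}
      2 & \neg p & \text{assumption} \\[2pt]
        & \fbox{$\begin{array}{ccl}
            3 & p & \rightarrow_e\ 2,1
          \end{array}$} & \\[2pt]
      4 & \bot & \neg_e\ 3,2
    \end{array}$} & \\[2pt]
5 & \neg\neg p & \neg_i\ 2\text{--}4 \\
6 & p & \neg\neg_e\ 5
\end{array}
\]
\end{document}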
|
@DavidReed the notion of a "general polynomial" is a bit strange. The general polynomial over a field always has Galois group $S_n$, even if there is no polynomial over the field with Galois group $S_n$
Hey guys. Quick question. What would you call it when the period/amplitude of a cosine/sine function is given by another function? E.g. y=x^2*sin(e^x). I refer to them as variable amplitude and period but upon google search I don't see the correct sort of equation when I enter "variable period cosine"
@LucasHenrique I hate them; I tend to find algebraic proofs more elegant than ones from analysis. They are tedious. Analysis is the art of showing you can make things as small as you please. The last two characters of every proof are $< \epsilon$

I enjoyed developing the Lebesgue integral though. I thought that was cool
But since every singleton except 0 is open, and the union of open sets is open, it follows all intervals of the form $(a,b)$, $(0,c)$, $(d,0)$ are also open. Thus we can use these 3 classes of intervals as a base which then intersect to give the nonzero singletons?
uh wait a sec...
... I need arbitrary intersection to produce singletons from open intervals...
hmm... 0 does not even have a nbhd, since any set containing 0 is closed
I have no idea how to deal with points having empty nbhd
o wait a sec...
the open set of any topology must contain the whole set itself
so I guess the nbhd of 0 is $\Bbb{R}$
Btw, looking at this picture, I think the alternate name for these class of topologies called British rail topology is quite fitting (with the help of this WfSE to interpret of course mathematica.stackexchange.com/questions/3410/…)
Since, as Leaky has noticed, every point is closest to 0 other than itself, to get from A to B, go to 0. The null line is then like a railway line which connects all the points together in the shortest time
So going from a to b directly is no more efficient than go from a to 0 and then 0 to b
hmm...
$d(A \to B \to C) = d(A,B)+d(B,C) = |a|+|b|+|b|+|c|$
$d(A \to 0 \to C) = d(A,0)+d(0,C)=|a|+|c|$
so the distance of travel depends on where the starting point is. If the starting point is 0, then distance only increases linearly for every unit increase in the value of the destination
But if the starting point is nonzero, then the distance increases quadratically
Combining with the animation in the WfSE, it means that in such a space, if one attempts to travel directly to the destination at, say, a speed of 3 m/s, then for every meter forward the actual distance covered decreases (as illustrated by the shrinking open ball of fixed radius)

Only when travelling via the origin does such a quadratic penalty in travelling distance not apply
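Concretely, a minimal sketch of this metric (the name railDist is mine):

-- The "British Rail" (post-office) metric on the reals:
-- distinct points are connected only through the origin.
railDist :: Double -> Double -> Double
railDist a b
  | a == b    = 0
  | otherwise = abs a + abs b

-- d(A -> B -> C) = railDist a b + railDist b c = |a| + 2|b| + |c| for distinct
-- nonzero points, while d(A -> 0 -> C) = |a| + |c|, matching the comparison above.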
More interesting things can be said about slight generalisations of this metric:
Hi, looking a graph isomorphism problem from perspective of eigenspaces of adjacency matrix, it gets geometrical interpretation: question if two sets of points differ only by rotation - e.g. 16 points in 6D, forming a very regular polyhedron ...
To test if two sets of points differ by rotation, I thought to describe them as intersection of ellipsoids, e.g. {x: x^T P x = 1} for P = P_0 + a P_1 ... then generalization of characteristic polynomial would allow to test if our sets differ by rotation ...
1D interpolation: finding a polynomial satisfying $\forall_i\ p(x_i)=y_i$ can be written as a system of linear equations, having well known Vandermonde determinant: $\det=\prod_{i<j} (x_i-x_j)$. Hence, the interpolation problem is well defined as long as the system of equations is determined ($\d...
Any alg geom guys on? I know zilch about alg geom to even start analysing this question
Meanwhile I am going to analyse the SR metric later using open balls after the chat proceeds a bit
To add to gj255's comment: The Minkowski metric is not a metric in the sense of metric spaces but in the sense of a metric of Semi-Riemannian manifolds. In particular, it can't induce a topology. Instead, the topology on Minkowski space as a manifold must be defined before one introduces the Minkowski metric on said space. — baluApr 13 at 18:24
grr, thought I could get some more intuition in SR by using open balls

tbf there's actually a third equivalent statement which the author does make an argument about, but they say nothing substantive about the first two.
The first two statements go like this : Let $a,b,c\in [0,\pi].$ Then the matrix $\begin{pmatrix} 1&\cos a&\cos b \\ \cos a & 1 & \cos c \\ \cos b & \cos c & 1\end{pmatrix}$ is positive semidefinite iff there are three unit vectors with pairwise angles $a,b,c$.
And all it has in the proof is the assertion that the above is clearly true.
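For the record, the omitted argument is a short Gram-matrix computation: if $M \succeq 0$, write $M = V^{\top}V$ (e.g. $V = M^{1/2}$) with columns $v_1, v_2, v_3$; then $\|v_i\|^2 = M_{ii} = 1$ and $v_i \cdot v_j \in \{\cos a, \cos b, \cos c\}$, so the $v_i$ are unit vectors with the prescribed pairwise angles. Conversely, for unit vectors with those angles, $x^{\top} M x = \|x_1 v_1 + x_2 v_2 + x_3 v_3\|^2 \ge 0$, so their Gram matrix is positive semidefinite.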
I've a mesh specified as an half edge data structure, more specifically I've augmented the data structure in such a way that each vertex also stores a vector tangent to the surface. Essentially this set of vectors for each vertex approximates a vector field, I was wondering if there's some well k...
Consider $a,b$ both irrational and the interval $[a,b]$
Assuming axiom of choice and CH, I can define a $\aleph_1$ enumeration of the irrationals by label them with ordinals from 0 all the way to $\omega_1$
It would seem we could have a cover $\bigcup_{\alpha < \omega_1} (r_{\alpha},r_{\alpha+1})$. However, the rationals are countable; thus we cannot have uncountably many disjoint open intervals, which means this union is not disjoint

This means we can only have countably many disjoint open intervals, so that some irrationals are not in the union, but uncountably many of them will be
If I consider an open cover of the rationals in [0,1], the sum of whose length is less than $\epsilon$, and then I now consider [0,1] with every set in that cover excluded, I now have a set with no rationals, and no intervals.One way for an irrational number $\alpha$ to be in this new set is b...
Suppose you take an open interval I of length 1, divide it into countable sub-intervals (I/2, I/4, etc.), and cover each rational with one of the sub-intervals.Since all the rationals are covered, then it seems that sub-intervals (if they don't overlap) are separated by at most a single irrat...
(For ease of construction of enumerations, WLOG, the interval [-1,1] will be used in the proofs) Let $\lambda^*$ be the Lebesgue outer measure We previously proved that $\lambda^*(\{x\})=0$ where $x \in [-1,1]$ by covering it with the open cover $(-a,a)$ for some $a \in [0,1]$ and then noting there are nested open intervals with infimum tends to zero.
We also knew that by using the union $[a,b] = \{a\} \cup (a,b) \cup \{b\}$ for some $a,b \in [-1,1]$ and countable subadditivity, we can prove $\lambda^*([a,b]) = b-a$. Alternately, by using the theorem that $[a,b]$ is compact, we can construct a finite cover consists of overlapping open intervals, then subtract away the overlapping open intervals to avoid double counting, or we can take the interval $(a,b)$ where $a<-1<1<b$ as an open cover and then consider the infimum of this interval such that $[-1,1]$ is still covered. Regardless of which route you take, the result is a finite sum whi…
We also knew that one way to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ is to take the union of all singletons that are rationals. Since there are only countably many of them, by countable subadditivity this gives us $\lambda^*(\Bbb{Q}\cap [-1,1]) = 0$. We also knew that one way to compute $\lambda^*(\Bbb{I}\cap [-1,1])$ is to use $\lambda^*(\Bbb{Q}\cap [-1,1])+\lambda^*(\Bbb{I}\cap [-1,1]) = \lambda^*([-1,1])$ and thus deducing $\lambda^*(\Bbb{I}\cap [-1,1]) = 2$
However, what I am interested in here is to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ and $\lambda^*(\Bbb{I}\cap [-1,1])$ directly using open covers of these two sets. This then becomes the focus of the investigation written out below:
We first attempt to construct an open cover $C$ for $\Bbb{I}\cap [-1,1]$ in stages:
First denote an enumeration of the rationals as follows:
$\frac{1}{2},-\frac{1}{2},\frac{1}{3},-\frac{1}{3},\frac{2}{3},-\frac{2}{3}, \frac{1}{4},-\frac{1}{4},\frac{3}{4},-\frac{3}{4},\frac{1}{5},-\frac{1}{5}, \frac{2}{5},-\frac{2}{5},\frac{3}{5},-\frac{3}{5},\frac{4}{5},-\frac{4}{5},...$ or in short:
Actually wait, since as the sequence grows, any rationals of the form $\frac{p}{q}$ where $|p-q| > 1$ will be somewhere in between two consecutive terms of the sequence $\{\frac{n+1}{n+2}-\frac{n}{n+1}\}$, and the latter does tend to zero as $n \to \aleph_0$, it follows that all intervals will have an infimum of zero

However, any interval must contain uncountably many irrationals, so (somehow) the infimum of the union of them all is nonzero. Need to figure out how this works...
Let's say that for $N$ clients, Lotta will take $d_N$ days to retire.
For $N+1$ clients, clearly Lotta will have to make sure all the first $N$ clients don't feel mistreated. Therefore, she'll take the $d_N$ days to make sure they are not mistreated. Then she visits client $N+1$. Obviously that client won't feel mistreated anymore. But all the first $N$ clients are mistreated and, therefore, she'll start her algorithm once again and take (by supposition) $d_N$ days to make sure all of them are not mistreated. And therefore we have the recurrence $d_{N+1} = 2d_N + 1$,

where $d_1 = 1$.

Yet we have $1 \to 2 \to 1$, which has $3 = d_2 \neq 2^2$ steps.
|
So the electron in the hydrogen atom is just a particle in a spherically-symmetric \(1/r\) potential… you've got a ladder of energy eigenvalues indexed by a quantum number \(n\). The \(n\)th eigenvalue has degeneracy \(n^2\), but that's cool; picking an axis \(z\), the total angular momentum operator \(L^2\) and the \(z\)-axis angular momentum operator \(L_z\) give a complete set of commuting observables (together with \(H\)), so you get yer \(n, l, m\) eigenstates.
And you think everything's cool, everything's ok. Wrong, because physics gets in the way of all this math fun. A number of physical effects (in various environments & regimes) break spherical symmetry and perturb our energy levels from their sweetly degenerate state. Here are some of them.

Normal Zeeman effect

An orbital with \(L_z\neq 0\) has a non-zero magnetic moment about the z-axis (spinning electron ⇒ little loop of current ⇒ magnetic dipole). This means that it interacts with the z-component of a magnetic field. The potential of this interaction is \(V=-\mu B_z\), where \(\mu\) is the dipole moment. How to calculate this? \(L_z = m_e v r\) and \(\mu = I A = \frac{-e v}{2\pi r} \pi r^2 = -\frac{e}{2} v r\): comparing these gives \(\mu = -\frac{e}{2 m_e} L_z\). (The coefficient in front is the "Bohr magneton", within a factor of \(\hbar\).)
Since the spacing between \(L_z\)-values is \(\hbar\), the spacing between the split energy levels will be \((e\hbar B_z)/(2 m_e)\), which is the Larmor frequency times \(\hbar\).
Also interesting: due to some symmetry magic, the eigenstates remain the same; only the eigenvalues change. Huh.

Anomalous Zeeman effect

Thing is, we've been talking about orbital angular momentum this whole time. There's also the electron's intrinsic "spin" angular momentum. This can have magnitude \(\pm\hbar/2\) in the \(z\) direction. WEIRDLY enough, however, when computing a magnetic moment from this, you have to multiply by a "g-factor" of two. This shouldn't surprise you, though, because spin was really weird in the first place.
The end result of all of this: Each state you get from normal Zeeman splitting splits \(\pm (e\hbar B_z)/(2 m_e)\), which was the spacing between these states in the first place. We get two extra energy levels.
Spin–orbit interaction / Fine structure

An orbiting electron sees a magnetic field produced by the movement of the nucleus. This magnetic field interacts with the spin dipole: when the field is parallel to the dipole (that is, anti-parallel to the spin), energy is minimized. Two possible values of spin ⇒ a new splitting. This produces a feature of atomic spectra known as fine structure doublets.

Stark effect

Just like an external magnetic field breaks symmetry and causes splitting in the Zeeman effect, an external electric field will cause a form of splitting known as the Stark effect. The reason why I put this in a second, subsidiary position is because it is way more complicated: our Hamiltonian now has a \(1/r + z\)-type potential, which is going to completely screw up the old energy eigenstates. So you do perturbation theory, and get first/second-order corrections in the small-field limit. That's quantum mechanics for you, I guess.

Hyperfine structure

This is real small. It includes things like electron-dipole/nuclear-dipole and electric-field/nuclear-quadrupole interactions.
Oh, and when it comes to computing spectra from these: have I mentioned selection rules? No? Oh, well, those are pretty important too; better look them up.
|
Reineke’s observation that any projective variety can be realized as a quiver Grassmannian is bad news: we will have to look at special representations and/or dimension vectors if we want the Grassmannian to have desirable properties. Some people still see a silver lining: it can be used to define a larger class of geometric objects over the elusive field with one element $\mathbb{F}_1$.
In a comment to the previous post Markus Reineke recalls motivating discussions with Javier Lopez Pena and Oliver Lorscheid (the guys responsible for the map of $\mathbb{F}_1$-land above) and asks about potential connections with $\mathbb{F}_1$-geometry. In this post I will elaborate on Javier's response.
The Kapranov-Smirnov $\mathbb{F}_1$-folklore tells us that an $n$-dimensional vector space over $\mathbb{F}_1$ is a pointed set $V^{\bullet}$ consisting of $n+1$ points, the distinguished point playing the role of the zero-vector. Linear maps $V^{\bullet} \rightarrow W^{\bullet}$ between $\mathbb{F}_1$-spaces are then just maps of pointed sets (sending the distinguished element of $V^{\bullet}$ to that of $W^{\bullet}$). As an example, the base-change group $GL_n(\mathbb{F}_1)$ of an $n$-dimensional $\mathbb{F}_1$-space $V^{\bullet}$ is isomorphic to the symmetric group $S_n$.

This allows us to make sense of quiver-representations over $\mathbb{F}_1$. To each vertex we associate a pointed set and to each arrow a map of pointed sets between the vertex-pointed sets. The dimension-vector $\alpha$ of a quiver-representation is defined as before, and two representations with the same dimension-vector are isomorphic if they lie in the same orbit under the action of the product of the symmetric groups determined by the components of $\alpha$. All this (and a bit more) has been worked out by Matt Szczesny in the paper Representations of quivers over $\mathbb{F}_1$.
Roughly speaking a blueprint $B = A // \mathcal{R}$ is a commutative monoid $A$ together with an equivalence relation $\mathcal{R}$ on the monoid semiring $\mathbb{N}[A]$ compatible with addition and multiplication. Any commutative ring $R$ is a blueprint by taking $A$ the multiplicative monoid of $R$ and $\mathcal{R}(\sum_i a_i,\sum_j b_j)$ if and only if the elements $\sum_i a_i$ and $\sum_j b_j$ in $R$ are equal.
One can extend the usual notions of prime ideals, Zariski topology and structure sheaf from commutative rings to blueprints and hence define a notion of “blue schemes” which are then taken to be the schemes over $\mathbb{F}_1$.
What’s the connection with Reineke’s result? Well, for quiver-representations $V$ defined over $\mathbb{F}_1$ they can show that the corresponding quiver Grassmannians $Gr(V,\alpha)$ are blue projective varieties and hence are geometric objects defined over $\mathbb{F}_1$.
For us, old-fashioned representation theorists, a complex quiver-representation $V$ is defined over $\mathbb{F}_1$ if and only if there is an isomorphic representation $V’$ with the property that all its arrow-matrices have at most one $1$ in every column, and zeroes elsewhere.
Remember from last time that Reineke’s representation consisted of two parts : the Veronese-part encoding the $d$-uple embedding $\mathbb{P}^n \rightarrow \mathbb{P}^M$ and a linear part describing the subvariety $X \rightarrow \mathbb{P}^n$ as the intersection of the image of $\mathbb{P}^n$ in $\mathbb{P}^M$ with a finite number of hyper-planes in $\mathbb{P}^M$.
We have seen that the Veronese-part is always defined over $\mathbb{F}_1$, compatible with the fact that all approaches to $\mathbb{F}_1$-geometry allow for projective spaces and $d$-uple embeddings. The linear part does not have to be defined over $\mathbb{F}_1$ in general, but we can look at the varieties we get when we force the linear-part matrices to be of the correct form.
For example, by modifying the map $h$ of last time to $h=x_0+x_7+x_9$ we get that the quiver-representation
is defined over $\mathbb{F}_1$ and hence that Reineke’s associated quiver Grassmannian, which is the smooth plane elliptic curve $\mathbb{V}(x^3+y^2z+z^3)$, is a blue variety. This in sharp contrast with other approaches to $\mathbb{F}_1$-geometry which do not allow elliptic curves!
Oliver will give a talk at the 6th European Congress of Mathematics in the mini-symposium Absolute Arithmetic and $\mathbb{F}_1$-Geometry. Judging from his abstract, he will also mention quiver Grassmannians.
|
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice

The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin I think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when applied to a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$

Yep, the reason I am exploring alternative routes of showing associativity is that writing out three elements' worth of variables takes up more than a single line in LaTeX, and that is really bugging my desire to keep things straight.

hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
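A minimal executable sketch of this multiplication rule (the pair representation and the choice $\delta = 2$ are mine):

-- a + b*sqrt(delta) with rational a, b, represented as a pair.
data Quad = Quad Rational Rational deriving (Eq, Show)

delta :: Rational
delta = 2  -- any fixed non-square rational

-- (a + b√δ)(c + d√δ) = (ac + bdδ) + (bc + ad)√δ
mulQ :: Quad -> Quad -> Quad
mulQ (Quad a b) (Quad c d) = Quad (a*c + b*d*delta) (b*c + a*d)

-- Spot-check associativity in GHCi:
--   let x = Quad 1 2; y = Quad 3 4; z = Quad 5 6
--   mulQ (mulQ x y) z == mulQ x (mulQ y z)   -- True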
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, each is the largest possible, e.g. the surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example. CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH, or that derive CH; thus if your set of axioms contains those, then you can decide the truth value of CH in that system

@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?

If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic

Thus, given $s$ transcendental, minimising $|P(s)|$ would proceed as follows:
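The chat breaks off here; as a concrete stand-in, a brute-force sketch over bounded integer coefficients (entirely illustrative):

import Data.List (minimumBy)
import Data.Ord (comparing)

-- Evaluate a polynomial, coefficients listed from the constant term up.
evalPoly :: [Integer] -> Double -> Double
evalPoly cs s = sum [ fromInteger c * s ^ k | (c, k) <- zip cs [0 :: Int ..] ]

-- Among nonzero polynomials with d coefficients bounded by m in absolute
-- value, find one minimising |P(s)| -- the search compared to knapsack above.
bestPoly :: Int -> Integer -> Double -> ([Integer], Double)
bestPoly d m s =
  minimumBy (comparing snd)
    [ (cs, abs (evalPoly cs s))
    | cs <- sequence (replicate d [-m .. m])
    , any (/= 0) cs ]

-- e.g. bestPoly 4 3 pi yields a small-coefficient cubic nearly vanishing at pi.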
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that 0 < |x − p/q| < 1/q^n.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by the ratio test

Therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached by any algebraic procedure

Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else, I need to finish that book before commenting
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
|
The question is $\lim_\limits{x\to 3}\frac{x^2-9-3+\sqrt{x+6}}{x^2-9}$.
I hope you guys understand why I have written the numerator like that. So my progress is nothing but $1+\frac{\sqrt{x+6}-3}{x^2-9}$.
Now how do I rationalize the numerator?
It is giving the $\frac{0}{0}$ form after plugging in $3$.
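A standard next step (not in the original post) is to multiply by the conjugate:

$$\frac{\sqrt{x+6}-3}{x^2-9}\cdot\frac{\sqrt{x+6}+3}{\sqrt{x+6}+3} = \frac{x-3}{(x-3)(x+3)(\sqrt{x+6}+3)} = \frac{1}{(x+3)(\sqrt{x+6}+3)} \to \frac{1}{36} \text{ as } x \to 3,$$

so the limit is $1+\frac{1}{36}=\frac{37}{36}$.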
|
Hi, can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.

However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown

Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, which might help heavily reduce the parameters one needs to consider to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I always can talk about things in a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder: is the space of all coordinate choices larger than that of all possible moves of Go?

enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect that superimposing a region where spacetime is frame-dragged in the clockwise direction on a spacetime that is frame-dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have a poor knowledge yet.

So, what I meant about a "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment for gravitational waves?
@JackClerk the double slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.

But if we could figure out a way to do it then yes, GWs would interfere just like light waves.

Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference space-time would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like.

Pardon, I just spent some naive-philosophy time here with these discussions
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side.

In all honesty, I actually prefer exam season! At all other times, as I have observed in this semester at least, there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.

My opinion is that I need you kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. Though back in high school, regardless of language, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the four-space indentation convention

@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do)

Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but by connecting to the university's server, which means running another environment remotely, I found an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
|