Revista Matemática Iberoamericana, Volume 23, Number 2 (2007), 513-536.

On a Parabolic Symmetry Problem

Abstract
In this paper we prove a symmetry theorem for the Green function associated to the heat equation in a certain class of bounded domains $\Omega\subset\mathbb{R}^{n+1}$. For $T>0$, let $\Omega_T=\Omega\cap[\mathbb{R}^n\times (0,T)]$ and let $G$ be the Green function of $\Omega_T$ with pole at $(0,0)\in\partial_p\Omega_T$. Assume that the adjoint caloric measure in $\Omega_T$ defined with respect to $(0,0)$, $\hat\omega$, is absolutely continuous with respect to a certain surface measure, $\sigma$, on $\partial_p\Omega_T$. Our main result states that if $$\frac {d\hat\omega}{d\sigma}(X,t)=\lambda\frac {|X|}{2t}$$ for all $(X,t)\in \partial_p\Omega_T\setminus\{(X,t): t=0\}$ and for some $\lambda>0$, then $\partial_p\Omega_T\subseteq\{(X,t):W(X,t)=\lambda\}$ where $W(X,t)$ is the heat kernel and $G=W-\lambda$ in $\Omega_T$. This result has previously been proven by Lewis and Vogel under stronger assumptions on $\Omega$.
Article information

Source: Rev. Mat. Iberoamericana, Volume 23, Number 2 (2007), 513-536.
Dates: First available in Project Euclid: 26 September 2007
Permanent link to this document: https://projecteuclid.org/euclid.rmi/1190831220
Mathematical Reviews number (MathSciNet): MR2371436
Zentralblatt MATH identifier: 1242.35130
Subjects: Primary: 35K05: Heat equation

Citation
Lewis, John L.; Nyström, Kaj. On a Parabolic Symmetry Problem. Rev. Mat. Iberoamericana 23 (2007), no. 2, 513--536. https://projecteuclid.org/euclid.rmi/1190831220
|
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
|
The question is:
"Show how the nonlinear regression equation $y=a(x-b)+c(x-b)^2$ can be converted to a linear regression equation solvable by the method of least squares."
if we take that \begin{align} a(x-b) + c(x-b)^2 &= ax - ab + cx^2 -2cbx + cb^2\\ & = \underbrace{cb^2-ab}_{\beta_0} + x\underbrace{(a-2cb)}_{\beta_1} + x^2\underbrace{c}_{\beta_2}\\ \end{align} thus, $$ y_i = \beta_0+\beta_1 x_i + \beta_2 x_i^2 + \epsilon_i, \quad i=1,...,n. $$
$\beta_0$ and $\beta_1$ depend on each other, so if you take the partial derivative of $y_i$ with respect to $\beta_0$ you get a dependence on $\beta_1$; the gradient of $Y$ given $X$ then depends on the unknown parameters, and we get a nonlinear model.
or if you take the next model
\begin{align} xb + (xb)^2 +(xb)^3 &= x\underbrace{b}_{\beta_0} + x^2\underbrace{b^2}_{\beta_1} + x^3\underbrace{b^3}_{\beta_2}\\ \end{align} thus, $$ y_i = \beta_0 x_i+\beta_1 x_i^2 + \beta_2 x_i^3 + \epsilon_i, \quad i=1,...,n. $$
and you get what looks like a linear model, but of course it is not a linear model, because $\beta_0$, $\beta_1$ and $\beta_2$ are dependent (they are all powers of the single parameter $b$).
Any help explaining this would be greatly appreciated!
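For what it's worth, here is a minimal numerical sketch of the linearization above (the names and simulated data are made up for illustration). It fits the reparametrized model by ordinary least squares and then maps $(\beta_0,\beta_1,\beta_2)$ back to $(a,b,c)$; note that two $(a,b)$ pairs reproduce the same parabola, which is exactly the dependence being discussed:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 2.0, 1.0, 0.5                          # hypothetical "true" parameters
x = rng.uniform(0, 4, 200)
y = a*(x - b) + c*(x - b)**2 + rng.normal(0, 0.1, 200)

# ordinary least squares in the reparametrized coefficients
X = np.column_stack([np.ones_like(x), x, x**2])  # design matrix [1, x, x^2]
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# map back: beta2 = c, b solves c*b^2 + beta1*b + beta0 = 0, and a = beta1 + 2*c*b
c_hat = b2
b_roots = np.roots([c_hat, b1, b0])              # two roots -> two equivalent (a, b) pairs
a_roots = b1 + 2*c_hat*b_roots
print(c_hat, list(zip(a_roots, b_roots)))
```

The least-squares step itself is a perfectly ordinary linear regression in $(\beta_0,\beta_1,\beta_2)$; the non-uniqueness only appears when translating back to the original parameters.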
|
Let's say that I want to find solutions $f\in C(\Bbb R)$ to the equation
$$ f(x+1) + f(x) = g(x) $$
for some $g\in C(\Bbb R)$. I can write $f(x+1) = (Tf)(x)$ where $T$ is the right shift operator and rewrite the equation suggestively as $$ (I+ T)f=g. $$ Formally, I can say that the solution of this equation is
$$ f= (I+ T)^{-1} g. $$
Of course, I am aware that there are infinitely many solutions to the equation but please bear with me for a moment here.
By the theory of operator algebra,
if $f,g$ are from some nice Banach space $X$ and our linear operator $T:X\to X$ satisfies $\|T\|<1$, then we have $$f = \left(\sum_{n=0}^\infty (-T)^n \right)g.$$
However, it is not unreasonable to expect that we should have $\|T\|=1$ for a right-shift operator in most reasonable function spaces, so let's try to solve the equation $$ f= (I+\lambda T)^{-1} g $$ for $\lambda <1$ first, and then we'll take $\lambda\to 1$. Note that all the steps until now are purely formal, since $C(\Bbb R)$ is not a normed space.
For a concrete example, let's say we take $g(x) = (x+2)^2$. The previous method says that we first calculate (for $\lambda<1$) $$\begin{align} f(x) &= \left(I - \lambda T + \lambda^2 T^2 - \dots \right) g(x) \\ &= (x+2)^2 -\lambda (x+3)^2 + \lambda^2 (x+4)^2 - \dots \\ &= \left(1-\lambda+\lambda^2-\dots \right)x^2 + \left(2-3\lambda+4\lambda^2-\dots \right)2x + \left(2^2-3^2\lambda+4^2\lambda^2-\dots \right) \\ &= \frac{1}{1+\lambda} x^2 + 2 \frac{2+\lambda}{(1+\lambda)^2} x + \frac{4+3\lambda + \lambda^2}{(1+\lambda)^3}. \end{align}$$ We shall be brave here and substitute $\lambda=1$ even though the series doesn't converge there. This gives $$ f(x) = \frac 12 x^2 + \frac 32 x + 1 $$ but voilà, for some mysterious reasons unknown to me, this $f$ actually solves our original equation $f(x+1) + f(x) = (x+2)^2$ !
My question is simply:
What are the hidden theories behind the miracle we observe here? How can we justify all these seemingly unjustifiable steps?
I can't give you a reference for this method because I just conjured it up, thinking that it wouldn't work. To my greatest surprise, the answer actually makes sense. I am sure that a similar method is practiced somewhere, probably by physicists.
Some points worth mentioning:
1.) $C(\Bbb R)$ is probably not the right space to work with since it's not normed. However, I want my answer to be a continuous function on $\Bbb R$ so some form of continuity assumption is needed for our space $X$.
2.) Norming $C(\Bbb R)$ with $L^\infty$ norm is not the way to go since it is possible that $f$ is unbounded, as our example shows.
3.) The solution $f$ is not unique since the kernel of $(I+T)$ consists of all $C(\Bbb R)$ functions $h$ such that $h(x+1)=-h(x)$, e.g. $\sin(\pi x)$.
4.) All the series expansions in $\lambda$ in my example converge when $|\lambda| <1$ but not at $\lambda=1$, yet for some reason the result checks out.
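A quick symbolic check of point 4 (a sketch with sympy): substitute $\lambda=1$ into the closed-form coefficients and verify the functional equation directly.

```python
import sympy as sp

x, lam = sp.symbols('x lambda')

# closed forms of the series coefficients, valid for |lambda| < 1
f = x**2/(1 + lam) + 2*(2 + lam)/(1 + lam)**2*x + (4 + 3*lam + lam**2)/(1 + lam)**3

f1 = f.subs(lam, 1)  # the "brave" substitution lambda = 1
print(sp.expand(f1))                                     # x**2/2 + 3*x/2 + 1
print(sp.simplify(f1.subs(x, x + 1) + f1 - (x + 2)**2))  # 0, i.e. f solves the equation
```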
|
I am testing Logistic Regression with stochastic gradient using
sklearn.linear_model.SGDClassifier. I have 2D independent variables and corresponding labels as below:
X = array([[-2.58733628,  2.26126322],
           [ 1.97831473,  2.03510032],
           [ 2.48324069, -2.17901384],
           ... ])
Y = array([[ 1.], [ 1.], [-1.], [-1.], ...])
Here my objective function is $\sum \log( 1 + \exp(-y^{(i)}(wx^{(i)}+ w_{0}))) + \lambda||w||_2^2$, and I first tested with no regularization term.
mySGDlr = linear_model.SGDClassifier(loss = 'log')
mySGDlr.fit(X, ravel(Y))
print mySGDlr.score(X, ravel(Y))
1) Then, the score is 1, which should not be possible. Could you point out the wrong part in my code? Also, how can I check the optimized $w, w_0$ after having fitted the model?
2) I read the documentation of the SGDClassifier and LogisticRegression functions, and it seems that the main difference is that SGDClassifier uses SGD, while LogisticRegression uses some other, fancier solvers. Are there any other important differences?
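For reference, the fitted parameters live in the `coef_` and `intercept_` attributes after `fit`. A minimal sketch (the toy data is made up; newer scikit-learn versions spell the loss `'log_loss'` rather than `'log'`, and `penalty=None` drops the L2 term):

```python
import numpy as np
from sklearn import linear_model

X = np.array([[-2.59, 2.26], [1.98, 2.04], [2.48, -2.18], [-2.10, -1.90]])
Y = np.array([1., 1., -1., -1.])

clf = linear_model.SGDClassifier(loss='log_loss', penalty=None)  # no regularization term
clf.fit(X, Y)

print(clf.coef_, clf.intercept_)  # the optimized w and w_0
print(clf.score(X, Y))            # mean accuracy on the data passed in
```

Note that `score(X, Y)` here is evaluated on the training data, so a score of 1 on a small, linearly separable training set is entirely possible rather than a bug.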
|
#
1
Problem with understanding the proof of Sauer Lemma
I will replicate the proof here which is from the book "Learning from Data"
Sauer Lemma:
$B(N,k) \leq \sum_{i=0}^{k-1}\binom{N}{i}$
Proof:
The statement is true whenever $k = 1$ or $N = 1$, by inspection. The proof is by induction on $N$. Assume the statement is true for all $N \leq N_0$ and for all $k$. We need to prove the statement for $N = N_0 + 1$ and for all $k$. Since the statement is already true when $k = 1$ (for all values of $N$) by the initial condition, we only need to worry about $k \geq 2$. By a result proven in the book, $B(N_0 + 1, k) \leq B(N_0, k) + B(N_0, k-1)$, and applying the induction hypothesis to each term on the RHS, we get the result.
**My Concern:** From what I see, this proof only shows that the bound for $B(N, k)$ implies the bound for $B(N+1, k)$. I can't see how it shows that the bound for $B(N, k)$ implies the bound for $B(N, k+1)$. This problem arises because the $k$ in $B(N_0 + 1, k)$ and $B(N_0, k)$ is the same, so I think I need to prove the other induction too. Why is the author able to prove it this way?
#
2
Re: Problem with understanding the proof of Sauer Lemma
OK, I think I will just post it below. I can't find an edit button. I mean, for a two-variable induction, shouldn't we prove that $B(N,k)$ implies both $B(N+1,k)$ and $B(N, k+1)$?
#
3
Re: Problem with understanding the proof of Sauer Lemma
You can take the induction hypothesis to be that the inequality is satisfied for "all $k$" at $N = N_0$; the induction step then shows that it is satisfied for "all $k$" at $N = N_0 + 1$ too.
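To see it numerically, here is a small check (a sketch, assuming the book's boundary values $B(N,1)=1$ and $B(1,k)=2$ for $k \geq 2$, and treating the recurrence as an equality to generate the largest values the bound has to cover):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def B(N, k):
    # boundary values from the book; recurrence B(N,k) <= B(N-1,k) + B(N-1,k-1)
    if k == 1:
        return 1
    if N == 1:
        return 2
    return B(N - 1, k) + B(N - 1, k - 1)

for N in range(1, 13):
    for k in range(1, 7):
        assert B(N, k) <= sum(comb(N, i) for i in range(k)), (N, k)
print("bound holds for all tested (N, k)")
```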
Hope this helps.
__________________
When one teaches, two learn.
|
The production of two high-p_T jets in the interactions of quasi-real photons in e+e- collisions at √s_ee from 189 GeV to 209 GeV is studied with data corresponding to an integrated e+e- luminosity of 550 pb^{-1}. The jets reconstructed by the k_T cluster algorithm are defined within the pseudo-rapidity range -1 < eta < 1 and with jet transverse momentum, p_T, above 3 GeV/c. The differential di-jet cross-section is measured as a function of the mean transverse momentum p̄_T of the jets and is compared to perturbative QCD calculations.
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at center-of-mass energy s=7 TeV using the ALICE detector at the LHC. Jets are reconstructed from charged particle momenta in the midrapidity region using the sequential recombination kT and anti-kT as well as the SISCone jet finding algorithms with several resolution parameters in the range R=0.2–0.6. Differential jet production cross sections measured with the three jet finders are in agreement in the transverse momentum (pT) interval 20<pTjet,ch<100 GeV/c. They are also consistent with prior measurements carried out at the LHC by the ATLAS Collaboration. The jet charged particle multiplicity rises monotonically with increasing jet pT, in qualitative agreement with prior observations at lower energies. The transverse profiles of leading jets are investigated using radial momentum density distributions as well as distributions of the average radius containing 80% (⟨R80⟩) of the reconstructed jet pT. The fragmentation of leading jets with R=0.4 using scaled pT spectra of the jet constituents is studied. The measurements are compared to model calculations from event generators (PYTHIA, PHOJET, HERWIG). The measured radial density distributions and ⟨R80⟩ distributions are well described by the PYTHIA model (tune Perugia-2011). The fragmentation distributions are better described by HERWIG.
We present $\Lambda\Lambda$ correlation measurements in heavy-ion collisions for Au+Au collisions at $\sqrt{s_{NN}}= 200$ GeV using the STAR experiment at the Relativistic Heavy-Ion Collider (RHIC). The Lednick\'{y}-Lyuboshitz analytical model has been used to fit the data to obtain a source size, a scattering length and an effective range. Implications of the measurement of the $\Lambda\Lambda$ correlation function and interaction parameters for di-hyperon searches are discussed.
A search for supersymmetry is presented based on proton-proton collision events containing identified hadronically decaying top quarks, no leptons, and an imbalance pTmiss in transverse momentum. The data were collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV, and correspond to an integrated luminosity of 35.9 fb−1. Search regions are defined in terms of the multiplicity of bottom quark jet and top quark candidates, the pTmiss, the scalar sum of jet transverse momenta, and the mT2 mass variable. No statistically significant excess of events is observed relative to the expectation from the standard model. Lower limits on the masses of supersymmetric particles are determined at 95% confidence level in the context of simplified models with top quark production. For a model with direct top squark pair production followed by the decay of each top squark to a top quark and a neutralino, top squark masses up to 1020 GeV and neutralino masses up to 430 GeV are excluded. For a model with pair production of gluinos followed by the decay of each gluino to a top quark-antiquark pair and a neutralino, gluino masses up to 2040 GeV and neutralino masses up to 1150 GeV are excluded. These limits extend previous results.
Single and multi-photon events with missing energy are analysed using data collected with the L3 detector at LEP at a centre-of-mass energy of 189 GeV, for a total of 176 pb^{-1} of integrated luminosity. The cross section of the process e+e- -> nu nu gamma (gamma) is measured and the number of light neutrino flavours is determined to be N_\nu = 3.011 +/- 0.077 including lower energy data. Upper limits on cross sections of supersymmetric processes are set and interpretations in supersymmetric models provide improved limits on the masses of the lightest neutralino and the gravitino. Graviton-photon production in low scale gravity models with extra dimensions is searched for and limits on the energy scale of the model are set exceeding 1 TeV for two extra dimensions.
We report measurements of single- and double- spin asymmetries for $W^{\pm}$ and $Z/\gamma^*$ boson production in longitudinally polarized $p+p$ collisions at $\sqrt{s} = 510$ GeV by the STAR experiment at RHIC. The asymmetries for $W^{\pm}$ were measured as a function of the decay lepton pseudorapidity, which provides a theoretically clean probe of the proton's polarized quark distributions at the scale of the $W$ mass. The results are compared to theoretical predictions, constrained by recent polarized deep inelastic scattering measurements, and show a preference for a sizable, positive up antiquark polarization in the range $0.05<x<0.2$.
We present a measurement of b jet transverse momentum (pT) spectra in proton-lead (pPb) collisions using a dataset corresponding to about 35 nb−1 collected with the CMS detector at the LHC. Jets from b quark fragmentation are found by exploiting the long lifetime of hadrons containing a b quark through tagging methods using distributions of the secondary vertex mass and displacement. Extracted cross sections for b jets are scaled by the effective number of nucleon–nucleon collisions and are compared to a reference obtained from PYTHIA simulations of pp collisions. The PYTHIA-based estimate of the nuclear modification factor is found to be 1.22±0.15(stat+syst pPb)±0.27(syst PYTHIA) averaged over all jets with pT between 55 and 400 GeV/c and with |ηlab|<2 . We also compare this result to predictions from models using perturbative calculations in quantum chromodynamics.
A search for long-lived particles decaying into jets is presented. Data were collected with the CMS detector at the LHC from proton-proton collisions at a center-of-mass energy of 13 TeV in 2016, corresponding to an integrated luminosity of 35.9 fb-1. The search examines the distinctive topology of displaced tracks and secondary vertices. The selected events are found to be consistent with standard model predictions. For a simplified model in which long-lived neutral particles are pair produced and decay to two jets, pair production cross sections larger than 0.2 fb are excluded at 95% confidence level for a long-lived particle mass larger than 1000 GeV and proper decay lengths between 3 and 130 mm. Several supersymmetry models with gauge-mediated supersymmetry breaking or R-parity violation, where pair-produced long-lived gluinos or top squarks decay to several final-state topologies containing displaced jets, are also tested. For these models, in the mass ranges above 200 GeV, gluino masses up to 2300–2400 GeV and top squark masses up to 1350–1600 GeV are excluded for proper decay lengths approximately between 10 and 100 mm. These are the most restrictive limits to date on these models.
|
In trigonometry and geometry,
triangulation is the process of determining the location of a point by measuring angles to it from known points at either end of a fixed baseline, rather than measuring distances to the point directly (trilateration). The point can then be fixed as the third point of a triangle with one known side and two known angles.
Triangulation can also refer to the accurate surveying of systems of very large triangles, called
triangulation networks. This followed from the work of Willebrord Snell in 1615–17, who showed how a point could be located from the angles subtended from three known points, but measured at the new unknown point rather than the previously fixed points, a problem called resectioning. Surveying error is minimized if a mesh of triangles at the largest appropriate scale is established first. Points inside the triangles can all then be accurately located with reference to it. Such triangulation methods were used for accurate large-scale land surveying until the rise of global navigation satellite systems in the 1980s.
Applications
Optical 3D measuring systems use this principle as well in order to determine the spatial dimensions and the geometry of an item. Basically, the configuration consists of two sensors observing the item. One of the sensors is typically a digital camera device, and the other one can also be a camera or a light projector. The projection centers of the sensors and the considered point on the object's surface define a (spatial) triangle. Within this triangle, the distance between the sensors is the base b and must be known. By determining the angles between the projection rays of the sensors and the basis, the intersection point, and thus the 3D coordinate, is calculated from the triangular relations.

Distance to a point by measuring two fixed angles
Triangulation may be used to calculate the coordinates and distance from the shore to the ship. The observer at A measures the angle α between the shore and the ship, and the observer at B does likewise for β. With the distance between A and B, or the coordinates of A and B, known, the law of sines can be applied to find the coordinates of the ship and the distance d.
The coordinates and distance to a point can be found by calculating the length of one side of a triangle, given measurements of angles and sides of the triangle formed by that point and two other known reference points.
The following formulas apply in flat or Euclidean geometry. They become inaccurate if distances become appreciable compared to the curvature of the Earth, but can be replaced with more complicated results derived using spherical trigonometry.
Calculation
With $\ell$ being the distance between A and B, we have
$$\ell = \frac{d}{\tan \alpha} + \frac{d}{\tan \beta}.$$
Using the trigonometric identities tan α = sin α / cos α and sin(α + β) = sin α cos β + cos α sin β, this is equivalent to:
$$\ell= d \left(\frac{\cos \alpha}{\sin \alpha} + \frac{\cos \beta}{\sin \beta}\right) = d\,\frac{\sin(\alpha + \beta)}{\sin\alpha \sin\beta}$$
therefore:
$$d = \ell\,\frac{\sin\alpha \sin\beta}{\sin(\alpha + \beta)}.$$
From this, it is easy to determine the distance of the unknown point from either observation point, its north/south and east/west offsets from the observation point, and finally its full coordinates.
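As a quick illustration of these formulas (a sketch; the numbers are made up):

```python
from math import sin, tan, radians

def ship_distance(ell, alpha_deg, beta_deg):
    """Perpendicular distance d from the baseline AB (length ell),
    given the angles alpha and beta measured at A and B."""
    a, b = radians(alpha_deg), radians(beta_deg)
    return ell * sin(a) * sin(b) / sin(a + b)

d = ship_distance(100.0, 60.0, 45.0)
print(d)                       # ~63.40
print(d / tan(radians(60.0)))  # ~36.60, offset along the baseline from A
```

The two offsets, d/tan α ≈ 36.60 and d/tan β ≈ 63.40, add up to the baseline length 100, as the first formula requires.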
History

Liu Hui (c. 263), How to measure the height of a sea island. Illustration from an edition of 1726
Nineteenth-century triangulation network for the triangulation of Rhineland-Hesse
Triangulation today is used for many purposes, including surveying, navigation, metrology, astrometry, binocular vision, model rocketry and gun direction of weapons.
The use of triangles to estimate distances goes back to antiquity. In the 6th century BC the Greek philosopher Thales is recorded as using similar triangles to estimate the height of the pyramids by measuring the length of their shadows and that of his own at the same moment, and comparing the ratios to his height (intercept theorem);
[1] and to have estimated the distances to ships at sea as seen from a clifftop, by measuring the horizontal distance traversed by the line-of-sight for a known fall, and scaling up to the height of the whole cliff. [2] Such techniques would have been familiar to the ancient Egyptians. Problem 57 of the Rhind papyrus, a thousand years earlier, defines the seqt or seked as the ratio of the run to the rise of a slope, i.e. the reciprocal of gradients as measured today. The slopes and angles were measured using a sighting rod that the Greeks called a dioptra, the forerunner of the Arabic alidade. A detailed contemporary collection of constructions for the determination of lengths from a distance using this instrument is known, the Dioptra of Hero of Alexandria (c. 10–70 AD), which survived in Arabic translation; but the knowledge became lost in Europe. In China, Pei Xiu (224–271) identified "measuring right angles and acute angles" as the fifth of his six principles for accurate map-making, necessary to accurately establish distances; [3] while Liu Hui (c. 263) gives a version of the calculation above, for measuring perpendicular distances to inaccessible places. [4] [5]
In the field, triangulation methods were apparently not used by the Roman specialist land surveyors, the agrimensores; but were introduced into medieval Spain through Arabic treatises on the astrolabe, such as that by Ibn al-Saffar (d. 1035). [6] Abu Rayhan Biruni (d. 1048) also introduced triangulation techniques to measure the size of the Earth and the distances between various places. [7] Simplified Roman techniques then seem to have co-existed with more sophisticated techniques used by professional surveyors. But it was rare for such methods to be translated into Latin (a manual on geometry, the eleventh-century Geomatria incerti auctoris, is a rare exception), and such techniques appear to have percolated only slowly into the rest of Europe. [6] Increased awareness and use of such techniques in Spain may be attested by the medieval Jacob's staff, used specifically for measuring angles, which dates from about 1300; and the appearance of accurately surveyed coastlines in the Portolan charts, the earliest of which that survives is dated 1296.

Gemma Frisius and triangulation for mapmaking
On land, the cartographer Gemma Frisius proposed using triangulation to accurately position far-away places for map-making in his 1533 pamphlet
Libellus de Locorum describendorum ratione (Booklet concerning a way of describing places), which he bound in as an appendix in a new edition of Peter Apian's best-selling 1524 Cosmographica. This became very influential, and the technique spread across Germany, Austria and the Netherlands. The astronomer Tycho Brahe applied the method in Scandinavia, completing a detailed triangulation in 1579 of the island of Hven, where his observatory was based, with reference to key landmarks on both sides of the Øresund, producing an estate plan of the island in 1584. [8] In England Frisius's method was included in the growing number of books on surveying which appeared from the middle of the century onwards, including William Cuningham's Cosmographical Glasse (1559), Valentine Leigh's Treatise of Measuring All Kinds of Lands (1562), William Bourne's Rules of Navigation (1571), Thomas Digges's Geometrical Practise named Pantometria (1571), and John Norden's Surveyor's Dialogue (1607). It has been suggested that Christopher Saxton may have used rough-and-ready triangulation to place features in his county maps of the 1570s; but others suppose that, having obtained rough bearings to features from key vantage points, he may have estimated the distances to them simply by guesswork. [9]

Willebrord Snell and modern triangulation networks
The modern systematic use of triangulation networks stems from the work of the Dutch mathematician Willebrord Snell, who in 1615 surveyed the distance from Alkmaar to Bergen op Zoom, approximately 70 miles (110 kilometres), using a chain of quadrangles containing 33 triangles in all. The two towns were separated by one degree on the meridian, so from his measurement he was able to calculate a value for the circumference of the earth – a feat celebrated in the title of his book
Eratosthenes Batavus ( The Dutch Eratosthenes), published in 1617. Snell calculated how the planar formulae could be corrected to allow for the curvature of the earth. He also showed how to resection, or calculate, the position of a point inside a triangle using the angles cast between the vertices at the unknown point. These could be measured much more accurately than bearings of the vertices, which depended on a compass. This established the key idea of surveying a large-scale primary network of control points first, and then locating secondary subsidiary points later, within that primary network.
Snell's methods were taken up by Jean Picard who in 1669–70 surveyed one degree of latitude along the Paris Meridian using a chain of thirteen triangles stretching north from Paris to the clocktower of Sourdon, near Amiens. Thanks to improvements in instruments and accuracy, Picard's is rated as the first reasonably accurate measurement of the radius of the earth. Over the next century this work was extended most notably by the Cassini family: between 1683 and 1718 Jean-Dominique Cassini and his son Jacques Cassini surveyed the whole of the Paris meridian from Dunkirk to Perpignan; and between 1733 and 1740 Jacques and his son César Cassini undertook the first triangulation of the whole country, including a re-surveying of the meridian arc, leading to the publication in 1745 of the first map of France constructed on rigorous principles.
Triangulation methods were by now well established for local mapmaking, but it was only towards the end of the 18th century that other countries began to establish detailed triangulation network surveys to map whole countries. The Principal Triangulation of Great Britain was begun by the Ordnance Survey in 1783, though not completed until 1853; and the Great Trigonometric Survey of India, which ultimately named and mapped Mount Everest and the other Himalayan peaks, was begun in 1801. For the Napoleonic French state, the French triangulation was extended by Jean Joseph Tranchot into the German Rhineland from 1801, subsequently completed after 1815 by the Prussian general Karl von Müffling. Meanwhile, the famous mathematician Carl Friedrich Gauss was entrusted from 1821 to 1825 with the triangulation of the kingdom of Hanover, for which he developed the method of least squares to find the best fit solution for problems of large systems of simultaneous equations given more real-world measurements than unknowns.
Today, large-scale triangulation networks for positioning have largely been superseded by the global navigation satellite systems established since the 1980s, but many of the control points for the earlier surveys still survive as valued historical features in the landscape, such as the concrete triangulation pillars set up for retriangulation of Great Britain (1936–1962), or the triangulation points set up for the Struve Geodetic Arc (1816–1855), now scheduled as a UNESCO World Heritage Site.
References

^ I, 27
^ Proclus, In Euclidem
^ Joseph Needham (1986). Science and Civilization in China: Volume 3, Mathematics and the Sciences of the Heavens and the Earth. Taipei: Caves Books Ltd. pp. 539–540.
^ Liu Hui, The Sea Island Mathematical Manual.
^ Kurt Vogel (1983; 1997), A Surveying Problem Travels from China to Paris, in Yvonne Dold-Samplonius (ed.), From China to Paris, Proceedings of a conference held July 1997, Mathematisches Forschungsinstitut, Oberwolfach, Germany. ISBN 3-515-08223-9.
^ a b Donald Routledge Hill (1984), A History of Engineering in Classical and Medieval Times, London: Croom Helm & La Salle, Illinois: Open Court. ISBN 0-87548-422-0. pp. 119–122.
^ .
^ Michael Jones (2004), "Tycho Brahe, Cartography and Landscape in 16th Century Scandinavia", in Hannes Palang (ed), European Rural Landscapes: Persistence and Change in a Globalising Environment, p. 210.
^ Martin and Jean Norgate (2003), Saxton's Hampshire: Surveying, University of Portsmouth.

Further reading
Bagrow, L. (1964) History of Cartography; revised and enlarged by R.A. Skelton. Harvard University Press. Crone, G.R. (1978 [1953]) Maps and their Makers: An Introduction to the History of Cartography (5th ed). Tooley, R.V. & Bricker, C. (1969) A History of Cartography: 2500 Years of Maps and Mapmakers Keay, J. (2000) The Great Arc: The Dramatic Tale of How India Was Mapped and Everest Was Named. London: Harper Collins. ISBN 0-00-257062-9. Murdin, P. (2009) Full Meridian of Glory: Perilous Adventures in the Competition to Measure the Earth. Springer. ISBN 978-0-387-75533-5.
|
I'm trying to compare 2 types of data in programs: floating point decimals (doubles) and fractions (say pair<long,long>), but it doesn't really matter for the question.
So here is what I can't find nor do: I would like an exact expression, a good approximation, or a fast O(1) calculation of how many fractions lie between the numbers s and t. To make this easy, let's say $0 \leq s,t \leq M$, where M is the max number both numerator and denominator can be.
Something like "$\#\{\frac{a}{b}\}$ with $0\leq a,b \leq M \land \frac{a}{b} \geq s \land \frac{a}{b} <t$ " (0 < b).
I can't find anything like that, because I need to discard the different representations of the same number ($\#\{\frac{1}{2}, \frac{2}{4}\} = 1$), and gcd(a,b) is very slow to build code around.
Another naive idea is $\#\{\frac{a}{b}\} = \#\{\frac{\prod[primes]_1}{\prod[primes]_2}\}$ where $[primes]_1\cap[primes]_2 = \emptyset \land 0 \leq \prod[primes]_{1,2} \leq M \land s \leq \frac{\prod[primes]_1}{\prod[primes]_2} < t$.
Again, I don't think I can get a good expression with that, and code checking for primes or sets is very slow.
Any ideas?
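Not an answer to the O(1) part, but here is a correct baseline to test candidate formulas against: count only reduced pairs, since every rational in range has exactly one reduced representative with numerator and denominator at most M (a sketch; beware float round-off in the ceil calls for large M):

```python
from math import gcd, ceil

def count_fractions(s, t, M):
    """Count distinct a/b with 0 <= a <= M, 1 <= b <= M and s <= a/b < t.
    Counting reduced pairs (gcd(a, b) == 1) counts each value exactly once."""
    total = 0
    for b in range(1, M + 1):
        a_lo = max(ceil(s * b), 0)       # smallest a with a/b >= s
        a_hi = min(ceil(t * b) - 1, M)   # largest a with a/b < t
        for a in range(a_lo, a_hi + 1):
            if gcd(a, b) == 1:
                total += 1
    return total

print(count_fractions(0, 1, 5))  # 10: the Farey fractions of order 5 in [0, 1)
```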
|
Many textbooks present expressions for superfields in $4$ dimensions. For my current project, I have to find out how things work in $2$ dimensions.
Let me summarise briefly what we know about $4$d (the equations below are taken from the book by Müller-Kirsten and Wiedemann).
The most general form of a superfield is: \begin{equation}\begin{gathered} \Phi(x, \theta, \bar{\theta}) = f(x) + \theta \varphi(x) + \bar{\theta} \bar{\chi} (x) \\ + (\theta \sigma^\mu \bar{\theta}) V_\mu(x) + \theta \theta m(x) + \bar{\theta} \bar{\theta} n(x) \\ + (\theta \theta) \bar{\theta} \bar{\lambda}(x) + (\bar{\theta} \bar{\theta}) \theta \psi(x) + (\theta \theta) (\bar{\theta} \bar{\theta}) d(x) \quad. \end{gathered}\end{equation} Here and hereafter, I am using the two-component formalism (i.e. Weyl in $4$d and Majorana in $2$d). If we set a superfield to be left-chiral by requiring \begin{equation} \bar{D} \Phi = 0 \quad, \end{equation} with \begin{equation} \bar{D}_{\dot{a}} = - \bar{\partial}_{\dot{a}} - i \theta_b \sigma^\mu_{b \dot{a}} \partial_\mu \quad, \end{equation} then the superfield will take the form of \begin{equation}\begin{gathered} \Phi = A(y) + \sqrt{2} \theta \psi(y) + \theta \theta F(y) \\ = A(x) + i (\theta \sigma^\mu \bar{\theta}) \partial_\mu A(x) - \dfrac{1}{4} (\theta \theta) (\bar{\theta}\bar{\theta}) \Box A(x) + \sqrt{2} \theta \psi(x) \\+ \dfrac{i}{\sqrt{2}} (\theta \theta) \bar{\theta}_{\dot{a}}\partial_\mu \psi^a(x) \sigma^\mu_{a \dot{b}} \epsilon^{\dot{b}\dot{a}} + (\theta \theta) F(x) \quad. \end{gathered}\end{equation} As ever, we here use the notation \begin{equation} y^\mu = x^\mu + i \theta \sigma^\mu \bar{\theta} \quad. \end{equation}
Imposition of the reality condition, \begin{equation} V (x, \theta, \bar{\theta}) = V^\dagger (x, \theta, \bar{\theta}) \quad, \end{equation} will render us the vector superfield \begin{equation}\begin{gathered} V (x, \theta, \bar{\theta}) = C(x) + \theta \varphi(x) + \bar{\theta} \bar{\varphi} (x) \\ + (\theta \sigma^\mu \bar{\theta}) V_\mu(x) + \theta \theta M(x) + \bar{\theta} \bar{\theta} M^*(x) \\ + (\theta \theta) \bar{\theta} \bar{\psi}(x) + (\bar{\theta} \bar{\theta}) \theta \psi(x) + (\theta \theta) (\bar{\theta} \bar{\theta}) D(x) \quad, \end{gathered}\end{equation} where the conditions of reality for the component fields are imposed as well: \begin{equation} C(x) = C^*(x), \quad,\qquad V^*_\mu(x) = V_\mu(x) \quad,\qquad D^*(x) = D(x) \quad. \end{equation}
I want to know how all these equations look in $2$d. So, let me state the questions once again:
What is the most general form of the superfield? (assuming it is a Lorentz scalar)
What is the most general form of the left-chiral superfield?
What is the most general form of the vector superfield?
I know that there emerge some differences between the $4$d and $2$d cases, as one starts constructing Lorentz scalars. Indeed, in the former case the charge conjugation does not preserve handedness, while in the latter it does. How exactly does this difference affect the number of ways in which we can write a Lorentz-invariant expression? (Basically, this should guide us on how to choose the component fields, right?..)
Most papers on $2$d SUSY begin with the statement that the most general superfield has the form \begin{equation} \Phi(x, \theta, \bar{\theta}) = \phi(x) + \sqrt{2} \bar{\theta}\psi(x) + \theta \bar{\theta} F(x) \quad. \end{equation} Is this the most general form of an unconstrained superfield? What precludes us from adding to it a term of the form $\sqrt{2}\theta \bar{\chi}(x)$? Or is it left-chiral?... I am confused.
Any comments or references will be greatly appreciated.
|
Following from my previous question I am trying to apply boundary conditions to this non-uniform finite volume mesh,
I would like to apply a Robin type boundary condition to the l.h.s. of the domain ($x=x_L)$, such that,
$$ \sigma_L = \left( d u_x + a u \right) \bigg|_{x=x_L} $$
where $\sigma_L$ is the boundary value; $a, d$ are coefficients defined on the boundary, advection and diffusion respectively; $u_x = \frac{\partial u}{\partial x}$, is the derivative of $u$ evaluated at the boundary and $u$ is the variable for which we are solving.
Possible approaches
I can think of two ways to implement this boundary condition on the above finite volume mesh:
A ghost cell approach.
Write $u_x$ as a finite difference including a ghost cell,
$$ \sigma_L = d \frac{u_1 - u_0}{h_{-}} + a u(x_L). $$
A. Then use linear interpolation with points $x_0$ and $x_1$ to find the intermediate value, $u(x_L)$.
B. Alternatively, find $u(x_L)$ by averaging over the cells, $u(x_L) = \frac{1}{2}(u_0 + u_1)$.
In either case, the dependence on ghost cell can be eliminated in the usual way (via substitution into the finite volume equation).
An extrapolation approach.
Fit a linear (or quadratic) function to $u(x)$ by using the values at points $x_1, x_2$ (and $x_3$). This will provide the value $u(x_L)$. The linear (or quadratic) function can then be differentiated to find an expression for the value of the derivative, $u_x(x_L)$, at the boundary. This approach does not use a ghost cell.

Questions

Which approach of the three (1A, 1B or 2) is "standard", or which would you recommend? Which approach introduces the smallest error, or is the most stable? I think I can implement the ghost cell approach myself; however, how can the extrapolation approach be implemented, and does this approach have a name? Are there any stability differences between fitting a linear function and a quadratic function? (A sketch of the ghost-cell elimination is given after the equations below.)

Specific equation
I wish to apply this boundary to the advection-diffusion equation (in conservation form) with non-linear source term,
$$ u_t = -au_x + du_{xx} + s(x,u,t) $$
Discretising this equation on the above mesh using the $\theta$-method gives,
$$ w_{j}^{n+1} - \theta r_a w_{j-1}^{n+1} - \theta r_b w_{j}^{n+1} - \theta r_c w_{j+1}^{n+1} = w_j^n + (1-\theta) r_a w_{j-1}^n + (1-\theta) r_b w_j^n + (1-\theta) r_c w_{j+1}^n + s(x_j,t_n) $$
However for the boundary point ($j=1$) I prefer to use a fully implicit scheme ($\theta=1$) to reduce the complexity,
$$ w_{1}^{n+1} - r_a w_{0}^{n+1} - r_b w_{1}^{n+1} - r_c w_{2}^{n+1} = w_1^n + s_1^n $$
Notice the ghost point $w_0^{n+1}$, this will be removed by applying the boundary condition.
The coefficients have the definitions,
$$ r_a = \frac{\Delta t}{h_j}\left( \frac{ah_j}{2h_{-}} + \frac{d}{h_{-}} \right) $$
$$ r_b = - \frac{\Delta t}{h_j}\left( \frac{a}{2}\left[ \frac{h_{j-1}}{h_{-}} - \frac{h_{j+1}}{h_{+}} \right] + d\left[-\frac{1}{h_{-}} - \frac{1}{h_{+}} \right]\right) $$
$$ r_c = \frac{\Delta t}{h_j}\left(- \frac{ah_j}{2h_{+}} + \frac{d}{h_{+}} \right) $$
All the "$h$" variables are defined as in the above diagram. Finally, $\Delta t$ is the time step.
(N.B. this is a simplified case with constant $a$ and $d$ coefficients; in practice the "$r$" coefficients are slightly more complicated for this reason.)
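For completeness, here is a sketch of the ghost-value elimination mentioned above, done symbolically for approach 1B (the symbol names are mine):

```python
import sympy as sp

u0, u1, sigma_L, a, d, h = sp.symbols('u_0 u_1 sigma_L a d h_minus')

# approach 1B: sigma_L = d*(u1 - u0)/h_minus + a*(u0 + u1)/2
bc = sp.Eq(sigma_L, d*(u1 - u0)/h + a*(u0 + u1)/2)

u0_ghost = sp.solve(bc, u0)[0]
print(sp.simplify(u0_ghost))
# equivalent to (2*h_minus*sigma_L - (a*h_minus + 2*d)*u1)/(a*h_minus - 2*d);
# substitute this for w_0^{n+1} in the j = 1 row to fold the BC into the matrix
```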
|
Learning Objectives
In this section students will:
Simplify rational expressions. Multiply rational expressions. Divide rational expressions. Add and subtract rational expressions. Simplify complex rational expressions.
A pastry shop has fixed costs of \($280\) per week and variable costs of \($9\) per box of pastries. The shop’s costs per week in terms of \(x\) , the number of boxes made, is \(280 +9x\). We can divide the costs per week by the number of boxes made to determine the cost per box of pastries.
Notice that the result is a polynomial expression divided by a second polynomial expression. In this section, we will explore quotients of polynomial expressions.
Simplifying Rational Expressions
The quotient of two polynomial expressions is called a rational expression. We can apply the properties of fractions to rational expressions, such as simplifying the expressions by canceling common factors from the numerator and the denominator. To do this, we first need to factor both the numerator and denominator. Let’s start with the rational expression shown.
We can factor the numerator and denominator to rewrite the expression.
Then we can simplify that expression by canceling the common factor \((x+4)\).
Howto: Given a rational expression, simplify it
Factor the numerator and denominator. Cancel any common factors.
Simplify \(\dfrac{x^2-9}{x^2+4x+3}\)
Solution
\[\begin{align*} &\dfrac{(x+3)(x-3)}{(x+3)(x+1)}\qquad \text{Factor the numerator and the denominator}\\ &\dfrac{x-3}{x+1}\qquad \text{Cancel common factor } (x+3) \end{align*}\]
Analysis
We can cancel the common factor because any expression divided by itself is equal to \(1\).
Q&A
Can the \(x^2\) term be cancelled in the last example?
No. A factor is an expression that is multiplied by another expression. The \(x^2\) term is not a factor of the numerator or the denominator.
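A quick way to check simplifications like this one is sympy's `cancel`, which performs exactly the factor-and-cancel step (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.cancel((x**2 - 9)/(x**2 + 4*x + 3)))  # (x - 3)/(x + 1)
```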
Exercise \(\PageIndex{1}\)
Simplify \(\dfrac{x-6}{x^2-36}\)
Answer
\(\dfrac{1}{x+6}\)
Multiplying Rational Expressions
Multiplication of rational expressions works the same way as multiplication of any other fractions. We multiply the numerators to find the numerator of the product, and then multiply the denominators to find the denominator of the product. Before multiplying, it is helpful to factor the numerators and denominators just as we did when simplifying rational expressions. We are often able to simplify the product of rational expressions.
Howto: Given two rational expressions, multiply them
Factor the numerator and denominator. Multiply the numerators. Multiply the denominators. Simplify.
Multiply the rational expressions and show the product in simplest form:
\(\dfrac{(x+5)(x-1)}{3(x+6)}\times\dfrac{(2x-1)}{(x+5)}\)
Solution
\[\begin{align*} &\dfrac{(x+5)(x-1)}{3(x+6)}\times\dfrac{(2x-1)}{(x+5)}\qquad \text{Factor the numerator and denominator.}\\ &\dfrac{(x+5)(x-1)(2x-1)}{3(x+6)(x+5)}\qquad \text{Multiply numerators and denominators}\\ &\dfrac{(x-1)(2x-1)}{3(x+6)}\qquad \text{Cancel common factors to simplify} \end{align*}\]
Exercise \(\PageIndex{2}\)
Multiply the rational expressions and show the product in simplest form:
\(\dfrac{x^2+11x+30}{x^2+5x+6}\times\dfrac{x^2+7x+12}{x^2+8x+16}\)
Answer
\(\dfrac{(x+5)(x+6)}{(x+2)(x+4)}\)
Dividing Rational Expressions
Division of rational expressions works the same way as division of other fractions. To divide a rational expression by another rational expression, multiply the first expression by the reciprocal of the second. Using this approach, we would rewrite \(\dfrac{1}{x}÷\dfrac{x^2}{3}\) as the product \(\dfrac{1}{x}⋅\dfrac{3}{x^2}\). Once the division expression has been rewritten as a multiplication expression, we can multiply as we did before.
Howto: Given two rational expressions, divide them
Rewrite as the first rational expression multiplied by the reciprocal of the second. Factor the numerators and denominators. Multiply the numerators. Multiply the denominators. Simplify.
Exercise \(\PageIndex{3}\)
Divide the rational expressions and express the quotient in simplest form:
\[\dfrac{9x^2-16}{3x^2+17x-28}÷\dfrac{3x^2-2x-8}{x^2+5x-14} \nonumber \]
Answer
\(1\)
Adding and Subtracting Rational Expressions
Adding and subtracting rational expressions works just like adding and subtracting numerical fractions. To add fractions, we need to find a common denominator. Let’s look at an example of fraction addition.
We have to rewrite the fractions so they share a common denominator before we are able to add. We must do the same thing when adding or subtracting rational expressions.
The easiest common denominator to use will be the
least common denominator, or LCD. The LCD is the smallest multiple that the denominators have in common. To find the LCD of two rational expressions, we factor the expressions and multiply all of the distinct factors. For instance, if the factored denominators were \((x+3)(x+4)\) and \((x+4)(x+5)\), then the LCD would be \((x+3)(x+4)(x+5)\).
Once we find the LCD, we need to multiply each expression by the form of \(1\) that will change the denominator to the LCD. We would need to multiply the expression with a denominator of \((x+3)(x+4)\) by \(\dfrac{x+5}{x+5}\) and the expression with a denominator of \((x+4)(x+5)\) by \(\dfrac{x+3}{x+3}\).
Howto: Given two rational expressions, add or subtract them
Factor the numerator and denominator. Find the LCD of the expressions. Multiply the expressions by a form of 1 that changes the denominators to the LCD. Add or subtract the numerators. Simplify.
Add the rational expressions: \[\dfrac{5}{x}+\dfrac{6}{y} \nonumber \]
Solution
First, we have to find the LCD. In this case, the LCD will be \(xy\). We then multiply each expression by the appropriate form of \(1\) to obtain \(xy\) as the denominator for each fraction.
\[\begin{align*} &\dfrac{5}{x}\times\dfrac{y}{y}+\dfrac{6}{y}\times\dfrac{x}{x}\\ &\dfrac{5y}{xy}+\dfrac{6x}{xy} \end{align*}\]
Now that the expressions have the same denominator, we simply add the numerators to find the sum.
\[\dfrac{6x+5y}{xy} \nonumber \]
Analysis
Multiplying by \(\dfrac{y}{y}\) or \(\dfrac{x}{x}\) does not change the value of the original expression because any number divided by itself is \(1\), and multiplying an expression by \(1\) gives the original expression.
Subtract the rational expressions: \[\dfrac{6}{x^2+4x+4}-\dfrac{2}{x^2-4}\]
Solution
\[\begin{align*}
&\dfrac{6}{{(x+2)}^2}-\dfrac{2}{(x+2)(x-2)}\qquad \text{Factor}\\ &\dfrac{6}{{(x+2)}^2}\times\dfrac{x-2}{x-2}-\dfrac{2}{(x+2)(x-2)}\times\dfrac{x+2}{x+2}\qquad \text{Multiply each fraction to get LCD as denominator}\\ &\dfrac{6(x-2)}{{(x+2)}^2(x-2)}-\dfrac{2(x+2)}{{(x+2)}^2(x-2)}\qquad \text{Multiply}\\ &\dfrac{6x-12-(2x+4)}{{(x+2)}^2(x-2)}\qquad \text{Apply distributive property}\\ &\dfrac{4x-16}{{(x+2)}^2(x-2)}\qquad \text{Subtract}\\ &\dfrac{4(x-4)}{{(x+2)}^2(x-2)}\qquad \text{Simplify} \end{align*}\]
Q&A
Do we have to use the LCD to add or subtract rational expressions?
No. Any common denominator will work, but it is easiest to use the LCD.
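The subtraction example above can be verified the same way (a sketch with sympy):

```python
import sympy as sp

x = sp.symbols('x')
expr = 6/(x**2 + 4*x + 4) - 2/(x**2 - 4)
print(sp.factor(sp.together(expr)))  # 4*(x - 4)/((x - 2)*(x + 2)**2)
```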
Exercise \(\PageIndex{4}\)
Subtract the rational expressions: \(\dfrac{3}{x+5}-\dfrac{1}{x-3}\)
Answer
\(\dfrac{2(x-7)}{(x+5)(x-3)}\)
Simplifying Complex Rational Expressions
A complex rational expression is a rational expression that contains additional rational expressions in the numerator, the denominator, or both. We can simplify complex rational expressions by rewriting the numerator and denominator as single rational expressions and dividing. The complex rational expression \(\dfrac{a}{\dfrac{1}{b}+c}\) can be simplified by rewriting the numerator as the fraction \(\dfrac{a}{1}\) and combining the expressions in the denominator as \(\dfrac{1+bc}{b}\). We can then rewrite the expression as a multiplication problem using the reciprocal of the denominator. We get \(\dfrac{a}{1}⋅\dfrac{b}{1+bc}\), which is equal to \(\dfrac{ab}{1+bc}\).
Howto: Given a complex rational expression, simplify it
Combine the expressions in the numerator into a single rational expression by adding or subtracting. Combine the expressions in the denominator into a single rational expression by adding or subtracting. Rewrite as the numerator divided by the denominator. Rewrite as multiplication. Multiply. Simplify.
Simplify: \(\dfrac{y+\dfrac{1}{x}}{\dfrac{x}{y}}\)
Solution
Begin by combining the expressions in the numerator into one expression.
\[\begin{align*} &y\times\dfrac{x}{x}+\dfrac{1}{x}\qquad \text{Multiply by } \dfrac{x}{x} \text{ to get LCD as denominator}\\ &\dfrac{xy}{x}+\dfrac{1}{x}\\ &\dfrac{xy+1}{x}\qquad \text{Add numerators} \end{align*}\]
Now the numerator is a single rational expression and the denominator is a single rational expression.
\[\begin{align*} &\dfrac{\dfrac{xy+1}{x}}{\dfrac{x}{y}}\\ \text{We can rewrite this as division, and then multiplication.}\\ &\dfrac{xy+1}{x}÷\dfrac{x}{y}\\ &\dfrac{xy+1}{x}\times\dfrac{y}{x}\qquad \text{Rewrite as multiplication}\\ &\dfrac{y(xy+1)}{x^2}\qquad \text{Multiply} \end{align*}\]
Exercise \(\PageIndex{5}\)
Simplify: \(\dfrac{\dfrac{x}{y}-\dfrac{y}{x}}{y}\)
Answer
\(\dfrac{x^2-y^2}{xy^2}\)
Q&A
Can a complex rational expression always be simplified?
Yes. We can always rewrite a complex rational expression as a simplified rational expression.
Key Concepts Rational expressions can be simplified by cancelling common factors in the numerator and denominator. See Example. We can multiply rational expressions by multiplying the numerators and multiplying the denominators. See Example. To divide rational expressions, multiply by the reciprocal of the second expression. See Example. Adding or subtracting rational expressions requires finding a common denominator. See Example and Example. Complex rational expressions have fractions in the numerator or the denominator. These expressions can be simplified. See Example.
|
Let $F: \mathbb{R} \rightarrow \mathbb{R}$ be a linear map. I want to evaluate an expression of the type $$F((ax+b)^k)$$ in terms of $F(x)$ for some fixed value of $x$ (I already know $F(x^r)$ for $r=1,...,k$).
$x$ is typically small ($0<x<1$), and $k$ is typically about $15$. $a$ is always positive (about $30$) and $b$ is always negative (about $-30$) so that $ax+b$ is always between $0$ and $1$. Further it is also known that the range of the map $F$ is $[0,1]$.
Using the binomial theorem to expand $(ax+b)^k$ involves very large numbers, which cause errors due to overflow (in Matlab).
Since it is already known that the argument $(ax+b)^k$ as well as its image $F((ax+b)^k)$ are always small, I am interested to know if there are methods to evaluate $F((ax+b)^k)$ in terms of the $F(x^r)$ stably, or without involving large numbers. Any help will be much appreciated.
EDIT: The actual problem I want to solve is not quite the same. I am describing it below.
I have two functions $f,g:\mathbb{R} \to \mathbb{R}$. $g$ is positive and unit normalized, i.e. $\int_{-\infty}^{\infty} g(x)dx=1$. The function $f$ is not known, but it is known that $f(x) \in [0,1]$ for all $x$. Also the quantity $\int_{-\infty}^{\infty} g(x) f(x)^r dx$ is known for $r=1,...,k$ (which is, of course, in $[0,1]$). I want to calculate the quantity $$I = \int_{-\infty}^{\infty} g(x) (a f(x) + b)^k dx$$ where $a,b \in \mathbb{R}$. It is known that $a>0,b<0$ and $a,b$ are chosen such that $af(x)+b \in [0,1]$. Hence $I \in [0,1]$. $a$ is typically about $30$, and $k$ is about $15$.
Since only $\int_{-\infty}^{\infty} g(x) f(x)^r dx$ is known, I have no option but to expand $(a f(x) + b)^k = \sum_{r=0}^k \binom{k}{r} a^r f(x)^r b^{k-r}$ using the binomial theorem. However this involves the product of large numbers like $\binom{k}{r}$ and powers of $a$ and $b$. Since $I \in [0,1]$, computing $I$ by adding and subtracting such large numbers does not seem like a good idea.
In the original question I intended the linear map $F$ to correspond to the map $f \mapsto \int_{-\infty}^{\infty} g(x) f(x) dx$, but that was clearly not a good example.
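For reference, one pragmatic workaround (a sketch, not a cure for the underlying conditioning): carry out the binomial sum in high-precision arithmetic, so the huge intermediate terms cancel exactly rather than in double precision. This only helps up to the accuracy of the moments themselves; if the $m_r=\int g f^r$ are known only to double precision, cancellation among terms of size $\sim a^k$ still limits the final accuracy.

```python
from mpmath import mp, mpf, binomial

mp.dps = 60  # work with 60 significant digits

def eval_I(moments, a, b):
    """moments[r] = integral of g * f^r for r = 0..k (moments[0] = 1).
    Returns I = sum_r C(k,r) * a^r * b^(k-r) * moments[r] in high precision."""
    k = len(moments) - 1
    a, b = mpf(a), mpf(b)
    return sum(binomial(k, r) * a**r * b**(k - r) * mpf(moments[r])
               for r in range(k + 1))
```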
|
1976 IMO Problems/Problem 3

Problem
A box whose shape is a rectangular parallelepiped can be completely filled with unit cubes. If we put in it the maximum possible number of cubes, each of volume 2, with the sides parallel to those of the box, then exactly 40 percent of the volume of the box is occupied. Determine the possible dimensions of the box.
Solution
We name $a, b, c$ the sides of the parallelepiped, which are positive integers. We also put
\[x = \left\lfloor\frac{a}{\sqrt[3]{2}}\right\rfloor, \qquad y = \left\lfloor\frac{b}{\sqrt[3]{2}}\right\rfloor, \qquad z = \left\lfloor\frac{c}{\sqrt[3]{2}}\right\rfloor.\]
It is clear that $xyz$ is the maximal number of cubes with sides of length $\sqrt[3]{2}$ that can be put into the parallelepiped with the sides parallel to those of the box. Hence the occupied volume is $2xyz$. We need $2xyz = \frac{40}{100}\,abc$, hence
\[\frac{a}{x}\cdot\frac{b}{y}\cdot\frac{c}{z} = 5. \qquad (1)\]
Note that $n = 1$ gives $\lfloor n/\sqrt[3]{2}\rfloor = 0$, so each side is at least $2$. The values of the ratio $n/\lfloor n/\sqrt[3]{2}\rfloor$ for $n = 2, 3, \dots, 9$ are
\[2,\ \tfrac{3}{2},\ \tfrac{4}{3},\ \tfrac{5}{3},\ \tfrac{3}{2},\ \tfrac{7}{5},\ \tfrac{4}{3},\ \tfrac{9}{7}.\]
By simple inspection we obtain two solutions of (1): $(a,b,c) = (2,3,5)$ and $(2,5,6)$. We now show that they are the only solutions.

We can assume $a \leq b \leq c$. Note that the definition of $x$ implies $x > \frac{a}{\sqrt[3]{2}} - 1$, hence
\[\frac{a}{x} < \frac{a\sqrt[3]{2}}{a - \sqrt[3]{2}},\]
and this bound decreases as $a$ grows; also $\frac{a}{x} \geq \sqrt[3]{2}$ always. In particular $\frac{n}{\lfloor n/\sqrt[3]{2}\rfloor} < \frac{3}{2}$ for every $n \geq 8$, so together with the table above: the ratio is at most $\frac{5}{3}$ for every $n \geq 3$, with equality only for $n = 5$, and at most $\frac{3}{2}$ for every $n \geq 6$, with equality only for $n = 6$.

If $a \geq 3$ then each of the three ratios in (1) is at most $\frac{5}{3}$, and their product is at most $\left(\frac{5}{3}\right)^3 = \frac{125}{27} < 5$, a contradiction. So necessarily $a = 2$, giving $\frac{a}{x} = 2$ and $\frac{b}{y}\cdot\frac{c}{z} = \frac{5}{2}$.

If $b = 2$ then $\frac{c}{z} = \frac{5}{4} < \sqrt[3]{2}$, which is impossible. If $b \geq 6$ then $\frac{b}{y} \leq \frac{3}{2}$, and we would need $\frac{c}{z} \geq \frac{5}{3}$, which is impossible for $c \geq b \geq 6$. So $b \in \{3, 4, 5\}$. For $b = 4$ we have $\frac{b}{y} = \frac{4}{3}$ and would need $\frac{c}{z} = \frac{15}{8} > \frac{5}{3}$, impossible. For $b = 3$ we have $\frac{b}{y} = \frac{3}{2}$ and need $\frac{c}{z} = \frac{5}{3}$, so $c = 5$. For $b = 5$ we have $\frac{b}{y} = \frac{5}{3}$ and need $\frac{c}{z} = \frac{3}{2}$ with $c \geq 5$, so $c = 6$. These yield exactly the two known solutions $(2,3,5)$ and $(2,5,6)$.
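As a sanity check, a brute-force search (a sketch; the bound of 60 on the sides is arbitrary but comfortably covers the region where the ratios can still multiply to 5):

```python
from math import floor

CBRT2 = 2 ** (1 / 3)
solutions = []
for a in range(2, 61):
    for b in range(a, 61):
        for c in range(b, 61):
            x, y, z = (floor(n / CBRT2) for n in (a, b, c))
            if a * b * c == 5 * x * y * z:  # i.e. 2xyz = 40% of abc
                solutions.append((a, b, c))
print(solutions)  # [(2, 3, 5), (2, 5, 6)]
```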
|
Consider the product of two simple functions, say \( f(x)=(x^2+1)(x^3-3x)\). An obvious guess for the derivative of \(f\) is the product of the derivatives of the constituent functions: \( (2x)(3x^2-3)=6x^3-6x\).
Is this correct? We can easily check, by rewriting \(f\) and doing the calculation in a way that is known to work. First, \( f(x)=x^5-3x^3+x^3-3x=x^5-2x^3-3x\), and then \( f'(x)=5x^4-6x^2-3\). Not even close! What went "wrong''? Well, nothing really, except the guess was wrong.
So the derivative of \(f(x)g(x)\) is NOT as simple as \(f'(x)g'(x)\). Surely there is some rule for such a situation? There is, and it is instructive to "discover'' it by trying to do the general calculation even without knowing the answer in advance.
\[\eqalign{ {d\over dx}(&f(x)g(x)) = \lim_{\Delta x \to0} {f(x+\Delta x)g(x+\Delta x) - f(x)g(x)\over \Delta x}\cr& =\lim_{\Delta x \to0} {f(x+\Delta x)g(x+\Delta x)-f(x+\Delta x)g(x) + f(x+\Delta x)g(x)- f(x)g(x)\over \Delta x}\cr & =\lim_{\Delta x \to0} {f(x+\Delta x)g(x+\Delta x)-f(x+\Delta x)g(x)\over \Delta x} + \lim_{\Delta x \to0} {f(x+\Delta x)g(x)- f(x)g(x)\over \Delta x}\cr & =\lim_{\Delta x \to0} f(x+\Delta x){ g(x+\Delta x)-g(x)\over \Delta x} + \lim_{\Delta x \to0} {f(x+\Delta x)- f(x)\over \Delta x}g(x)\cr & =f(x)g'(x) + f'(x)g(x)\cr }\]
A couple of items here need discussion. First, we used a standard trick, "add and subtract the same thing'', to transform what we had into a more useful form. After some rewriting, we realize that we have two limits that produce \(f'(x)\) and \(g'(x)\). Of course, \(f'(x)\) and \(g'(x)\) must actually exist for this to make sense. We also replaced \( \lim_{\Delta x\to0}f(x+\Delta x)\) with \(f(x)\)---why is this justified?
What we really need to know here is that \( \lim_{\Delta x\to 0}f(x+\Delta x)=f(x)\), or in the language of section
2.5, that \(f\) is continuous at \(x\). We already know that \(f'(x)\) exists (or the whole approach, writing the derivative of \(fg\) in terms of \(f'\) and \(g'\), doesn't make sense). This turns out to imply that \(f\) is continuous as well. Here's why:
\[ \eqalign{ \lim_{\Delta x\to 0} f(x+\Delta x) &= \lim_{\Delta x\to 0} (f(x+\Delta x) -f(x) + f(x))\cr& = \lim_{\Delta x\to 0} {f(x+\Delta x) -f(x)\over \Delta x}\Delta x + \lim_{\Delta x\to 0} f(x)\cr& =f'(x)\cdot 0 + f(x) = f(x)\cr }\]
To summarize: the product rule says that
\[{d\over dx}(f(x)g(x)) = f(x)g'(x) + f'(x)g(x). \]
Returning to the example we started with, let
\[ f(x)=(x^2+1)(x^3-3x).\]
Then
\[ f'(x)=(x^2+1)(3x^2-3)+(2x)(x^3-3x)=3x^4-3x^2+3x^2-3+2x^4-6x^2= 5x^4-6x^2-3,\]
as before. In this case it is probably simpler to multiply \(f(x)\) out first, then compute the derivative; here's an example for which we really need the product rule.
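For readers who like to double-check such computations symbolically, here is a short SymPy snippet (an added illustration, not part of the original text) confirming that the product rule agrees with differentiating the expanded polynomial:

```python
import sympy as sp

x = sp.symbols('x')
u, v = x**2 + 1, x**3 - 3*x

# Differentiate the expanded product directly...
direct = sp.expand(sp.diff(sp.expand(u * v), x))
# ...and via the product rule u*v' + u'*v.
product_rule = sp.expand(u * sp.diff(v, x) + sp.diff(u, x) * v)

print(direct)                  # 5*x**4 - 6*x**2 - 3
print(direct == product_rule)  # True
```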
Example \(\PageIndex{1}\)
Compute the derivative of \( f(x)=x^2\sqrt{625-x^2}\).
Solution

This function has two factors. We have already computed

\[ {d\over dx}\sqrt{625-x^2}={-x\over\sqrt{625-x^2}}.\nonumber\]
Now
\[\begin{align*} f'(x)&=x^2{-x\over\sqrt{625-x^2}}+2x\sqrt{625-x^2} \\[4pt]&= {-x^3+2x(625-x^2)\over \sqrt{625-x^2}} \\[4pt] &= {-3x^3+1250x\over \sqrt{625-x^2}}. \end{align*}\]
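A quick symbolic check of this computation (an added illustration, assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 * sp.sqrt(625 - x**2)

fprime = sp.diff(f, x)
expected = (-3*x**3 + 1250*x) / sp.sqrt(625 - x**2)

print(sp.simplify(fprime - expected))  # 0
```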
|
Updated 07.11
We can choose a model in which to discuss the problem, so let us choose:

Model: Newtonian mechanics/Newtonian gravity, with the Universe filled with uniformly dense matter interacting only gravitationally (in cosmology this is called "dust matter"), and with all this matter at rest at the initial time of our spaceship journey.
Hence my spaceship should start accelerating toward ×. By choosing the sphere large enough, I should be able to make it accelerate arbitrarily fast, and by choosing the location of × I can make it accelerate in any direction.
Absolutely!
Of course this doesn't work, but why?
It does work. If we assume that initially the spaceship was at rest together with the whole universe, it will reach the point × in the time needed for the ship to fall into a point mass equal to the mass of the pink sphere.
The problem is that by that time all of the pink sphere has also fallen toward that same point, as have all the other colored spheres and the rest of the universe. If our astronaut checks her distance to the point × before the spaceship falls into it, she will notice that this distance has decreased; but at the same time, if she checks her surroundings, she will notice that the spaceship is surrounded by precisely the same matter particles as when the journey started, only now closer to each other and to the spaceship. This contraction of distances is simply a Newtonian version of the Big Crunch event.
If the universe is filled with matter interacting only gravitationally and we assume that the density of matter stays uniform throughout the universe, then the only conclusion is that such a universe is not static. It has either a (Newtonian version of the) Big Bang in its past or a Big Crunch in its future (or, in our model, both, since we chose the initial moment as the turning point from expansion to contraction).
It may seem that the whole Universe falling toward our chosen point × is an absurdity, since we have chosen this point arbitrarily. But there is no paradox: the acceleration of all matter toward this point is due to the fact that in our setup there is no "absolute space", no set of outside stationary inertial observers which could give us absolute accelerations. Instead we can only choose a reference point × (or rather specify an observer located at this point and at rest with respect to the surrounding matter) and calculate relative accelerations toward this point.
Recall that the first principle of Newtonian mechanics states that

every particle continues in its state of rest or uniform motion in a straight line unless it is acted upon by some exterior force. For an isolated system, for example a collection of gravitating objects of finite total mass, we could (at least in principle) place an observer at rest so far away that it could be considered an inertial object. This would allow us to define a reference frame with respect to which we would measure accelerations. But in our Newtonian cosmology matter fills the whole Universe; there is no observer on which gravity is not acting, so there is no set of reference frames defined by observers "at infinity", only observers inside the matter concentrations that are affected by the gravitational forces.
While there are no absolute accelerations, the relative positions ($\mathbf{d}_{AB}(t)= \mathbf{x}_A(t)-\mathbf{x}_B(t)$ between objects $A$ and $B$ comoving with the matter of the universe) do have a meaning independent of the choice of reference point. These relative positions, relative velocities ($\dot{\mathbf{d}}_{AB}$), relative accelerations, etc., constitute the set of unambiguously defined quantities measurable within our universe.
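As a toy illustration of this point (my own sketch, not from the original post), one can check numerically that in a uniform dust universe the relative acceleration between two comoving particles is independent of the chosen reference point ×:

```python
# In uniform dust, the Newtonian field about a chosen center x0 is
# g(x) = -(4*pi*G*rho/3) * (x - x0); here the prefactor is set to 1.
import numpy as np

def g(x, x0):
    """Acceleration at position x when the field is referred to center x0."""
    return -(x - x0)

xA = np.array([1.0, 0.0, 0.0])
xB = np.array([0.0, 2.0, 0.0])

for x0 in [np.zeros(3), np.array([5.0, -3.0, 7.0])]:
    print(g(xA, x0) - g(xB, x0))  # same relative acceleration for every x0
```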
then my intuition tells me that I can just choose a sufficiently static universe.
This intuition is wrong: if there is a gravitational force that would accelerate your spaceship toward ×, then it would also act on nearby matter (call it dust particles or planets or stars), producing the same acceleration, so all of the universe would be falling toward ×.
Note on Newtonian cosmology: it may seem that the Newtonian theory of gravitation is ill suited to handle homogeneous, spatially infinite distributions of matter. But one can try to separate the physics of the situation from the deficiencies of a particular formalism and possibly overcome them. As motivation, we could note that over large, cosmological distances our universe is to a high degree of accuracy spatially flat, and the velocities of most massive objects relative to each other and to the frame of the CMB are very small compared with the speed of light, meaning that a Newtonian approximation may be appropriate. While we do know that general relativity provides a better description of gravitation, Newtonian gravity is computationally and conceptually much simpler. This suggests that it is worthwhile to "fix" whatever problems one encounters while attempting to formalize cosmological solutions of Newtonian gravity.
The most natural approach is to "geometrize" Newtonian gravity and, instead of treating it as a "force", consider it part of the geometry, a dynamical connection representing gravity and inertia. This is done within the framework of Newton–Cartan theory.
As a more detailed reference, with an emphasis on cosmology, see this paper (knowledge of general relativity is required):
Newton–Cartan theory underscores conceptual similarities between Newtonian gravity and general relativity, with the Galilei group replacing the Lorentz group of GR. The general approach is coordinate-free and closely related to the machinery of general relativity, but a specific choice of local Galilei coordinates reproduces the usual equation for the acceleration ($\mathop{\mathrm{div}} \mathbf{g} = - 4\pi \rho$), with the gravitational acceleration now part of the Newtonian connection. Homogeneous and isotropic cosmological solutions are straightforward lifts of FLRW cosmologies.

While the equations are the same, we may already answer some conceptual questions.

Since the gravitational acceleration is part of the connection, there is no reason to expect it to be an "absolute" object; there are gauge transformations that alter it. We can have multiple charts on which we define the physics, with the usual transition maps between them.
We can have a
closed FRW cosmology: the "space" does not have to be a Euclidean space, it could be the torus $T^3$ (the field equations require only that the space is locally flat). Since the spatial volume of a closed universe varies, and tends to zero as the universe approaches the Big Crunch, not just the matter but space itself collapses during the Big Crunch (to answer one of the comments).
It is quite simple to include the cosmological constant / dark energy thus making the models more realistic.
Note on the answer by user105620: one can formulate a regularization procedure by introducing a window function $W(\epsilon,x_0)$ that makes the potential well behaved. This provides us with another way to "fix" the problems of our cosmological model. The acceleration of our spaceship computed with this regularization is indeed dependent on the choice of $x_0$ in the limit $\epsilon\to 0$, which is a consequence of the same freedom we have in choosing the reference point ×. But one should not stop there. Divergences requiring the use of regulators, and ambiguities remaining after regularization, are quite normal features in developing physical models. The next step is identifying the physically meaningful quantities and checking that they are independent of regulator artifacts. In our case neither the potential $\Phi$ nor the gravitational acceleration $\mathbf{g}$ is directly observable in this model. Relative positions, relative velocities and relative accelerations are observable, and those turn out to be independent of the regulator parameter $x_0$.
|
Lie algebras over rings (Lie rings) are important in group theory. For instance, to every group $G$ one can associate a Lie ring
$$L(G)=\bigoplus _{i=1}^\infty \gamma _i(G)/\gamma _{i+1}(G),$$
where $\gamma _i(G)$ is the $i$-th term of the lower central series of $G$. The addition is defined by the additive structure of $\gamma _i(G)/\gamma _{i+1}(G)$, and the Lie product is defined on homogeneous elements by $[x\gamma _{i+1}(G),y\gamma _{j+1}(G)]=[x,y]\gamma _{i+j+1}(G)$, and then extended to $L(G)$ by linearity.
There are several other ways of constructing Lie rings associated to groups, and there are numerous applications of these. One of the most celebrated is the solution of the Restricted Burnside Problem by Zelmanov; see the book M. R. Vaughan-Lee, "The Restricted Burnside Problem". Other books related to these rings include Kostrikin, "Around Burnside"; Huppert and Blackburn, "Finite Groups II"; and Dixon, du Sautoy, Mann and Segal, "Analytic pro-$p$ Groups".
|
It would depend on whether damping effects are taken into account or not.
Invoking Newton's 2nd Law of motion, a differential equation for the motion of a damped harmonic oscillator can be written (including an external, sinusoidal driving force term):
$m\frac{d^2x}{dt^2}+2m\xi\omega_0\frac{dx}{dt}+m\omega_0^2x=F_0\sin\left(\omega t\right)$
where $m$ is the inertial mass of the system, $\omega_0$ is its characteristic frequency, and $\xi$ is a dimensionless damping factor. And, last but not least, $F_0$ is the amplitude of the driving force and $\omega$ its frequency.
The stationary ($t\rightarrow\infty$) solution takes the shape $x\left(t\right)=A_0\sin\left(\omega t-\varphi_0\right)$, where $A_0$ is an amplitude factor (whose particular expression in terms of the particular parameters is not relevant to this question) and $\varphi_0$ is phase lag, which is this phase difference you are asking about.
This phase difference can be calculated to be $\varphi_0=\left|\arctan\left(\xi\dfrac{2\omega\omega_0}{\omega^2-\omega_0^2}\right)\right|$. It
is a phase lag, so with the (implicitly) chosen phase convention, it has to be positive.
If there was no damping whatsoever in the system, $\xi$ would be zero, and you would be right: $\varphi_0=0$. The stationary motion of the oscillator would be in phase with the driving force (regardless of which is the relationship between $\omega$ and $\omega_0$).
But in an undamped resonant situation the amplitude $A_0$ diverges, which means that the stationary solution is never reached (starting from reasonable, finite initial conditions for the system). Also, in a physical, down-to-earth situation, the system would eventually break down somewhere, somehow, since energy is being introduced into the system with perfect efficiency (that is what 'resonance' is all about) and without any means to dissipate it. Somewhere, sooner or later, something would go boom or crash. That is how nasty undamped resonances are.
On the other hand, for a non-zero damping, in the resonant case $\omega=\omega_0$, the argument of the $\arctan$ function diverges, so the phase difference turns out in this case to be $\frac{\pi}{2}$.
To sum up, the $\frac{\pi}{2}$ phase appears as an effect of damping in the system, and just a little bit of it is enough to dephase the oscillatory response of the system. As it happens, every realistic, down-to-earth harmonic system has some kind of damping in its dynamics. Even if the damping is so small that the induced dephasing in an out-of-resonance situation is negligible for every purpose of the model, damping has to be taken into account in resonant and nearly resonant motion; otherwise the model yields highly unphysical results.
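As a small numerical illustration of the argument above (my own sketch; parameter values are assumed, and atan2 is used to track the branch of the arctangent so the lag runs continuously from 0 to $\pi$):

```python
import numpy as np

w0, xi = 1.0, 0.05  # natural frequency and a small damping factor

for w in [0.5 * w0, w0, 2.0 * w0]:
    phi0 = np.arctan2(2 * xi * w * w0, w0**2 - w**2)
    print(f"w/w0 = {w/w0:.1f}: phase lag = {phi0:.3f} rad")
# Far below resonance the lag is ~0, at w = w0 it is pi/2 ~ 1.571,
# and far above resonance it approaches pi.
```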
|
DISCLAIMER: I've edited the question repeatedly for clarity and to target the most relevant answer.
I have the following general problem $$ \min \|h_1\cdot h_2\|^2 $$ such that $$\|g_1\wedge g_2-h_1\wedge h_2\|^2 = 0,$$ where $h_i,g_i\in \mathbb{R}^n\wedge \mathbb{R}^n$ are skew-symmetric matrices, $A\cdot B$ denotes matrix multiplication of matrices $A$ and $B$, and $\lambda\in \mathbb{R}$. $A\wedge B$ is the geometric product of $A$ and $B$.
To make the problem better behaved (avoiding, for instance, the infeasibility trap at $h_1=0$), I've reformulated it with the following constraints. $$\|g_1\wedge g_2-\lambda h_1\wedge h_2\|^2 = 0,\\ \lambda\ge 0,\\ \|h_1\|^2=1,\\ \|h_2\|^2=1$$
My test case is $n=6$ and $g_1=g_2=e_{12}+e_{34}+e_{56}$ (a worst case scenario). Obviously, the problem is degenerate due to symmetry as well as unitarily invariant on $\mathbb{R}^6$. I've introduced $\lambda$ and constraints 2 and 3 in order to exclude some slack in the solution. My expected outcome is $h_1 =\frac{1}{\sqrt{12}}\left(2e_{12}+e_{34}+e_{56}\right)$ and $h_2=\frac{1}{\sqrt{2}}\left(e_{34}+e_{56}\right)$ with a minimum of 1/12 and $\lambda=\sqrt{24}$.
I've used IPOPT to implement the minimization, but it fails to find this solution. Instead it finds a solution that is slightly worse. I'm a little perplexed, as all the functions are convex in each variable, but I'm no expert in NLP.
EDIT: As per @Geoff Oxberry, the problem is not convex. As a matter of fact, the initial guess of $h_1=g_1$ and $h_2=g_2$ is a maximum of the objective function under the constraints.
So how do I improve my initial guess or change constraints to improve solutions?
|
Suppose I define the mapping torus $M_f$ in the usual way by identifying $(x, 0)$ and $(f(x), 1)$. If I have a homeomorphism $f: X \rightarrow X$ and another homeomorphism $f': X \rightarrow X$ that are homotopic, are the spaces $M_f$ and $M_{f'}$ necessarily homeomorphic?
Asking for homotopic homeomorphisms isn't enough: if one takes $X=[0,1]$ and $f=\mathrm{id}$ and $g=1-f$, then $f$ and $g$ are homotopic homeomorphisms, but the mapping torus of $f$ is homeomorphic to $S^1\times[0,1]$ while the mapping torus of $g$ is homeomorphic to the Moebius strip. These spaces aren't homeomorphic.
Suppose $X$ is locally compact Hausdorff and $Y$ is Hausdorff; then a homotopy $$h:X\times [0,1]\to Y$$ is the same as a continuous path $$\widehat{h}:[0,1]\to C(X,Y).$$ (This is a special case of the exponential law for spaces, where the set $C(X,Y)$ of continuous maps $X\to Y$ is equipped with the compact-open topology.) By a result of Arens, if $X$ is locally compact and locally connected (or, more trivially, if $X$ is compact), the subset of homeomorphisms $\mathrm{Aut}(X)\subset C(X,X)$ forms a topological group (composition is easily seen to be continuous, but without such hypotheses the operation $\mathrm{inv}:f\mapsto f^{-1}$ may not be).
From now on $X$ is either compact or locally compact and locally connected, so that the map $\mathrm{inv}:f\mapsto f^{-1}$ is continuous, and a homotopy $H$ through homeomorphisms corresponds to a continuous path in $\mathrm{Aut}(X)$. Suppose $H$ is a homotopy through homeomorphisms from $f$ to $g$. The map $$C:X\times [0,1]\to X\times[0,1],\quad (x,t)\mapsto(H_t(x),t)$$ is a homeomorphism with inverse $$C':X\times [0,1]\to X\times[0,1],\quad (x,t)\mapsto(\mathrm{inv}(H_t)(x),t),$$ and then the map $D=C\circ(f^{-1}\times\mathrm{id}_{[0,1]})$ is a homeomorphism of $X\times[0,1]$ which satisfies, for all $x\in X$, $D(x,0)=(x,0)$ and $D(f(x),1)=(g(x),1)$. This homeomorphism induces a homeomorphism between the mapping tori.
|
Proof for the logical implication: $\exists_x A(x) \rightarrow \forall_x B(x) \implies \forall_x [A(x) \rightarrow B(x)]$
Can you please see whether the proof is fine or not?
We know that $ {\forall_x B(x) \implies \exists_x B(x)} \\ [\exists_x A(x) \rightarrow \forall_x B(x)] \implies [\exists_x A(x) \rightarrow \exists_x B(x)] \implies \exists_x \neg A(x) \lor \exists_x B(x) \implies \\\exists_x [A(x) \rightarrow B(x)]$
Please correct me if I am wrong. I have a doubt about the step in bold (can I use it or not? what happens when we have $\forall_x$ in front of $A(x)$?). I am doubtful whether it is valid or not ...
$\forall_x A(x) \rightarrow \forall_x B(x) \implies [\forall_x A(x) \rightarrow \exists_xA(x)\rightarrow \forall_x B(x)] $
Can I say $\forall_x A(x) \rightarrow \forall_x B(x) \implies [ \exists_xA(x)\rightarrow \forall_x B(x)] \implies \forall_x [A(x) \rightarrow B(x)]$
|
One thing you can do with tetrads is express quantities everywhere in terms of what "natural" observers would measure at each point in spacetime.
To be more concrete, consider a spacetime foliated by slices of constant timelike coordinate. At each point, one can imagine the "normal observer" whose 4-velocity is the unit timelike normal to the constant-time slice (that is, the 4-velocity components in the coordinate basis are $u_\mu = n_\mu \equiv -\alpha \delta^0_\mu \equiv -(-g^{00})^{-1/2} \delta^0_\mu$, in the language of the ADM formalism). This observer has the nice property that its constant-time surface coincides with the global constant-time surface; things occurring at the same time coordinate appear simultaneous for this observer.
If the stress-energy tensor has components $T^{\mu\nu}$ in the coordinate basis, we might think of $T^{00}$ as the energy density, by analogy with special relativity. However, the normal observer would see an energy density of $n_\mu n_\nu T^{\mu\nu} = \alpha^2 T^{00}$. If instead we expressed stress-energy components in the normal observer's locally flat coordinates (denoted with primes), we would simply have energy density $T^{0'0'}$, with the lapse already accounted for.
This can give a nice physical interpretation to individual components of tensors: $T^{0'0'}$ is the energy density seen by a concrete, sensible observer, while $T^{00}$ is hard to interpret without all the other components of the tensor. With this particular tetrad, one also can carry over any complicated models of local phenomena that have been worked out in special relativity right over to GR.[1]

The cost comes in relating quantities at different points, as when moving along trajectories or integrating over regions of spacetime. In many such cases, the fact that the tetrad is evolving with respect to the natural coordinate basis as you move around the spacetime makes these sorts of operations worse with tetrads.[2]

[1] In fact, this happens in my field, where we have more and better approximations to nonlinear behaviors in fluid dynamics in Minkowski spacetimes than in general spacetimes.

[2] Also, I wouldn't really describe tetrads as "coordinate free" (though a lot of people do). A tetrad is just a choice of bases for the tangent bundle that isn't the one basis you naturally get from the coordinate system at hand. But replacing the coordinate-induced bases isn't the same as getting rid of the coordinates themselves. Nor do you avoid working with components in a specific basis when dealing with tetrads (which is what some people mean when they say "coordinate free").
|
LaTeX:Symbols
This article will provide a short list of commonly used LaTeX symbols.
Finding Other Symbols
Here are some external resources for finding less commonly used symbols:
Detexify is an app which allows you to draw the symbol you'd like and shows you the $\LaTeX$ code for it! MathJax (what allows us to use $\LaTeX$ on the web) maintains a list of supported commands. The Comprehensive LaTeX Symbol List is another useful reference.

Relations

Commands for relation symbols include: \le, \ge, \neq, \sim, \ll, \gg, \doteq, \simeq, \subset, \supset, \approx, \asymp, \subseteq, \supseteq, \cong, \smile, \sqsubset, \sqsupset, \equiv, \frown, \sqsubseteq, \sqsupseteq, \propto, \bowtie, \in, \ni, \prec, \succ, \vdash, \dashv, \preceq, \succeq, \models, \perp, \parallel, \mid, \bumpeq.

Negations of many of these relations can be formed by just putting \not before the symbol, or by slipping an n between the \ and the word. Here are a couple of examples, plus other negations; it works for many of the others as well: \nmid, \nleq, \ngeq, \nsim, \ncong, \nparallel, \not<, \not>, \not= (or \neq), \not\le, \not\ge, \not\sim, \not\approx, \not\cong, \not\equiv, \not\parallel, \nless, \ngtr, \lneq, \gneq, \lnsim, \lneqq, \gneqq.

To use other relations not listed here, such as =, >, and <, in LaTeX, you use the symbols on your keyboard; they are not available as commands.
Greek Letters
Lowercase: \alpha, \beta, \gamma, \delta, \epsilon, \varepsilon, \zeta, \eta, \theta, \vartheta, \iota, \kappa, \lambda, \mu, \nu, \xi, \pi, \varpi, \rho, \varrho, \sigma, \varsigma, \tau, \upsilon, \phi, \varphi, \chi, \psi, \omega.

Uppercase: \Gamma, \Delta, \Theta, \Lambda, \Xi, \Pi, \Sigma, \Upsilon, \Phi, \Psi, \Omega.

Arrows

\gets, \to, \leftarrow, \Leftarrow, \rightarrow, \Rightarrow, \leftrightarrow, \Leftrightarrow, \mapsto, \hookleftarrow, \leftharpoonup, \leftharpoondown, \rightleftharpoons, \longleftarrow, \Longleftarrow, \longrightarrow, \Longrightarrow, \longleftrightarrow, \Longleftrightarrow, \longmapsto, \hookrightarrow, \rightharpoonup, \rightharpoondown, \leadsto, \uparrow, \Uparrow, \downarrow, \Downarrow, \updownarrow, \Updownarrow, \nearrow, \searrow, \swarrow, \nwarrow.
(For those of you who hate typing long strings of letters, \iff and \implies can be used in place of \Longleftrightarrow and \Longrightarrow respectively.)
Dots
\cdot, \vdots, \dots, \ddots, \cdots, \iddots

Accents

\hat{x}, \check{x}, \dot{x}, \breve{x}, \acute{x}, \ddot{x}, \grave{x}, \tilde{x}, \mathring{x}, \bar{x}, \vec{x}

When applying accents to i and j, you can use \imath and \jmath to keep the dots from interfering with the accents: \vec{\jmath}, \tilde{\imath}.

\tilde and \hat have wide versions that allow you to accent an expression: \widehat{7+x}, \widetilde{abc}.

Command Symbols

Some symbols are used in commands so they need to be treated in a special way: \textdollar (or \$), \&, \%, \#, \_, \{, \}, \backslash.

(Warning: Using $ for \textdollar can cause problems. This is a bug as far as we know. Depending on the version of $\LaTeX$ this is not always a problem.)
European Language Symbols
{\oe}, {\ae}, {\o}, {\OE}, {\AE}, {\AA}, {\O}, {\l}, {\ss}, !`, {\L}, {\SS}

Bracketing Symbols

In mathematics, sometimes we need to enclose expressions in brackets or braces or parentheses. Some of these work just as you'd imagine in LaTeX; type ( and ) for parentheses, [ and ] for brackets, and | and | for absolute value. However, other symbols have special commands: \{, \}, \|, \backslash, \lfloor, \rfloor, \lceil, \rceil, \langle, \rangle.
You might notice that if you use any of these to typeset an expression that is vertically large, like
(\frac{a}{x} )^2
the parentheses don't come out the right size:
If we put \left and \right before the relevant parentheses, we get a prettier expression:
\left(\frac{a}{x} \right)^2
gives
And with system of equations:
\left\{\begin{array}{l}x+y=3\\2x+y=5\end{array}\right.
Gives
See that there's a dot after
\right. You must put that dot or the code won't work.
In addition to the
\left and
\right commands, when doing floor or ceiling functions with fractions, using
\left\lceil\frac{x}{y}\right\rceil
and
\left\lfloor\frac{x}{y}\right\rfloor
give both at the proper size.
And, if you type this
\underbrace{a_0+a_1+a_2+\cdots+a_n}_{x}
Gives
Or
\overbrace{a_0+a_1+a_2+\cdots+a_n}^{x}
Gives
\left and \right can also be used to resize the following symbols:
Symbol Command Symbol Command Symbol Command \uparrow \downarrow \updownarrow \Uparrow \Downarrow \Updownarrow Multi-Size Symbols
Some symbols render differently in inline math mode and in display mode. Display mode occurs when you use \[...\] or $$...$$, or environments like \begin{equation}...\end{equation}, \begin{align}...\end{align}. Read more in the commands section of the guide about how symbols which take arguments above and below the symbols, such as a summation symbol, behave in the two modes.
In each of the following, the two images show the symbol in display mode, then in inline mode.
\sum, \int, \oint, \prod, \coprod, \bigcap, \bigcup, \bigsqcup, \bigvee, \bigwedge, \bigodot, \bigotimes, \bigoplus, \biguplus.
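A minimal compilable document (added here as a sketch) collecting several of the constructions discussed above:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Resized brackets and a system of equations:
\[ \left( \frac{a}{x} \right)^2, \qquad
   \left\lceil \frac{x}{y} \right\rceil, \qquad
   \left\{ \begin{array}{l} x + y = 3 \\ 2x + y = 5 \end{array} \right. \]
% Relations, negations, Greek letters and a multi-size symbol:
\[ x \le y, \quad x \not\le y, \quad \alpha + \beta = \Gamma, \quad
   \sum_{n=1}^{\infty} \frac{1}{n^2} \]
\end{document}
```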
|
I have been asking quite a few questions of this nature lately; maybe I'm starting to realise math notation isn't as uniform as I initially thought it would be...
Question: Does this notation$$\frac{\partial(y_1,\dots,y_m)}{\partial(x_1,\dots,x_n)}$$refer to the Jacobian
matrix$$ J = \begin{bmatrix} \dfrac{\partial y_1}{\partial x_1} & \cdots & \dfrac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial y_m}{\partial x_1} & \cdots & \dfrac{\partial y_m}{\partial x_n} \end{bmatrix},$$or the Jacobian determinant $\det J$?
I am aware of the ambiguity of "Jacobian" being used to refer to either the determinant or the matrix itself, is this a similar case? It's really a bit annoying because when I see things like $$ \left| \frac{\partial(y_1,\dots,y_m)}{\partial(x_1,\dots,x_n)} \right| $$ I don't know if it means the absolute value of the Jacobian determinant, or the determinant of the Jacobian matrix.
|
A driven oscillator can be described by the following differential equation $$\ddot{x} + 2\beta \dot {x} + \omega_0 ^2x = A \cos(\omega t)$$ where $2\beta$ is the friction coefficient (per unit mass), $\omega_0$ is the frequency of the simple harmonic oscillator, and $A \cos(\omega t)$ is the driving force divided by the mass of the oscillating object.
The particular solution $x_p$ of the equation is
\begin{align} x_p &= \frac{A}{\sqrt{(\omega_0 ^2 - \omega ^2)^2 + 4 \omega ^2 \beta ^2}}\cos(\omega t - \delta) \\ \tan \delta &= \frac{2\omega \beta}{\omega_0 ^2 - \omega ^2} \end{align}
Now, Classical Mechanics of Particles and Systems (Stephen T. Thornton, Jerry B. Marion) finds the amplitude's maximum by \begin{align} \left . \frac{\mathrm{d}}{\mathrm{d}\omega}\frac{A}{\sqrt{(\omega_0 ^2 - \omega ^2)^2 + 4 \omega ^2 \beta ^2}} \right | _{\omega = \omega_R} = 0 \\ \therefore \omega_R = \sqrt{\omega_0^2 - 2\beta ^2} \qquad (\omega_0 ^2 -2\beta^2 >0) \end{align}
and defines Q factor in driven oscillator by $$Q \equiv \frac{\omega_R}{2\beta}$$
Here I have some questions about calculating the Q factor of a lightly damped driven oscillator. $$Q = \frac{\omega_R}{2\beta} \simeq \frac{\omega_0}{\Delta \omega}$$ Here $\Delta \omega$ is the width of the amplitude curve at $\frac{1}{\sqrt{2}}$ times the amplitude maximum.
I searched for the Q factor on Google, but there is much confusion about the meaning of the condition "lightly damped". One source says it means $\omega_0 \gg 2\beta$, and another says $\omega_0 \gg \beta$. Which is right?
The derivations I found on Google proceed very abruptly. They assume that $\omega \simeq \omega_0$ and replace part of the amplitude's denominator by $$(\omega_0 ^2 - \omega ^2) = (\omega_0 + \omega)(\omega_0 - \omega) \simeq 2\omega_0(\omega_0 - \omega)$$ I don't understand this approximation. Why is $(\omega_0 + \omega) \simeq 2\omega_0$ allowed while $(\omega_0 - \omega) \simeq 0$ is not? Also, how can we assume $\omega \simeq \omega_0$?
I want to know how to derive the $Q \simeq \frac{\omega_0}{\Delta \omega}$
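A quick numerical check of the relation in question (my own addition; the parameter values are assumed, with $\beta \ll \omega_0$):

```python
# Measure the width of the amplitude curve at (max amplitude)/sqrt(2)
# and compare omega_0/Delta_omega with Q = omega_0/(2*beta).
import numpy as np

A, w0, beta = 1.0, 1.0, 0.02

w = np.linspace(0.8, 1.2, 200001)
amp = A / np.sqrt((w0**2 - w**2) ** 2 + 4 * w**2 * beta**2)

band = w[amp >= amp.max() / np.sqrt(2)]
delta_w = band[-1] - band[0]

print(delta_w, 2 * beta)               # ~0.04 vs 0.04
print(w0 / delta_w, w0 / (2 * beta))   # both ~25
```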
|
The answer to your confusion really is that you can't see holes just as "absences" of electrons. You are right: if the electrons move up, then the holes would have to move up as well (at least if we stick to this picture of electrons being particles).
First of all, this answer might be interesting to look at.
To give a brief summary: To understand the phenomenon, you have to use the formalism of quantum mechanics: You will then find out that
electrons in a solid body can have different states, with different momentum $k$ and different energy $E$ (and spin, for the sake of completeness, but this is not relevant for this answer);

in many-particle systems (which your semiconductor is), a state can be occupied by at most one electron.

You will have a certain relation between the energy $E$ of a state and its momentum $k$, which is called the dispersion relation. You can see different lines in the plot; those are called "bands". Imagine you had only one electron in your system; it would occupy one state in your band. You'd call this electron an "electron". Now imagine your band is completely filled with electrons (remember that only one electron per state is allowed), except for one empty state. This is what is referred to as a "hole". Because of thermodynamic arguments, empty states are much more probable in the upper region of a band, around the maximum peak of the dispersion relation. Electrons in an almost empty band are, because of the same argument, usually located in the lower region, around the minimum of the dispersion relation. By "located" I hereby mean their $k$-value.

The last piece of the puzzle: your system, with your electrons in their respective states, evolves (at least for mean values) according to classical equations of motion (this is called the semiclassical model of electrons): the force changes the momentum, and the velocity is the derivative of energy with respect to momentum. (This is an analogue of the canonical equations of motion in the Hamiltonian formalism.)
\begin{align}v = \frac{1}{\hbar}\frac{\partial E}{\partial k}, \qquad \hbar \dot{k} = F\end{align}
Now we can calculate how a force affects the change of the velocity:
\begin{align}\dot{v} = \frac{1}{\hbar}\frac{d}{dt} \frac{\partial E}{\partial k} = \frac{1}{\hbar}\frac{\partial^2 E}{\partial k^2} \dot{k} = \frac{1}{\hbar^2}\frac{\partial^2 E}{\partial k^2} F \end{align}
So you see that the curvature of the dispersion relation acts as the proportionality between force and acceleration (that's why $\hbar^2 \left(\partial^2 E/\partial k^2\right)^{-1}$ is called the "effective mass"). For holes, which usually are located at the maximum, this curvature will usually be negative, while for electrons (located at the minimum) it is positive. This is the explanation for holes moving in the direction that a positive charge would move.
To answer your question: the holes will accumulate at the bottom, as your graphic indicates, because they do not behave as mere "absences of electrons".
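As a toy numerical illustration (my own sketch, using an assumed one-dimensional tight-binding band, not something from the original answer), the curvature indeed changes sign between the bottom and the top of a band:

```python
# For E(k) = -2*t*cos(k), the curvature d2E/dk2 = 2*t*cos(k) is positive
# at the band bottom (k = 0) and negative at the band top (k = pi).
import numpy as np

t = 1.0  # hopping energy in assumed units

def curvature(k):
    return 2 * t * np.cos(k)

print("band bottom:", curvature(0.0))    # +2.0 -> positive effective mass
print("band top:   ", curvature(np.pi))  # -2.0 -> negative effective mass
```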
|
Is there a general method to work out all irreducible complex representations of a group?

Describe all the irreducible complex representations of the group $S_4$, the symmetric group on four letters.
In some generality, the irreducible complex representations of $S_n$ are naturally indexed by the partitions of $n$. The irreducible representation associated to a partition $\lambda$ is called the Specht module $S^{\lambda}$. It has a basis indexed by the standard Young tableaux of shape $\lambda$.
In principle, the Specht modules of $S_n$ can be described explicitly. I think the case of $n=4$ should be reasonably straightforward. I would suggest that you look in "Young Tableaux" by Fulton.
Just in case Peter Crooks' excellent answer is more general than you really want, here is a specific argument for $S_4$.
Presumably you can find the representations of $S_3$, or you wouldn't be attempting $S_4$.
Since $S_3$ is a quotient of $S_4$, you have the representations of degrees $1,1$ and $2$ of $S_3$.
Then you have the $3$-dimensional representation $\rho$ which is a constituent of the standard permutation representation, so that is easily calculated. (The permutation representation of any 2-transitive permutation group decomposes as the trivial rep plus an irreducible).
Finally, you get $\rho \otimes -1$, which equals $\rho$ on $A_4$ and $-\rho$ on $S_4 \setminus A_4$. Since $1^2+1^2+2^2+3^2+3^2=24$ that's the lot!
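As a cross-check (my own addition, not part of either answer), the dimensions of all five irreducibles can be computed from the partitions of 4 via the hook length formula:

```python
# dim S^lambda = n! / (product of hook lengths of the Young diagram).
from math import factorial, prod

def hook_dim(shape):
    n = sum(shape)
    hooks = []
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1                              # cells to the right
            leg = sum(1 for r in shape[i + 1:] if r > j)   # cells below
            hooks.append(arm + leg + 1)
    return factorial(n) // prod(hooks)

partitions_of_4 = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
dims = [hook_dim(p) for p in partitions_of_4]
print(dims)                      # [1, 3, 2, 3, 1]
print(sum(d * d for d in dims))  # 24 = |S_4|
```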
|
Update1
I discovered that the boundary of a series of ellipses consists of the following three parts:

Part (I): the first ellipse's effective black segment;

Part (II): the effective envelope points of the ellipses from the second to the second-to-last;

Part (III): the last ellipse's effective red segment.
Given that there are $n$ ellipses $E_1,E_2,\cdots,E_n$ on the plane.
For the first ellipse $E_1$, the black segment that lies outside the ellipses $E_i\ (i=2,\cdots,n)$ is effective;

for an envelope point on the ellipse $E_i\ (i=2,\cdots,n-1)$, the envelope point $\theta_i$ that lies outside the ellipses $E_j\ (j=1,\cdots,i-1,i+1,\cdots,n)$ is effective;

for the last ellipse $E_n$, the red segment that lies outside the ellipses $E_i\ (i=1,\cdots,n-1)$ is effective.

Data
Each ellipse has the following parametric formula
$\begin{cases} x=a \sin\theta+b \cos\theta +c \\ y=d \sin\theta +e \cos\theta +f \\ \end{cases}$
where, $\theta \in [0,2\pi]$
```mathematica
matThetaList = {{{{-0, -5, 0}, {-5.2203, 0, 1.7945}}, {2.4798, 5.7546}},
  {{{-0.8583, -4.9384, 0.1765}, {-5.4189, 0.7822, 2.3088}}, {3.1275, 6.2599}},
  {{{-1.8203, -4.7553, 0.2473}, {-5.6022, 1.5451, 3.0486}}, {0.7316, 3.3481}},
  {{{-2.9427, -4.4550, 0.3147}, {-5.7755, 2.2700, 4.0578}}, {1.1944, 3.4426}}};
```
here, the variable matThetaList stores the ellipse $E_i$'s coefficients $\{\{a_i,b_i,c_i\},\{d_i,e_i,f_i\}\}$ and envelope points $\theta_i^1,\theta_i^2$.
Namely,
matThetaList=$\{ \\ \{\{\{a_1,b_1,c_1\},\{d_1,e_1,f_1\}\},\{\theta_1^1,\theta_1^2\}\},\\ \{\{\{a_2,b_2,c_2\},\{d_2,e_2,f_2\}\},\{\theta_2^1,\theta_2^2\}\},\\ \cdots \\ \}$
I have implemented this in the answer below. However, because the function FindBoundary[] is called many times, the performance of my function is very slow.
So I would like to know:
Is there a better/more efficient algorithm for finding the boundary of the ellipses $E_1,\cdots,E_n$?

Update2
For the general case (all the sections are complete ellipses), RunnyKine's solution works well and is very fast. However, when a section is a partial ellipse, that solution fails. Here is a partial-ellipse case:
```mathematica
(* data for ellipse segments *)
(* About ellipsePoints[], please see my answer below *)
ellipseMat = {{{0., -5., 0.}, {-5.22027, 0., 0.294118}},
   {{-0.418837, -4.98459, 0.228686}, {-5.32183, 0.392295, -0.033668}},
   {{-0.858274, -4.93844, 0.325822}, {-5.41893, 0.782172, -0.364501}},
   {{-1.32336, -4.86185, 0.291034}, {-5.51219, 1.16723, -0.688098}},
   {{-1.82027, -4.75528, 0.123195}, {-5.60223, 1.54509, -0.994631}},
   {{-2.35676, -4.6194, -0.179982}, {-5.68973, 1.91342, -1.27478}},
   {{-2.94275, -4.45503, -0.622558}, {-5.77547, 2.26995, -1.5198}},
   {{-3.59125, -4.2632, -1.2113}, {-5.86038, 2.61249, -1.72161}},
   {{-4.31974, -4.04509, -1.95715}, {-5.94562, 2.93893, -1.87293}},
   {{-5.15241, -3.80203, -2.8775}, {-6.0327, 3.24724, -1.96744}},
   {{-6.12372, -3.53553, -4.00001}, {-6.12372, 3.53553, -2.00001}}};
ellipseDomain = {{2.38622, 7.03856}, {2.49067, 6.93411}, {2.57819, 6.84659},
   {2.65607, 6.76871}, {2.72819, 6.69659}, {2.79696, 6.62782},
   {2.86409, 6.56069}, {2.93095, 6.49383}, {2.99873, 6.42605},
   {3.06856, 6.35622}, {3.1416, 6.28318}};
Graphics[Line[Append[#, First@#]] & /@
  MapThread[ellipsePoints, {ellipseMat, ellipseDomain}]]
```
When I sample more sections ($300$), I discover that the boundary should be as below:
|
Can't you just use the Lyapunov convexity theorem directly?
As usual, identify $\ell^\infty(G)$ with $C(\beta G)$, and work with $\beta G$ the Stone-Cech compactification. As this is a compact Hausdorff space, if $\mu$ is a regular measure on $\beta G$ then an atom of $\mu$ must be a point. So we can decompose $\mu$ as something in $\ell^1(\beta G)$ together with an atom-less measure, say a member of $M_c(\beta G)$ (continuous measures).
(Left) translation by members of $G$ gives automorphisms of $\beta G$, and hence leaves $\ell^1(\beta G)$ and $M_c(\beta G)$ invariant. I claim that nothing in $\ell^1(\beta G)$ can be left invariant. Let $\mu\in\ell^1(\beta G)$ be left invariant. Write $\beta G$ as the disjoint union of $G$-orbits, say $\bigcup_i G u_i$. Then $\mu$ must be supported on finite orbits (else we couldn't sum the coefficients, so we wouldn't be in $\ell^1$). If $u\in\beta G$ with $Gu$ finite, then there is $s\not=e$ in $G$ with $su=u$. Realise $u$ as an ultrafilter. Let $A\subseteq G$ be maximal with $A\cap s^{-1}A=\emptyset$. This means that if $r\not\in A$ then there is $t\in (A\cup\{r\}) \cap (s^{-1}A\cup\{s^{-1}r\})$, which implies that $t=r\in s^{-1}A\cup\{s^{-1}r\}$, that is, $sr\in A$. So $r\not\in A\implies sr\in A \implies r\in s^{-1}A$, so $G=A\cup s^{-1}A$. So Zorn's lemma implies there is $A\subseteq G$ with $A \cap s^{-1}A=\emptyset$ and $A\cup s^{-1}A=G$. Then either $A\in u$, so $A\in su$, so $s^{-1}A\in u$, a contradiction; or $s^{-1}A\in u$, so $A\in su=u$, a contradiction.
So I (hope!) I've shown that actually for any $u\in\beta G$, the orbit map $G\rightarrow\beta G; s\mapsto su$ is injective.
In particular, invariant means live in $M_c(\beta G)$, and so are atom-less, and so now we can just apply Lyapunov.
Edit: As Valerio points out, this shows that $X=\{ (\mu_1(A),\cdots,\mu_n(A)) : A\subseteq\beta G \text{ is Borel}\}$ is a convex set in $[0,1]^n$. Now, each $A\subseteq G$ induces the clopen set $O_A=\{ u\in\beta G: A\in u \}$, and these sets $O_A$ form a base for the topology. Now each $\mu_i$ is regular, so given $\epsilon>0$ and $A\subseteq\beta G$ Borel, we can find $B,C\subseteq G$ with $O_B \subseteq A\subseteq O_C$ and with $\mu_i(C)-\mu_i(B)<\epsilon$, for all $i$ (under the obvious abuse of notation). (This follows as any open set is a union of sets of the form $O_C$, and then approximate with a finite union.) So $Y=\{ (\mu_1(A),\cdots,\mu_n(A)) : A\subseteq G\}$ is a subset of $X$, and is dense in $X$. I don't see right now why $Y$ need be convex.
|
The most general, integral form of Faraday's Law is (see this physics.SE question: Faraday's law for a current loop being deformed)\begin{align} \int_{C_t} (\mathbf E+\mathbf v\times\mathbf B)\cdot d\boldsymbol \ell = - \frac{d}{dt}\int_{\Sigma_t}\mathbf B\cdot d\mathbf a\end{align}Where $C_t$ is some closed curve that can depend on time, $\Sigma_t$ is a surface with $C_t$ as its boundary, $\mathbf E$ and $\mathbf B$ are the electromagnetic fields as measured in some inertial frame, and $\mathbf v$ is the velocity of a point on the curve resulting from its time-dependence.
Now if we consider the situation you describe, then the $\mathbf v\times\mathbf B$ term goes away if we choose a stationary loop $C=C_t$, and we get\begin{align} \int_{C}\mathbf E\cdot d\boldsymbol \ell = - \frac{d}{dt}\int_{\Sigma}\mathbf B\cdot d\mathbf a\end{align}Now you say that
all the magnetic field and hence flux is confined within the windings.
This is true. However you also say that
Therefore, for a wire making a loop surrounding the toroid and passing through the centre, the fields (electric and magnetic) at the wire is zero
This is not quite right. If the right hand side (the rate of change of the flux) is nonzero, then the line integral of the electric field around the loop must be nonzero.
\begin{align} \int_{C}\mathbf E\cdot d\boldsymbol \ell \neq 0\end{align}In particular, this means that the electric field itself cannot vanish along the loop, otherwise we would have a contradiction. In other words, it may be the case that there is no magnetic field along the loop (at least at the initial instant before any current is generated), but there is an electric field along the loop, and this pushes charges around (if the loop is a conductor with charges in it). As a side note, once the charges start moving, they create their own magnetic field even in the absence of a magnetic field produced by the solenoid.
|
The key to this question is to note that gunpowder doesn't technically explode -- it deflagrates. It doesn't have a super-sonic explosion, but rather a sub-sonic burn. To get the powerful kick needed to project a rifle or handgun bullet, we rely on the fact that gunpowder burns faster in a confined space. The tighter the space, the more temperature and pressure it can achieve.
Fired properly, the gunpowder is confined by the barrel, permitting it to reach the high pressures of a gun shot. Outside of a barrel, the brass case holding the gunpowder and the lightly set bullet provides surprisingly little containment. One can "cook off" a bullet over a fire, and the result is sudden and surprising, but far from lethal.
A single spark on the outside of the case would have a hard time setting off a bullet. Having done this once in a controlled setting to test the safety of such a bullet, I can say it can take several seconds over the top of a torch to reach the critical temperature to deflagrate. A lone spark may have trouble. However, if your pyrokinetic can put the spark on the inside of the bullet, that'd be a very different story. That would be remarkably similar to what the primer actually does when firing the gun!
As mentioned earlier, the bullet would most certainly not escape the magazine. Most magazines are made of steel, and many tests will show just how little momentum the bullet actually picks up. Most of the time it's just shoved out of the case just far enough to give the gunpowder room to burn. However, we have to recognize that this is
still a confined space inside the handle of the gun. While it's not as small and well structured as the space inside a barrel, the gunpowder is still going to have to find an exit. It will build up pressure until it does find enough of an exit. This could be enough to cause damage.
The particular behavior is very dependent on the particular handgun and its construction. A revolver would most likely just shove the bullet out of the front of the cylinder, with little to no damage. An all steel handgun like a 1911, however, may contain the pressure better. This means it may fail in a more spectacular way. The small clip that holds the magazine into the gun would be my guess for "first to fail," causing the entire clip to pop out of the gun. If you had a "plastic" gun like a Glock 19, you could be in worse trouble. The bullets in the magazine are held in by a similar pin, but there's open access from the magazine to pressurize plastic all over. There's a decent chance that the force of the powder could rupture the plastic around the bottom edge of the slide (which is typically at a particularly nasty position for spraying plastic bits all over the gun's wielder).
Another question would be what happens to the other bullets? Depending on the exact mechanics of the rupture, you
might push one of the other bullets out of the way, exposing another case full of gunpowder. This would create a much larger effect, though it's not immediately clear what sort of mechanical topologies might cause this.
Ironically, rifle bullets might have a less extreme effect than handgun bullets. Many rifle calibers involve a large chamber for powder necked down to a smaller bullet. In a barrel that is shaped for this, this allows for devastating power. However, in a magazine, that space would simply be expansion room for the burning powder. That extra expansion room may keep the pressure down enough that the rifle round may never reach the high pressures that could cause serious damage to the handgun, despite having more gunpowder to work with.
All in all, I don't recommend experimenting to find the answer =) While there's still squabbling over whether guns kill people or people kill people,
everyone agrees that a misfired gun is a dangerous device and must be treated with respect until the misfire is resolved.
From a long discussion in comments, it looks like the question of revolver rounds is of interest. Thanks to Deolater and Supercat for tugging at this thread, and Supercat for bringing data to the table!

Edit
The key equation for determining the speed of a bullet is $F=p\cdot A$, the force propelling a bullet forward is the pressure behind the bullet times the cross sectional area of the bullet. Using the formula for work: $W=\int_0^LF\, dx$ where L is the length of the barrel, we can do some comparisons. Then, knowing that $E=\frac{1}{2}mv^2$, we can back out the velocity by noting that the velocity is proportional to the square root of E ($v\varpropto \sqrt E$)
We can consider two idealized cases for the powder burning. The first assumes constant pressure, and the second assumes the powder burns all at once, maximizing pressure at first. A realistic bullet will fall between one of these two extreme cases based on how fast the powder burns.
In the case of a constant pressure, we see $W=\int_0^LpA\, dx$ and thus $W \varpropto L$, where $L$ is the length of the barrel. This means that $v \varpropto \sqrt L$ for the constant-pressure case. In the case of an instantaneous burn, the pressure behind the bullet will obey some $p(x)=\frac{P_0}{x+C}$, where $P_0$ is the pressure at the start and $C$ is a constant capturing how much space is behind the bullet where pressure can be built up before the bullet starts moving. This gives $W=\int_0^L\frac{P_0}{x+C}A\, dx$. If we cleverly choose units of length such that $C=1$, then $W\varpropto \ln(L+1)$.
Now we can put some numbers to this. Thanks to supercat's find, we have a table for a .357 magnum. The .357 is rather convenient in that $C$ is roughly 1 inch (the case is 1.29" and the bullet rests a bit inside, so 1" is probably very close to correct). If the powder were to burn with a constant pressure, the energy after moving 1/2" (escaping the edge of the chamber) would be 1/12th of the energy when escaping a 6" barrel, and thus the velocity would be $\sqrt{1/12}\approx 0.29$ times that of the 6" barrel shot, a few hundred fps for a typical load. If we instead assume an instantaneous burn, we can use the second set of equations to see that the energy would be about 20.8% of the energy escaping a 6" barrel, for a velocity of $\sqrt{0.208}\approx 0.46$ times the full-barrel figure. The actual speed would be somewhere between these extreme assumptions.
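The two idealized cases can be reproduced in a few lines (my own sketch; the half inch of travel, the 6-inch barrel and $C=1$ inch are the values assumed above):

```python
import math

L_chamber, L_barrel, C = 0.5, 6.0, 1.0  # inches

# Case 1: constant pressure, W proportional to the length travelled.
e_const = L_chamber / L_barrel
# Case 2: instantaneous burn, W proportional to ln(L/C + 1).
e_burst = math.log(L_chamber / C + 1) / math.log(L_barrel / C + 1)

for name, e in [("constant pressure", e_const), ("instantaneous burn", e_burst)]:
    print(f"{name}: energy ratio {e:.3f}, velocity ratio {math.sqrt(e):.3f}")
# constant pressure:  energy ratio 0.083, velocity ratio 0.289
# instantaneous burn: energy ratio 0.208, velocity ratio 0.456
```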
Sure enough, if we graph muzzle velocity from the page of .357 data, we see a sharp knee in the curve as the barrel length gets smaller. The data doesn't go below 2", but extrapolating the lines confirms that the velocity exiting the chamber would be substantially lower than that of a properly fired bullet.
|
Arrange the given statements involving indices to show whether they are true or false.
\( (x^3)^4 \equiv x^7\)
\( \frac{x^6}{x^3} \equiv x^2\)
\(x^8 \div x^4 \equiv x^2\)
\(x^2 \times x^3 \equiv x^6\)
\( (x^3)^4 \equiv x^{12}\)
\( \frac{x^7}{x^3} \equiv x^4\)
\(x^8 \div x^5 \equiv x^3\)
\(x^2 \times x^3 \equiv x^5\)
This is Laws of Indices - True or False? level 1. You can also try:
Level 2 Level 3 Level 4
There are also a set of printable cards for an offline version.
Level 1 - The basic laws of indices
Level 2 - More complex statements including negative indices
Level 3 - More complex statements including fractional indices
Level 4 - Mixed puzzling statements for the expert
Cards - There are also a set of printable cards for an offline version of this activity.
Game - The Indices Pairs game with three levels of difficulty.
\( 5^a \times 5^b \equiv 5^{a+b} \)
\( 5^a \div 5^b \equiv 5^{a-b} \)
\( (5^a)^b \equiv 5^{ab} \)
\( 5^1 \equiv 5 \)
\( 5^0 \equiv 1 \)
\( 5^{-1} \equiv \frac15 \)
\( 5^{-2} \equiv \frac{1}{25} \)
\( 5^{\frac12} \equiv \sqrt{5} \)
\( 5^{\frac13} \equiv \sqrt[3]{5} \)
\( 5^{\frac23} \equiv \sqrt[3]{5^2} \)
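A quick numeric spot-check of these laws (my own addition; any language with floating-point exponentiation would do):

```python
import math, random

random.seed(0)
for _ in range(3):
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)
    print(math.isclose(5**a * 5**b, 5**(a + b)),
          math.isclose(5**a / 5**b, 5**(a - b)),
          math.isclose((5**a)**b, 5**(a * b)))
# Each line prints: True True True

# The special values from the list above:
print(5**1 == 5, 5**0 == 1,
      math.isclose(5**-1, 1/5), math.isclose(5**0.5, math.sqrt(5)))
```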
|
We introduced the concept of a limit gently, approximating their values graphically and numerically. Next came the rigorous definition of the limit, along with an admittedly tedious method for evaluating them. The previous section gave us tools (which we call theorems) that allow us to compute limits with greater ease. Chief among the results were the facts that polynomials and rational, trigonometric, exponential and logarithmic functions (and their sums, products, etc.) all behave "nicely.'' In this section we rigorously define what we mean by "nicely.''
In Section 1.1 we explored the three ways in which limits of functions failed to exist:
1. The function approaches different values from the left and right,
2. The function grows without bound, and
3. The function oscillates.
In this section we explore in depth the concepts behind #1 by introducing the
one-sided limit. We begin with formal definitions that are very similar to the definition of the limit given in Section 1.2, but the notation is slightly different and "\(x\neq c\)'' is replaced with either "\(x<c\)'' or "\(x>c\).''
Definition 2: One Sided Limits
Left-Hand Limit
Let \(I\) be an open interval containing \(c\), and let \(f\) be a function defined on \(I\), except possibly at \(c\). The
limit of \(f(x)\), as \(x\) approaches \(c\) from the left, is \(L\), or, the left--hand limit of \(f\) at \(c\) is \(L\), denoted by
\[ \lim\limits_{x\rightarrow c^-} f(x) = L,\]
means that given any \(\epsilon > 0\), there exists \(\delta > 0\) such that for all \(x< c\), if \(|x - c| < \delta\), then \(|f(x) - L| < \epsilon\).
Right-Hand Limit
Let \(I\) be an open interval containing \(c\), and let \(f\) be a function defined on \(I\), except possibly at \(c\). The
limit of \(f(x)\), as \(x\) approaches \(c\) from the right, is \(L\), or, the right--hand limit of \(f\) at \(c\) is \(L\), denoted by
\[ \lim\limits_{x\rightarrow c^+} f(x) = L,\]
means that given any \(\epsilon > 0\), there exists \(\delta > 0\) such that for all \(x> c\), if \(|x - c| < \delta\), then \(|f(x) - L| < \epsilon\).
Practically speaking, when evaluating a left-hand limit, we consider only values of \(x\) "to the left of \(c\),'' i.e., where \(x<c\). The admittedly imperfect notation \(x\to c^-\) is used to imply that we look at values of \(x\) to the left of \(c\). The notation has nothing to do with positive or negative values of either \(x\) or \(c\). A similar statement holds for evaluating right-hand limits; there we consider only values of \(x\) to the right of \(c\), i.e., \(x>c\). We can use the theorems from previous sections to help us evaluate these limits; we just restrict our view to one side of \(c\).
We practice evaluating left and right-hand limits through a series of examples.
Example 17: Evaluating one sided limits
Let \( f(x) = \left\{\begin{array}{cc} x & 0\leq x\leq 1 \\ 3-x & 1<x<2\end{array},\right.\) as shown in Figure 1.21. Find each of the following:
1. \(\lim\limits_{x\to 1^-} f(x)\)
2. \(\lim\limits_{x\to 1^+} f(x)\)
3. \(\lim\limits_{x\to 1} f(x)\)
4. \(f(1)\)
5. \(\lim\limits_{x\to 0^+} f(x) \)
6. \(f(0)\)
7. \(\lim\limits_{x\to 2^-} f(x)\)
8. \(f(2)\)
\(\text{FIGURE 1.21}\): A graph of \(f\) in Example 17.
Solution:
For these problems, the visual aid of the graph is likely more effective in evaluating the limits than using \(f\) itself. Therefore we will refer often to the graph.
1. As \(x\) goes to 1 from the left, we see that \(f(x)\) is approaching the value of 1. Therefore \( \lim\limits_{x\to 1^-} f(x) =1.\)
2. As \(x\) goes to 1 from the right, we see that \(f(x)\) is approaching the value of 2. Recall that it does not matter that there is an "open circle'' there; we are evaluating a limit, not the value of the function. Therefore \( \lim\limits_{x\to 1^+} f(x)=2\).
3. The limit of \(f\) as \(x\) approaches 1 does not exist, as discussed in the first section. The function does not approach one particular value, but two different values from the left and the right.
4. Using the definition and by looking at the graph we see that \(f(1) = 1\).
5. As \(x\) goes to 0 from the right, we see that \(f(x)\) is also approaching 0. Therefore \( \lim\limits_{x\to 0^+} f(x)=0\). Note we cannot consider a left-hand limit at 0, as \(f\) is not defined for values of \(x<0\).
6. Using the definition and the graph, \(f(0) = 0\).
7. As \(x\) goes to 2 from the left, we see that \(f(x)\) is approaching the value of 1. Therefore \( \lim\limits_{x\to 2^-} f(x)=1.\)
8. The graph and the definition of the function show that \(f(2)\) is not defined.
Note how the left and right-hand limits were different at \(x=1\). This, of course, causes
the limit to not exist. The following theorem states what is fairly intuitive: the limit exists precisely when the left and right-hand limits are equal.
Theorem 7: Limits and One Sided Limits
Let \(f\) be a function defined on an open interval \(I\) containing \(c\). Then \[\lim\limits_{x\to c}f(x) = L\]if, and only if, \[\lim\limits_{x\to c^-}f(x) = L \quad \text{and} \quad \lim\limits_{x\to c^+}f(x) = L.\]
The phrase "if, and only if'' means the two statements are
equivalent: they are either both true or both false. If the limit equals \(L\), then the left and right hand limits both equal \(L\). If the limit is not equal to \(L\), then at least one of the left and right-hand limits is not equal to \(L\) (it may not even exist).
One thing to consider in Examples 17 - 20 is that the value of the function may or may not be equal to the value(s) of its left- and right-hand limits, even when these limits agree.
Example 18: Evaluating limits of a piecewise-defined function
Let \(f(x) = \left\{\begin{array}{cc} 2-x & 0<x<1 \\ (x-2)^2 & 1<x<2 \end{array},\right.\) as shown in Figure 1.22. Evaluate the following.
1. \( \lim\limits_{x\to 1^-} f(x)\)
2. \( \lim\limits_{x\to 1^+} f(x)\)
3. \( \lim\limits_{x\to 1} f(x)\)
4. \( f(1)\)
5. \( \lim\limits_{x\to 0^+} f(x)\)
6. \(f(0)\)
7. \( \lim\limits_{x\to 2^-} f(x)\)
8. \(f(2)\)
\(\text{FIGURE 1.22}\): A graph of \(f\) from Example 18.
Solution:
Again we will evaluate each using both the definition of \(f\) and its graph.
1. As \(x\) approaches 1 from the left, we see that \(f(x)\) approaches 1. Therefore \( \lim\limits_{x\to 1^-} f(x)=1.\)
2. As \(x\) approaches 1 from the right, we see that again \(f(x)\) approaches 1. Therefore \( \lim\limits_{x\to 1^+} f(x)=1\).
3. The limit of \(f\) as \(x\) approaches 1 exists and is 1, as \(f\) approaches 1 from both the right and left. Therefore \( \lim\limits_{x\to 1} f(x)=1\).
4. \(f(1)\) is not defined. Note that 1 is not in the domain of \(f\) as defined by the problem, which is indicated on the graph by an open circle when \(x=1\).
5. As \(x\) goes to 0 from the right, \(f(x)\) approaches 2. So \( \lim\limits_{x\to 0^+} f(x)=2\).
6. \(f(0)\) is not defined as \(0\) is not in the domain of \(f\).
7. As \(x\) goes to 2 from the left, \(f(x)\) approaches 0. So \( \lim\limits_{x\to 2^-} f(x)=0\).
8. \(f(2)\) is not defined as 2 is not in the domain of \(f\).
Example 19: Evaluating limits of a piecewise-defined function
Let \(f(x) = \left\{\begin{array}{cc} (x-1)^2 & 0\leq x\leq 2, x\neq 1\\ 1 & x=1\end{array},\right.\) as shown in Figure 1.23. Evaluate the following.
1. \( \lim\limits_{x\to 1^-} f(x)\)
2. \( \lim\limits_{x\to 1^+} f(x)\)
3. \( \lim\limits_{x\to 1} f(x)\)
4. \(f(1)\)
\(\text{FIGURE 1.23}\): Graphing \(f\) in Example 19.
Solution:
It is clear by looking at the graph that both the left- and right-hand limits of \(f\), as \(x\) approaches 1, are 0. Thus it is also clear that
the limit is 0; i.e., \( \lim\limits_{x\to 1} f(x) = 0\). It is also clearly stated that \(f(1) = 1\).
Example 20: Evaluating limits of a piecewise-defined function
Let \(f(x) = \left\{\begin{array}{cc} x^2 & 0\leq x\leq 1 \\ 2-x & 1<x\leq 2\end{array},\right.\) as shown in Figure 1.24. Evaluate the following.
1. \( \lim\limits_{x\to 1^-} f(x)\)
2. \( \lim\limits_{x\to 1^+} f(x)\)
3. \( \lim\limits_{x\to 1} f(x)\)
4. \(f(1)\)
\(\text{FIGURE 1.24}\): Graphing \(f\) in Example 20.
Solution:
It is clear from the definition of the function and its graph that all of the following are equal:
\[\lim\limits_{x\to 1^-} f(x) = \lim\limits_{x\to 1^+} f(x) =\lim\limits_{x\to 1} f(x) =f(1) = 1.\]
In Examples 17 - 20 we were asked to find both \( \lim\limits_{x\to 1}f(x)\) and \(f(1)\). Consider the following table:
\[\begin{array}{ccc} & \lim\limits_{x\to 1}f(x) & f(1) \\ \hline \text{Example 17} & \text{does not exist} & 1 \\ \text{Example 18} & 1 & \text{not defined} \\ \text{Example 19} & 0 & 1 \\ \text{Example 20} & 1 & 1 \\ \end{array}\]
Only in Example 20 do both the function value and the limit exist and agree. This seems "nice;'' in fact, it seems "normal.'' This is in fact an important situation which we explore in the next section, entitled "Continuity.'' In short, a continuous function is one that attains the value it approaches: whenever \( \lim\limits_{x\to c} f(x) = L\), we also have \(f(c) = L\). Such functions behave nicely as they are very predictable.
Contributors
Gregory Hartman (Virginia Military Institute). Contributions were made by Troy Siemers and Dimplekumar Chalishajar of VMI and Brian Heinold of Mount Saint Mary's University. This content is copyrighted by a Creative Commons Attribution - Noncommercial (BY-NC) License. http://www.apexcalculus.com/
|
Learning Outcomes
Raise a number to a power using technology. Take the square root of a number using technology. Apply the order of operations when there is root or a power.
It can be a challenge when we first try to use technology to raise a number to a power or take a square root of a number. In this section, we will go over some pointers on how to successfully take powers and roots of a number. We will also continue our practice with the order of operations, remembering that as long as there are no parentheses, exponents always come before all other operations. We will see that taking a power of a number comes up in probability and taking a root comes up in finding standard deviations.
Powers
Just about every calculator, computer, and smartphone can take powers of a number. We just need to remember that the symbol "^" is used to mean "to the power of". We also need to remember to use parentheses if we need to force other arithmetic to come before the exponentiation.
Example \(\PageIndex{1}\)
Evaluate: \(1.04^5\) and round to two decimal places.
Solution
This definitely calls for the use of technology. Most calculators, whether hand calculators or computer calculators, use the symbol "^" (shift 6 on the keyboard) for exponentiation. We type in:
\[1.04^5 = 1.2166529\nonumber \]
We are asked to round to two decimal places. Since the third decimal place is a 6 which is 5 or greater, we round up to get:
\[1.04^5\approx1.22\nonumber \]
Example \(\PageIndex{2}\)
Evaluate: \(2.8^{5.3\times0.17}\) and round to two decimal places.
Solution
First note that on a computer we use "*" (shift 8) to represent multiplication. If we were to put in 2.8 ^ 5.3 * 0.17 into the calculator, we would get the wrong answer, since it will perform the exponentiation before the multiplication. Since the original question has the multiplication inside the exponent, we have to force the calculator to perform the multiplication first. We can ensure that multiplication occurs first by including parentheses:
\[2.8 ^{5.3 \times 0.17} = 2.52865\nonumber \]
Now round to two decimal places to get:
\[2.8^{5.3\times0.17}\approx2.53\nonumber \]
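The same pitfall is easy to reproduce in code; a short Python sketch (illustrative only, not part of the original text; note that Python writes powers as ** rather than ^):

```python
# Order-of-operations check for the example above.
wrong = 2.8 ** 5.3 * 0.17        # exponentiation happens first: (2.8^5.3) * 0.17
right = 2.8 ** (5.3 * 0.17)      # parentheses force the product into the exponent

print(round(wrong, 2))   # about 39.84 -- not what the expression means
print(round(right, 2))   # 2.53, matching the worked answer
```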
Example \(\PageIndex{3}\)
Suppose we toss a six-sided die five times. If we want the probability that the first two rolls are each a 1 or a 2 and the last three rolls are even, then the probability is:
\[\left(\frac{1}{3}\right)^2\:\times\left(\frac{1}{2}\right)^3\nonumber \]
What is this probability rounded to three decimal places?
Solution
We find:
\[(1/3)^2 \times (1/2)^3 \approx 0.013888889\nonumber \]
Now round to three decimal places to get
\[\left(\frac{1}{3}\right)^2\:\times\left(\frac{1}{2}\right)^3 \approx0.014\nonumber \]
Square Roots
Square roots come up often in statistics, especially when we are looking at standard deviations. We need to be able to use a calculator or computer to compute a square root of a number. There are two approaches that usually work. The first approach is to use the \(\sqrt{\:\:}\) symbol on the calculator if there is one. For a computer, using sqrt() usually works. For example if you put 10*sqrt(2) in the Google search bar, it will show you 14.1421356. A second way that works for pretty much any calculator, whether it is a hand held calculator or a computer calculator, is to realize that the square root of a number is the same thing as the number to the 1/2 power. In order to not have to wrap 1/2 in parentheses, it is easier to type in the number to the 0.5 power.
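A short Python sketch of these approaches (illustrative, not part of the original text; math.sqrt plays the role of the calculator's root key):

```python
import math

# Three equivalent ways to get a square root:
print(math.sqrt(2) * 10)   # 14.142135623730951, like the Google example 10*sqrt(2)
print(42 ** 0.5)           # 6.48074069840786, the square root as the 0.5 power
print(42 ** (1 / 2))       # same value; the parentheses around 1/2 are essential
```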
Example \(\PageIndex{4}\)
Evaluate \(\sqrt{42}\) and round your answer to two decimal places.
Solution
Depending on the technology you are using, you will either enter the square root symbol followed by the number 42 (closing the parentheses if they are supplied) and then hit enter, or, on a computer, use sqrt(42). A third way that works for both is to enter the number to the 0.5 power:
\[42^{0.5} \approx 6.4807407\nonumber \]
You must then round to two decimal places. Since 0 is less than 5, we round down to get:
\[\sqrt{42}\approx6.48\nonumber \]
Example \(\PageIndex{5}\)
The "z-score" is for the value of 28 for a sampling distribution with sample size 60 coming from a population with mean 28.3 and standard deviation 5 is defined by:
\[z=\frac{28-28.3}{\frac{5}{\sqrt{60}}}\nonumber \]
Find the z-score rounded to two decimal places.
Solution
We have to be careful about the order of operations when putting it into the calculator. We enter:
\[ \dfrac{28 - 28.3}{5 / 60^{0.5}} = -0.464758\nonumber \]
Finally, we round to 2 decimal places. Since 4 is smaller than 5, we round down to get:
\[z=\frac{28-28.3}{\frac{5}{\sqrt{60}}}=-0.46\nonumber \]
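A quick Python check of the computation above (a sketch, not part of the original text):

```python
# The z-score from the worked example, with parentheses making the grouping explicit.
z = (28 - 28.3) / (5 / 60 ** 0.5)
print(round(z, 2))   # -0.46

# Without the outer parentheses, 28 - 28.3 / (5 / 60 ** 0.5) would subtract
# a quotient from 28 instead, giving a very different (and wrong) value.
```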
Exercise
The standard error, which is an average of how far sample means are from the population mean, is defined by:
\[\sigma_\bar x=\frac{\sigma}{\sqrt{n}}\nonumber \]
where \(\sigma_\bar x\) is the standard error, \(\sigma \) is the standard deviation, and \(n\) is the sample size. Find the standard error if the population standard deviation, \(\sigma \), is 14 and the sample size, \(n\), is 11.
|
I have two questions related to the rotation of superfluids. Firstly, what is the main reason that a superfluid cannot rotate as a whole object? (I found this stated in Landau's
Statistical Physics, but without explanation.) The second question is the following. Suppose we have rotating helium and we then cool it in such a way that it undergoes a transition into the superfluid state. What happens to the rotational degrees of freedom? How does one describe such a phase transition?
Particle on a rotating ring
For further discussion purposes, let's consider the dynamics of a quantum particle on a circle of radius $r_0$ rotating at a constant angular velocity $\mathbf{\Omega}=\Omega\,\hat{e}_z$. In cylindrical coordinates, we fix $z=0$, and the azimuthal angle $\theta$ is the "good" degree of freedom describing the dynamics.
One can compute the Hamiltonian of the system in the rotating frame, taking into account the angular momentum $\mathbf{\mathcal{L}}$ of the particle: $$ \mathcal{H}=-\frac{\hbar^2}{2m}\Delta-\mathbf{\Omega}\cdot\mathbf{\mathcal{L}}=-\frac{\hbar^2}{2m\,r^2_0}\frac{\mathrm{d}^2}{\mathrm{d}\theta^2}+\mathrm{i}\hbar\,\Omega\frac{\mathrm{d}}{\mathrm{d}\theta}=\frac{\hbar^2}{2m\,r^2_0}\left(\mathrm{i}\frac{\mathrm{d}}{\mathrm{d}\theta}+\frac{\Omega}{\Omega_c}\right)^2-\frac{1}{2}m\Omega^2r_0^2 $$ where $\Omega_c=\frac{\hbar}{m\,r^2_0}$.
It is easy to see that the spectrum is: $$E_n=\frac{\hbar^2}{2m\,r^2_0}\left(n-\frac{\Omega}{\Omega_c}\right)^2,\quad\forall\,n\in\mathbb{Z} $$ For simplicity, let us fix $\Omega>0$ for the following. This tells you that the ground state $\Psi_g$ of the system depends on $\Omega$ such that: $$ \text{for}\;\Omega<\Omega_c/2,\quad\Psi_g(\theta)=\frac{1}{\sqrt{2\pi}} $$ $$ \text{for}\;\Omega_c/2<\Omega<3\Omega_c/2,\quad\Psi_g(\theta)=\frac{1}{\sqrt{2\pi}} e^{\mathrm{i}\theta} $$ $$ \text{etc}... $$ In each case, one can calculate the associated velocity: $$ \textbf{v}=\frac{n\hbar}{mr_0}\,\hat{e}_\theta $$
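To make the level structure concrete, here is a minimal numerical sketch (my own illustration in Python, not from the original answer, with $\Omega$ measured in units of $\Omega_c$): the ground state is simply the integer $n$ nearest to $\Omega/\Omega_c$.

```python
import numpy as np

# Sketch: ground-state winding number n on the rotating ring, in units of
# Omega_c = hbar / (m r0^2). E_n is proportional to (n - Omega/Omega_c)^2,
# so the ground state is the integer nearest to Omega/Omega_c.

omegas = np.linspace(0.0, 2.0, 9)           # Omega / Omega_c
for w in omegas:
    ns = np.arange(-3, 4)                   # a few candidate winding numbers
    n_ground = ns[np.argmin((ns - w) ** 2)]
    print(f"Omega/Omega_c = {w:4.2f}  ->  ground state n = {n_ground}")

# The output steps from n = 0 to n = 1 near Omega = Omega_c/2 and from n = 1
# to n = 2 near 3*Omega_c/2, reproducing the level crossings quoted above; the
# degeneracy at the crossing points underlies the hysteresis discussed below.
```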
Dragging a fluid
As a first experiment, we can consider a gas on a thin ring, initially at rest. Then we accelerate the ring to a constant angular velocity $\mathbf{\Omega}=\Omega\,\hat{e}_z$. Let us say that the ring has some roughness on its inner surface which can interact with the gas.
If the gas is a classical (viscous) gas, it can be shown that the interaction between the gas and the inner surface of the ring will drag the gas so that its velocity field will be: $$ \textbf{v}(\textbf{r})=\mathbf{\Omega}\times\textbf{r} $$
Now, if the gas is a quantum fluid (superfluid), zero viscosity implies that the gas does not interact with the ring anymore. And for a slow rotation $\Omega\ll\Omega_c$, we see that the ground state of the system has
zero velocity, meaning that although the ring is rotating, it cannot drag the fluid, which stays at rest.
Probing the classical/superfluid transition
As a second experiment, let us assume that we managed to rotate the superfluid with $\Omega>\Omega_c$, meaning the initial velocity field of the gas is given by : $$ \textbf{v}=\frac{\hbar}{mr_0}\,\hat{e}_\theta $$ What happens if we suddenly stop the rotation of the ring, setting $\Omega=0$?
If the fluid is classical (viscous), interactions between the fluid and the ring lead to a zero velocity field and the fluid will stop rotating.
For a superfluid, however, there is no viscosity to bring the fluid to a stop,
even though the ground state of the system has a zero velocity field, as shown in the previous experiment. Such a feature is typical of metastability, showing here the existence of permanent currents. This also implies that such a system should show a hysteresis behavior.
Indeed, for $\Omega=\Omega_c/2$, the system has a degenerate ground state such that: $$ E_0(\Omega=\Omega_c/2)=E_1(\Omega=\Omega_c/2) $$ meaning that if we follow a cyclic scheme crossing that point for the rotation of the ring, $$ \Omega=0\rightarrow\Omega>\Omega_c/2\rightarrow\Omega=0, $$ we can produce a hysteresis curve of currents. Reproducing such a cycle for different temperatures of the fluid should allow you to probe the superfluid transition (by probing the existence of permanent currents).
An example of real life experiment
This hysteresis behavior has been observed (Eckel et al.,
Nature 506, 200-203 (2014)) using a Bose-Einstein condensate of ultracold atoms trapped in a ring-shaped dipole trap. They could excite the rotation of the condensate by pushing the atoms with a rotating laser beam.
|
I want to solve the Poisson equation with pure Neumann B.C. using the Finite Element Method:
$- \Delta u = f \ $ in $ \ \Omega $
$- \partial u/\partial n = g \ $ on $ \ \Gamma = \partial \Omega$
With weak formulation
$\int_{\Omega} \nabla v \cdot \nabla u = \int_{\Omega} v \ f + \int_{\partial \Omega} v \ g $.
To obtain a unique solution, I was following the Lagrange Multiplier method, enforcing a constraint of the type
$\int_{\Omega} u \ dx= 0 $.
The following linear system would then be obtained
$\begin{pmatrix}A & B^T \\ B & 0\end{pmatrix} \begin{pmatrix}U \\ \lambda \end{pmatrix} = \begin{pmatrix}F \\ 0\end{pmatrix}$
Where $A$ corresponds to the LHS (stiffness matrix), $F$ to the RHS (load vector), $U$ to the solution vector and $\lambda$ to the Lagrange Multiplier, which can be discarded from the resulting solution vector.
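To make this concrete, here is a minimal self-contained sketch (my own toy problem, not from the question: $-u''=f$ on $(0,1)$ with homogeneous Neumann data and $f(x)=\pi^2\cos(\pi x)$, so that the compatibility condition $\int f = 0$ holds and the exact solution $u(x)=\cos(\pi x)$ has zero mean) of assembling and solving the saddle-point system above with linear elements:

```python
import numpy as np

n = 50                          # number of elements
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

A = np.zeros((n + 1, n + 1))    # stiffness matrix (singular: constants are in its kernel)
for e in range(n):
    A[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

B = np.full(n + 1, h)           # int(phi_i) = h in the interior, h/2 at the ends
B[0] = B[-1] = h / 2

F = B * (np.pi**2 * np.cos(np.pi * x))   # lumped load vector int(f * phi_i)

K = np.zeros((n + 2, n + 2))    # assemble [[A, B^T], [B, 0]]
K[:n+1, :n+1] = A
K[:n+1, n+1] = B
K[n+1, :n+1] = B

rhs = np.append(F, 0.0)
sol = np.linalg.solve(K, rhs)
u, lam = sol[:n+1], sol[n+1]

print("multiplier lambda =", lam)                         # ~0 for this compatible load
print("max error =", np.abs(u - np.cos(np.pi * x)).max()) # small, O(h^2)
```

In this toy case the discrete load is exactly compatible (the trapezoidal sum of $\cos(\pi x)$ vanishes), so $\lambda$ comes out at machine-epsilon scale; in general one would expect $\lambda$ to be small on the scale of the discretization's consistency error, which is one practical way to read the following quotation.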
According to Larson, Bengzon in the book "The Finite Element Method" (p. 95),
The Lagrangian multiplier $\lambda$ may be thought of as a force acting to enforce the constraint. Because the zero mean value on $u_h$ is a constraint which does not alter the solution to the underlying Neumann problem, the force should vanish or, at least, be very small.
I would like to know how small this value should be, and how to analyze the simulation results based on this (are the results valid if the multiplier is too large? Should I compare it based on the minimum value obtained from my solution? etc.).
Thanks
|
Exercise \(1\)
In precalculus, you learned a formula for the position of the maximum or minimum of a quadratic equation \(y=ax^2+bx+c\), which was \(m=−\frac{b}{2a}\). Prove this formula using calculus.
Answer
Under Construction
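While the worked answer is under construction, the calculus step can be sketched symbolically (a sketch assuming SymPy is available; it simply sets \(y'=0\) and solves):

```python
import sympy as sp

# A quick symbolic check of the vertex formula.
a, b, c, x = sp.symbols('a b c x', real=True)
y = a*x**2 + b*x + c
crit = sp.solve(sp.diff(y, x), x)    # y' = 2ax + b = 0
print(crit)                          # [-b/(2*a)], the claimed m = -b/(2a)
```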
Exercise \(2\)
If you are finding an absolute minimum over an interval \([a,b],\) why do you need to check the endpoints? Draw a graph that supports your hypothesis.
Answer
Under Construction
Exercise \(3\)
If you are examining a function over an interval \((a,b),\) for \(a\) and \(b\) finite, is it possible not to have an absolute maximum or absolute minimum?
Answer
Under Construction
Exercise \(4\)
When you are checking for critical points, explain why you also need to determine points where \(f(x)\) is undefined. Draw a graph to support your explanation.
Answer
Under Construction
Exercise \(5\)
Can you have a finite absolute maximum for \(y=ax^2+bx+c\) over \((−∞,∞)\)? Explain why or why not using graphical arguments.
Answer
Under Construction
Exercise \(6\)
Can you have a finite absolute maximum for \(y=ax^3+bx^2+cx+d\) over \((−∞,∞)\) assuming a is non-zero? Explain why or why not using graphical arguments.
Answer
No
Exercise \(7\)
Let \(m\) be the number of local minima and \(M\) be the number of local maxima. Can you create a function where \(M>m+2\)? Draw a graph to support your explanation.
Answer
Under Construction
Exercise \(8\)
Is it possible to have more than one absolute maximum? Use a graphical argument to prove your hypothesis.
Answer
No: the absolute maximum is the function (output) value rather than the \(x\)-value, and that value is unique, even though it may be attained at more than one \(x\)-value.
Exercise \(9\)
Is it possible to have no absolute minimum or maximum for a function? If so, construct such a function. If not, explain why this is not possible.
Answer
Under Construction
Exercise \(10\)
Graph the function \(y=e^{ax}.\) For which values of \(a\), on any infinite domain, will you have an absolute minimum and absolute maximum?
Answer
When \(a=0\)
Exercise \(11\)
For the following exercises, determine where the local and absolute maxima and minima occur on the graph given. Assume domains are closed intervals unless otherwise specified.
1)
2)
3)
4)
Answers to even numbered questions
2. Absolute minimum at 3; Absolute maximum at −2.2; local minima at −2, 1; local maxima at −1, 2
4. Absolute minima at −2, 2; absolute maxima at −2.5, 2.5; local minimum at 0; local maxima at −1, 1
Exercise \(12\)
For the following problems, draw graphs of \(f(x),\) which is continuous, over the interval \([−4,4]\) with the following properties:
1) Absolute maximum at \(x=2\) and absolute minima at \(x=±3\)
2) Absolute minimum at \(x=1\) and absolute maximum at \(x=2\)
3) Absolute maximum at \(x=4,\) absolute minimum at \(x=−1,\) local maximum at \(x=−2,\) and a critical point that is not a maximum or minimum at \(x=2\)
4) Absolute maxima at \(x=2\) and \(x=−3\), local minimum at \(x=1\), and absolute minimum at \(x=4\)
Answer
Under Construction
Exercise \(13\)
For the following exercises, find the critical points in the domains of the following functions.
1) \(y=4x^3−3x\)
2) \(y=4\sqrt{x}−x^2\)
3) \(y=\frac{1}{x−1}\)
4) \(y=ln(x−2)\)
5) \(y=tan(x)\)
6) \(y=\sqrt{4−x^2}\)
7) \(y=x^{3/2}−3x^{5/2}\)
8) \(y=\frac{x^2−1}{x^2+2x−3}\)
9) \(y=sin^2(x)\)
10) \(y=x+\frac{1}{x}\)
Answers to even numbered questions
2. \(x=1\)
4. None
6. \(x=0\)
8. None
10. \(x=−1,1\)
Exercise \(14\)
For the following exercises, find the local and/or absolute maxima for the functions over the specified domain.
1) \(f(x)=x^2+3\) over \([−1,4]\)
2) \(y=x^2+\frac{2}{x}\) over \([1,4]\)
3) \(y=(x−x^2)^2\) over \([−1,1]\)
4) \(y=\frac{1}{(x−x^2)}\) over \([0,1]\)
5) \(y=\sqrt{9−x}\) over \([1,9]\)
6) \(y=x+sin(x)\) over \([0,2π]\)
7) \(y=\frac{x}{1+x}\) over \([0,100]\)
8) \(y=|x+1|+|x−1|\) over \([−3,2]\)
9) \(y=\sqrt{x}−\sqrt{x^3}\) over \([0,4]\)
10) \(y=sinx+cosx\) over \([0,2π]\)
11) \(y=4sinθ−3cosθ\) over \([0,2π]\)
Answers to even numbered questions
2. Absolute maximum: \(x=4, y=\frac{33}{2}\); absolute minimum: \(x=1, y=3\)
4. Absolute minimum: \(x=\frac{1}{2}, y=4\)
6. Absolute maximum: \(x=2π, y=2π;\) absolute minimum: \(x=0, y=0\)
8. Absolute maximum: \(x=−3, y=6;\) absolute minimum: \(−1≤x≤1, y=2\)
10. Absolute maximum: \(x=\frac{π}{4}, y=\sqrt{2}\); absolute minimum: \(x=\frac{5π}{4}, y=−\sqrt{2}\)
Exercise \(15\)
For the following exercises, find the local and absolute minima and maxima for the functions over \((−∞,∞).\)
1) \(y=x^2+4x+5\)
2) \(y=x^3−12x\)
3) \(y=3x^4+8x^3−18x^2\)
4) \(y=x^3(1−x)^6\)
5) \(y=\frac{x^2+x+6}{x−1}\)
6) \(y=\frac{x^2−1}{x−1}\)
Answers to odd numbered questions
1. Absolute minimum: \(x=−2, y=1\)
3. Absolute minimum: \(x=−3, y=−135;\) local maximum: \(x=0, y=0\); local minimum: \(x=1, y=−7\)
5. Local maximum: \(x=1−2\sqrt{2}, y=3−4\sqrt{2}\); local minimum: \(x=1+2\sqrt{2}, y=3+4\sqrt{2}\)
Exercise \(16\)
For the following functions, use a calculator to graph the function and to estimate the absolute and local maxima and minima. Then, solve for them explicitly.
1) \(y=3x\sqrt{1−x^2}\)
2) \(y=x+sin(x)\)
3) \(y=12x^5+45x^4+20x^3−90x^2−120x+3\)
4) \(y=\frac{x^3+6x^2−x−30}{x−2}\)
5) \(y=\frac{\sqrt{4−x^2}}{\sqrt{4+x^2}}\)
Answers to odd numbered questions
1. Absolute maximum: \(x=\frac{\sqrt{2}}{2}, y=\frac{3}{2};\) absolute minimum: \(x=−\frac{\sqrt{2}}{2}, y=−\frac{3}{2}\)
3. Local maximum: \(x=−2,y=59\); local minimum: \(x=1, y=−130\)
5. Absolute maximum: \(x=0, y=1;\) absolute minimum: \(x=−2,2, y=0\)
Exercise \(17\)
A company that produces cell phones has a cost function of \(C=x^2−1200x+36,400,\) where \(C\) is cost in dollars and \(x\) is number of cell phones produced (in thousands). How many units of cell phone (in thousands) minimizes this cost function?
Answer
Under Construction
Exercise \(18\)
A ball is thrown into the air and its position is given by \(h(t)=−4.9t^2+60t+5m.\) Find the height at which the ball stops ascending. How long after it is thrown does this happen?
Answer
\(h=\frac{9245}{49}m, t=\frac{300}{49}s\)
Exercise \(19\)
For the following exercises, consider the production of gold during the California gold rush (1848–1888). The production of gold can be modeled by \(G(t)=\frac{(25t)}{(t^2+16)}\), where t is the number of years since the rush began \((0≤t≤40)\) and \(G\) is ounces of gold produced (in millions). A summary of the data is shown in the following figure.
1) Find when the maximum (local and global) gold production occurred, and the amount of gold produced during that maximum.
2) Find when the minimum (local and global) gold production occurred. What was the amount of gold produced during this minimum?
Answer to even numbered questions
2. The global minimum was in 1848, when no gold was produced.
Exercise \(20\)
Find the critical points, maxima, and minima for the following piecewise functions.
1) \(y=\begin{cases}x^2−4x & 0≤x≤1 \\ x^2−4 & 1<x≤2\end{cases}\)
2) \(y=\begin{cases}x^2+1 & x≤1 \\ x^2−4x+5 & x>1\end{cases}\)
Answers to even numbered questions
2. Absolute minima: \(x=0, x=2, y=1\); local maximum at \(x=1, y=2\)
Exercise \(21\)
For the following exercises, find the critical points of the following generic functions. Are they maxima, minima, or neither? State the necessary conditions.
1) \(y=ax^2+bx+c,\) given that \(a>0\)
2) \(y=(x−1)^a\), given that \(a>1\)
Answers to even numbered questions
2. No maxima/minima if \(a\) is odd, minimum at \(x=1\) if \(a\) is even
|
Let $\Omega\subset \mathbb{R}^2$ be a simply connected bounded domain with infinitely differentiable boundary $\partial\Omega$ and unit normal vector $v$ directed into the exterior of $\Omega$. By $$\Phi{(x,y)}=\dfrac{i}{4}H^{(1)}_{0}(k|x-y|),\quad x\neq y,$$ we denote the fundamental solution to the two-dimensional Helmholtz equation, in terms of the first-kind Hankel function of order zero,
where the Helmholtz equation is $$\Delta u+k^2u=0 \quad \mbox{in } \mathbb{R}^2\setminus\overline{\Omega},$$ and $$(T\psi)(x):=\dfrac{\partial}{\partial v(x)}\int_{\partial\Omega}\dfrac{\partial\Phi{(x,y)}}{\partial v(y)}\psi{(y)}ds(y),x\in\partial\Omega.$$
let: $$x=z(t)=(z_{1}(t),z_{2}(t)),y=z(\tau)=(z_{1}(\tau),z_{2}(\tau)),-1\le t,\tau\le 1$$ show that: $$(T\psi)((z(t))=\dfrac{1}{|z'(t)|}\int_{-1}^{1}\left(\dfrac{1}{2\pi}\cdot\dfrac{1}{\tau-t}\dfrac{d}{d\tau}\psi(z(\tau))+L(t,\tau)\psi(z(\tau))\right)d\tau$$
where $$L(t,\tau)=-\dfrac{i}{2}\dfrac{z'(t)\{z(t)-z(\tau)\}z'(\tau)\{z(t)-z(\tau)\}}{|z(t)-z(\tau)|^2}\left(k^2H^{(1)}_{0}(k|z(t)-z(\tau)|)-\dfrac{2kH^{(1)}_{1}(k|z(t)-z(\tau)|)}{ |z(t)-z(\tau)|}\right)-\dfrac{ik}{2}\dfrac{z'(t)z'(\tau)}{|z(t)-z(\tau)|}H^{(1)}_{1}(k|z(t)-z(\tau)|)-\dfrac{1}{\pi}\dfrac{1}{(\tau-t)^2}+\dfrac{ik^2}{2}H^{(1)}_{0}(k|z(t)-z(\tau)|)z'(t)z'(\tau)$$ This results is from this paper:http://www.sciencedirect.com/science/article/pii/S0377042703005867
I have been calculating for a few days, but in the end I did not succeed.
Since I previously posted this question: The Helmholtz equation: How to prove that $T\psi{(x)}\in\Omega$,
I want to use this result: $$(T\psi)(x)=\dfrac{\partial}{\partial s(x)}\int_{\partial\Omega}\Phi{(x,y)}\dfrac{\partial \psi}{\partial s}(y)ds(y)+k^2v(x)\cdot\int_{\partial \Omega}\Phi{(x,y)}v(y)\psi{(y)}ds(y),x\in\partial\Omega. $$ I know this is a limit problem, but I am struggling to prove it. Thank you for your help.
|
Is there any representation of the Lorentz group where $$U^{-1} f(x) U = f(\Lambda^{-1}x)$$ other than the (0,0) representation?
If not then is it possible for a field (with a well defined polynomial basis) to behave like a scalar field under the Lorentz group?
Will such fields still be called the (0,0) representation of the Lorentz group?
It is precisely one of the Wightman axioms that the infinite-dimensional unitary representation$^1$ $U : \mathrm{SO}(1,3)\to\mathrm{U}(\mathcal{H})$ on the space of states $\mathcal{H}$ of the theory, upon which the field acts as an operator, is compatible with the field transformation law under the finite-dimensional representation $\rho_\text{fin}: \mathrm{SO}(1,3)\to\mathrm{GL}(V)$, where $V$ is the target space of the field. For a real scalar field, $V=\mathbb{R}$ and $\rho_\text{fin}$ is the trivial representation. Being "compatible" means that $$ U(\Lambda)^\dagger\phi_i(x)U(\Lambda) = \sum_j\rho_\text{fin}(\Lambda)_{ij}\phi_j(\Lambda^{-1}(x))$$ holds as an operator equation on the space of states.
Now, if $\phi$ is scalar, then $\rho_\text{fin}$ is trivial.
However, this does not mean, in any way, that $U$ is trivial. The infinite-dimensional unitary representations of the Poincare group $\mathrm{SO}(1,3)\ltimes \mathbb{R}^4$ are given by Wigner's classification, and the scalar field creates particles with mass and momentum, so the unitary representation is not trivial - the trivial unitary representation is just the vacuum.
$^1$No finite-dimensional representation can be unitary.
|
Transverse momentum spectra and rapidity densities, dN/dy, of protons, anti-protons, and net--protons (p-pbar) from central (0-5%) Au+Au collisions at sqrt(sNN) = 200 GeV were measured with the BRAHMS experiment within the rapidity range 0 < y < 3. The proton and anti-proton dN/dy decrease from mid-rapidity to y=3. The net-proton yield is roughly constant for y<1 at dN/dy~7, and increases to dN/dy~12 at y~3. The data show that collisions at this energy exhibit a high degree of transparency and that the linear scaling of rapidity loss with rapidity observed at lower energies is broken. The energy loss per participant nucleon is estimated to be 73 +- 6 GeV.
$\frac{1}{2\pi p_{\mathrm{T}}}\frac{\mathrm{d}^2N}{\mathrm{d}p_{\mathrm{T}}\mathrm{d}y}$ versus $p_{\mathrm{T}}$ for $\mathrm{p}$,$\overline{\mathrm{p}}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$ . NaN values mean no observation.
$\frac{\mathrm{d}N}{\mathrm{d}y}$ versus $y$ for $\mathrm{p}$,$\overline{\mathrm{p}}$,$\mathrm{p}-\overline{\mathrm{p}}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$ . The correction for the $\Lambda$ contribution is not straightforward, since BRAHMS does not measure the $\Lambda$s, and PHENIX and STAR only measure the $\Lambda$s at mid-rapidity. If one assumes the mid-rapidity ratio estimated in the paper, $$R=\frac{\Lambda-\bar{\Lambda}}{\mathrm{p}-\bar{\mathrm{p}}} = \frac{\Lambda}{\mathrm{p}} = \frac{\bar{\Lambda}}{\bar{\mathrm{p}}} = 0.93\pm 0.11(\mathrm{stat})\pm 0.25(\mathrm{syst}), $$ and the BRAHMS "acceptance factor" of $A=0.53\pm 0.05$, which includes both that only 64% decay to protons and that some are rejected by the requirement that the track point back to the IP, the corrected $\mathrm{p}$ ($\bar{\mathrm{p}}$ or net-$\mathrm{p}$) is then: $$\left.\frac{\mathrm{d}N}{\mathrm{d}y}\right|_{\mathrm{corrected}} = \frac{\mathrm{d}N}{\mathrm{d}y}\,\frac{1}{1+RA} = \frac{\mathrm{d}N}{\mathrm{d}y}\left(0.67\pm 0.05(\mathrm{stat})\pm 0.11(\mathrm{syst})\right),$$ which can be used at all rapidities if one believes that R is constant. The fact that net-$\mathrm{K}=\mathrm{K}^{+}-\mathrm{K}^{-}$ follows net-$\mathrm{p}$ (see e.g. the talk by Djamel Ouerdane at QM04) seems to indicate that the net-$\Lambda$ follows the net-$\mathrm{p}$ trend and the correction is reasonable.
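For the record, the central value of the quoted correction factor follows from one line of arithmetic (a sketch; the stated statistical and systematic uncertainties come from propagating the errors on $R$ and $A$ in the paper and are not reproduced here):

```python
# Central value of the Lambda feed-down correction quoted above.
R, A = 0.93, 0.53
corr = 1.0 / (1.0 + R * A)
print(round(corr, 2))   # 0.67, the factor multiplying dN/dy
```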
We have measured rapidity densities dN/dy of pions and kaons over a broad rapidity range (-0.1 < y < 3.5) for central Au+Au collisions at sqrt(snn) = 200 GeV. These data have significant implications for the chemistry and dynamics of the dense system that is initially created in the collisions. The full phase-space yields are 1660 +/- 15 +/- 133 (pi+), 1683 +/- 16 +/- 135 (pi-), 286 +/- 5 +/- 23 (K+) and 242 +/- 4 +/- 19 (K-). The systematics of the strange to non-strange meson ratios are found to track the variation of the baryo-chemical potential with rapidity and energy. Landau--Carruthers hydrodynamics is found to describe the bulk transport of the pions in the longitudinal direction.
$\frac{1}{2\pi p_{\mathrm{T}}}\frac{\mathrm{d}^2N}{\mathrm{d}p_{\mathrm{T}}\mathrm{d}y}$ versus $p_{\mathrm{T}}$ for $\mathrm{\pi}^{+}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$ near $y=-0.1-0.0$ for $0-5$% central
$\frac{1}{2\pi p_{\mathrm{T}}}\frac{\mathrm{d}^2N}{\mathrm{d}p_{\mathrm{T}}\mathrm{d}y}$ versus $p_{\mathrm{T}}$ for $\mathrm{\pi}^{+}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$ near $y=0.0-0.1$ for $0-5$% central
$\frac{1}{2\pi p_{\mathrm{T}}}\frac{\mathrm{d}^2N}{\mathrm{d}p_{\mathrm{T}}\mathrm{d}y}$ versus $p_{\mathrm{T}}$ for $\mathrm{\pi}^{+}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$ near $y=0.4-0.6$ for $0-5$% central
Particle production of identified charged hadrons, $\pi^{\pm}$, $K^{\pm}$, $p$, and $\bar{p}$ in Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV has been studied as a function of transverse momentum and collision centrality at $y=0$ and $y\sim1$ by the BRAHMS experiment at RHIC. Significant collective transverse flow at kinetic freeze-out has been observed in the collisions. The magnitude of the flow rises with the collision centrality. Proton and kaon yields relative to the pion production increase strongly as the transverse momentum increases and also increase with centrality. Particle yields per participant nucleon show a weak dependence on the centrality for all particle species. Hadron production remains relatively constant within one unit around midrapidity in Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV.
$\frac{1}{2\pi p_{\mathrm{T}}}\frac{\mathrm{d}^2N}{\mathrm{d}p_{\mathrm{T}}\mathrm{d}y}$ versus $p_{\mathrm{T}}$ for $\mathrm{\pi}^{+}$,$\mathrm{\pi}^{-}$,$\mathrm{K}^{+}$,$\mathrm{K}^{-}$,$\mathrm{p}$,$\overline{\mathrm{p}}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$
$\langle p_{\mathrm{T}}\rangle$ versus $N_{\mathrm{part}}$ for $\mathrm{\pi}^{+}$,$\mathrm{\pi}^{-}$,$\mathrm{K}^{+}$,$\mathrm{K}^{-}$,$\mathrm{p}$,$\overline{\mathrm{p}}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$
$\beta_{\mathrm{S}}$,$T$,$\chi^2$,$\nu$ versus $\mathrm{Centrality}$ for $\mathrm{h}^{+}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$
Transverse momentum spectra of protons and anti-protons measured in the rapidity range 0<y<3.1 from 0-10% central Au+Au collisions at sqrt{s_{NN}}=62.4 GeV are presented. The rapidity densities, dN/dy, of protons, anti-protons and net-protons N(p)-N(pbar) have been deduced from the spectra over a rapidity range wide enough to observe the expected maximum net-baryon density. From mid-rapidity to y=1 the net-proton yield is roughly constant (dN/dy ~ 10), but rises to dN/dy ~ 25 at 2.3<y<3.1. The mean rapidity loss is 2.01 +- 0.16 units from beam rapidity. The measured rapidity distributions are compared to model predictions. Systematics of net-baryon distributions and rapidity loss vs. collision energy are discussed.
$\frac{\mathrm{d}N}{\mathrm{d}y}$ versus $y$ for $\mathrm{p}$,$\overline{\mathrm{p}}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=62.4\,\mathrm{Ge\!V}$
$\delta y$,$\delta E$ versus for $\mathrm{p}$,$\overline{\mathrm{p}}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=62.4\,\mathrm{Ge\!V}$ for $0-10$% central
$\frac{1}{2\pi p_{\mathrm{T}}}\frac{\mathrm{d}^2N}{\mathrm{d}p_{\mathrm{T}}\mathrm{d}y}$ versus $p_{\mathrm{T}}$ for $\mathrm{p}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=62.4\,\mathrm{Ge\!V}$ near $y=-0.1-0.1$
We present spectra of charged hadrons from Au+Au and d+Au collisions at $\sqrt{s_{NN}}=200$ GeV measured with the BRAHMS experiment at RHIC. The spectra for different collision centralities are compared to spectra from ${\rm p}+\bar{{\rm p}}$ collisions at the same energy scaled by the number of binary collisions. The resulting ratios (nuclear modification factors) for central Au+Au collisions at $\eta=0$ and $\eta=2.2$ evidence a strong suppression in the high $p_{T}$ region ($>$2 GeV/c). In contrast, the d+Au nuclear modification factor (at $\eta=0$) exhibits an enhancement of the high $p_T$ yields. These measurements indicate a high energy loss of the high $p_T$ particles in the medium created in the central Au+Au collisions. The lack of suppression in d+Au collisions makes it unlikely that initial state effects can explain the suppression in the central Au+Au collisions.
$R_{\mathrm{CP}}|_{\eta=2.2} / R_{\mathrm{CP}}|_{\eta=0}$ versus $p_{\mathrm{T}}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$
$\frac{1}{2\pi p_{\mathrm{T}}}\frac{\mathrm{d}^2N}{\mathrm{d}p_{\mathrm{T}}\mathrm{d}\eta}$ versus $p_{\mathrm{T}}$ for $\frac{h^{+}+h^{-}}{2}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$ near $\eta=0$, per centrality
$\frac{1}{2\pi p_{\mathrm{T}}}\frac{\mathrm{d}^2N}{\mathrm{d}p_{\mathrm{T}}\mathrm{d}\eta}$ versus $p_{\mathrm{T}}$ for $\frac{h^{+}+h^{-}}{2}$ in $\mathrm{d}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$ near $\eta=0$
The first measurements of $x_F$-dependent single spin asymmetries of identified charged hadrons, $\pi^{\pm}$, $K^{\pm}$, and protons, from transversely polarized proton-proton collisions at 62.4 GeV at RHIC are presented. The measurements extend to high-$x_F$ ($|x_F|\sim 0.6$) in both the forward and backward directions. Large asymmetries are seen in the pion and kaon channels. The asymmetries in inclusive $\pi^{+}$ production, $A_N(\pi^+)$, increase with $x_F$ from 0 to $\sim$0.25 at $x_F = 0.6$, and $A_N(\pi^{-})$ decreases from 0 to $\sim-0.4$. Even though $K^-$ contains no valence quarks, observed asymmetries for $K^-$ unexpectedly show positive values similar to those for $K^+$, increasing with $x_F$, whereas proton asymmetries are consistent with zero over the measured kinematic range. Comparisons of the data with predictions of QCD-based models are presented. The flavor dependent single spin asymmetry measurements of identified hadrons allow for stringent tests of theoretical models of partonic dynamics in the RHIC energy regime.
$A_{N}$ versus $x_{\mathrm{F}}$ for $\mathrm{\pi}^{-}$ in $\mathrm{p}\mathrm{p}$ at $\sqrt{s}=62.4\,\mathrm{Ge\!V}$
$A_{N}$ versus $x_{\mathrm{F}}$ for $\mathrm{\pi}^{+}$ in $\mathrm{p}\mathrm{p}$ at $\sqrt{s}=62.4\,\mathrm{Ge\!V}$
We present charged particle densities as a function of pseudorapidity and collision centrality for the 197Au+197Au reaction at Sqrt{s_NN}=200 GeV. For the 5% most central events we obtain dN_ch/deta(eta=0) = 625 +/- 55 and N_ch(-4.7 <= eta <= 4.7) = 4630 +/- 370, i.e. 14% and 21% increases, respectively, relative to Sqrt{s_NN}=130 GeV collisions. Charged-particle production per pair of participant nucleons is found to increase from peripheral to central collisions around mid-rapidity. These results constrain current models of particle production at the highest RHIC energy.
$\mathrm{d}N/\mathrm{d}\eta$ versus $\eta$ for $x^{\pm}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$ for $0-5$% central, $5-10$% central, $20-30$% central, $40-50$% central
$\left.\frac{\mathrm{d}N}{\mathrm{d}\eta}\right\vert_{200\mathrm{Ge\!V}}\,/\,\left.\frac{\mathrm{d}N}{\mathrm{d}\eta}\right\vert_{130\mathrm{Ge\!V}}$ versus $\eta$ for $x^{\pm}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$, per centrality
$\frac{\mathrm{d}N}{\mathrm{d}y}\frac{2}{N_{\mathrm{part}}}$ versus $N_{\mathrm{part}}$ for $x^{\pm}$ in $\mathrm{Au}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$
Charged-particle pseudorapidity densities are presented for the d+Au reaction at sqrt{s_{NN}}=200 GeV with -4.2 <= eta <= 4.2$. The results, from the BRAHMS experiment at RHIC, are shown for minimum-bias events and 0-30%, 30-60%, and 60-80% centrality classes. Models incorporating both soft physics and hard, perturbative QCD-based scattering physics agree well with the experimental results. The data do not support predictions based on strong-coupling, semi-classical QCD. In the deuteron-fragmentation region the central 200 GeV data show behavior similar to full-overlap d+Au results at sqrt{s_{NN}}=19.4 GeV.
$\frac{\mathrm{d}N}{\mathrm{d}\eta}$ versus $\eta$ for $x^{\pm}$ in $\mathrm{d}-\mathrm{Au}$ at $\sqrt{s_{\mathrm{NN}}}=200\,\mathrm{Ge\!V}$ for $0-30$% central, $30-60$% central, $60-80$% central, Min.Bias
We present particle spectra for charged hadrons $\pi^\pm, K^\pm, p$ and $\bar{p}$ from pp collisions at $\sqrt{s}=200$ GeV measured for the first time at forward rapidities (2.95 and 3.3). The kinematics of these measurements are skewed in a way that probes the small momentum fraction in one of the protons and large fractions in the other. Large proton to pion ratios are observed at values of transverse momentum that extend up to 4 GeV/c, where protons have momenta up to 35 GeV. Next-to-leading order perturbative QCD calculations describe the production of pions and kaons well at these rapidities, but fail to account for the large proton yields and small $\bar{p}/p$ ratios.
Invariant cross section for $\pi^{+}$ production in $\mathrm{p}\mathrm{p}$ collisions at $\sqrt{s}=200\,\mathrm{Ge\!V}$ and rapidity 2.95.
Invariant cross section for $\pi^{-}$ production in $\mathrm{p}\mathrm{p}$ collisions at $\sqrt{s}=200\,\mathrm{Ge\!V}$ and rapidity 2.95.
Invariant cross section for $\mathrm{K}^{+}$ production in $\mathrm{p}\mathrm{p}$ collisions at $\sqrt{s}=200\,\mathrm{Ge\!V}$ and rapidity 2.95.
|
Question:
Let $K$ and $L$ be extensions of $F$. Show that $KL$ is Galois over $F$ if both $K$ and $L$ are Galois over $F$.
This question has already been asked here, but the solutions provided there were incomplete.
I have tried to attempt the problem:
Case $1$: Either $K\subset L$ or $L\subset K$. Then $KL$ is trivially Galois. Case $2$: Neither $K\subset L$ nor $L\subset K$. Consider,
$$R: Gal(KL/F)\rightarrow Gal(K/F)\times Gal(L/F)\\ \text{by}\enspace R(\sigma)=(\sigma |_{K},\sigma |_{L})$$
where $E=L\cap K$.
I want to show that the map $R$ is an isomorphism. But I am unable to get started with it.
Can anyone help me, please?
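One standard way to get started (a hint under the setup above, not a full solution): since $K/F$ and $L/F$ are normal, every $\sigma\in Gal(KL/F)$ restricts to automorphisms $\sigma|_K$ and $\sigma|_L$, so $R$ is a well-defined homomorphism. It is injective: if $R(\sigma)=(\mathrm{id}_K,\mathrm{id}_L)$, then $\sigma$ fixes $K$ and $L$ pointwise, and since $KL$ is generated as a field by $K\cup L$, $\sigma$ fixes all of $KL$, so $\sigma=\mathrm{id}$. Beware that $R$ is surjective (hence an isomorphism) only when $E=K\cap L=F$; in general its image is $\{(\sigma,\tau):\sigma|_E=\tau|_E\}$. For the Galois property itself it is often easier to argue directly: $KL/F$ is separable because it is generated by separable elements, and normal because a compositum of normal extensions is normal.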
|
This is not really an answer to your question, essentially because there isn't (currently) a question in your post, but it is too long for a comment.
Your statement that
A co-ordinate transformation is linear map from a vector to itself with a change of basis.
is muddled and ultimately incorrect. Take some vector space $V$ and two bases $\beta$ and $\gamma$ for $V$. Each of these bases can be used to establish a representation map $r_\beta:\mathbb R^n\to V$, given by$$r_\beta(v)=\sum_{j=1}^nv_j e_j$$if $v=(v_1,\ldots,v_n)$ and $\beta=\{e_1,\ldots,e_n\}$. The coordinate transformation is
not a linear map from $V$ to itself. Instead, it is the map$$r_\gamma^{-1}\circ r_\beta:\mathbb R^n\to\mathbb R^n,\tag 1$$and takes coordinates to coordinates.
Now, to go to the heart of your confusion, it should be stressed that
covectors are not members of $V$; as such, the representation maps do not apply to them directly in any way. Instead, they belong to the dual space $V^\ast$, which I'm hoping you're familiar with. (In general, I would strongly discourage you from reading texts that pretend to lay down the law on the distinction between vectors and covectors without talking at length about the dual space.)
The dual space is the vector space of all linear functionals from $V$ into its scalar field:$$V^\ast=\{\varphi:V\to\mathbb R:\varphi\text{ is linear}\}.$$This has the same dimension as $V$, and any basis $\beta$ has a unique dual basis $\beta^*=\{\varphi_1,\ldots,\varphi_n\}$ characterized by $\varphi_i(e_j)=\delta_{ij}$. Since it is a different basis to $\beta$, it is not surprising that the corresponding representation map is different.
To lift the representation map to the dual vector space, one needs the notion of the adjoint of a linear map. As it happens, there is in general no way to lift a linear map $L:V\to W$ to a map from $V^*$ to $W^*$; instead, one needs to reverse the arrow. Given such a map, a functional $f\in W^*$ and a vector $v\in V$, there is only one combination which makes sense, which is $f(L(v))$. The mapping $$v\mapsto f(L(v))$$ is a linear mapping from $V$ into $\mathbb R$, and it's therefore in $V^*$. It is denoted by $L^*(f)$, and defines the action of the adjoint $$L^*:W^*\to V^*.$$
If you apply this to the representation maps on $V$, you get the adjoints $r_\beta^*:V^*\to\mathbb R^{n,*}$, where the latter is canonically equivalent to $\mathbb R^n$ because it has a canonical basis. The inverse of this map, $(r_\beta^*)^{-1}$, is the representation map $r_{\beta^*}:\mathbb R^n\cong\mathbb R^{n,*}\to V^*$. This is the origin of the 'inverse transpose' rule for transforming covectors.
To get the transformation rule for covectors between two bases, you need to string two of these together:$$\left((r_\gamma^*)^{-1}\right)^{-1}\circ(r_\beta^*)^{-1}=r_\gamma^*\circ (r_\beta^*)^{-1}:\mathbb R^n\to \mathbb R^n,$$which is very different to the one for vectors, (1).
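To see the two rules acting side by side, here is a small numerical sketch (my own illustration in Python/numpy, not part of the original answer; `M` stands for the coordinate change $r_\gamma^{-1}\circ r_\beta$ of (1)):

```python
import numpy as np

# Sketch: coordinates of vectors and covectors transform differently, but the
# pairing <phi, v> = phi(v) is basis-independent.

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))          # assume invertible (true almost surely)

v_beta = rng.normal(size=3)          # vector coordinates in basis beta
f_beta = rng.normal(size=3)          # covector components in the dual basis beta*

v_gamma = M @ v_beta                         # vectors: transform by M
f_gamma = np.linalg.inv(M).T @ f_beta        # covectors: inverse transpose

print(np.isclose(f_beta @ v_beta, f_gamma @ v_gamma))   # True: pairing preserved
```

The pairing is preserved precisely because the covector components pick up the inverse transpose of the matrix acting on the vector coordinates.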
Still think that vectors and covectors are the same thing?
Addendum
Let me, finally, address another misconception in your question:
An inner product is between elements of the same vector space and not between two vector spaces, it is not how it is defined.
Inner products are indeed defined by taking both inputs from the same vector space. Nevertheless, it is still perfectly possible to define a bilinear form $\langle \cdot,\cdot\rangle:V^*\times V\to\mathbb R$ which takes one covector and one vector to give a scalar; it is simple the action of the former on the latter:$$\langle\varphi,v\rangle=\varphi(v).$$This bilinear form is always guaranteed and presupposes strictly
less structure than an inner product. This is the 'inner product' which reads $\varphi_j v^j$ in Einstein notation.
Of course, this does relate to the inner product structure $ \langle \cdot,\cdot\rangle_\text{I.P.}$ on $V$ when there is one. Having such a structure enables one to identify vectors and covectors in a canonical way: given a vector $v$ in $V$, its corresponding covector is the linear functional$$\begin{align}i(v)=\langle v,\cdot\rangle_\text{I.P.} : V&\longrightarrow\mathbb R \\w&\mapsto \langle v,w\rangle_\text{I.P.}.\end{align}$$By construction, both bilinear forms are canonically related, so that the 'inner product' $\langle\cdot,\cdot\rangle$ between $v\in V^*$ and $w\in V$ is exactly the same as the inner product $\langle\cdot,\cdot\rangle_\text{I.P.}$ between $i(v)\in V$ and $w\in V$. That use of language is perfectly justified.
Addendum 2, on your question about the gradient.
I should really try and convince you at this point that the transformation laws are in fact enough to show something is a covector. (The way the argument goes is that one can define a linear functional on $V$ via the form in $\mathbb R^{n*}$ given by the components, and the transformation laws ensure that this form in $V^*$ is independent of the basis; alternatively, given the components $f_\beta,f_\gamma\in\mathbb R^n$ with respect to two basis, the representation maps give the forms $r_{\beta^*}(f_\beta)=r_{\gamma^*}(f_\gamma)\in V^*$, and the two are equal because of the transformation laws.)
However, there is indeed a deeper reason for the fact that the gradient is a covector. Essentially, it is to do with the fact that the equation$$df=\nabla f\cdot dx$$does not actually need a dot product; instead, it relies on the simpler structure of the dual-primal bilinear form $\langle \cdot,\cdot\rangle$.
To make this precise, consider an arbitrary function $T:\mathbb R^n\to\mathbb R^m$. The derivative of $T$ at $x_0$ is defined to be the (unique) linear map $dT_{x_0}:\mathbb R^n\to\mathbb R^m$ such that$$T(x)=T(x_0)+dT_{x_0}(x-x_0)+O(|x-x_0|^2),$$if it exists. The gradient is exactly this map; it was
born as a linear functional, whose coordinates over any basis are $\frac{\partial f}{\partial x_j}$ to ensure that the multi-dimensional chain rule,$$df=\sum_j \frac{\partial f}{\partial x_j}d x_j,$$is satisfied. To make things easier to understand to undergraduates who are fresh out of 1D calculus, this linear map is most often 'dressed up' as the corresponding vector, which is uniquely obtainable through the Euclidean structure, and whose action must therefore go back through that Euclidean structure to get to the original $df$.
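A quick numerical check of this (again my own sketch, with an assumed test function): for a linear change of variables $x=My$, the chain rule gives $\nabla_y(f\circ M)=M^T\,\nabla_x f$, i.e. gradient components transform with $M^T$ rather than with $M^{-1}$:

```python
import numpy as np

def f(x):
    return np.sin(x @ np.array([1.0, 2.0, -1.0]))

def grad_f(x):                       # analytic gradient of f
    a = np.array([1.0, 2.0, -1.0])
    return np.cos(x @ a) * a

M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

y = np.array([0.3, -0.7, 0.2])
x = M @ y

# numerical gradient of g(y) = f(M y) by central differences
eps = 1e-6
num = np.array([(f(M @ (y + eps * e)) - f(M @ (y - eps * e))) / (2 * eps)
                for e in np.eye(3)])

print(np.allclose(num, M.T @ grad_f(x), atol=1e-6))   # True: grad transforms with M^T
```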
Addendum 3.
OK, it is now sort of clear what the main question is (unless that changes again), though it is still not particularly clear in the question text. The thing that needs addressing is stated in the OP's answer in this thread:
the dual vector space is itself a vector space and the fact that it needs to be cast off as a row matrix is based on how we calculate linear maps and not on what linear maps actually are. If I had defined matrix multiplication differently, this wouldn't have happened.
I will also, address, then this question:
given that the dual (/cotangent) space is also a vector space, what forces us to consider it 'distinct' enough from the primal that we display it as row vectors instead of columns, and say its transformation laws are different?
The main reason for this is well addressed by Christoph in his answer, but I'll expand on it. The notion that something is co- or contra-variant is not well defined 'in vacuum'. Literally, the terms mean "varies with" and "varies against", and they are meaningless unless one says
what the object in question varies with or against.
In the case of linear algebra, one starts with a given vector space, $V$. The unstated reference is always, by convention, the basis of $V$: covariant objects transform exactly like the basis, and contravariant objects use the transpose-inverse of the basis transformation's coefficient matrix.
One can, of course, turn the tables, and change one's focus to the dual, $W=V^*$, in which case the primal $V$ now becomes the dual, $W^*=V^{**}\cong V$. In this case, quantities that used to transform with the primal basis now transform against the dual basis, and vice versa. This is exactly why we call it the dual: there exists a full duality between the two spaces.
However, as is the case anywhere in mathematics where two fully dual spaces are considered (example, example, example, example, example), one needs to break this symmetry to get anywhere. There are two classes of objects which behave differently, and a transformation that swaps the two. This has two distinct, related advantages:
Anything one proves for one set of objects has a dual fact which is automatically proved. Therefore, one need only ever prove one version of the statement.
When considering vector transformation laws, one always has (or can have, or should have), in the back of one's mind, the fact that one can rephrase the language in terms of the duality-transformed objects. However, since the
content of the statements is not altered by the transformation, it is not typically useful to perform the transformation: one needs to state some version, and there's not really any point in stating both. Thus, one (arbitrarily, -ish) breaks the symmetry, rolls with that version, and is aware that a dual version of all the development is also possible.
However, this dual version is
not the same. Covectors can indeed be expressed as row vectors with respect to some basis of covectors, and the coefficients of vectors in $V$ would then vary with the new basis instead of against, but then for each actual implementation, the matrices you would use would of course be duality-transformed. You would have changed the language but not the content.
Finally, it's important to note that even though the dual objects are equivalent, it does not mean they are the same. This why we call them dual, instead of simply saying that they're the same! As regards vector spaces, then, one still has to prove that $V$ and $V^*$ are not only dually-related, but also different. This is made precise in the statement that
there is no natural isomorphism between a vector space and its dual, which is phrased, and proved in, the language of category theory. The notion of 'natural' isomorphism is tricky, but it would imply the following:
For each vector space $V$, you would have an isomorphism $\sigma_V:V\to V^*$. You would want this isomorphism to play nicely with the duality structure, and in particular with the duals of linear transformations, i.e. their adjoints. That means that for any vector spaces $V,W\in\mathrm{Vect}$ and any linear transformation $T:V\to W$, you would want the usual naturality square to commute. That is, you would want $T^* \circ \sigma_W \circ T$ to equal $\sigma_V$.
This is provably not possible to do consistently. For a simple counter-example, take $V=W$ and $T=c\,\mathrm{id}$ for a real scalar $c$: then $T^*\circ\sigma_V\circ T=c^2\sigma_V$, which differs from $\sigma_V$ unless $c=\pm1$. This is precisely the formal statement of the intuition in garyp's great answer.
In apples-and-pears languages, what this means is that a general vector space $V$ and its dual $V^*$ are not only dual (in the sense that there exists a transformation that switches them and puts them back when applied twice), but they are also different (in the sense that there is no consistent way of identifying them), which is why the duality language is justified.
I've been rambling for quite a bit, and hopefully at least some of it is helpful. In summary, though, what I think you need to take away is the fact that
Just because dual objects are equivalent it doesn't mean they are the same.
This is also, incidentally, a direct answer to the question title: no, it is not foolish. They are equivalent, but they are still different.
|
2012, 1st ed., ISBN 9780312657758, cm.
Book
2. Highly resolved HSQC experiments for the fast and accurate measurement of homonuclear and heteronuclear coupling constants
Journal of Magnetic Resonance, ISSN 1090-7807, 09/2017, Volume 282, pp. 54 - 61
A number of -upscaled NMR experiments are currently available to measure coupling constants along the indirect F1 dimension of a 2D spectrum. A major drawback...
J-resolved | 1,nJCH/JHH-resolved HSQC | 1,nJCH-resolved HSQC | 1JCH/2JHH-resolved HSQC | J-resolved HSQC | 1JCH-resolved HSQC | Proton-proton coupling constants | Proton-carbon coupling constants | E.COSY | nJCH-resolved HSQC | nJCH/JHH-resolved HSQC | resolved HSQC | RESIDUAL DIPOLAR COUPLINGS | (1.n)J(CH)-resolved HSQC | (n)J(CH)/J(HH)-resolved HSQC | SMALL MOLECULES | (1.n)J(CH)/J(HH)-resolved HSQC | (n)J(CH)-resolved HSQC | NMR-SPECTROSCOPY | BIOCHEMICAL RESEARCH METHODS | PHYSICS, ATOMIC, MOLECULAR & CHEMICAL | J(CH)-resolved HSQC | STRUCTURAL DISCRIMINATION | PREDICTION | CROSS-PEAKS | SPECTROSCOPY | H-1-NMR SPECTRA | J(CH)/J(HH)-resolved HSQC | HSQMBC | EFFICIENT | RANGE
Journal Article
3. Measurement of the ratio of the production cross sections times branching fractions of $B_{c}^{\pm} \to J/\psi \pi^{\pm}$ and $B^{\pm} \to J/\psi K^{\pm}$ and $\mathcal{B}(B_{c}^{\pm} \to J/\psi \pi^{\pm}\pi^{\pm}\pi^{\mp})/\mathcal{B}(B_{c}^{\pm} \to J/\psi \pi^{\pm})$ in pp collisions at $\sqrt{s} =$ 7 TeV
ISSN 1126-6708, 2015
B/c+ --> J/psi 2pi+ pi | experimental results | rapidity | B: transverse momentum | CMS | B+ --> J/psi K | B: decay modes | kinematics | CERN LHC Coll | cross section: branching ratio: ratio | B: hadronic decay | B/c+ --> J/psi pi | p p: colliding beams | p p: scattering | 7000 GeV-cms | B/c: hadronic decay | B/c: branching ratio: measured | B: branching ratio
Journal Article
4. Cyclic Peptide Design Guided by Residual Dipolar Couplings, J-Couplings and Intramolecular Hydrogen Bond Analysis
The Journal of organic chemistry, ISSN 0022-3263, 01/2019, Volume 84, Issue 8, pp. 4803 - 4813
Cyclic peptides have long tantalized drug designers with their potential ability to combine the best attributes of antibodies and small molecules. An ideal...
COMPLEX | 3D STRUCTURE ELUCIDATION | CYCLOSPORINE-A | PERMEABILITY | RDCS | CHEMISTRY, ORGANIC | PROTON | CONFORMATION | CHLOROFORM | CYCLOPHILIN | DISCOVERY
Journal Article
MICROPOROUS AND MESOPOROUS MATERIALS, ISSN 1387-1811, 07/2019, Volume 282, pp. A1 - A2
Journal Article
6. Apolipoprotein J Mimetic Peptide D-[113–122]Apoj Retard Atherosclerosis In Ldlr-Ko Mice Under Atherogenic Diet By Improving Hdl Function And Decreasing Ldl Aggregability
Atherosclerosis, ISSN 0021-9150, 08/2019, Volume 287, pp. e200 - e201
Journal Article
Microporous and Mesoporous Materials, ISSN 1387-1811, 07/2019, Volume 282, pp. A1 - A2
Journal Article
Mediterranean Journal of Mathematics, ISSN 1660-5446, 10/2018, Volume 15, Issue 5
Journal Article
Physics Letters B, ISSN 0370-2693, 07/2013, Volume 727, Issue 4-5, pp. 381 - 402
Phys. Lett. B 727 (2013) 381 The polarizations of prompt J/psi and psi(2S) mesons are measured in proton-proton collisions at sqrt(s) = 7 TeV, using a dimuon...
Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | Particle Physics - Experiment | CMS ; Physics ; Quarkonium production ; Quarkonium polarization | Quarkonium production | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | Nuclear and High Energy Physics | CMS; Physics; Quarkonium production; Quarkonium polarization | CMS; Physics; Quarkonium polarization; Quarkonium production; Nuclear and High Energy Physics | Quarkonium polarization | Física
Journal Article
10. Cover Picture: Synthesis of (E)‐4‐Bromo‐3‐methoxybut‐3‐en‐2‐one, the Key Fragment in the Polyhydroxylated Chain Common to Oscillariolide and Phormidolides A–C (Chem. Eur. J. 21/2016)
Chemistry – A European Journal, ISSN 0947-6539, 05/2016, Volume 22, Issue 21, pp. 6989 - 6989
Journal Article
Nature, ISSN 0028-0836, 2015, Volume 526, Issue 7571, pp. 68 - 74
The 1000 Genomes Project set out to provide a comprehensive description of common human genetic variation by applying whole-genome sequencing to a diverse set...
BAYES FACTORS | INDIVIDUALS | COMPLEMENT FACTOR-H | POPULATION HISTORY | MULTIDISCIPLINARY SCIENCES | MUTATION | DISEASE | SUSCEPTIBILITY | MACULAR DEGENERATION | VARIANT | GENOME-WIDE ASSOCIATION | Rare Diseases - genetics | Genome-Wide Association Study | Datasets as Topic | Demography | Disease Susceptibility | Physical Chromosome Mapping | Humans | Genetics, Population - standards | Genomics - standards | Genotype | INDEL Mutation - genetics | Sequence Analysis, DNA | Genome, Human - genetics | Haplotypes - genetics | Exome - genetics | Reference Standards | Genetics, Medical | Internationality | Polymorphism, Single Nucleotide - genetics | Genetic Variation - genetics | High-Throughput Nucleotide Sequencing | Quantitative Trait Loci - genetics | Genetic research | Nucleotide sequencing | Observations | Genetic variation | Methods | DNA sequencing | Studies | Haplotypes | Genotype & phenotype | Accuracy | Genealogy | Population | Genomes | Genetic diversity | Mitochondrial DNA | Binding sites
Journal Article
12. Erratum to “Real-time tidal volume feedback guides optimal ventilation during simulated CPR” [Am J Emerg Med 35(2) (2017) 292–29]
American Journal of Emergency Medicine, ISSN 0735-6757, 2017, Volume 35, Issue 6, pp. 933 - 933
Journal Article
Science, ISSN 0036-8075, 08/2018, Volume 361, Issue 6403, pp. 661 - 661
An annotated reference sequence representing the hexaploid bread wheat genome in 21 pseudomolecules has been analyzed to identify the distribution and genomic...
DRAFT GENOME | MULTIDISCIPLINARY SCIENCES | ADAPTATION | GENES | TISSUES | LOCUS | KEY | REVEALS | RICE | Multigene Family | Atlases as Topic | Breeding | Molecular Sequence Annotation | Triticum - anatomy & histology | Transcriptome | Triticum - growth & development | Genome, Plant | Phylogeny | Bread | Triticum - genetics | Reference Standards | Gene Expression Regulation, Developmental | Gene Expression Regulation, Plant | Quantitative Trait Loci | Triticum - classification | Usage | Plant breeding | Growth | Genetically modified crops | Genetic aspects | Nucleotide sequencing | Research | Wheat | DNA sequencing | Networks | Agricultural research | Communities | Genomics | Innovations | Editing | Genomes | Biology | Regulatory sequences | Gene sequencing | Consortia | Human populations | Annotations | Developmental stages | Quality | Agronomy | Agricultural production | Assembling | Flowering | Bioinformatics | Chromosomes | Crop yield | Assembly | Deoxyribonucleic acid--DNA | Adaptation | CRISPR | Population growth | Nucleotide sequence | Food sources | Gene families | Crops | Plant protection | Global economy | Gene expression | Loci | Calories | Quantitative trait loci | Insects | Gene mapping | Life Sciences | Biotechnology
Journal Article
|
Model of a crystal and deriving the expected diffraction image
In single-crystal X-ray crystallography, the diffraction image is due to the scattering of X-rays by the electrons in a crystalline sample. A convenient way of describing the electron density is to first specify crystal symmetry and mean atomic positions (coordinates), and then to describe in more or less detail how atoms deviate from these mean positions (modeled by atomic displacement factors). Combining this with a spherical description of the electrons "belonging" to each atom (modeled by atomic form factors), you arrive at an electron density.
This description is useful because in the model, the electron density is the convolution of the crystal lattice points with the atomic coordinates in the unit cell, with the distribution of atoms around their mean positions, and with the distribution of electrons around the atoms. The convolution theorem states that the Fourier transform of a convolution is the product of the Fourier transforms. So in this model, you can separately Fourier-transform the different levels of the model and then multiply them to arrive at the expected diffraction image (via the structure factors).
Atomic displacement and Debye-Waller factor
The atomic displacement around a mean position may be described through a probability density function $p_k(\mathbf{u})$, where $\mathbf{u}$ is the position (3D-vector) with the mean position as origin. The Fourier transform of this function defines the Debye-Waller factor (source: equation 1.4.8):
$$T_k(\mathbf{h}) = \int p_k(\mathbf{u}) e^{2 \pi i \mathbf{h} \cdot \mathbf{u}}d^3 \mathbf{u}\tag{3}$$
Here, $\mathbf{h}$ represents coordinates in reciprocal space (just as Fourier transform of a time series gives you information in frequency space, the Fourier transform connects real space - coordinate $\mathbf{u}$ - with reciprocal space - coordinate $\mathbf{h}$).
If the displacement is modeled by a Gaussian, the integral can be determined as (source: equation 1.4.10)
$$T_k(\mathbf{h}) = e^{-2 \pi^2 \langle(\mathbf{h} \cdot \mathbf{u})^2\rangle}\tag{4}$$
This reflects that the Fourier transform of a Gaussian is a Gaussian again, and the width of one Gaussian is the inverse of the other (i.e. if the atoms are widely distributed, the signal will diminish a lot with increasing resolution). Again using the time series as an example, if a sound is played for a very short time, the pitch will be ill-defined (or if an NMR FID signal decays rapidly, the peaks will be broad).
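To put numbers on the isotropic case: for a spherical Gaussian with mean-square displacement $U_{iso}$ one has $\langle(\mathbf{h}\cdot\mathbf{u})^2\rangle = U_{iso}\,|\mathbf{h}|^2$, so equation (4) becomes $T(\mathbf{h}) = e^{-2\pi^2 U_{iso}|\mathbf{h}|^2}$ with $|\mathbf{h}| = 1/d$ at resolution $d$ (equivalently, the familiar $B = 8\pi^2 U_{iso}$). A minimal Python sketch (my own illustration; the $U_{iso}$ value is just a typical small-molecule magnitude):

import numpy as np

# Isotropic Debye-Waller attenuation: T = exp(-2 pi^2 U_iso / d^2),
# with resolution d in Angstrom and U_iso in Angstrom^2.
def debye_waller_isotropic(u_iso, d):
    return np.exp(-2.0 * np.pi**2 * u_iso / d**2)

u_iso = 0.05  # illustrative value, A^2
for d in (4.0, 2.0, 1.0, 0.8):
    print(f"d = {d:3.1f} A: T = {debye_waller_isotropic(u_iso, d):.3f}")

The attenuation grows quickly toward high resolution (small $d$), which is the statement above that widely distributed atoms diminish the signal at increasing resolution.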
Displacement parameters $\mathbf{U}$
The displacement parameters $\mathbf{U}$ or $\mathbf{U^{jl}}$ in the OP's equations (1) and (2) are derived from the $\langle(\mathbf{h} \cdot \mathbf{u})^2\rangle$ expression in equation (4), either direction-independent in the case of an isotropic approximation (spherical distribution around the mean) or separately along the three lattice axes (leading to six anisotropic displacement factors). I am not explicitly showing this - it gets a bit complicated because the derivation requires transformation into the sometimes non-orthogonal coordinate system based on the crystal lattice (see here). In any case, the expression is a mean-square displacement (analogous to the variance of a one-dimensional distribution). For that reason, the dimensions are length squared and the typical units are $Å^2$. As stated in the IUCr nomenclature document:
the elements of the tensor U have dimension (length)$^2$ and can be directly associated with the mean-square displacements of the atom considered in the corresponding directions
and
If the atomic pdf is assumed to be a trivariate Gaussian, the characteristic function corresponding to this pdf - by definition, its Fourier transform - can be described by the second moments of the pdf, which in the present context are called anisotropic mean-square displacements.
Second moments (variance) have dimensions that are the square of the dimension of the quantity described by the probability distribution function (pdf). In this case, we are looking at the distribution of the position (dimensions length), and the second moment will have dimensions of length squared.
|
The divisor function $d(n)$ is the number of pairs $(a,b)\in(\mathbb{N}^+)^2$ such that $a\times b = n$. For example, $d(2)=2$ because $2=1\times 2=2\times 1$, and $d(6)=4$ because $6=1\times 6=2\times 3=3\times 2=6\times 1$.
The divisor summatory function is defined by : $$D(n)=\sum_{i=1}^n d(i)$$
This is sequence A006218 in OEIS.
Does anyone know the best time complexity algorithm to compute this function? Are there any published results on the computational complexity of this function?
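One classical approach (my sketch, not from the thread, and I am not claiming it is optimal): counting pairs $(a,b)$ with $ab\le n$ gives $D(n)=\sum_{i=1}^n\lfloor n/i\rfloor$, and Dirichlet's hyperbola method evaluates this in $O(\sqrt n)$ time; results around $O(n^{1/3})$ also exist in the literature.

from math import isqrt

def divisor_summatory(n):
    # D(n) = sum_{i<=n} floor(n/i); exploiting symmetry about sqrt(n):
    # D(n) = 2 * sum_{i<=sqrt(n)} floor(n/i) - floor(sqrt(n))^2.
    r = isqrt(n)
    return 2 * sum(n // i for i in range(1, r + 1)) - r * r

assert divisor_summatory(10) == 27  # 1+2+2+3+2+4+2+4+3+4
print(divisor_summatory(10**12))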
|
I want to ask MSE to confirm the correctness of my alternate solution, or to point out its mistakes.
I know possible solution: https://math.stackexchange.com/a/2557094/456510
If $x,y,z\in {\mathbb R}$, solve the system of equations:
$$ \left\lbrace\begin{array}{ccccccl} x^4 & + & y^2 & + & 4 & = & 5yz \\[1mm] y^{4} & + & z^{2} & + & 4 & = &5zx \\[1mm] z^{4} & + & x^{2} & + & 4 & = & 5xy \end{array}\right. $$
I wrote a solution myself (after more work).
My attempts / solution:
It is clear that if $(x,y,z)$ is a solution then so is $(-x,-y,-z)$, and that none of $x,y,z$ can be $0$: if, say, $y=0$, the first equation would give $x^4+4=0$, which is impossible.
If the equations have a solution, then it must satisfy $x = y = z$.
Proof:
Since each left-hand side is positive, we have $yz>0$, $zx>0$ and $xy>0$, so $x,y,z$ all share the same sign. By the symmetry above I may assume $x,y,z\in {\mathbb R^+}$.
a-1)
Let $x≥z>y$
We can write :
$z^4>y^4 \\ x^2≥z^2 \\ z^4+x^2+4>y^4+z^2+4 \\ 5xy > 5zx \\ y>z$
so $y>z$, which contradicts the assumption $z>y$.
a-2)
Let $x>z≥y$
We can write:
$z^4≥y^4 \\ x^2>z^2 \\ z^4+x^2+4>y^4+z^2+4 \\ 5xy > 5zx \\ y>z$
so $y>z$, which contradicts the assumption $z≥y$.
b)
$y≥x>z$
We can write:
$x^4>z^4 \\ y^2≥x^2 \\ x^4+y^2+4>z^4+x^2+4 \\ 5yz > 5xy \\ z>x$
But this contradicts the assumption $x>z$.
We get the same contradiction for : $y>x≥z$
c)
$y>z≥x$
We can write:
$y^4>z^4 \\ z^2≥x^2 \\ y^4+z^2+4>z^4+x^2+4 \\ 5zx > 5xy \\ z>y$
But this contradicts the assumption $y>z$.
We get the same contradiction for : $y≥z>x$
d)
$z>x≥y$
We can write:
$z^4>x^4 \\ x^2≥y^2 \\ z^4+x^2+4>x^4+y^2+4 \\ 5xy > 5yz \\ x>z$
But this contradicts the assumption $z>x$.
We get the same contradiction for : $z≥x>y$
e)
$z≥y>x$
We can write:
$y^4>x^4 \\ z^2≥y^2 \\ y^4+z^2+4>x^4+y^2+4 \\ 5zx > 5yz \\ x>y$
But this contradicts the assumption $y>x$.
We get the same contradiction for : $z>y≥x$
f)
$x>y≥z$
We can write:
$x^4>y^4 \\ y^2≥z^2 \\ x^4+y^2+4>y^4+z^2+4 \\ 5yz > 5zx \\ y>x$
But this contradicts the assumption $x>y$.
We get the same contradiction for : $x≥y>z$
Hence, if there is a solution, it must satisfy $x=y=z$.
The proof is complete.
Finally,
$$x^4+x^2+4-5x^2=0 \Rightarrow x^4-4x^2+4=0 \Rightarrow (x^2-2)^2=0 \Rightarrow x=±\sqrt2\Rightarrow x=y=z=±\sqrt2 .$$
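As a quick sanity check (my addition, not part of the proof), the candidate solutions can be substituted back symbolically:

from sympy import sqrt, simplify

for s in (sqrt(2), -sqrt(2)):
    x = y = z = s
    eqs = [x**4 + y**2 + 4 - 5*y*z,
           y**4 + z**2 + 4 - 5*z*x,
           z**4 + x**2 + 4 - 5*x*y]
    assert all(simplify(e) == 0 for e in eqs)  # all three equations hold
print("x = y = z = +/- sqrt(2) satisfies the system")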
Is my proof/ solution correct?
Thanks.
|
I post this answer to check my understanding.
Imagine a particle in one dimension with a known energy and momentum; its wavefunction will be:
$$\Psi(x, t) = e^{i(kx-\omega t)} = e^{i(px-E t)/\hbar}$$
With some calculus and algebra you can derive the momentum operator and get this:
$$-i\hbar \partial_x \Psi = p \Psi$$
Here $-i\hbar \partial_x$ is the momentum operator (I use $\partial_x$ as shorthand for the partial derivative with respect to $x$). The $p$ is the momentum we measured: the eigenvalue of the operator.
Since we prepared the state with a known momentum, the measurement of the momentum doesn't have any effect on the state.
Now imagine a state that is a superposition of 3 possible momenta, so it is a sum of three states, one for each momentum:
$$\Psi = \Psi_1 + \Psi_2 + \Psi_3$$
The superposition principle allows this. Applying the momentum operator on them, you'll get this:
$$-i\hbar \partial_x \left( \Psi_1 + \Psi_2 + \Psi_3 \right) = p_1\Psi_1 + p_2\Psi_2 + p_3\Psi_3 $$
That means our state has 3 different momenta at the same time, but a measurement must give one of the 3 possible eigenvalues. The probability of collapse to a particular state is the squared modulus $|\langle \Psi_i|\Psi\rangle|^2$ of the overlap
$$ \langle \Psi_i| \Psi\rangle = \int_{-\infty}^\infty\Psi_i^*(x,t) \Psi(x,t) dx$$
Where the asterisk means the complex conjugate. And on the bra-side there must be one of the eigenstates of the operator (that is a pure plane wave with known momentum).
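Here is a minimal numerical sketch of that recipe (my own illustration; the box length, grid size and amplitudes are arbitrary choices). On a periodic box $[0,L)$ the plane waves are normalizable momentum eigenstates, and the collapse probabilities come out proportional to $|c_i|^2$:

import numpy as np

L, N = 10.0, 4096
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N

def eigenstate(n):
    # normalized plane wave, momentum p_n = 2*pi*hbar*n/L
    return np.exp(2j * np.pi * n * x / L) / np.sqrt(L)

c = np.array([0.5, 1.0, 0.25])               # arbitrary amplitudes
modes = [eigenstate(n) for n in (1, 2, 3)]
psi = sum(ci * m for ci, m in zip(c, modes))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # renormalize <psi|psi> = 1

for n, m in zip((1, 2, 3), modes):
    amp = np.sum(np.conj(m) * psi) * dx      # the overlap <psi_n|psi>
    print(f"P(p_{n}) = {abs(amp)**2:.4f}")   # proportional to |c_n|^2; sums to 1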
So to answer your question (partially):
After the measurement, the Copenhagen interpretation says the state immediately changes to one of the eigenstates. The many-worlds interpretation says there is no such collapse; instead all the eigenstates coexist simultaneously in parallel worlds. If nature has chosen $p_1$ as the measurement result, you know that the state is now $\Psi_1$, which is then renormalized to ensure $\langle \Psi_1 | \Psi_1 \rangle = 1$. This renormalization is just a technical step for convenience, since the Schrödinger equation doesn't care if you multiply the wavefunction by an arbitrary constant. You can see states as infinite-dimensional vectors (finite-dimensional vectors are a good analogy), and only the directions of these vectors matter, not their length.
An operator doesn't change the direction of its own eigenstates; it only rescales them by the eigenvalue.
|
Electronic Communications in Probability Electron. Commun. Probab. Volume 20 (2015), paper no. 53, 11 pp. Gluing lemmas and Skorohod representations Abstract
Let $(\mathcal{X},\mathcal{E})$, $(\mathcal{Y},\mathcal{F})$ and $(\mathcal{Z},\mathcal{G})$ be measurable spaces. Suppose we are given two probability measures $\gamma$ and $\tau$, with $\gamma$ defined on $(\mathcal{X}\times\mathcal{Y},\mathcal{E}\otimes\mathcal{F})$ and $\tau$ on $(\mathcal{X}\times\mathcal{Z},\mathcal{E}\otimes\mathcal{G})$. Conditions for the existence of random variables $X,Y,Z$, defined on the same probability space $(\Omega,\mathcal{A},P)$ and satisfying
$$(X,Y)\sim\gamma\,\text{ and }\,(X,Z)\sim\tau,$$
are given. The probability $P$ may be finitely additive or $\sigma$-additive. As an application, a version of Skorohod representation theorem is proved. Such a version does not require separability of the limit probability law, and answers (in a finitely additive setting) a question raised in preceding works.
Article information Source Electron. Commun. Probab., Volume 20 (2015), paper no. 53, 11 pp. Dates Accepted: 21 July 2015 First available in Project Euclid: 7 June 2016 Permanent link to this document https://projecteuclid.org/euclid.ecp/1465320980 Digital Object Identifier doi:10.1214/ECP.v20-3870 Mathematical Reviews number (MathSciNet) MR3374303 Zentralblatt MATH identifier 1330.60010 Rights This work is licensed under a Creative Commons Attribution 3.0 License. Citation
Berti, Patrizia; Pratelli, Luca; Rigo, Pietro. Gluing lemmas and Skorohod representations. Electron. Commun. Probab. 20 (2015), paper no. 53, 11 pp. doi:10.1214/ECP.v20-3870. https://projecteuclid.org/euclid.ecp/1465320980
|
[FFmpeg-devel] [PATCH] avfilter/vsrc_mandelbrot: avoid sqrt for epsilon calculation Ganesh Ajjanagadde gajjanag at mit.edu Tue Nov 24 15:23:04 CET 2015 On Mon, Nov 23, 2015 at 9:46 PM, Michael Niedermayer <michaelni at gmx.at> wrote:
> On Mon, Nov 23, 2015 at 05:19:52PM -0500, Ganesh Ajjanagadde wrote:
>> This rewrites into the mathematically equivalent expression avoiding sqrt,
>> and results in a very minor speedup.
>>
>> Tested on x86-64, Haswell, GNU/Linux.
>> Command:
>> ffmpeg -v error -f lavfi -i mandelbrot -f null -
>>
>> old (draw_mandelbrot):
>> 3982389425 decicycles in draw_mandelbrot, 256 runs, 0 skips
>> 7634221782 decicycles in draw_mandelbrot, 512 runs, 0 skips
>> 20576449397 decicycles in draw_mandelbrot, 1024 runs, 0 skips
>> 12949998655 decicycles in draw_mandelbrot, 2048 runs, 0 skips
>>
>> new (draw_mandelbrot):
>> 3966406060 decicycles in draw_mandelbrot, 256 runs, 0 skips
>> 7553322112 decicycles in draw_mandelbrot, 512 runs, 0 skips
>> 20454169970 decicycles in draw_mandelbrot, 1024 runs, 0 skips
>> 12822228615 decicycles in draw_mandelbrot, 2048 runs, 0 skips
>>
>> Signed-off-by: Ganesh Ajjanagadde <gajjanagadde at gmail.com>
>> ---
>> libavfilter/vsrc_mandelbrot.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/libavfilter/vsrc_mandelbrot.c b/libavfilter/vsrc_mandelbrot.c
>> index 950c5c8..20da8ae 100644
>> --- a/libavfilter/vsrc_mandelbrot.c
>> +++ b/libavfilter/vsrc_mandelbrot.c
>> @@ -291,7 +291,7 @@ static void draw_mandelbrot(AVFilterContext *ctx, uint32_t *color, int linesize,
>>
>> use_zyklus= (x==0 || s->inner!=BLACK ||color[x-1 + y*linesize] == 0xFF000000);
>> if(use_zyklus)
>> - epsilon= scale*1*sqrt(SQR(x-s->w/2) + SQR(y-s->h/2))/s->w;
>> + epsilon= SQR(scale/s->w)*(SQR(x-s->w/2) + SQR(y-s->h/2));
>
> if the sqrt is a speed problem, (i had originally thougt it would
> not be)
> you can probably replace this by an approximation like:
> epsilon= scale*CONSTANT*(FFABS(x-s->w/2) + FFABS(y-s->h/2))/s->w;
This is mathematically justifiable in the sense that \forall x, y \ge
0, \sqrt{\frac{1}{2}}(x+y) \le \sqrt{x^2 + y^2} \le 1(x+y), i.e
replacing by a constant is ok as the constant is guaranteed to lie
within bounds not too far apart (~[0.7, 1]). Furthermore, anyway the
"10*epsilon*epsilon" is a heuristic, and so the 10 can absorb any
minor change here.
Will change, and replace by the constant 1. Note that this will avoid
the cost of sqrt as well as squaring, so hopefully it is more readily
measurable.
>
> [...]
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> Why not whip the teacher when the pupil misbehaves? -- Diogenes of Sinope
>
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
More information about the ffmpeg-devel mailing list
|
There are at least three interesting features of the problem as currently stated. First of all, it specifies that $x, y \ge 0$. Secondly, apart from $x,y \ge 0$, it specifies neither the domain nor the codomain. And finally, it does not ask for continuity.
We will have to narrow down the question in order to answer it, but we will narrow it down a tiny bit less than in the very thorough answer by Arturo Magidin.
Let's decide that the domain consists of the non-negative reals. In order for the answer $x^p$ to make sense, we cannot have the codomain be $(0,\infty)$, we must at least insist on $[0,\infty)$. (After all, we are told that the functional equation holds for $x,y \ge 0$.)
Thus the functional equation admits the solution $f$ identically equal to $0$.
Assume that the codomain of $f$ is the reals. By putting $x=y=\sqrt{t}$, we can see that $f(t)=(f(\sqrt{t}))^2$, so $f(t)$ is always $\ge 0$.
Suppose that there is a non-zero $a$ such that $f(a)=0$. Then from the functional equation, we find that $f(x)=f(a)f(x/a)=0$, and therefore $f$ is identically $0$.
Thus from now on we can confine attention to functions $f$ such that $f(x)>0$ when $x>0$. It follows that for $x>0$, we can take logarithms freely and proceed along the lines suggested in the hint.
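To make that explicit (a standard reduction; the name $g$ is mine): for $x>0$ put $g(u)=\ln f(e^u)$. Then $$g(u+v)=\ln f(e^u e^v)=\ln\left(f(e^u)\,f(e^v)\right)=g(u)+g(v),$$ which is Cauchy's functional equation. Continuity of $f$ on $(0,\infty)$ makes $g$ continuous, so $g(u)=pu$ with $p=\ln(f(e))$, and hence $f(x)=x^p$ for $x>0$.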
When looking at functional equations, particularly in a contest setting, it is useful to identify $f(0)$. From the fact that $f(0)=(f(0))^2$, we see that $f(0)=0$ or $f(0)=1$. But if $f(0)=1$, then from $f(0 \cdot x)=f(0)f(x)$ we conclude that $f(x)=1$. Thus apart from the case $f$ identically equal to $1$, which is presumably covered by the case $p=0$ of the answer, we can take $f(0)=0$.
Note however that if we forget about continuity, the answer $f(0)=0$, $f(x)=1$ if $x \ne 0$ is perfectly fine. And forgetting about continuity, at least at $0$, has certain advantages. For instance, in the solution by Arturo Magidin, the question of whether $\ln(f(e))$ can be negative is not directly addressed. The answer is that if we specify continuity for $x>0$, but not necessarily at $0$, then $\ln(f(e))$ can be negative.
The conclusion is that the full list of functions that are continuous for $x>0$, but not necessarily at $0$, and that satisfy the functional equation, consists of the following functions.
$1$. The identically $0$ function and the identically $1$ function.
$2$. The function which is $0$ at $0$ and $1$ elsewhere.
$3$. The functions $f(x)=x^p$, where $p$ is positive.
$4$. The functions $f(0)=0$, $f(x)=x^p$ for $x>0$, where $p$ is negative.
Comment: The only Lebesgue measurable functions that satisfy the Cauchy functional equation are the obvious ones, so the above list gives a list of all measurable functions that satisfy the functional equation of the problem. And if, for example, we are willing to jettison the Axiom of Choice in favour of the Axiom of Determinacy, then all functions from $\mathbb{R}$ to $\mathbb{R}$ are Lebesgue measurable, and then the above is a complete list.
|
While reading about polylogarithms, I came across the nice polylogarithm ladder,
$$6\operatorname{Li}_2(x^{-1})-3\operatorname{Li}_2(x^{-2})-4\operatorname{Li}_2(x^{-3})+\operatorname{Li}_2(x^{-6}) = \frac{7\pi^2}{30}\tag{1}$$
where $x = \phi = \frac{1+\sqrt{5}}{2}$, the golden ratio, or the root $1<x<2$ of,
$$x^n(2-x) = 1\tag{2}$$
for the case $n=2$. I wondered if there was anything for $n=3$ (the tribonacci constant $T$, or the real root of $x^3-x^2-x-1 = 0$). After a little experimentation with Mathematica's integer relations command, I found,
$$4\operatorname{Li}_2(x^{-1})-4\operatorname{Li}_2(x^{-3})-3\operatorname{Li}_2(x^{-4})+\operatorname{Li}_2(x^{-8}) = \frac{\pi^2}{6}\tag{3}$$
However, for general $n$ (including the tetranacci constant and higher), it seems they obey,
$$4\operatorname{Li}_2(x^{-1})-\operatorname{Li}_2(x^{-2})+\operatorname{Li}_2(x^{-n+1})-2\operatorname{Li}_2(x^{-n}) = 2\operatorname{Li}_2(-x)+\frac{\pi^2}{2}\tag{4}$$
where $x$ is the root $1<x<2$ of $(2)$.
Q: Anybody knows how to prove if $(4)$ is indeed true for all integer $n\geq2$?
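Not a proof, but $(4)$ is easy to test to high precision with mpmath (my sketch; the starting guess $2-2^{-n}$ is already close to the root of $(2)$):

from mpmath import mp, mpf, pi, polylog, findroot

mp.dps = 30
for n in range(2, 8):
    # root 1 < x < 2 of x^n (2 - x) = 1
    x = findroot(lambda t: t**n * (2 - t) - 1, 2 - mpf(2)**(-n))
    lhs = (4*polylog(2, 1/x) - polylog(2, x**-2)
           + polylog(2, x**(-(n-1))) - 2*polylog(2, x**-n))
    rhs = 2*polylog(2, -x) + pi**2 / 2
    print(n, lhs - rhs)  # residuals ~ 1e-30 if (4) holds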
|
In Munkres' 'Analysis on Manifolds' on pg. 208 there's a question which reads:
QUESTION: Let $f:\mathbb R^{n+k}\to \mathbb R^n$ be of class $\mathscr C^r$.Let $M$ be the set of all the points $\mathbf x$ such that $f(\mathbf x)=\mathbf 0$ and $N$ be the set of all the points $\mathbf x$ such that $$f_1(\mathbf x)=\cdots=f_{n-1}(\mathbf x)=0\text{ and } f_n(\mathbf x)\geq 0$$Assume $M$ is non-empty. 1) Assume $\text{rank} ~ Df(\mathbf x)=n$ for all $\mathbf x\in M$ and show that $M$ is a $k$-manifold without boundary in $\mathbb R^{n+k}$.
2) Assume that the matrix $\displaystyle\frac{\partial(f_1,\ldots,f_{n-1})}{\partial \mathbf x}$ has rank $n-1$ for all $\mathbf x\in N$ and show that $N$ is a $(k+1)$-manifold with boundary in $\mathbb R^{n+k}$.
I am trying to show $(2)$ and I am not sure if the hypothesis of $(1)$ is required to do that.
I have approached this question using the constant rank theorem which dictates:
Constant Rank Theorem: Let $U$ be open in $\mathbb R^n$ and $\mathbf a$ be any point in $U$. Let $f:U\to \mathbb R^m$ be a function of class $\mathscr C^p$ such that $\text{rank } Df(\mathbf z) =r$ for all $\mathbf z\in U$. Then there exist open sets $U_1,U_2\subseteq U$ and $V\subseteq \mathbb R^m$ such that $\mathbf a\in U_1$ and $f(\mathbf a)\in V$, and $\mathscr C^p$-diffeomorphisms $\phi:U_1\to U_2$ and $\psi:V\to V$ such that $$(\psi\circ f\circ \phi^{-1})(\mathbf z)=(z_1,\ldots,z_r,0,\ldots,0)$$for all $\mathbf z\in U_2$.
My approach to solve $(2)$ shall be clear by my solution of $(1)$:
Let $\mathbf a\in M$. We know that there exists $U$ open in $\mathbb R^{n+k}$ such that $\mathbf a\in U$ and $\text{rank }Df(\mathbf x)=n$ for all $\mathbf x\in U$. By the Constant Rank Theorem there exist open sets $U_1$ and $U_2$ in $\mathbb R^{n+k}$ and $V$ in $\mathbb R^n$ such that $\mathbf a\in U_1\subseteq U$ and $f(\mathbf a)=\mathbf 0\in V$, along with diffeomorphisms $\phi:U_1\to U_2$ and $\psi:V\to V$ satisfying $$(\psi\circ f\circ \phi^{-1})(\mathbf x) =(x_1,\ldots,x_n)$$ for all $\mathbf x\in U_2$. Say $\psi(\mathbf 0)=(t_1,\ldots,t_n)$ and define $S=\{(t_1,\ldots,t_n,z_1,\ldots,z_k):z_i\in \mathbb R\}\cap U_2$.
Claim 1: $\phi^{-1}(S)=M\cap U_1$. Proof: Let $\mathbf q=(t_1,\ldots,t_n,z_1,\ldots,z_k)$ be in $S$. Then $\phi^{-1}(\mathbf q)$ obviously lies in $U_1$. We now show that $\phi^{-1}(\mathbf q)$ lies in $M$. Note that $(\psi\circ f\circ \phi^{-1})(\mathbf q)=(t_1,\ldots,t_n)$. This gives $(f\circ \phi^{-1})(\mathbf q)=\psi^{-1}(t_1,\ldots,t_n)=\mathbf 0$. This means that $f(\phi^{-1}(\mathbf q))=\mathbf 0$ and hence $\phi^{-1}(\mathbf q)$ is in $M$. For the reverse containment assume that $\mathbf q\in M\cap U_1$. Then $\mathbf q=\phi^{-1}(\mathbf s)$ for some $\mathbf s\in U_2$. Also, $f(\mathbf q)=\mathbf 0$ since $\mathbf q\in M$. Thus $(f\circ\phi^{-1})(\mathbf s)=\mathbf 0$. Applying $\psi$ to both sides we get $(\psi\circ f\circ \phi^{-1})(\mathbf s)=\psi(\mathbf 0)$. But the LHS of the last equation is $(s_1,\ldots,s_n)$ and the RHS is $(t_1,\ldots,t_n)$. Thus $s_i=t_i$ for $1\leq i\leq n$. Therefore $\mathbf s\in S$ and $\mathbf q\in\phi^{-1}(S)$. This settles the claim.
Now define $T=\{(z_1,\ldots,z_k)\in\mathbb R^k: (t_1,\ldots,t_n,z_1,\ldots,z_k)\in S\}$.
Claim 2: $T$ is open in $\mathbb R^k$. Proof: Define $g:\mathbb R^k\to \mathbb R^{n+k}$ as $$g(z_1,\ldots, z_k)=(t_1,\ldots,t_n,z_1,\ldots,z_k)$$ Clearly $g$ is injective and continuous. We now show that $g^{-1}(U_2)=T$. Note that $g^{-1}(U_2)=g^{-1}(S)$. Let $\mathbf q\in S$. Say $\mathbf q=(t_1,\ldots,t_n,q_1,\ldots,q_k)$ and it is obvious that $g^{-1}(\mathbf q)\in T$. Now let $g^{-1}(\mathbf q)\in T$ for some $\mathbf q\in \mathbb R^{n+k}$. We need to show that $\mathbf q\in U_2$. Say $g^{-1}(\mathbf q)=(b_1,\ldots,b_k)$. Then $\mathbf q=(t_1,\ldots,t_n,b_1,\ldots,b_k)\in S$ and thus $\mathbf q\in U_2$.
So we have shown that $T=g^{-1}(U_2)$. Now since $g$ is a continuous function and $U_2$ is open in $\mathbb R^{n+k}$, we infer that $T$ is open in $\mathbb R^k$ and the claim is settled. Now define a function $\alpha:T\to M\cap U_1$ as $$\alpha(\mathbf z)=\phi^{-1}\circ g(\mathbf z)$$ It is a trivial matter to verify that $\alpha$ is a coordinate patch about the point $\mathbf a$ in $M$ and the proof is complete.
To solve $(2)$ what I did was define a function $g:\mathbb R^{n+k}\to \mathbb R^{n-1}$ as $$g(\mathbf x)=(f_1(\mathbf x),\ldots,f_{n-1}(\mathbf x))$$ Then $\text{rank }Dg(\mathbf x)=n-1$ for all $\mathbf x\in N$. Let $\mathbf z_0\in N$. I can show that there exists an open set $U\subseteq \mathbb R^{n+k}$ such that $\mathbf z_0\in U$ and $\text{rank }Dg(\mathbf z)=n-1$ for all $\mathbf z\in U$. Thereby, using the constant rank theorem I get $U_1, U_2,\psi$ and $\phi$ such that $(\psi\circ g\circ\phi^{-1})(\mathbf x)=(x_1,\ldots,x_{n-1})$. Can somebody guide me on what to do from here?
|
This is an excellent question. As indicated by the MathOverflow link in the comments, there are many ways to think about torsion and torsion-freeness. At the risk of being repetitive, allow me to summarize some of these, adding my own thoughts.
Throughout, we let $M$ be a smooth manifold, $\nabla$ a connection on $TM$, and $$T^\nabla(X,Y) = \nabla_XY - \nabla_YX - [X,Y]$$its torsion tensor field. We let $X$, $Y$ denote vector fields.
Initial Observations
(1) Parallel coordinates
Torsion (at a point) can be seen as the obstruction to the existence of parallel coordinates (at that point):
Fact: Let $p \in M$. Then $T^\nabla|_p = 0$ if and only if there exists a coordinate system $(x^i)$ centered at $p$ such that $\nabla \partial_i |_p = 0$.
The point here is that
if $T^\nabla = 0$, then any parallel frame is commuting (i.e.: $\nabla E_i = 0$ $\forall i$ $\implies$ $[E_i, E_j] = 0$ $\forall i,j$), hence is a coordinate frame (by the "Flowbox Coordinate Theorem").
(2) Commuting of second partials
The following two facts indicate that torsion can be thought of as the obstruction to (certain types of) second partial derivatives commuting.
For a smooth function $f \colon M \to \mathbb{R}$, recall that its covariant Hessian (or second covariant derivative) is the covariant $2$-tensor field defined by$$\text{Hess}(f) := \nabla \nabla f = \nabla df.$$Explicitly, $\text{Hess}(f)(X,Y) = (\nabla_X df)(Y) = X(Yf) - (\nabla_XY)(f)$.
Fact [Lee]: The following are equivalent:
(i) $T^\nabla = 0$
(ii) The Christoffel symbols of $\nabla$ with respect to any coordinate system are symmetric: $$\Gamma^k_{ij} = \Gamma^k_{ji}$$
(iii) The covariant Hessian of any smooth function $f$ is symmetric: $$\text{Hess}(f)(X,Y) = \text{Hess}(f)(Y,X)$$
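As a concrete check of (ii) (my own sketch, not from [Lee]): for the Levi-Civita connection of a metric the coordinate Christoffel symbols are symmetric in the lower indices, so the coordinate torsion components $T^k_{ij}=\Gamma^k_{ij}-\Gamma^k_{ji}$ all vanish. Verifying this symbolically for the Euclidean plane in polar coordinates:

import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.diag(1, r**2)   # ds^2 = dr^2 + r^2 dtheta^2
ginv = g.inv()

def christoffel(k, i, j):
    # Gamma^k_ij = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})
    return sp.simplify(sum(
        ginv[k, l] * (sp.diff(g[l, j], coords[i])
                      + sp.diff(g[l, i], coords[j])
                      - sp.diff(g[i, j], coords[l])) / 2
        for l in range(2)))

for k in range(2):
    for i in range(2):
        for j in range(2):
            assert sp.simplify(christoffel(k, i, j) - christoffel(k, j, i)) == 0
print("all coordinate torsion components vanish")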
Torsion-freeness also implies another kind of symmetry of second partials:
Symmetry Lemma [Lee]: If $T^\nabla = 0$, then for every smooth family of curves $\Gamma \colon (-\epsilon, \epsilon) \times [a,b] \to M$, we have
$$\frac{D}{ds} \frac{d}{dt} \Gamma(s,t) = \frac{D}{dt} \frac{d}{ds} \Gamma(s,t).$$
I don't know for certain whether the converse to the Symmetry Lemma is true, but I imagine it is.
Some Heuristic Interpretations
(i) "Twisting" of parallel vector fields along geodesics
Suppose we have a connection $\nabla$ on $\mathbb{R}^n$ whose geodesics are lines, but that has torsion. One could then imagine that parallel translating a vector along a line results in the vector "spinning" along the line, as if one were holding each end of a string and rolling it between our fingers.
An explicit example of such a connection is in the MathOverflow answer linked in the comments.
The justification for why this interpretation should be believed in general will be discussed below in (B).
On the MO thread, Igor Belegradek points out two related facts:
Fact [Spivak]:
(1) Two connections $\nabla^1$, $\nabla^2$ on $TM$ are equal if and only if they have the same geodesics and torsion tensors.
(2) For every connection on $TM$, there is a unique torsion-free connection with the same geodesics.
(ii) Closing of geodesic parallelograms (to second order)
Let $v, w \in T_pM$ be tangent vectors. Let $\gamma_v$ and $\gamma_w$ be the geodesics whose initial tangent vectors are $v$, $w$, respectively. Consider parallel translating the vector $w$ along $\gamma_v$, and also the vector $v$ along $\gamma_w$. Then the tips of the resulting two vectors agree to second order if and only if $T^\nabla|_p = 0$.
Heuristic reasons for this (and a picture!) are given in this excellent answer by Sepideh Bakhoda.
A precise proof of this fact is outlined by Robert Bryant at the end of this MO answer of his.
More Reasons We Like $T^\nabla = 0$
(A) Submanifolds of $\mathbb{R}^N$ come with torsion-free connections
Suppose $(M,g)$ is isometrically immersed into $\mathbb{R}^N$.
As hinted in the comments, the euclidean connection $\overline{\nabla}$ on $\mathbb{R}^N$ is torsion-free. It is a fact that $\overline{\nabla}$ decomposes as $\overline{\nabla} = \nabla^\top + \nabla^\perp$, and that the tangential component $\nabla^\top$ defines an induced connection on $M \subset \mathbb{R}^N$. This induced connection on $M$ will then also be torsion-free (and compatible with the induced metric).
Point: If $(M,g) \subset \mathbb{R}^N$ is an isometrically immersed submanifold, then its induced connection is torsion-free.
This example is more general than it seems: by the Nash Embedding Theorem, every Riemannian manifold $(M,g)$ can be isometrically embedded in some $\mathbb{R}^N$.
(B) $T = d^\nabla(\text{Id})$
[I'll add this another time.]
(C) Simplification of identities
Finally, I should mention that $T^\nabla = 0$ greatly simplifies many identities.
First, we have the Ricci Formula$$\nabla^2_{X,Y}Z - \nabla^2_{Y,X}Z = R(X,Y)Z - \nabla_{T^\nabla(X,Y)}Z.$$Thus, in the case where $T^\nabla = 0$, we can interpret the curvature $R(X,Y)$ as the obstruction to commuting second covariant derivatives of vector fields.
In the presence of torsion, the First and Second Bianchi Identities read, respectively,$$\mathfrak{S}(R(X,Y)Z) = \mathfrak{S}[ T(T(X,Y),Z) + (\nabla_XT)(Y,Z)],$$$$\mathfrak{S}[(\nabla_XR)(Y,Z) + R(T(X,Y),Z)] = 0,$$where $\mathfrak{S}$ denotes the cyclic sum over $X,Y,Z$.
References
[Lee] "Riemannian Manifolds: An Introduction to Curvature"
[Spivak] "A Comprehensive Introduction to Differential Geometry: Volume II"
|
Let \(a_1,a_2,\ldots\) be an infinite sequence of positive real numbers such that \(\sum_{n=1}^\infty a_n\) converges. Prove that for every positive constant \(c\), there exists an infinite sequence \(i_1<i_2<i_3<\cdots\) of positive integers such that \(| i_n-cn^3| =O(n^2)\) and \(\sum_{n=1}^\infty \left( a_{i_n} (a_1^{1/3}+a_2^{1/3}+\cdots+a_{i_n}^{1/3})\right)\) converges.
The problem of the week will take a break during the midterm exam period and return on April 26, Friday. Good luck on your midterm exams!
Suppose that \( a_1, a_2, \cdots \) are positive real numbers. Prove that
\[
\sum_{n=1}^{\infty} (a_1 a_2 \cdots a_n)^{1/n} \leq e \sum_{n=1}^{\infty} a_n \,.
\]
The best solution was submitted by 정성진. Congratulations!
Alternative solutions were submitted by 김경석 (+3), 이영민 (+3), 이종원 (+3), 장기정 (+3), 정성진 (+3), 조준영 (+3), 황성호 (+2). Incorrect solutions were submitted by K.S.J., L.S.C. (Some initials here might have been improperly chosen.)
Suppose that \( a_1, a_2, \cdots \) are positive real numbers. Prove that
\[ \sum_{n=1}^{\infty} (a_1 a_2 \cdots a_n)^{1/n} \leq e \sum_{n=1}^{\infty} a_n \,. \]
Let \(n\), \(k\) be positive integers and let \(A_1,A_2,\ldots,A_n\) be \(k\times k\) real matrices. Prove or disprove that \[ \det\left(\sum_{i=1}^n A_i^t A_i\right)\ge 0.\] (Here, \(A^t\) denotes the transpose of the matrix \(A\).)
The best (most elementary) solution was submitted by 김정민. Congratulations!
Alternative solutions were submitted by 조준영 (+3), 채석주 (+3), 이영민 (+3), 심병수 (+3), 박훈민 (+3), 장기정 (+3), 정성진 (+3), 황성호 (+3), 이종원 (+3), 김일희 (+2), 남재현 (+3), 박경호 (+3).
Let \(n\), \(k\) be positive integers and let \(A_1,A_2,\ldots,A_n\) be \(k\times k\) real matrices. Prove or disprove that \[ \det\left(\sum_{i=1}^n A_i^t A_i\right)\ge 0.\] (Here, \(A^t\) denotes the transpose of the matrix \(A\).)
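The statement is true: \(x^t(A^tA)x=\lvert Ax\rvert^2\ge 0\), so each \(A_i^tA_i\) is positive semidefinite, hence so is the sum, and a positive semidefinite matrix has nonnegative determinant. A quick numerical illustration (mine, not a submitted solution):

import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 4
S = sum(A.T @ A for A in rng.standard_normal((n, k, k)))
print("min eigenvalue:", np.linalg.eigvalsh(S).min())  # >= 0 up to rounding
print("det:", np.linalg.det(S))                        # >= 0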
Prove that there exist infinitely many pairs of positive integers \( (m, n) \) satisfying the following properties:
(1) gcd\( (m, n) = 1 \).
(2) \((x+m)^3 = nx\) has three distinct integer solutions.
The best solution was submitted by 이종원. Congratulations!
Alternative solutions were submitted by 김경석 (+3), 김은혜 (+3), 김일희 (+3), 김찬민 (+3), 박훈민 (+3), 안현수 (+3), 어수강 (+3), 윤성철 (+3), 이영민 (+3), 장경석 (+3), 장기정 (+3), 정성진 (+3), 조준영 (+3), 채석주 (+3), 황성호 (+3), 박경호 (+2), 조남경 (+2). An incorrect solutions was submitted by N.J.H. (Some initials here might have been improperly chosen.)
|
Mathematics > Commutative Algebra
Title: On graphs related to co-maximal ideals of a commutative ring
(Submitted on 1 Jun 2011)
Abstract: This paper studies the co-maximal graph $\Omega(R)$, the induced subgraph $\Gamma(R)$ of $\Omega(R)$ whose vertex set is $R\setminus (U(R)\cup J(R))$, and a retract $\Gamma_r(R)$ of $\Gamma(R)$, where $R$ is a commutative ring. We show that the core of $\Gamma(R)$ is a union of triangles and rectangles, while a vertex in $\Gamma(R)$ is either an end vertex or a vertex in the core. For a non-local ring $R$, we prove that both the chromatic number and clique number of $\Gamma(R)$ are identical with the number of maximal ideals of $R$. A graph $\Gamma_r(R)$ is also introduced on the vertex set $\{Rx \mid x\in R\setminus (U(R)\cup J(R))\}$, and graph properties of $\Gamma_r(R)$ are studied. Submission history: From: Tongsuo Wu [v1] Wed, 1 Jun 2011 01:16:12 GMT (39kb)
|
Given the first $n$ primes, we can label the $k$th prime as $p_k$. So, what is the least common multiple (LCM) of $\{p_1 - 1, p_2 - 1, p_3 - 1, \ldots, p_n-1\}$? In other words, if we subtract $1$ from each of the first $n$ primes and wish to find the LCM of these new values, can we find a lower bound for this LCM?
By the most recent bound on Linnik's Theorem, there is an absolute constant $c$ such that for every prime $q < cp_n^{1/5}$, there is a prime $p < p_n$ such that $p \equiv 1 \pmod{q}$. Your least common multiple is therefore divisible by all primes below $cp_n^{1/5}$. The prime number theorem implies that the product of all primes below $cp_n^{1/5}$ is $e^{(c + o(1))p_n^{1/5}}$, and it follows that this is a lower bound on your lcm as well.
Conjecturally there is a prime $p < p_n$ such that $p \equiv 1 \pmod{q}$ for every $q < cp_n^{1-\epsilon}$, and this would provide a lower bound of $e^{(c + o(1))p_n^{1-\epsilon}}$. On the other hand, your lcm is not divisible by any prime larger than $\frac{1}{2}p_n$ so, again using the prime number theorem, a straightforward upper bound is $e^{(\frac{1}{2} + o(1))p_n}$.
I believe there is a recursive formula for this:
Let $L(k) = \operatorname{lcm}(p_1 - 1,\, p_2 - 1,\, p_3 - 1,\, \ldots,\, p_k - 1)$.
Then $L(k+1) = L(k)\,(p_{k+1}-1)/\gcd\bigl(L(k),\, p_{k+1}-1\bigr)$, where $\gcd$ denotes the greatest common divisor.
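The recursion is straightforward to run; a minimal sketch (mine; it uses sympy.prime for the $k$-th prime):

from math import gcd
from sympy import prime

def L(n):
    # L(n) = lcm(p_1 - 1, ..., p_n - 1), built up via
    # L(k+1) = L(k) * (p_{k+1} - 1) / gcd(L(k), p_{k+1} - 1)
    acc = 1
    for k in range(1, n + 1):
        m = prime(k) - 1
        acc = acc * m // gcd(acc, m)
    return acc

print([L(n) for n in range(1, 6)])  # [1, 2, 4, 12, 60]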
|
Let $f:[0;1]\to \mathbb{R}$ be a continuous function satisfying
$$f\left(\frac{x}{2}\right) + f\left(\frac{x+1}{2}\right)=3f(x).$$
How to show that $f\equiv0$?
Hint:
Show that the maximum of $f$ on $[0,1]$ is $\le 0$ and the minimum is $\ge 0$, and conclude that $f$ is the $0$ function.
Update:
Basically the idea is to let $x_0$ be the point where $f$ gets its maximum. Then we have: $$ 3f(x_0) = f(\frac{x_0}{2}) + f(\frac{x_0+1}{2}) \le 2f(x_0) \implies f(x_0) \le 0 \implies f(x) \le 0, x \in [0,1] $$ Similarly, if $f$ gets its minimum at $x_1$ we get $f(x) \ge 0$ for $x \in [0,1]$
Since $f$ is continuous on $[0,1]$, it is bounded, and a little computation with $x=0$ and $x=1$ gives $$f(0)=f(1)=\tfrac12 f(1/2).$$ Let $M=\max_{x\in [0,1]}|f(x)|$. Iterating the relation $f(x)=\frac13\left[f\left(\frac x2\right)+f\left(\frac{x+1}2\right)\right]$ $n$ times gives $$f(x)=\frac{1}{3^n}\sum_{j=0}^{2^n-1} f\left(\frac{x+j}{2^n}\right),$$ so that $$|f(x)|\le \left(\frac23\right)^n M,\ \forall n\in \mathbb{N}.$$ Letting $n\to\infty$ yields $f(x)=0$ for $x\in[0,1]$.
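A numerical illustration of that contraction (my addition; the starting function is arbitrary): iterating $Tf(x)=\frac13\left[f(x/2)+f((x+1)/2)\right]$ on a grid shows the sup norm shrinking by a factor of at most $2/3$ per step, so the only continuous solution is $0$.

import numpy as np

x = np.linspace(0.0, 1.0, 1001)
f = np.sin(2 * np.pi * x) + 0.5   # arbitrary continuous starting function

for n in range(1, 6):
    # evaluate f at x/2 and (x+1)/2 by linear interpolation on the grid
    f = (np.interp(x / 2, x, f) + np.interp((x + 1) / 2, x, f)) / 3.0
    print(n, np.abs(f).max())     # decreases at least geometrically (ratio 2/3)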
|
Equivalence of Definitions of Isolated Point
Theorem
Let $T = \left({S, \tau}\right)$ be a topological space.
Let $H \subseteq S$ be a subset of $S$.
$x \in H$ is an isolated point of $H$ if and only if:
$\exists U \in \tau: U \cap H = \left\{{x}\right\}$
Proof
Definition 1 implies Definition 2
Let $x$ be an isolated point of $H$ by definition 1.
Then by definition:
$\exists U \in \tau: U \cap H = \left\{{x}\right\}$
Thus we have an open set $U$ in $T$ with $x \in U$ which contains no point of $H$ other than $x$.
Thus, by definition, $x$ is not a limit point of $H$.
Thus $x$ is an isolated point of $H$ by definition 2.
$\Box$
Definition 2 implies Definition 1
Let $x$ be an isolated point of $H$ by definition 2.
Then by definition $x$ is not a limit point of $H$.
That is, it is not the case that every open set $U \in \tau$ with $x \in U$ contains some point of $H$ other than $x$:
$\exists U \in \tau: x \in U$ such that $U$ contains no point of $H$ other than $x$
Since $x \in H$, it follows that:
$U \cap H = \left\{{x}\right\}$
That is, $x$ is an isolated point of $H$ by definition 1.
$\blacksquare$
|
Data on the mean multiplicity of strange hadrons produced in minimum bias proton--proton and central nucleus--nucleus collisions at momenta between 2.8 and 400 GeV/c per nucleon have been compiled. The multiplicities for nucleon--nucleon interactions were constructed. The ratios of strange particle multiplicity to participant nucleon as well as to pion multiplicity are larger for central nucleus--nucleus collisions than for nucleon--nucleon interactions at all studied energies. The data at AGS energies suggest that the latter ratio saturates with increasing masses of the colliding nuclei. The strangeness to pion multiplicity ratio observed in nucleon--nucleon interactions increases with collision energy in the whole energy range studied. A qualitatively different behaviour is observed for central nucleus--nucleus collisions: the ratio rapidly increases when going from Dubna to AGS energies and changes little between AGS and SPS energies. This change in the behaviour can be related to the increase in the entropy production observed in central nucleus-nucleus collisions at the same energy range. The results are interpreted within a statistical approach. They are consistent with the hypothesis that the Quark Gluon Plasma is created at SPS energies, the critical collision energy being between AGS and SPS energies.
Elastic and inelastic 19.8 GeV/c proton-proton collisions in nuclear emulsion are examined using an external proton beam of the CERN Proton Synchrotron. Multiple scattering, blob density, range and angle measurements give the momentum spectra and angular distributions of secondary protons and pions. The partial cross-sections corresponding to inelastic interactions having two, four, six, eight, ten and twelve charged secondaries are found to be, respectively, (16.3±8.4) mb, (11.5 ± 6.0) mb, (4.3 ± 2.5) mb, (1.9 ± 1.3) mb, (0.5 ± 0.5) mb and (0.5±0.5)mb. The elastic cross-section is estimated to be (4.3±2.5) mb. The mean charged meson multiplicity for inelastic events is 3.7±0.5 and the average degree of inelasticity is 0.35±0.09. Strong forward and backward peaking is observed in the center-of-mass system for both secondary charged pions and protons. Distributions of energy, momentum and transverse momentum for identified charged secondaries are presented and compared with the results of work at other energies and with the results of a statistical theory of proton-proton collisions.
Double differential K+ cross sections have been measured in p+C collisions at 1.2, 1.5 and 2.5 GeV beam energy and in p+Pb collisions at 1.2 and 1.5 GeV. The K+ spectrum taken at 2.5 GeV can be reproduced quantitatively by a model calculation which takes into account first chance proton-nucleon collisions and the internal momentum and energy distribution of the nucleons according to the spectral function. At 1.2 and 1.5 GeV beam energy the K+ data significantly exceed the model predictions for first chance collisions. When secondary processes are taken into account the results of the calculations are in much better agreement with the data.
The differential and total cross sections for kaon pair production in the pp->ppK+K- reaction have been measured at three beam energies of 2.65, 2.70, and 2.83 GeV using the ANKE magnetic spectrometer at the COSY-Juelich accelerator. These near-threshold data are separated into pairs arising from the decay of the phi-meson and the remainder. For the non-phi selection, the ratio of the differential cross sections in terms of the K-p and K+p invariant masses is strongly peaked towards low masses. This effect can be described quantitatively by using a simple ansatz for the K-p final state interaction, where it is seen that the data are sensitive to the magnitude of an effective K-p scattering length. When allowance is made for a small number of phi events where the K- rescatters from the proton, the phi region is equally well described at all three energies. A very similar phenomenon is discovered in the ratio of the cross sections as functions of the K-pp and K+pp invariant masses and the identical final state interaction model is also very successful here. The world data on the energy dependence of the non-phi total cross section is also reproduced, except possibly for the results closest to threshold.
The production of eta mesons has been measured in the proton-proton interaction close to the reaction threshold using the COSY-11 internal facility at the cooler synchrotron COSY. Total cross sections were determined for eight different excess energies in the range from 0.5 MeV to 5.4 MeV. The energy dependence of the total cross section is well described by the available phase-space volume weighted by FSI factors for the proton-proton and proton-eta pairs.
Sigma+ hyperon production was measured at the COSY-11 spectrometer via the p p --> n K+ Sigma+ reaction at excess energies of Q = 13 MeV and Q = 60 MeV. These measurements continue systematic hyperon production studies via the p p --> p K+ Lambda/Sigma0 reactions where a strong decrease of the cross section ratio close-to-threshold was observed. In order to verify models developed for the description of the Lambda and Sigma0 production we have performed the measurement on the Sigma+ hyperon and found unexpectedly that the total cross section is by more than one order of magnitude larger than predicted by all anticipated models. After the reconstruction of the kaon and neutron four momenta, the Sigma+ is identified via the missing mass technique. Details of the method and the measurement will be given and discussed in view of theoretical models.
K+ meson production in pA (A = C, Cu, Au) collisions has been studied using the ANKE spectrometer at an internal target position of the COSY-Juelich accelerator. The complete momentum spectrum of kaons emitted at forward angles, theta < 12 degrees, has been measured for a beam energy of T(p)=1.0 GeV, far below the free NN threshold of 1.58 GeV. The spectrum does not follow a thermal distribution at low kaon momenta and the larger momenta reflect a high degree of collectivity in the target nucleus.
We report a new measurement of the pseudorapidity (eta) and transverse-energy (Et) dependence of the inclusive jet production cross section in pbar p collisions at sqrt(s) = 1.8 TeV using 95 pb**-1 of data collected with the DZero detector at the Fermilab Tevatron. The differential cross section d^2sigma/dEt deta is presented up to |eta| = 3, significantly extending previous measurements. The results are in good overall agreement with next-to-leading order predictions from QCD and indicate a preference for certain parton distribution functions.
We present the first observation of exclusive $e^+e^-$ production in hadron-hadron collisions, using $p\bar{p}$ collision data at \mbox{$\sqrt{s}=1.96$ TeV} taken by the Run II Collider Detector at Fermilab, and corresponding to an integrated luminosity of \mbox{532 pb$^{-1}$}. We require the absence of any particle signatures in the detector except for an electron and a positron candidate, each with transverse energy {$E_T>5$ GeV} and pseudorapidity {$|\eta|<2$}. With these criteria, 16 events are observed compared to a background expectation of {$1.9\pm0.3$} events. These events are consistent in cross section and properties with the QED process \mbox{$p\bar{p} \to p + e^+e^- + \bar{p}$} through two-photon exchange. The measured cross section is \mbox{$1.6^{+0.5}_{-0.3}\mathrm{(stat)}\pm0.3\mathrm{(syst)}$ pb}. This agrees with the theoretical prediction of {$1.71 \pm 0.01$ pb}.
|
Definition:Symmetric Mapping
Definition
Let $\R$ be the field of real numbers.
Let $\F$ be a subfield of $\R$.
Let $V$ be a vector space over $\F$.
Let $\left \langle {\cdot, \cdot} \right \rangle : V \times V \to \mathbb F$ be a mapping.
Then $\left \langle {\cdot, \cdot} \right \rangle : V \times V \to \mathbb F$ is symmetric if and only if:
$\forall x, y \in V: \quad \left \langle {x, y} \right \rangle = \left \langle {y, x} \right \rangle$
Also see
Definition:Conjugate Symmetric Mapping, this concept generalised to subfields of the field of complex numbers.
Definition:Semi-Inner Product, where this property is used in the definition of the concept.
Linguistic Note
This property as a noun is referred to as symmetry.
|
NLO Higgs+jet production at large transverse momenta including top quark mass effects Abstract
Here, we present a next-to-leading order calculation of H+jet in gluon fusion including the effect of a finite top quark mass $$m_t$$ at large transverse momenta. Using the recently published two-loop amplitudes in the high energy expansion and our previous setup that includes finite $$m_t$$ effects in a low energy expansion, we are able to obtain $$m_t$$-finite results for transverse momenta below 225 GeV and above 500 GeV with negligible remaining top quark mass uncertainty. The only remaining region that has to rely on the common leading order rescaling approach is the threshold region $$\sqrt{\hat s}\simeq 2m_t$$. We demonstrate that this rescaling provides an excellent approximation in the high $$p_T$$ region. Our calculation settles the issue of top quark mass effects at large transverse momenta. It is implemented in the parton level Monte Carlo code MCFM and is publicly available immediately in version 8.2. Authors: Illinois Institute of Technology, Chicago, IL (United States); Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States) Publication Date: Research Org.: Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States) Sponsoring Org.: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25) OSTI Identifier: 1471432 Alternate Identifier(s): OSTI ID: 1438544 Report Number(s): arXiv:1802.02981; IIT-CAPP-18-01; FERMILAB-PUB-18-034-T Journal ID: ISSN 2399-6528; 1653986; TRN: US1900450 Grant/Contract Number: AC02-07CH11359 Resource Type: Journal Article: Published Article Journal Name: Journal of Physics Communications Additional Journal Information: Journal Volume: 2; Journal Issue: 9; Journal ID: ISSN 2399-6528 Publisher: IOP Publishing Country of Publication: United States Language: English Subject: 72 PHYSICS OF ELEMENTARY PARTICLES AND FIELDS; QCD; Standard Model; Higgs+jet; top quark mass effect Citation Formats
Neumann, Tobias.
NLO Higgs+jet production at large transverse momenta including top quark mass effects. United States: N. p., 2018. Web. doi:10.1088/2399-6528/aadfbf.
Neumann, Tobias.
NLO Higgs+jet production at large transverse momenta including top quark mass effects. United States. doi:10.1088/2399-6528/aadfbf.
Neumann, Tobias. Wed . "NLO Higgs+jet production at large transverse momenta including top quark mass effects". United States. doi:10.1088/2399-6528/aadfbf.
@article{osti_1471432,
title = {NLO Higgs+jet production at large transverse momenta including top quark mass effects}, author = {Neumann, Tobias}, abstractNote = {Here, we present a next-to-leading order calculation of H+jet in gluon fusion including the effect of a finite top quark mass $m_t$ at large transverse momenta. Using the recently published two-loop amplitudes in the high energy expansion and our previous setup that includes finite $m_t$ effects in a low energy expansion, we are able to obtain $m_t$-finite results for transverse momenta below 225 GeV and above 500 GeV with negligible remaining top quark mass uncertainty. The only remaining region that has to rely on the common leading order rescaling approach is the threshold region $\sqrt{\hat s}\simeq 2m_t$. We demonstrate that this rescaling provides an excellent approximation in the high $p_T$ region. Our calculation settles the issue of top quark mass effects at large transverse momenta. It is implemented in the parton level Monte Carlo code MCFM and is publicly available immediately in version 8.2.}, doi = {10.1088/2399-6528/aadfbf}, journal = {Journal of Physics Communications}, issn = {2399-6528}, number = 9, volume = 2, place = {United States}, year = {2018}, month = {9} } Figures / Tables (caption, recovered from garbled extraction): the $m_t$ expansion for low $p_T$, the large energy expansion for large $p_T$, and the exact $m_t$ dependence as normalization; the first order of the large energy expansion deviates by more than 30% and is not shown.
|
The amsmath package provides a handful of options for displaying equations. You can choose the layout that better suits your document, even if the equations are really long, or if you have to include several equations in the same line.
The standard LaTeX tools for equations may lack some flexibility, causing overlapping or even trimming part of the equation when it's too long. We can overcome these difficulties with amsmath. Let's check an example:
\begin{equation} \label{eq1}
\begin{split}
A & = \frac{\pi r^2}{2} \\
  & = \frac{1}{2} \pi r^2
\end{split}
\end{equation}
You have to wrap your equation in the equation environment if you want it to be numbered; use equation* (with an asterisk) otherwise. Inside the equation environment, use the split environment to split the equations into smaller pieces; these smaller pieces will be aligned accordingly. The double backslash works as a newline character. Use the ampersand character & to set the points where the equations are vertically aligned.
This is a simple step; if you use LaTeX frequently you surely already know this. In the preamble of the document include the code:
\usepackage{amsmath}
To display a single equation, as mentioned in the introduction, you have to use the equation* or equation environment, depending on whether you want the equation to be numbered or not. Additionally, you might add a label for future reference within the document.

\begin{equation} \label{eu_eqn}
e^{\pi i} + 1 = 0
\end{equation}

The beautiful equation \ref{eu_eqn} is known as the Euler equation
For equations longer than a line use the multline environment. Insert a double backslash to set a point for the equation to be broken. The first part will be aligned to the left and the second part will be displayed in the next line and aligned to the right.

Again, the use of an asterisk * in the environment name determines whether the equation is numbered or not.

\begin{multline*}
p(x) = 3x^6 + 14x^5y + 590x^4y^2 + 19x^3y^3\\
- 12x^2y^4 - 12xy^5 + 2y^6 - a^3b^3
\end{multline*}

Split is very similar to multline. Use the split environment to break an equation and to align it in columns, just as if the parts of the equation were in a table. This environment must be used inside an equation environment. For an example check the introduction of this document.
If there are several equations that you need to align vertically, the align environment will do it:
Usually the relation symbols (>, < and =) are the ones aligned for a nice-looking document.
As mentioned before, the ampersand character & determines where the equations align. Let's check a more complex example:
\begin{align*}
x&=y           & w &=z             & a&=b+c\\
2x&=-y         & 3w&=\frac{1}{2}z  & a&=b\\
-4 + 5x&=2+y   & w+2&=-1+w         & ab&=cb
\end{align*}
Here we arrange the equations in three columns. LaTeX assumes that each equation consists of two parts separated by an &; also that each equation is separated from the one before by an &.
Again, use * to toggle the equation numbering. When numbering is allowed, you can label each row individually.
If you just need to display a set of consecutive equations, centered and with no alignment whatsoever, use the gather environment. The asterisk trick to set/unset the numbering of equations also works here.
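For instance, a minimal gather* block (my illustration, not from the original page) displays two centered, unaligned equations:
\begin{gather*}
2x - 5y = 8 \\
3x^2 + 9y = 3a + c
\end{gather*}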
|
Wikipedia says that any epsilon number with a countable index is countable. How is that? Out of all those numbers, I especially want to know why $\epsilon_0$ is countable.
Thanks.
The notation $\omega^\omega$ can mean either cardinal or ordinal exponentiation. It is often understood from context which of the two is used, but it can still be quite confusing to people new to this.
Cardinal exponentiation $\omega^\omega$ means we take the cardinality of the set of all functions from $\omega$ to $\omega$; this is of course of size continuum, which is an uncountable cardinality.
On the other hand, $\omega^\omega$ in ordinal exponentiation means that we take a certain order type which is a supremum of countable ordinals. What are those ordinals? They are the ordinals $\omega^n$, themselves defined as limits of smaller ordinals, and we can continue to unfold the definitions of ordinal arithmetic until we have that $\omega^\omega$ is the supremum of a much "simpler" set. (It is actually more complicated than just the $\omega^n$, though.)
This "simpler" set contains only countable ordinals, and is itself countable. We know that a countable union of countable sets is countable, therefore $\omega^\omega$ is countable.
What does all that have to do with $\epsilon_0$? Well, by induction we have that $\omega,\omega^\omega,\omega^{\omega^\omega},\ldots$ are all countable, and there are only countably many of those. From this we have that $\epsilon_0$ is also countable. It is a large countable ordinal. Now we can continue by induction, what is $\epsilon_1$? We repeat the same process only we start from $\epsilon_0+1$ instead of $\omega$.
This is again a countable process, so it must end at a countable ordinal as before; by induction we can see that $\epsilon_\alpha$ is countable for every countable ordinal $\alpha$.
What happens with $\epsilon_{\omega_1}$? Well, for every $\alpha<\omega_1$ we have that $\epsilon_\alpha$ is countable, and if $\alpha<\beta$ then $\epsilon_\alpha<\epsilon_\beta$. Therefore there are $\aleph_1$ many distinct ordinals below $\epsilon_{\omega_1}$, so it is no longer a countable ordinal. In fact $\epsilon_{\omega_1}$ is the limit of all the $\epsilon_\alpha$ for countable $\alpha$, and one can see that the supremum of uncountably many countable ordinals cannot be anything other than $\omega_1$. (If $\delta>\omega_1$, it cannot be the supremum of a set of countable ordinals!)
Let us repeat the definitions of ordinal arithmetic (by transfinite recursion; $\lambda$ denotes a limit ordinal):
$\alpha+0=\alpha$, $\alpha+(\beta+1)=(\alpha+\beta)+1$, $\alpha+\lambda=\sup\{\alpha+\beta\mid\beta<\lambda\}$;
$\alpha\cdot 0=0$, $\alpha\cdot(\beta+1)=\alpha\cdot\beta+\alpha$, $\alpha\cdot\lambda=\sup\{\alpha\cdot\beta\mid\beta<\lambda\}$;
$\alpha^0=1$, $\alpha^{\beta+1}=\alpha^\beta\cdot\alpha$, $\alpha^\lambda=\sup\{\alpha^\beta\mid\beta<\lambda\}$.
Wikipedia gives the answer, in fact: it's the union of a countable set of countable ordinals $\left\{\omega,\omega^\omega,\omega^{\omega^\omega},\ldots\right\}$
As written on Wikipedia, it's because it's a countable union of countable sets. Let $u_0 = 1$ and $u_{n+1} = \omega^{u_n}$. It is easily shown by induction that $u_n$ is countable for all $n$: $u_0$ is finite, and if $u_n$ is countable, then $\omega^{u_n}$ is a supremum of countably many countable ordinals, hence countable.
Finally, you have (by definition) $\epsilon_0 = \bigcup_{n < \omega} u_n$, a countable union of countable sets.
Vsauce introduces the subject really well: https://www.youtube.com/watch?v=SrU9YDoXE88
|
Ground state degeneracy occurs whenever there exists a unitary operator which acts non-trivially on a ground state and commutes with the Hamiltonian of the system.
I just want to find a potential $V(\mathbf{r})$, not necessarily a central potential, such that the Schrödinger equation in $d$ dimensions (no internal degrees of freedom like spin) $$E\Psi (\mathbf {r})=\left[{\frac {-\hbar ^{2}}{2\mu }}\nabla ^{2}+V(\mathbf {r} )\right]\Psi (\mathbf {r} )$$ has a degenerate ground state.
I've tried many ways, but failed.
1. For example, given a potential $V(\mathbf{r})$, solve for the eigenenergies $E_0, E_1,\cdots$. I want to construct $H'=H-(E_1-E_0)|1\rangle\langle1|$, but the part $(E_1-E_0)|1\rangle\langle1|$ is not a local potential in the position representation.
2. Certainly, it's easy to construct a finite-dimensional quantum system with ground state degeneracy: we can write the Hamiltonian matrix as a diagonal matrix with multiple lowest eigenvalues, $H=\mathrm{diag}(E_0,E_0,E_1)$. But I don't want this trivial way.
3. It's also easy to construct a quantum mechanical system with an internal degree of freedom like spin, where the internal degree of freedom has no dynamics. For example, take the hydrogen model with a spin degree of freedom: for the lowest energy $n=1,l=0$, we can have $s_z=\pm1/2$ with the same energy. This way is also trivial.
4. And we know that the scattering states in 1 dimension have a continuous spectrum and every state is doubly degenerate. I want to construct $V(\mathbf{r})\ge0$ such that $$\lim_{|\mathbf{r}|\rightarrow\infty}V(\mathbf{r})= 0$$ Even though all $E>0$ are degenerate, $E=0$ is still unique.
5. Certainly, in 1 dimension, if $V(x)$ is a double infinite square well, we can have a degenerate ground state. But this example is also trivial.
6. A potential with spontaneously broken symmetry, e.g. $V(x)= -x^2 +x^4$, is also impossible: there is an energy gap between the even-parity and odd-parity states.
So my question is: apart from the above trivial examples, can we construct an example in which, in $d$ dimensions, a particle without internal degrees of freedom (like spin) has a degenerate ground state in some potential?
This may really be a question about partial differential equations. If such a $V(\mathbf{r})$ does not exist, how does one prove it?
|
I had a teacher pose this interesting question yesterday:
Suppose you're running a high-energy scattering experiment at the LHC. For concreteness, let's suppose it's a 2 to 2 scattering event which involves electrons and/or muons.
The theorist uses QFT to compute some cross-section which comes from the amplitude $$ \mathcal{A}_{p_{1} p_{2} \to p_{3} p_{4} } = F(s,t,u,m_{e},m_{\mu},\ldots) $$
The amplitude is a function of the Mandelstam variables $s \equiv (p_1+p_2)^2$, $t \equiv (p_1-p_3)^2$ and $u \equiv (p_1-p_4)^2$, as well as the mass of the electron $m_{e}$ and muon $m_{\mu}$ (and some other stuff).
Because we're running a high-energy experiment we obviously have that $s,|t|,|u| \gg m_e^2,m_\mu^2$, and for this reason the theorist makes the approximation $m_{e} \approx 0$ and $m_{\mu} \approx 0$.
The Question: How is the LHC able to distinguish between an electron and a muon if the theorist makes the approximation that the electron and muon are both massless?
For some reason, the approximation $m_{e} \approx m_{\mu} \approx 0$ is a bad one, and the question is why. One idea that a colleague had was that the tracks of the electron and muon look different: because of the cyclotron radius $r \sim \frac{cm}{qB} \propto m$, the magnetic fields used in the machine to track the particles coming out of the collision will see the electron spiral more dramatically than the muon.
Any ideas as to other reasons why?
|
I read in my textbook that if we multiply a chemical reaction by some factor(let's say $b$) its new equilibrium constant becomes $K^b$.But I don't understand why this happens..What is the difference ...
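(A minimal worked illustration of why the exponent appears, my sketch rather than the textbook's: for a generic reaction $a\mathrm{A} \rightleftharpoons c\mathrm{C}$, the law of mass action gives $K=[\mathrm{C}]^{c}/[\mathrm{A}]^{a}$; multiplying all stoichiometric coefficients by $b$ turns this into $K'=[\mathrm{C}]^{bc}/[\mathrm{A}]^{ba}=K^{b}$.)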
While I was reading about the usefulness of the quantity $\Delta H$, I found that it can be used to calculate the how the equilibrium constant varies with temperature. How can this be done?Does it ...
According to my textbook (and intuitively) certain changes when the aforementioned 3 variables are altered occur in accordance with Le Chatelier's Principle. However, what I don't understand is what ...
I am reviewing the book Biochemistry Concepts and Connections by Appling, Cahill, and Mathews and I cannot understand why they divide the hydrogen concentration by $10^{-7}$. Why not just leave it ...
We know that the Gibbs free energy is related to the equilibrium constant by the following equation:$$\Delta_\mathrm{r}G^\circ=-RT\ln K$$We also know the Van't Hoff equation:$$\ln\left(\frac{K_2}{...
I read that the equilibrium constant is unitless because the molar activities of each of the species are used in the equilibrium expression, not the actual concentrations themselves. I understand that....
I have a problem with the definition of the standard Gibbs energy and its connection to the equilibrium constants.I think, that I've basically understood what the different equation mean but there ...
We know that chemical potential is defined as ${\mu}_i={\mu}_i^{standard}+RT\ln(a_i)$. Here $a_i$ is the activity of $i^\text{th}$ component of solution. In case of gases instead of $a_i$ it is $f_i$ (...
|
I am reading a paper and there is the following theorem:
Let $n$ be a fixed integer, and $n >1$.
Denote divisibility in $\mathbb{Z}[\frac{1}{n}]$ by $|_n$, thus for all $x, y \in \mathbb{Z}$ $$x |_n y \leftrightarrow \exists q, f \in \mathbb{Z}: y=xqn^{-f}$$ Then the positive existential theory of $(\mathbb{Z}; +, |_n)$ is undecidable, i.e. there is no algorithm to decide formulas of the form $$\exists x_1, \dots , x_m \in \mathbb{Z}: \land_{i=1}^s F_i (x_1, \dots , x_m) |_n G_i (x_1, \dots , x_m), $$ where $F_i$ and $G_i$ are polynomials over $\mathbb{Z}$ of degree one or less, and where $\land_{i=1}^s$ denotes a finite conjunction.
Why does the positive existential theory of $(\mathbb{Z}; +, |_n)$ contain formulas of the form $\exists x_1, \dots , x_m \in \mathbb{Z}: \land_{i=1}^s F_i (x_1, \dots , x_m) |_n G_i (x_1, \dots , x_m), $ ?
This formula does not contain $+$. Do we suppose that the polynomials $F_i$ and $G_i$ contain additions?
Also why are these polynomials of degree one or less?
|
Suppose you are given a single unit square, and you would like to completely cover the surface of a cube by cutting up the square and pasting it onto the cube's surface.
Q1. What is the largest cube that can be covered by a $1 \times 1$ square when cut into at most $k$ pieces?
The case $k=1$ has been studied, probably earlier than this reference: "Problem 10716: A cubical gift," American Mathematical Monthly, 108(1):81-82, January 2001, solution by Catalano-Johnson, Loeb, Beebee. (This was discussed in an MSE Question.) The depicted solution results in a cube edge length of $1/(2\sqrt{2}) \approx 0.35$.
As $k \to \infty$, there should be no wasted overlaps in the covering of the 6 faces, and so the largest cube covered will have edge length $1/\sqrt{6} \approx 0.41$. What partition of the square leads to this optimal cover?
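(The bound itself is a simple area count: a cube with edge length $s$ has surface area $6s^2$, and a unit square of area 1 can cover it only if $6s^2 \le 1$, i.e., $s \le 1/\sqrt{6} \approx 0.408$.)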
Q2. For which value of $k$ is this optimal reached?
I have not found literature on this problem for $k>1$, but it seems likely it has been explored. Thanks for any pointers!
|
Using a silicon vertex detector, we measure the charged particle pseudorapidity distribution over the range 1.5 to 5.5 using data collected from PbarP collisions at √s = 630 GeV. With a data sample of 3 million events, we deduce a result with an overall normalization uncertainty of 5%, and typical bin-to-bin errors of a few percent. We compare our result to the measurement of UA5, and the distribution generated by the Lund Monte Carlo with default settings. This is only the second measurement at this level of precision, and only the second measurement for pseudorapidity greater than 3.
We present charged particle densities as a function of pseudorapidity and collision centrality for the 197Au+197Au reaction at Sqrt{s_NN}=200 GeV. For the 5% most central events we obtain dN_ch/deta(eta=0) = 625 +/- 55 and N_ch(-4.7<= eta <= 4.7) = 4630+-370, i.e. 14% and 21% increases, respectively, relative to Sqrt{s_NN}=130 GeV collisions. Charged-particle production per pair of participant nucleons is found to increase from peripheral to central collisions around mid-rapidity. These results constrain current models of particle production at the highest RHIC energy.
We report measurements of the primary charged-particle pseudorapidity density and transverse momentum distributions in p-Pb collisions at √sNN = 5.02 TeV and investigate their correlation with experimental observables sensitive to the centrality of the collision. Centrality classes are defined by using different event-activity estimators, i.e., charged-particle multiplicities measured in three different pseudorapidity regions as well as the energy measured at beam rapidity (zero degree). The procedures to determine the centrality, quantified by the number of participants (Npart) or the number of nucleon-nucleon binary collisions (Ncoll), are described. We show that, in contrast to Pb-Pb collisions, in p-Pb collisions large multiplicity fluctuations together with the small range of participants available generate a dynamical bias in centrality classes based on particle multiplicity. We propose to use the zero-degree energy, which we expect not to introduce a dynamical bias, as an alternative event-centrality estimator. Based on zero-degree energy-centrality classes, the Npart dependence of particle production is studied. Under the assumption that the multiplicity measured in the Pb-going rapidity region scales with the number of Pb participants, an approximate independence of the multiplicity per participating nucleon measured at mid-rapidity of the number of participating nucleons is observed. Furthermore, at high-pT the p-Pb spectra are found to be consistent with the pp spectra scaled by Ncoll for all centrality classes. Our results represent valuable input for the study of the event-activity dependence of hard probes in p-Pb collisions and, hence, help to establish baselines for the interpretation of the Pb-Pb data.
We present results for the charged-particle multiplicity distribution at mid-rapidity in Au - Au collisions at sqrt(s_NN)=130 GeV measured with the PHENIX detector at RHIC. For the 5% most central collisions we find $dN_{ch}/d\eta_{|\eta=0} = 622 \pm 1 (stat) \pm 41 (syst)$. The results, analyzed as a function of centrality, show a steady rise of the particle density per participating nucleon with centrality.
Dihadron correlations are analyzed in √sNN = 200 GeV d+Au collisions classified by forward charged particle multiplicity and zero-degree neutral energy in the Au-beam direction. It is found that the jetlike correlated yield increases with the event multiplicity. After taking into account this dependence, the non-jet contribution on the away side is minimal, leaving little room for a back-to-back ridge in these collisions.
Charged-particle pseudorapidity densities are presented for the d+Au reaction at sqrt{s_{NN}}=200 GeV with -4.2 <= eta <= 4.2$. The results, from the BRAHMS experiment at RHIC, are shown for minimum-bias events and 0-30%, 30-60%, and 60-80% centrality classes. Models incorporating both soft physics and hard, perturbative QCD-based scattering physics agree well with the experimental results. The data do not support predictions based on strong-coupling, semi-classical QCD. In the deuteron-fragmentation region the central 200 GeV data show behavior similar to full-overlap d+Au results at sqrt{s_{NN}}=19.4 GeV.
In this Letter, the ALICE Collaboration presents the first measurements of the charged-particle multiplicity density, $\rm{d}N_{\rm{ch}}/\rm{d}\eta$, and total charged-particle multiplicity, $N_{\rm{ch}}^{\rm{tot}}$, in Xe-Xe collisions at a centre-of-mass energy per nucleon--nucleon pair of $\sqrt{s_{\rm NN}}$ = 5.44 TeV. The measurements are performed as a function of collision centrality over a wide pseudorapidity range of $-3.5 < \eta < 5$. The values of $\rm{d}N_{\rm{ch}}/\rm{d}\eta$ at mid-rapidity and $N_{\rm{ch}}^{\rm{tot}}$ for central collisions, normalised to the number of nucleons participating in the collision ($N_{\rm{part}}$) as a function of $\sqrt{s_{\rm NN}}$, follow the trends established in previous heavy-ion measurements. The same quantities are also found to increase as a function of $N_{\rm{part}}$, and up to the 10% most central collisions the trends are the same as the ones observed in Pb-Pb at a similar energy. For more central collisions, the Xe-Xe scaled multiplicities exceed those in Pb-Pb for a similar $N_{\rm{part}}$. The results are compared to phenomenological models and theoretical calculations based on different mechanisms for particle production in nuclear collisions. All considered models describe the data reasonably well within 20%.
Central collisions of O16 nuclei with the Ag107 and Br80 nuclei in nuclear emulsion at 14.6, 60, and 200 GeV/nucleon are compared with proton-emulsion data at equivalent energies. The multiplicities of produced charged secondaries are consistent with the predictions of superposition models. At 200 GeV/nucleon the central particle pseudorapidity density is 58±2 for those events with multiplicities exceeding 200 particles.
First results on charm quarkonia production in heavy ion collisions at the Relativistic Heavy Ion Collider (RHIC) are presented. The yield of J/Psi's measured in the PHENIX experiment via electron-positron decay pairs at mid-rapidity for Au-Au reactions at sqrt(s_NN) = 200 GeV are analyzed as a function of collision centrality. For this analysis we have studied 49.3 million minimum bias Au-Au reactions. We present the J/Psi invariant yield dN/dy for peripheral and mid-central reactions. For the most central collisions where we observe no signal above background, we quote 90% confidence level upper limits. We compare these results with our J/Psi measurement from proton-proton reactions at the same energy. We find that our measurements are not consistent with models that predict strong enhancement relative to binary collision scaling.
The pseudorapidity (η) and transverse-momentum (pT) distributions of charged particles produced in proton-proton collisions are measured at the centre-of-mass energy √s = 13 TeV. The pseudorapidity distribution in |η| < 1.8 is reported for inelastic events and for events with at least one charged particle in |η| < 1. The pseudorapidity density of charged particles produced in the pseudorapidity region |η| < 0.5 is 5.31 ± 0.18 and 6.46 ± 0.19 for the two event classes, respectively. The transverse-momentum distribution of charged particles is measured in the range 0.15 < pT < 20 GeV/c and |η| < 0.8 for events with at least one charged particle in |η| < 1. The evolution of the transverse momentum spectra of charged particles is also investigated as a function of event multiplicity. The results are compared with calculations from the PYTHIA and EPOS Monte Carlo generators.
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ( $2.3 < \eta < 3.9$ ) in proton–proton collisions at three center-of-mass energies, $\sqrt{s}$ $=$ 0.9, 2.76 and 7 TeV using the ALICE detector. It is observed that the increase in the average photon multiplicity as a function of beam energy is compatible with both a logarithmic and a power-law dependence. The relative increase in average photon multiplicity produced in inelastic pp collisions at 2.76 and 7 TeV center-of-mass energies with respect to 0.9 TeV are 37.2 $\pm $ 0.3 % (stat) $\pm $ 8.8 % (sys) and 61.2 $\pm $ 0.3 % (stat) $\pm $ 7.6 % (sys), respectively. The photon multiplicity distributions for all center-of-mass energies are well described by negative binomial distributions. The multiplicity distributions are also presented in terms of KNO variables. The results are compared to model predictions, which are found in general to underestimate the data at large photon multiplicities, in particular at the highest center-of-mass energy. Limiting fragmentation behavior of photons has been explored with the data, but is not observed in the measured pseudorapidity range.
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV were measured in the rapidity range $-0.5< y_{\rm{CMS}}<0$ for event classes corresponding to different charged-particle multiplicity densities, $\langle{\rm d}N_{\rm{ch}}/{\rm d}\eta_{\rm{lab}}\rangle$. The mean transverse momentum values are presented as a function of $\langle{\rm d}N_{\rm{ch}}/{\rm d}\eta_{\rm{lab}}\rangle$, as well as a function of the particle masses and compared with previous results on hyperon production. The integrated yield ratios of excited to ground-state hyperons are constant as a function of $\langle{\rm d}N_{\rm{ch}}/{\rm d}\eta_{\rm{lab}}\rangle$. The equivalent ratios to pions exhibit an increase with $\langle{\rm d}N_{\rm{ch}}/{\rm d}\eta_{\rm{lab}}\rangle$, depending on their strangeness content.
The pseudorapidity density of charged particles, dNch/dη, at midrapidity in Pb-Pb collisions has been measured at a center-of-mass energy per nucleon pair of √sNN = 5.02 TeV. For the 5% most central collisions, we measure a value of 1943 ± 54. The rise in dNch/dη as a function of √sNN is steeper than that observed in proton-proton collisions and follows the trend established by measurements at lower energy. The increase of dNch/dη as a function of the average number of participant nucleons, ⟨Npart⟩, calculated in a Glauber model, is compared with the previous measurement at √sNN = 2.76 TeV. A constant factor of about 1.2 describes the increase in dNch/dη from √sNN = 2.76 to 5.02 TeV for all centrality classes, within the measured range of 0%-80% centrality. The results are also compared to models based on different mechanisms for particle production in nuclear collisions.
The ATLAS experiment at the LHC has measured the centrality dependence of charged particle pseudorapidity distributions over |eta| < 2 in lead-lead collisions at a nucleon-nucleon centre-of-mass energy of sqrt(s_NN) = 2.76 TeV. In order to include particles with transverse momentum as low as 30 MeV, the data were recorded with the central solenoid magnet off. Charged particles were reconstructed with two algorithms (2-point 'tracklets' and full tracks) using information from the pixel detector only. The lead-lead collision centrality was characterized by the total transverse energy in the forward calorimeter in the range 3.2 < |eta| < 4.9. Measurements are presented of the per-event charged particle density distribution, dN_ch/deta, and the average charged particle multiplicity in the pseudorapidity interval |eta|<0.5 in several intervals of collision centrality. The results are compared to previous mid-rapidity measurements at the LHC and RHIC. The variation of the mid-rapidity charged particle yield per colliding nucleon pair with the number of participants is consistent with the lower sqrt(s_NN) results. The shape of the dN_ch/deta distribution is found to be independent of centrality within the systematic uncertainties of the measurement.
A measurement is presented of the charged hadron multiplicity in hadronic PbPb collisions, as a function of pseudorapidity and centrality, at a collision energy of 2.76 TeV per nucleon pair. The data sample is collected using the CMS detector and a minimum-bias trigger, with the CMS solenoid off. The number of charged hadrons is measured both by counting the number of reconstructed particle hits and by forming hit doublets of pairs of layers in the pixel detector. The two methods give consistent results. The charged hadron multiplicity density dN(ch)/d eta, evaluated at eta=0 for head-on collisions, is found to be 1612 +/- 55, where the uncertainty is dominated by systematic effects. Comparisons of these results to previous measurements and to various models are also presented.
This note describes the details of the analysis of charged-particle pseudorapidity densities and multiplicity distributions measured by the ALICE detector in pp collisions at $\sqrt{s}$ = 0.9 and 7 TeV in specific phase space regions. The primary goal of the analysis is to provide reference measurements for Monte Carlo tuning. The pseudorapidity range |h| < 0.8 is considered and a lower $p_T$ cut is applied, at 0.15, 0.5 GeV/c and at 1 GeV/c. The choice of such phase space regions to measure the charged-particle multiplicity allows a direct comparison with the analogous results obtained by other LHC collaborations, namely ATLAS and CMS. The class of events considered are those having at least one charged particle in the kinematical ranges just described. In the note, the analysis procedure is presented, together with the corrections applied to the data, and the systematic uncertainty evaluation. The comparison of the results with different Monte Carlo generators is also shown.
We present the first measurement of pseudorapidity densities of primary charged particles near mid-rapidity in Au+Au collisions at $\sqrt{s} =$ 56 and 130 AGeV. For the most central collisions, we find the charged particle pseudorapidity density to be $dN/d\eta |_{|\eta|<1} = 408 \pm 12 {(stat)} \pm 30 {(syst)}$ at 56 AGeV and $555 \pm 12 {(stat)} \pm 35 {(syst)}$ at 130 AGeV, values that are higher than any previously observed in nuclear collisions. Compared to proton-antiproton collisions, our data show an increase in the pseudorapidity density per participant by more than 40% at the higher energy.
On 23rd November 2009, during the early commissioning of the CERN Large Hadron Collider (LHC), two counter-rotating proton bunches were circulated for the first time concurrently in the machine, at the LHC injection energy of 450 GeV per beam. Although the proton intensity was very low, with only one pilot bunch per beam, and no systematic attempt was made to optimize the collision optics, all LHC experiments reported a number of collision candidates. In the ALICE experiment, the collision region was centred very well in both the longitudinal and transverse directions and 284 events were recorded in coincidence with the two passing proton bunches. The events were immediately reconstructed and analyzed both online and offline. We have used these events to measure the pseudorapidity density of charged primary particles in the central region. In the range |eta| < 0.5, we obtain dNch/deta = 3.10 +- 0.13 (stat.) +- 0.22 (syst.) for all inelastic interactions, and dNch/deta = 3.51 +- 0.15 (stat.) +- 0.25 (syst.) for non-single diffractive interactions. These results are consistent with previous measurements in proton--antiproton interactions at the same centre-of-mass energy at the CERN SppS collider. They also illustrate the excellent functioning and rapid progress of the LHC accelerator, and of both the hardware and software of the ALICE experiment, in this early start-up phase.
|
$\sqrt{x} = \frac{x}{\sqrt{x}}$
The above equation may look completely logical, or not. When I saw it a few days ago, I thought it was wrong. When I understood that it was correct, I thought it was the most beautiful thing ever. I'm not 100% sure why this intrigued me so much, but it just looks great.
There are a few ways of simplifying this. We could multiply both sides by $\sqrt{x}$ (assuming $x > 0$), like so:
$\sqrt{x} \cdot \sqrt{x} = \frac{x}{\sqrt{x}} \cdot \sqrt{x}$
$ x = \frac{x}{\sqrt{x}} \cdot \sqrt{x}$
$ x = x $
Another way would be to do cross-multiplication
$\frac{\sqrt{x}}{1} = \frac{x}{\sqrt{x}}$
$\sqrt{x} \cdot \sqrt{x} = {x} \cdot 1 $
$ x = x $
And there are probably a ton of other ways to effectively 'solve' this. But if there is one thing that I learned from this simple calculation, it is that I actually like doing this. I'm having fun with math again!
|
Closure Properties
Once you have a small collection of non-context-free languages you can often use closure properties of $\mathrm{CFL}$ like this:
Assume $L \in \mathrm{CFL}$. Then, by closure property X (together with Y), $L' \in \mathrm{CFL}$. This contradicts $L' \notin \mathrm{CFL}$ which we know to hold, therefore $L \notin \mathrm{CFL}$.
This is often shorter (and often less error-prone) than using one of the other results that need less prior knowledge. It is also a general concept that can be applied to all kinds of classes of objects.
Example 1: Intersection with Regular Languages
We write $\mathcal L(e)$ for the regular language specified by a regular expression $e$.
Let $L = \{w \mid w \in \{a,b,c\}^*, |w|_a = |w|_b = |w|_c\}$. As
$\qquad \displaystyle L \cap \mathcal{L}(a^*b^*c^*) = \{a^nb^nc^n \mid n \in \mathbb{N}\} \notin \mathrm{CFL}$
and $\mathrm{CFL}$ is closed under intersection with regular languages, $L \notin \mathrm{CFL}$.
Example 2: (Inverse) Homomorphism
Let $L = \{(ab)^{2n}c^md^{2n-m}(aba)^{n} \mid m,n \in \mathbb{N}\}$. With the homomorphism
$\qquad \displaystyle \phi(x) = \begin{cases} a &x=a \\ \varepsilon &x=b \\ b &x=c \lor x=d\end{cases}$
we have $\phi(L) = \{a^{2n}b^{2n}a^{2n} \mid n \in \mathbb{N}\}.$
Now, with
$\qquad \displaystyle \psi(x) = \begin{cases} aa &x=a \lor x=c \\ bb &x=b\end{cases}\quad\text{and}\quad L_1 = \{x^nb^ny^n \mid x,y \in \{a,c\}\wedge n \in \mathbb{N}\},$
we get $L_1 = \psi^{-1}(\phi(L))$.
Finally, intersecting $L_1$ with the regular language $L_2 = \mathcal L(a^*b^*c^*)$ we get the language $L_3 = \{a^n b^n c^n \mid n \in \mathbb{N}\}$.
In total, we have $L_3 = L_2 \cap \psi^{-1}(\phi(L))$.
Now assume that $L$ were context-free. Then, since $\mathrm{CFL}$ is closed under homomorphism, inverse homomorphism, and intersection with regular sets, $L_3$ would be context-free, too. But we know (via the Pumping Lemma, if need be) that $L_3$ is not context-free, so this is a contradiction; we have shown that $L \notin \mathrm{CFL}$.
Interchange Lemma
The Interchange Lemma [1] proposes a necessary condition for context-freeness that is even stronger than Ogden's Lemma. For example, it can be used to show that
$\qquad \{xyyz \mid x,y,z \in \{a,b,c\}^+\} \notin \mathrm{CFL}$
which resists many other methods. This is the lemma:
Let $L \in \mathrm{CFL}$. Then there is a constant $c_L$ such that for any integer $n\geq 2$, any set $Q_n \subseteq L_n = L \cap \Sigma^n$ and any integer $m$ with $n \geq m \geq 2$ there are $k \geq \frac{|Q_n|}{c_L n^2}$ strings $z_i \in Q_n$ with
$z_i = w_ix_iy_i$ for $i=1,\dots,k$,
$|w_1| = |w_2| = \dots = |w_k|$,
$|y_1| = |y_2| = \dots = |y_k|$,
$m \geq |x_1| = |x_2| = \dots = |x_k| > \frac{m}{2}$ and
$w_ix_jy_i \in L_n$ for all $(i,j) \in [1..k]^2$.
Applying it means to find $n,m$ and $Q_n$ such that 1.-4. hold but 5. is violated. The application example given in the original paper is very verbose and is therefore left out here.
At this time, I do not have a freely available reference, and the formulation above is taken from a preprint of [1] from 1981. I appreciate help in tracking down better references. It appears that the same property has been (re)discovered recently [2].
Other Necessary Conditions
Boonyavatana and Slutzki [3] survey several conditions similar to Pumping and Interchange Lemma.
[1] An "Interchange Lemma" for Context-Free Languages by W. Ogden, R. J. Ross and K. Winklmann (1985)
[2] Swapping Lemmas for Regular and Context-Free Languages by T. Yamakami (2008)
[3] The interchange or pump (DI)lemmas for context-free languages by R. Boonyavatana and G. Slutzki (1988)
|
Given an instance I of an optimization constraint satisfaction problem (CSP), finding solutions with value at least the expected value of a random solution is easy. We wonder how good such solutions can be. Namely, we initiate the study of the ratio \(\rho _E(I) =(\mathrm {E}_X[v(I, X)] -\mathrm {wor}(I))/(\mathrm {opt}(I) -\mathrm {wor}(I))\) where \(\mathrm {opt}(I)\), \(\mathrm {wor}(I)\) and \(\mathrm {E}_X[v(I, X)]\) refer to, respectively, the optimal, the worst, and the average solution values on I. We here focus on the case when the variables have a domain of size \(q \ge 2\) and the constraint arity is at most \(k \ge 2\), where k, q are two constant integers. Connecting this ratio to the highest frequency in orthogonal arrays with specified parameters, we prove that it is \(\varOmega (1/n^{k/2})\) if \(q =2\), \(\varOmega (1/n^{k -1 -\lfloor \log _{p^\kappa } (k -1)\rfloor })\) where \(p^\kappa \) is the smallest prime power such that \(p^\kappa \ge q\) otherwise, and \(\varOmega (1/q^k)\) in \((\max \{q, k\} +1)\)-partite instances.
Keywords
Average differential ratio, Optimization constraint satisfaction problems, Orthogonal arrays
|
I am thinking about a property of probability inequalities. In particular, we assume \begin{equation} P[\zeta>a]\leq b, \end{equation} where $a>0$, $b>0$ and $\zeta\in \mathbb{R}$ is a random variable.
Now we would like to consider whether the inequality $P[\zeta^2>a^2]\leq b$ holds.
In fact, for a differentiable and monotonically increasing transformation, e.g., the exponential function, the corresponding inequality holds. That is, $P[\exp(\zeta)>\exp(a)]\leq b$.
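For the monotone case the reasoning is simply that the two events coincide (a one-line sketch; note that $x \mapsto x^2$ is monotone only on $[0,\infty)$): if $g$ is strictly increasing, then $\{g(\zeta)>g(a)\}=\{\zeta>a\}$, hence $P[g(\zeta)>g(a)]=P[\zeta>a]\leq b$.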
Can someone give me some hints on this issue? Thanks a lot in advance.
|
Dieter Lüst and two co-authors from Monkberg (Munich) managed to post the first hep-th paper today at 19:00:02 (a two-second lag is longer than usual, so the timing contest wasn't too competitive).
In particular, they claim that whenever there are particles whose spin is two or higher, they have to be massive and there has to be a whole tower of massive states. More precisely, if there is a self-interacting spin-two particle of mass \(m\) in quantum gravity, the strength of the interaction may be parameterized by a new mass scale \(M_W\), and the effective field theory has to break down at the mass scale \(\Lambda\) where\[
\frac{\Lambda}{M_{\rm Planck}} = \frac{m}{M_W}
\] You see that the Planck scale enters. The breakdown scale \(\Lambda\) of the effective theory is basically the lowest mass of the next-to-lightest state in the predicted massive tower.
So if the self-interaction scale of the massive field is \(M_W\approx M_{\rm Planck}\), then we get \(\Lambda\approx m\) and all the lighter states in the tower are parametrically "comparably light" to the lightest spin-two boson. However, you can try to make the self-interaction stronger, by making \(M_W\) smaller than the Planck scale, and then the tower may become more massive than the lightest representative.
They may derive the conjecture from the Weak Gravity Conjecture if they rewrite the self-interaction of the spin-two field through an interaction with a "gauge field" which is treated analogously to the electromagnetic gauge field in the Weak Gravity Conjecture – although it is the Stückelberg gauge field. It's not quite obvious to me that the Weak Gravity Conjecture must apply to gauge fields that are "unnecessary" or "auxiliary" in this sense but maybe there's a general rule saying that general principles such as the Weak Gravity Conjecture have to apply even in such "optional" cases.
I think that these conjectures – and evidence and partial proofs backing them – represent a clear progress of our knowledge beyond effective field theory. You know, in quantum field theory, we have theorems such as the Weinberg-Witten theorem. This particular one says that higher-spin particles can't be composite and similar things. That's only true in full-blown quantum field theories. But quantum gravity isn't strictly a quantum field theory (in the bulk). When you add gravity, things get generalized in a certain way. And things that were possible or impossible without gravity may become impossible or possible with quantum gravity.
Some "impossible scenarios" from QFTs may be suddenly allowed – but one pays with the need to allow an infinite tower of states and similar things. Note that if you look at\[
\frac{\Lambda}{M_{\rm Planck}} = \frac{m}{M_W}
\] and send \(M_{\rm Planck}\to \infty\) i.e. if you turn the gravity off, the Bavarian conjecture says that \(\Lambda\to\infty\), too. So it becomes vacuous because it says that the effective theory "must break" at energy scales higher than infinity. Needless to say, the same positive power of the Planck mass appears in the original Weak Gravity Conjecture, too. That conjecture also becomes vacuous if you turn the gravity off.
When quantum gravity is turned on, there are new interactions, new states (surely the black hole microstates), and new mandatory interactions of these states. These new states and duties guarantee that theories where you would only add some fields or particles "insensitively" would be inconsistent. People are increasingly understanding what is the "new stuff" that simply has to happen in quantum gravity. And this new mandatory stuff may be understood either by some general consistency-based considerations assuming quantum gravity; or by looking at much more specific situations in the stringy vacua. Like in most of the good Swampland papers, Lüst et al. try to do both.
So far these two lines of reasoning are consistent with one another. They are increasingly compatible and increasingly equivalent – after all, string theory seems to be the only consistent theory of quantum gravity although we don't have any "totally canonical and complete" proof of this uniqueness (yet). The Swampland conjectures may be interpreted as another major direction of research that makes this point – that string theory is the only game in town – increasingly certain.
|
Secant and Cosecant
It is amazing how easily old stuff gets lost when it is no longer used.
I bet most mathematicians don't know how to spot the secant or the cotangent in the unit circle. Neither did I, when the question came up on the German version of Jeopardy (Wer wird Millionär). Of course, we can look that up in Wikipedia (or elsewhere), but the origin of the names is obvious only if you look at the following image.
The secant is the distance from the origin to the point where the tangent line to the circle meets the x-axis; the cosecant is the corresponding distance on the y-axis. The formulas are easily derived using similar triangles.
\(\sec(x) = \dfrac{1}{\cos(x)}, \quad \csc(x) = \dfrac{1}{\sin(x)}\)
Here is another interesting image.
Just like the unit circle, the unit hyperbola in the image above has a simple equation.
\(x^2-y^2 = 1\)
If we set
\(x = \cosh(A) = \dfrac{e^A + e^{-A}}{2}\)
and
\(y = \sinh(A) = \dfrac{e^A - e^{-A}}{2}\)
we have with a little computation that (x,y) is on the unit hyperbola. This explains the terms "sinus hyperbolicus" and "cosinus hyperbolicus".
It requires a bit more computation to show that the points (x,y), (x,-y), (0,0) are indeed the corners of a region of area A. The easiest way to see this is to use the formula for the area swept by the ray from 0 to g(t) in the plane between two times \(t_1\) and \(t_2\):
\(\int\limits_{t_1}^{t_2} \frac{1}{2} \left| \begin{matrix} g_1(t) & g_1'(t) \\ g_2(t) & g_2'(t)\end{matrix} \right| \,dt\)
The reason for this formula is that the integral sums up the areas of the infinitesimal triangles spanned by g(t) and g'(t)dt. Now, with our path along the unit hyperbola, we see that the determinant is equal to 1. Thus we get A/2 if we integrate from 0 to any A>0, and since the region below the x-axis is the mirror image of the region above, the full region has area A, which proves the result.
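For the record, the determinant computation for the hyperbola path \(g(t) = (\cosh(t), \sinh(t))\) is the one-liner
\(\left| \begin{matrix} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{matrix} \right| = \cosh(t)^2 - \sinh(t)^2 = 1.\)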
It is a nice fact that the same formula for the area is true for (cos A, sin A) on the unit circle!
|
Yahtzee Waiting Times
I recently was asked about waiting times in the game of Yahtzee. If you do not know the game it suffices to say that you throw 5 dice, and one of the goals is to get 5 dice with the same number, a Yahtzee. I wrote about waiting times a long time ago. But let me repeat the main points.
The trick is to use a number of states S0, S1, …, Sn, and probabilities P(i,j) to get from state Si to state Sj. S0 is the starting state, and Sn is the state we want to reach, in our case the Yahtzee. For a first attempt, we use the number of 6s we have among our 5 dice. Then we have 6 states, S0 to S5. With a bit of combinatorics, we can compute the probabilities P(i,j) as
\(p_{i,j} = p^{j-i} (1-p)^{n-i-(j-i)} \binom{n-i}{j-i}\)
If we compute that with Euler Math Toolbox we get the following matrix P.
>p=1/6;
>i=(0:5)'; j=i';
% This is the matrix of probabilities to get from i sixes to j sixes.
 0.4018776 0.4018776 0.1607510 0.0321502 0.0032150 0.0001286
 0.0000000 0.4822531 0.3858025 0.1157407 0.0154321 0.0007716
 0.0000000 0.0000000 0.5787037 0.3472222 0.0694444 0.0046296
 0.0000000 0.0000000 0.0000000 0.6944444 0.2777778 0.0277778
 0.0000000 0.0000000 0.0000000 0.0000000 0.8333333 0.1666667
 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 1.0000000
This matrix allows us to simulate or compute results about probabilities for 6-Yahtzees. E.g., we can start with no 6 in S0. After one dice throw, the first row of P yields the distribution of the expected states. We can do dice throws by applying P to the right of our distribution vector.
>v=[1,0,0,0,0,0].P
 0.4018776 0.4018776 0.1607510 0.0321502 0.0032150 0.0001286
% After two more throws, we get the following.
>v.P.P
 0.0649055 0.2362559 0.3439886 0.2504237 0.0911542 0.0132721
This tells us that the chance to be in state S5 after three throws is just 1.3%.
How about the average time that we have to wait if we keep throwing the dice until we get a 6-Yahtzee? This can be solved by denoting the waiting time from state Si to S5 by w(i) and observing
\(w_i = p_{i,n} + \sum_{j \ne n} p_{i,j} (1+w_j)\)
where n=5 is the final state. Obviously, w(5)=0. Moreover, the sum of all probabilities in one row is 1. Thus we get
\(w_i - \sum_{j \ne n} p_{i,j} w_j = 1\)
Let us solve this system in Euler Math Toolbox.
>B=P[1:5,1:5];
>w=(id(5)-B)\ones(5,1)
 13.0236615
 11.9266963
 10.5554446
 8.7272727
 6.0000000
The average waiting time for a 6-Yahtzee is approximately 13 throws. If you already got 4 sixes, the average waiting time for the fifth is indeed 6.
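If you want to cross-check these numbers outside of Euler Math Toolbox, here is a small Python/NumPy sketch of the same computation (mine, not part of the original session):

import numpy as np
from math import comb

# Transition matrix for the number of sixes among n=5 dice (states 0..5):
# from i sixes we reroll the remaining n-i dice and count j-i new sixes.
p, n = 1/6, 5
P = np.zeros((6, 6))
for i in range(6):
    for j in range(i, 6):
        P[i, j] = comb(n - i, j - i) * p**(j - i) * (1 - p)**(n - j)

# Expected waiting times: solve (I - B) w = 1 over the transient states.
B = P[:5, :5]
w = np.linalg.solve(np.eye(5) - B, np.ones(5))
print(w)  # approximately [13.0237 11.9267 10.5554 8.7273 6.0000]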
We can interpret this in another way. Observe
\(w = (I-B)^{-1} \cdot 1 = (I+B+B^2+\ldots) \cdot 1\)
The sum converges because the norm of B is clearly less than 1. So we interpret this as a process
\(w_{n+1} = B \cdot (w_n + 1)\)
Imagine a waiting room. We have w(n,i) persons in state Si at time n. Now, one more person arrives at each state and we process the states according to our probabilities. In the long run, the number of persons in state S0 will approach the waiting time to get out of the system into the final state.
Unfortunately, the probabilities are not that easy to compute when we ask for a Yahtzee of any kind. E.g., if we have 66432 we will keep the two sixes, 66. But if we throw 66555 we will switch and keep 555. There is a recursion for the probability to get from i equal dice to j equal dice. I computed the following matrix, which replaces the matrix P above.
>PM
 0.0925926 0.6944444 0.1929012 0.0192901 0.0007716
 0.0000000 0.5555556 0.3703704 0.0694444 0.0046296
 0.0000000 0.0000000 0.6944444 0.2777778 0.0277778
 0.0000000 0.0000000 0.0000000 0.8333333 0.1666667
 0.0000000 0.0000000 0.0000000 0.0000000 1.0000000
Note that I have only the states S1 to S5 here. With the same method, I get the waiting times.
>B=PM[1:4,1:4];
>w=(id(4)-B)\ones(4,1)
 11.0901554
 10.4602273
 8.7272727
 6.0000000
Thus we need only about 11 throws to get any Yahtzee. The probability to get one in three throws is now about 4.6%.
>[1,0,0,0,0].PM.PM.PM
 0.0007938 0.2560109 0.4524017 0.2447649 0.0460286
|
Consider a particle of mass $m$ in an infinite square well of width $L$. The wave function of the particle at $t = 0$ is $$ \psi (x,0)=Ax(x^2-L^2), \quad 0\leq x \leq L$$
a.) What is $\psi(x,t)$ for $ t \geq 0 $?
b.) At some time $t >0$, what is the probability of measuring the particle to have energy $16\pi ^2\hbar ^2/(2mL^2)$? Does it depend on time?
c.) Calculate the expectation value of the position.
So for a the first thing I did was to find the normalization constant $A$ using the normalization condition $$ \langle\psi (x,0)|\psi (x,0)\rangle = \int_{-\infty}^{\infty} \psi^*(x,0) \psi (x,0)dx =1 $$ which after evaluating gives me $$ A= \sqrt{\frac{105}{8L^7}}$$
then I find the expansion coefficients $$ C_n = \langle E_n|\psi (x,0)\rangle = \int_{-\infty}^{\infty} \varphi_n^*(x)\psi (x,0)dx $$ for which I got $$ C_n = \frac{3\sqrt{105}}{n^3\pi ^3}(-1)^n $$ Now the time-dependent wave function can be written down as $$ \psi (x,t) = \sum_n \frac{3\sqrt{105}}{n^3\pi ^3}(-1)^n \exp\left(-\frac{in^2\pi ^2\hbar t}{2mL^2}\right) \sqrt{\frac{2}{L}}\sin\left(\frac{n\pi x}{L}\right) $$
I also did part b.) and got a really small probability that didn't depend on time. The part I'm confused about is part c.): I don't know what to do with the summations when you square the wave function in $$ \langle x \rangle = \int_{-\infty}^{\infty} x |\psi (x,t)|^2 dx $$
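A sketch of how the double sum is usually organized (using indices $j,k$ to avoid a clash with the mass $m$; the $C_n$ above are real): $$ \langle x \rangle (t) = \sum_{j,k} C_j C_k \cos\left[\frac{(j^2-k^2)\pi^2\hbar t}{2mL^2}\right]\int_0^L x\,\varphi_j(x)\varphi_k(x)\,dx $$ The matrix elements $\int_0^L x\,\varphi_j\varphi_k\,dx$ equal $L/2$ for $j=k$ and vanish for $j\neq k$ with $j+k$ even (by the symmetry of the well about $x=L/2$), so only the diagonal terms and the $j+k$ odd cross terms survive; the diagonal part alone already contributes the time-independent value $\frac{L}{2}\sum_j C_j^2 = \frac{L}{2}$.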
|
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here: do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is the inner or outer measure of $E$ meant by $m^\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If the ordinary measure is meant by $m^\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site (to which I hope my university has a subscription) that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag-wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or all) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus course does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
|
How to Perform Various Rotor Analyses in the COMSOL® Software
Vibration in rotating machinery is very sensitive to the geometric, structural, and inertial properties of the various rotating and stationary components interacting with each other. These properties include the location of the mounted components and their inertial properties, bearing characteristics, and shaft properties. To understand the effects of these parameters, start with a simple model and perform various analyses to correlate the rotor response within the same model. Let’s demonstrate this process with a simply supported beam rotor example.
2 Analysis Types for a Simply Supported Beam Rotor System
The rotor system in this example is a simple rotor with a uniform cross section throughout its length. It is supported at both ends by bearings and there are three mounted components called disks at different locations of the rotor.
You can model this rotor using the Beam Rotor interface in the COMSOL Multiphysics® software. The inertial properties and offset of the rotor components are modeled with the Disk node. The bearing support is modeled by an equivalent stiffness-based approach via the Journal Bearing node provided in the Beam Rotor interface.
For more information about the geometric properties and model setup, check out the references in the model documentation.
Geometry of the beam rotor example.
Two types of analyses are commonly used to study rotor vibration characteristics: eigenfrequency and time-domain analyses. As mentioned in a previous blog post, the natural frequencies of the rotor strongly depend on the rotor's angular speed. Therefore, while performing the eigenfrequency analysis, you need to consider the variation in the rotor speed to get the correct critical speeds. A time-domain analysis is performed when you want to look at the system response under time-varying excitation.
Now, let’s look at what type of information each analysis provides as well as the steps involved to perform these analyses.
Eigenfrequency Analysis of a Rotor
Eigenfrequency analysis is used to determine the natural frequencies of a system. In a rotordynamics scenario, this analysis can be used in two different ways.
First, if the operating speed of the system is not fixed, you can perform an eigenfrequency analysis of the system for the range of operating speeds and choose the one that is furthest from the critical speeds of the system and meets other design considerations. If you cannot find a suitable operating speed for the current system, you might need to make certain design modifications in the system to get a stable operating speed that meets all of the requirements.
In the second type of analysis, the operating speed of the system is fixed. In such a case, you need to perform an eigenfrequency analysis at the given operating speed to check that none of the natural frequencies of the system are close to the operating speed. If any of the natural frequencies fall close to the operating speed, design modifications are a must.
The design modifications in the rotor system require an understanding of what kind of modifications will produce the desired effect and at what cost. This is where simulating simple systems to understand the effect of design modifications is very helpful. Simulation can provide guidelines for design modifications, thus reducing the number of iterations in the design process.
Consider the first case, in which the operating speed of the system is not fixed, to understand the analysis steps. In this case, you need to perform a parametric eigenfrequency analysis for the angular speed of the rotor. This requires two steps in the Study node: Parametric Sweep and Step 1: Eigenfrequency, shown below on the left. Settings for the Parametric Sweep node for a sweep over a parameter Ow representing the angular speed of the rotor are shown below in the center. This parameter is used as an input in the Rotor Speed section of the Beam Rotor node settings, shown below on the right.
After performing the analysis, you get a whirl plot of the rotor as the default, shown below. The whirl plot shows the whirling orbit and the deformed shape of the rotor for the given rpm and natural frequency combinations.
Whirl plot of the rotor.
The deformed shape of the rotor also gives you an idea of how strongly the natural frequency will depend on the angular speed of the rotor. If the disks move away from the rotation axis without significant tilting, then the split in the frequency in the backward whirl (opposite to the spin) and forward whirl (same direction as the spin) is not significant. Alternatively, if the disks do not move significantly far from the rotation axis and rather have significant tilting, then the split in the frequency of the backward and forward whirl is noticeable.
To understand this concept in depth, you can plot the variation of the natural frequency for different modes against the angular speed of the rotor, which is often called a Campbell diagram. The Campbell plot for the simply supported rotor example is shown below. You can see the strong divergence of the eigenfrequencies with rotor speed for certain modes, whereas for others, particularly the modes with low natural frequencies, the divergence is not significant. If you look at the mode shapes corresponding to these frequencies, they confirm the behavior previously discussed. Critical speeds of the rotor can be obtained from the Campbell plot by looking at the intersections of the natural frequency vs. angular speed curves with the ω = Ω line. These are the speeds near which a rotor should not be operated, unless sufficiently damped.
Campbell plot of the simply supported rotor system.
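To make the construction concrete, here is a minimal sketch (in Python, not taken from the COMSOL model) of how a Campbell diagram can be generated for a toy 2-DOF rigid-disk rotor; the gyroscopic matrix form is standard, but all numerical values are assumptions for illustration. At each spin speed, the eigenvalues of the first-order system give the whirl frequencies.

```python
# Minimal Campbell-diagram sketch for a toy 2-DOF disk-tilt rotor model.
# All numerical values are assumed for illustration only.
import numpy as np

Id, Ip = 0.1, 0.2          # diametral/polar moments of inertia [kg m^2] (assumed)
kt = 1.0e4                 # shaft tilting stiffness [N m/rad] (assumed)

M = np.diag([Id, Id])                  # inertia matrix
K = np.diag([kt, kt])                  # stiffness matrix
G = np.array([[0.0, Ip], [-Ip, 0.0]])  # gyroscopic matrix (skew-symmetric)

for rpm in np.linspace(0.0, 30000.0, 16):
    Omega = rpm * 2 * np.pi / 60.0     # spin speed in rad/s
    # First-order (companion) form of  M q'' + Omega G q' + K q = 0
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, Omega * G)]])
    whirl_hz = np.sort(np.abs(np.linalg.eigvals(A).imag)) / (2 * np.pi)
    print(rpm, whirl_hz)  # plotting these vs. rpm traces the forward/backward branches
```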
The damping in the respective modes can be assessed by plotting the logarithmic decrement against the angular speed of the rotor. The logarithmic decrement is defined as

$$\delta = \ln\left(\frac{A(t)}{A(t+T)}\right),$$

where $A(t)$ is a time-varying response and $\omega$ is the complex eigenfrequency of the system. $T$ is the time period, given by $T = \frac{2\pi}{\Re(\omega)}$.

Logarithmic decrement for different bending modes in the simply supported rotor system.
In the plot above, you can see the variation of the logarithmic decrement for the different bending modes with the angular speed for the simply supported rotor. The notations 'b' and 'f' are used for the backward and forward whirl modes, respectively. A logarithmic decrement of zero means that the system is undamped, a negative value indicates an unstable system, and a positive value indicates a stable system.
You can also note the pattern change for some of the curves. The reason is that the modal data is arranged in increasing order of the natural frequency. But we know that the rotor’s natural frequencies decrease in the backward whirl modes and increase in the forward whirl modes. Due to this, there is crossover of the natural frequencies between the higher backward whirl and lower forward whirl modes beyond a certain angular speed. This upsets the initial order of the modes, resulting in switching of the patterns across the crossover points.
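In code, the same quantity can be read off directly from a complex eigenfrequency; here is a small sketch (assuming the $e^{i\omega t}$ time convention consistent with the definition above, so $\Im(\omega)>0$ corresponds to decay):

```python
import numpy as np

def log_decrement(omega: complex) -> float:
    """Logarithmic decrement ln(A(t)/A(t+T)) for A(t) ~ exp(-Im(omega) t)."""
    T = 2 * np.pi / omega.real   # period from the oscillatory part of omega
    return omega.imag * T        # positive -> damped, negative -> unstable
```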
Time-Dependent Analysis of a Rotor
Eigenfrequency analysis gives the characteristics of the rotor system operating at steady state. However, before and after reaching the steady state, during the run-up and run-down, the angular speed of the rotor varies with time. In certain cases, the operating speed might be above the first few natural frequencies of the rotor. Therefore, during the run-up and run-down, the rotor will cross over the corresponding critical speeds. Also, there could be nonharmonic time-varying external excitation acting on the rotor. In such cases, the rotor response cannot be completely determined by an eigenfrequency or frequency-domain analysis. Rather, you need a time-dependent simulation to study the response of the system.
You can also perform a time-dependent analysis of the rotor at different angular speeds by performing a parametric sweep to see how the angular speed governs the response. An obvious extension of such an analysis is to evaluate the frequency spectrum of the time-dependent response of the rotor for all of the angular speeds and analyze what combinations of the angular speed and frequency result in a high-amplitude response. A waterfall plot shows the response amplitude vs. angular speed and frequency and gives the distribution of the modal participation in the response at different speeds. Such an analysis can be set up using the three steps in the study node, as shown below.

Steps for a waterfall plot analysis. The Parametric Sweep study step is used to sweep the angular speed, a Time Dependent study step is used to perform a time-dependent analysis corresponding to each angular speed in the parametric sweep, and a Time to Frequency FFT study step takes the fast Fourier transform of the time-dependent data to convert it into the frequency spectrum.
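Outside the COMSOL environment, the same three-step pipeline can be mimicked in a few lines; the sketch below uses a synthetic, assumed response signal (a 1X component plus a subsynchronous whirl) purely to show the sweep-then-FFT structure of a waterfall plot.

```python
# Waterfall-plot skeleton: parametric sweep -> time response -> FFT per speed.
# The response model here is synthetic and assumed, not a rotor simulation.
import numpy as np

fs, tmax = 2000.0, 2.0
t = np.arange(0.0, tmax, 1.0 / fs)

rows = []
speeds_rpm = np.linspace(6000.0, 30000.0, 25)
for rpm in speeds_rpm:
    f_spin = rpm / 60.0                          # 1X frequency in Hz
    x = (np.sin(2 * np.pi * f_spin * t)          # synchronous (1X) component
         + 0.5 * np.sin(2 * np.pi * 0.45 * f_spin * t))  # subsynchronous whirl
    rows.append(np.abs(np.fft.rfft(x)) / len(x)) # one spectrum per speed

waterfall = np.array(rows)                       # shape: (speed, frequency)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)        # frequency axis for the columns
```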
In the eigenfrequency analysis, bearings are modeled using constant stiffness and damping coefficients. However, in reality, these coefficients are strongly dependent on the journal motion. To highlight the effect of this nonlinearity in the time-domain analysis, a plain journal bearing model is used instead of the constant bearing coefficients. The plain journal bearing model is based on the analytical solution of the Reynolds equation in the short bearing approximation. The system in this case is excited by the eccentric mounting of a disk. To simplify the system, only the second disk is considered to have a small eccentricity, in the local y direction.
The waterfall plot of the z-component of the displacement is shown in the figure below. You can clearly observe three peaks in the spectrum. The third peak, which falls along the ω = Ω curve, corresponds to a 1X synchronous whirl. This is the response to the centrifugal force due to the eccentricity, which changes its direction with the rotation of the shaft. The other peaks correspond to the orbiting of the rotor due to the complex rotor-bearing interaction. The reason is that the forces from the pressure distribution around the journal in the bearing have a cross-coupling effect with the journal motion. In other words, the motion of the journal in one of the lateral directions induces a component of the force in the lateral direction perpendicular to it. The effect of this phenomenon is a net force acting on the rotor in the direction of the forward whirl. This causes the subsynchronous orbiting of the rotor.

A waterfall plot shows the response amplitude vs. the angular speed and frequency of the rotor.
The orbit of the different locations along the length of the rotor at 30,000 rpm is shown below. The orbit curve changes its color from green to red with time. You can see that after the initial transient phase, the rotor undergoes a forward circular whirl in the steady state. Also, the second bending mode has the highest participation in the response.
Orbit of the rotor at different locations. The plot changes from green to red with time.
The time variation of the z-direction displacement of a point on the rotor at 30,000 rpm is shown below. Apart from the high-frequency variation, there is also a low-frequency component that envelops the response but gets damped out with time.

Time variation of the z-displacement.
With this tutorial model, we have demonstrated the approach to set up different analyses in a rotor system, as well as how to plot and analyze the simulation results. Ready to give this tutorial a try? Simply click on the button below to access the MPH-file via the Application Gallery or open it via the Application Library in the COMSOL® software.
|
So the knapsack problem has an integer programming formulation as follows,
$$ \max_x\; v\cdot x \quad \text{s.t.}\quad x_i \in \{0,1\},\quad w\cdot x \leq C $$
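(For reference, and assuming integer weights, the first program is the classic 0/1 knapsack, which the standard dynamic program solves in $O(dC)$ time; sketch added for context:)

```python
# Standard 0/1 knapsack DP for the first program (integer weights assumed).
def knapsack(values, weights, C):
    dp = [0] * (C + 1)                 # dp[c] = best value within capacity c
    for v, w in zip(values, weights):
        for c in range(C, w - 1, -1):  # reverse scan so each item is used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[C]
```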
Now consider the second integer program which might be a variation of the knapsack integer program.
$$ \max_x\; v\cdot x \quad \text{s.t.}\quad x_i \in \{0,L_i\},\quad x_i \leq R \cdot \delta_i,\quad \delta_i \in \{0,1\},\quad \sum_i \delta_i = k $$ where $v_i$ is item $i$'s value, $L_i$ and $k$ are constants, and $R = \max{\{L_1,L_2,\ldots,L_d\}}$.
Is there a dynamic programming solution or an approximation algorithm for the second integer programming problem?

Is it possible to use the solution of the knapsack problem to warm start or partially solve the second integer program?
Thanks!
|
Defining parameters
Level: \( N \) = \( 100 = 2^{2} \cdot 5^{2} \)
Weight: \( k \) = \( 1 \)
Nonzero newspaces: \( 1 \)
Newforms: \( 1 \)
Sturm bound: \(600\)
Trace bound: \(0\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{1}(\Gamma_1(100))\).
                   Total   New   Old
Modular forms        74     25    49
Cusp forms            4      4     0
Eisenstein series    70     21    49
The following table gives the dimensions of subspaces with specified projective image type.
             \(D_n\)   \(A_4\)   \(S_4\)   \(A_5\)
Dimension       4         0         0         0

Decomposition of \(S_{1}^{\mathrm{new}}(\Gamma_1(100))\)
We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
Label      \(\chi\)                    Newforms     Dimension   \(\chi\) degree
100.1.b    \(\chi_{100}(51, \cdot)\)   None         0           1
100.1.d    \(\chi_{100}(99, \cdot)\)   None         0           1
100.1.f    \(\chi_{100}(57, \cdot)\)   None         0           2
100.1.h    \(\chi_{100}(19, \cdot)\)   None         0           4
100.1.j    \(\chi_{100}(11, \cdot)\)   100.1.j.a    4           4
100.1.k    \(\chi_{100}(13, \cdot)\)   None         0           8
|
Skills to Develop
In this section, we strive to understand the ideas generated by the following important questions:
What is a graphical justification for why \( \frac{d}{dx}\left[ a^{x} \right] = a^{x} \ln(a)\)? What do the graphs of \(y = \sin(x)\) and \(y = \cos(x)\) suggest as formulas for their respective derivatives? Once we know the derivatives of \(\sin(x)\) and \(\cos(x)\), how do previous derivative rules work when these functions are involved?
Throughout Chapter 2, we will be working to develop shortcut derivative rules that will help us to bypass the limit definition of the derivative in order to quickly determine the formula for \(f'(x)\) when we are given a formula for \(f(x)\). In Section 2.1, we learned the rule for power functions, that if \(f(x) = x^n\), then \(f'(x) = nx^{n-1}\), and justified this in part due to results from different \(n\)-values when applying the limit definition of the derivative. We also stated the rule for exponential functions, that if \(a\) is a positive real number and \(f(x) = a^x\), then \(f'(x) = a^x \ln(a)\). Later in this present section, we are going to work to conjecture formulas for the sine and cosine functions, primarily through a graphical argument. To help set the stage for doing so, the following preview activity asks you to think about exponential functions and why it is reasonable to think that the derivative of an exponential function is a constant times the exponential function itself.
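For instance, the difference quotient for \(g(x) = 2^x\) at \(x = 0\) is \(\frac{2^h - 1}{h}\), and computing it for small \(h\) gives
\[\frac{2^{0.01}-1}{0.01} \approx 0.6956 \quad \text{and} \quad \frac{2^{0.001}-1}{0.001} \approx 0.6934,\]
values that approach \(\ln(2) \approx 0.6931\), consistent with the stated rule \(\frac{d}{dx}[2^x] = 2^x\ln(2)\) evaluated at \(x = 0\).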
Preview Activity \(\PageIndex{1}\)
Consider the function \(g(x) = 2^ x\) , which is graphed in Figure \(\PageIndex{1}\).
a. At each of \(x = -2, -1, 0, 1, 2\), use a straightedge to sketch an accurate tangent line to \(y = g(x)\).
b. Use the provided grid to estimate the slope of the tangent line you drew at each point in (a).
c. Use the limit definition of the derivative to estimate \(g'(0)\) by using small values of \(h\), and compare the result to your visual estimate for the slope of the tangent line to \(y = g(x)\) at \(x = 0\) in (b).
d. Based on your work in (a), (b), and (c), sketch an accurate graph of \(y = g'(x)\) on the axes adjacent to the graph of \(y = g(x)\).
e. Write at least one sentence that explains why it is reasonable to think that \(g'(x) = cg(x)\), where \(c\) is a constant. In addition, calculate \(\ln(2)\), and then discuss how this value, combined with your work above, reasonably suggests that \(g'(x) = 2^{x} \ln(2)\).
Figure \(\PageIndex{1}\): At left, the graph of \(y = g(x) = 2^x\). At right, axes for plotting \(y = g'(x)\).

The sine and cosine functions
The sine and cosine functions are among the most important functions in all of mathematics. Sometimes called the circular functions due to their genesis in the unit circle, these periodic functions play a key role in modeling repeating phenomena such as the location of a point on a bicycle tire, the behavior of an oscillating mass attached to a spring, tidal elevations, and more. Like polynomial and exponential functions, the sine and cosine functions are considered basic functions, ones that are often used in the building of more complicated functions. As such, we would like to know formulas for \(\frac{d}{dx}[\sin(x)]\) and \(\frac{d}{dx}[\cos(x)]\), and the next two activities lead us to that end.
Activity \(\PageIndex{1}\)
Consider the function \(f (x) = \sin(x)\), which is graphed in Figure \(\PageIndex{2}\) below. Note carefully that the grid in the diagram does not have boxes that are 1 × 1, but rather approximately 1.57 × 1, as the horizontal scale of the grid is \(π\)/2 units per box.
a. At each of \(x = -2\pi, -\frac{3\pi}{2}, -\pi, -\frac{\pi}{2}, 0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}, 2\pi\), use a straightedge to sketch an accurate tangent line to \(y = f(x)\).
b. Use the provided grid to estimate the slope of the tangent line you drew at each point. Pay careful attention to the scale of the grid.
c. Use the limit definition of the derivative to estimate \(f'(0)\) by using small values of \(h\), and compare the result to your visual estimate for the slope of the tangent line to \(y = f(x)\) at \(x = 0\) in (b). Using periodicity, what does this result suggest about \(f'(2\pi)\)? About \(f'(-2\pi)\)?
d. Based on your work in (a), (b), and (c), sketch an accurate graph of \(y = f'(x)\) on the axes adjacent to the graph of \(y = f(x)\).
e. What familiar function do you think is the derivative of \(f(x) = \sin(x)\)?
Figure \(\PageIndex{2}\): At left, the graph of \(y = f(x) = \sin(x)\).
Activity \(\PageIndex{2}\)
Consider the function \(g(x) = \cos(x)\), which is graphed in Figure \(\PageIndex{3}\) below. Note carefully that the grid in the diagram does not have boxes that are 1 × 1, but rather approximately 1.57 × 1, as the horizontal scale of the grid is \(π\)/2 units per box.
a. At each of \(x = -2\pi, -\frac{3\pi}{2}, -\pi, -\frac{\pi}{2}, 0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}, 2\pi\), use a straightedge to sketch an accurate tangent line to \(y = g(x)\).
b. Use the provided grid to estimate the slope of the tangent line you drew at each point. Again, note the scale of the axes and grid.
c. Use the limit definition of the derivative to estimate \(g'(\frac{\pi}{2})\) by using small values of \(h\), and compare the result to your visual estimate for the slope of the tangent line to \(y = g(x)\) at \(x = \frac{\pi}{2}\) in (b). Using periodicity, what does this result suggest about \(g'(-\frac{3\pi}{2})\)? Can symmetry on the graph help you estimate other slopes easily?
d. Based on your work in (a), (b), and (c), sketch an accurate graph of \(y = g'(x)\) on the axes adjacent to the graph of \(y = g(x)\).
e. What familiar function do you think is the derivative of \(g(x) = \cos(x)\)?
Figure \(\PageIndex{3}\): At left, the graph of \(y = g(x) = \cos(x)\).
The results of the two preceding activities suggest that the sine and cosine functions not only have the beautiful interrelationships that are learned in a course in trigonometry – connections such as the identities
\[\sin^2 (x) + \cos^2 (x) = 1\]
and
\[\cos \left(x − \dfrac{π}{2} \right) = \sin(x)\]
– but that they are even further linked through calculus, as the derivative of each involves the other. The following rules summarize the results of the activities.\(^{4}\)
Sine and Cosine Functions
For all real numbers \(x\),
\[\dfrac{d}{dx} [\sin(x)] = \cos(x)\]
and
\[\dfrac{d}{dx} [\cos(x)] = − \sin(x)\]
We have now added two additional functions to our library of basic functions whose derivatives we know: power functions, exponential functions, and the sine and cosine functions. The constant multiple and sum rules still hold, of course, and all of the inherent meaning of the derivative persists, regardless of the functions that are used to constitute a given choice of \(f (x)\). The following activity puts our new knowledge of the derivatives of \( \sin(x)\) and \(\cos(x)\) to work.
Activity \(\PageIndex{3}\)
Answer each of the following questions. Where a derivative is requested, be sure to label the derivative function with its name using proper notation.
a. Determine the derivative of \(h(t) = 3\cos(t) - 4\sin(t)\).
b. Find the exact slope of the tangent line to \(y = f(x) = 2x + \frac{\sin(x)}{2}\) at the point where \(x = \frac{\pi}{6}\).
c. Find the equation of the tangent line to \(y = g(x) = x^{2} + 2\cos(x)\) at the point where \(x = \frac{\pi}{2}\).
d. Determine the derivative of \(p(z) = z^{4} + 4^{z} + 4\cos(z) - \sin(\frac{\pi}{2})\).
e. The function \(P(t) = 24 + 8\sin(t)\) represents a population of a particular kind of animal that lives on a small island, where \(P\) is measured in hundreds and \(t\) is measured in decades since January 1, 2010. What is the instantaneous rate of change of \(P\) on January 1, 2030? What are the units of this quantity? Write a sentence in everyday language that explains how the population is behaving at this point in time.

Summary
In this section, we encountered the following important ideas:
If we consider the graph of an exponential function \(f(x) = a^x\) (where \(a > 1\)), the graph of \(f'(x)\) behaves similarly, appearing exponential and as a possibly scaled version of the original function \(a^x\). For \(f(x) = 2^x\), careful analysis of the graph and its slopes suggests that \(\frac{d}{dx}[2^x] = 2^x \ln(2)\), which is a special case of the rule we stated in Section 2.1. By carefully analyzing the graphs of \(y = \sin(x)\) and \(y = \cos(x)\), plus using the limit definition of the derivative at select points, we found that \[\dfrac{d}{dx} [\sin(x)] = \cos(x)\] and \[\dfrac{d}{dx} [\cos(x)] = -\sin(x).\] We note that all previously encountered derivative rules still hold, but now may also be applied to functions involving the sine and cosine, plus all of the established meaning of the derivative applies to these trigonometric functions as well.

Contributors
Matt Boelkins (Grand Valley State University), David Austin (Grand Valley State University), Steve Schlicker (Grand Valley State University)
________
\(^{4}\)These two rules may be formally proved using the limit definition of the derivative and the expansion identities for \(\sin(x + h)\) and \(\cos(x + h)\).
|
I have a question about Sobolev spaces.
In the following, we assume $d \ge 2$. Let $D$ be a domain of $\mathbb{R}^d$; that is, $D$ is a connected open subset of $\mathbb{R}^d$.
Note that $D$ is not necessarily bounded. $H^{1}(D)$ denotes the first order $L^2$-Sobolev space on $D$ with Neumann boundary condition. I am interested in when $H^{1}(D)$ is continuously embedded into $L^{2^{\ast}}(D)$. That is, there exists $C\ge0$ such that
\begin{equation*} \left( \int_{D} |f|^{2^{\ast}}\,dx\right)^{2/2^{\ast}} \le C \left(\int_{D}|\nabla f|^{2}\,dx+\int_{D}|f|^{2}\,dx \right)\quad\cdots(1)\end{equation*}
Here $2^{\ast}=2d/(d-2)$ if $d\ge 3$, and $2^{\ast}$ is any number in $(2,\infty)$ if $d=2$.

My question
In Ouhabaz's book, it is said that $(1)$ holds when $D$ has smooth boundary. But I couldn't find the definition of smooth boundary (I think there are many styles of definition of smooth boundary) or the proof of this claim in this book. When $D$ is bounded, there are many references, though.
If you know the details, please let me know.
|
Our new book (NAT)
Nonabelian algebraic topology: filtered spaces, crossed complexes, cubical homotopy groupoids, EMS Tracts in Mathematics vol 15
uses mainly cubical, rather than simplicial, sets. The reasons are explained in the Introduction: in strict cubical higher categories we can easily express
algebraic inverse to subdivision,
a simple intuition which I have found difficult to express in simplicial terms. Thus cubes are useful for local-to-global problems. This intuition is crucial for our Higher Homotopy Seifert-van Kampen Theorem, which enables new calculations of some homotopy types, and suggests a new foundation for algebraic topology at the border between homotopy and homology.
A further reason for the connections is that they enabled an equivalence between crossed modules and certain double groupoids, and later, crossed complexes and strict cubical $\omega$-groupoids.
Also cubes have a nice tensor product, and this is crucial in the book for obtaining some homotopy classification results. See Chapter 15.
I have found that with cubes I have been able to conjecture and in the end prove theorems which have enabled new nonabelian calculations in homotopy theory, e.g. of second relative homotopy groups. So I have been happy to use cubes until someone comes up with something better. ($n$-simplicial methods, in conjunction with cubical ideas, turned out, however, to be necessary for proofs in the work with J.-L. Loday.)
See also some beamer presentations available on my preprint page.
Here is a further emphasis on the above point on algebraic structures: consider the following diagram:
From left to right pictures subdivision; from right to left pictures composition. The composition idea is well formulated in terms of double categories, and that idea is easily generalised to $n$-fold categories, and is expressed well in a cubical context. In that context one can conjecture, and eventually prove, higher dimensional Seifert-van Kampen Theorems, which allow new calculations in algebraic topology. Such multiple compositions are difficult to handle in globular or simplicial terms.
The further advantage of cubes, as mentioned in above answers, is that the formula $$I^m \times I^n \cong I^{m+n}$$ makes cubes very helpful in considering monoidal and monoidal closed structures. Most of the major results of the EMS book required cubical methods for their conjecture and proof. The main results of Chapter 15 of NAT have not been done simplicially. See for example Theorem 15.6.1, on a convenient dense subcategory closed under tensor product.
Sept 5, 2015: The paper by Vezzani arXiv:1405.4548 shows a use of cubical, rather than simplicial, methods in motivic theory; while the paper by I. Patchkoria, arXiv:1011.4870, Homology Homotopy Appl. Volume 14, Number 1 (2012), 133-158, gives a "Comparison of Cubical and Simplicial Derived Functors".
In all these cases the use of connections in cubical methods is crucial. There is more discussion on this on MathOverflow. For us, connections arose in order to define commutative cubes in higher cubical categories: compare this paper.
See also this 2014 presentation The intuition for cubical methods in algebraic topology.
April 13, 2016. I should add some further information from Alberto Vezzani:
The cubical theory was better suited than the simplicial theory when dealing with (motives of) perfectoid spaces in characteristic 0. For example: degeneracy maps of the simplicial complex $\Delta$ in algebraic geometry are defined by sending one coordinate $x_i$ to the sum of two coordinates $y_j+y_{j+1}$. When one considers the perfectoid algebras obtained by taking all $p$-th roots of the coordinates, such maps are no longer defined, as $y_j+y_{j+1}$ doesn't have $p$-th roots in general. The cubical complex, on the contrary, is easily generalized to the perfectoid world.
November 29, 2016 There is more information in this paper on Modelling and Computing Homotopy Types: I which can serve as an introduction to the NAT book.
|
If $\sum_{m=0}^{\infty} b_m$ is conditionally convergent, then is $\sum_{m=0}^{\infty} m^2 b_m$ divergent? JUSTIFY
An example of a conditionally convergent series is $\sum_{m=0}^{\infty} (-1)^m/\sqrt{m+1}$, and after multiplying its terms by $m^2$ the resulting series diverges, since the terms $(-1)^m m^2/\sqrt{m+1}$ do not tend to zero.
My conclusion is that it diverges. But what will the "justify" be?
|
1. State whether the following augmented matrices are in RREF and compute their solution sets.
$$\left(\begin{array}{rrrrr|r}1 &0 &0 &0 &3 &1 \\ 0 &1 &0 &0 &1 &2 \\ 0 &0 &1 &0 &1 &3 \\ 0 &0 &0 &1 &2 &0\end{array}\right),$$
$$\left(\begin{array}{rrrrrr|r}1 &1 &0 &1 &0 &1 &0 \\ 0 &0 &1 &2 &0 &2 &0 \\ 0 &0 &0 &0 &1 &3 &0 \\ 0 &0 &0 &0 &0 &0 &0\end{array}\right),$$
$$\left(\begin{array}{rrrrrrr|r}1 &1 &0 &1 &0 &1 &0 &1 \\ 0 &0 &1 &2 &0 &2 &0 &-1 \\ 0 &0 &0 &0 &1 &3 &0 &1 \\ 0 &0 &0 &0 &0 &2 &0 &-2 \\ 0 &0 &0 &0 &0 &0 &1 &1\end{array}\right).$$
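(If you want to check your hand computations, SymPy's rref can be used; for instance, the first matrix above is already in RREF:)

```python
# Checking RREF with SymPy (a check only, not a substitute for the hand work).
from sympy import Matrix

A = Matrix([[1, 0, 0, 0, 3, 1],
            [0, 1, 0, 0, 1, 2],
            [0, 0, 1, 0, 1, 3],
            [0, 0, 0, 1, 2, 0]])
R, pivots = A.rref()
print(R == A, pivots)   # True, (0, 1, 2, 3): the matrix is already in RREF
```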
2. Solve the following linear system:
$$\begin{matrix}
2x_1 + 5x_2 - 8x_3 + 2x_4 + 2x_5 = 0\\ 6x_1 + 2x_2 - 10x_3 + 6x_4 + 8x_5 = 6\\ 3x_1 + 6x_2 + 2x_3 + 3x_4 + 5x_5 = 6\\ 3x_1 + 1x_2 - 5x_3 + 3x_4 + 4x_5 = 3\\ 6x_1 + 7x_2 - 3x_3 + 6x_4 + 9x_5 = 9 \end{matrix}$$
Be sure to set your work out carefully with equivalence signs \(\sim\) between each step, labeled by the row operations you performed.
3. Check that the following two matrices are row-equivalent:
$$\left(\begin{array}{rrr|r}1 &4 &7 &10 \\ 2 &9 &6 &0\end{array}\right)$$
and
$$\left(\begin{array}{rrr|r}0 &-1 &8 &20 \\ 4 &18 &12 &0\end{array}\right).$$
Now remove the third column from each matrix, and show that the resulting two matrices (shown below) are row-equivalent:
$$\left(\begin{array}{rr|r}1 &4 &10 \\ 2 &9 &0\end{array}\right)$$
and
$$\left(\begin{array}{rr|r}0 &-1 &20 \\ 4 &18 &0\end{array}\right).$$
Now remove the fourth column from each of the original two matrices, and show that the resulting two matrices, viewed as augmented matrices (shown below) are row-equivalent:
$$\left(\begin{array}{rr|r}1 &4 &7 \\ 2 &9 &6\end{array}\right)$$
and
$$\left(\begin{array}{rr|r}0 &-1 &8 \\ 4 &18 &12\end{array}\right).$$
Explain why row-equivalence is never affected by removing columns.
4. Check that the system of equations corresponding to the augmented matrix
$$\left(\begin{array}{rr|r}1 &4 &10 \\ 3 &13 &9 \\ 4 &17 &20\end{array}\right)$$
has no solutions. If you remove one of the rows of this matrix, does the new matrix have any solutions? In general, can row equivalence be affected by removing rows? Explain why or why not.
5. Explain why the linear system has no solutions:
$$\left(\begin{array}{rrr|r}1 &0 &3 &1 \\ 0 &1 &2 &4 \\ 0 &0 &0 &6\end{array}\right)$$
For which of the values of \(k\) does the system below have a solution?
$$\begin{matrix} x- 3y = 6 \\ x + 3z = -3 \\ 2x + ky + (3-k)z = 1 \end{matrix}$$
6. Show that the RREF of a matrix is unique. (Hint: Consider what happens if the same augmented matrix had two different RREFs. Try to see what happens if you removed columns from these two RREF augmented matrices.)
7. Another method for solving linear systems is to use row operations to bring the augmented matrix to Row Echelon Form (REF as opposed to RREF). In REF, the pivots are not necessarily set to one, and we only require that all entries left of the pivots are zero, not necessarily entries above a pivot. Provide a counterexample to show that row echelon form is not unique.
Once a system is in row echelon form, it can be solved by "back substitution." Write the following row echelon matrix as a system of equations, then solve the system using back-substitution.
$$\left(\begin{array}{rrr|r}2 &3 &1 &6 \\ 0 &1 &1 &2 \\ 0 &0 &3 &3\end{array}\right)$$
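(A short sketch of back substitution in code, using the matrix above; the values are easy to verify by hand:)

```python
# Back substitution for an upper-triangular augmented system U x = b.
import numpy as np

def back_substitute(U, b):
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                  # solve the last equation first
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2., 3., 1.], [0., 1., 1.], [0., 0., 3.]])
b = np.array([6., 2., 3.])
print(back_substitute(U, b))   # -> [1. 1. 1.]
```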
8. Show that this pair of augmented matrices are row equivalent, assuming \(ad - bc \neq 0\):
$$\left(\begin{array}{rr|r}a &b &e \\ c &d &f \end{array}\right)\sim\left(\begin{array}{rr|r} 1 &0 &\frac{de-bf}{ad-bc} \\ 0 &1 &\frac{af-ce}{ad-bc}\end{array}\right)$$
9. Consider the augmented matrix:
$$\left(\begin{array}{rr|r}2 &-1 &3 \\ -6 &3 &1\end{array}\right)$$
Give a \(\textit{geometric}\) reason why the associated system of equations has no solution. (Hint, plot the three vectors given by the columns of this augmented matrix in the plane.) Given a general augmented matrix
$$\left(\begin{array}{rr|r}a &b &e \\ c &d &f\end{array}\right),$$
can you find a condition on the numbers \(a, b, c\) and \(d\) that corresponds to the geometric condition you found?
10. A relation \(\sim\) on a set of objects \(U\) is an \(\textit{equivalence relation}\) if the following three properties are satisfied:
• Reflexive: For any \(x \in U\), we have \(x \sim x\).
• Symmetric: For any \(x, y \in U\), if \(x \sim y\) then \(y \sim x\).
• Transitive: For any \(x, y\) and \(z \in U\), if \(x \sim y\) and \(y \sim z\) then \(x \sim z\).
Show that row equivalence of matrices is an example of an equivalence relation.
11. Equivalence of augmented matrices does not come from equality of their solution sets. Rather, we define two matrices to be equivalent if one can be obtained from the other by elementary row operations. Find a pair of augmented matrices that are not row equivalent but do have the same solution set.
|
By means of $\varepsilon$-$\delta$, I am looking for some ideas on how to prove (in a beginner math class) that a limit does not exist. For instance, consider the function
\begin{equation}f(x)=\begin{cases}x,&x>1\\3-x,&x\leq1,\end{cases}\end{equation}
and show that $\lim_{x\to1}f(x)$ does not exist.
I used to find the negation of $\varepsilon-\delta$ very tricky as an undergrad. It appears I still do because I am not 100% sure of this.
I reckon the limit does not exist if for every $L$ there exists an $\varepsilon>0$ such that for all $\delta>0$ there is an $x$ with $0<|x-1|<\delta$ but $|f(x)-L|\geq \varepsilon$.
If this is the correct negation, then for any $L$, choose $\varepsilon=1/3$; then, given $\delta>0$, we might have to pick $x$ according to what $L$ is.
Say $$x(L)=\begin{cases} 1-\min\{\delta/2,1/6\}, & \text{ if }L\leq 3/2 \\ 1+\min\{\delta/2,1/6\}, &\text{ if }L>3/2 \end{cases}.$$

Suppose that $L\leq 3/2$. With $x=1-\min\{\delta/2,1/6\}$, we have $|x-1|=\min\{\delta/2,1/6\}<\delta$, so $x$ is $\delta$-close to one. Then $f(x)=3-x=2+\min\{\delta/2,1/6\}\geq 2$, so $|f(x)-L|\geq 2-3/2=1/2\geq 1/3$.

Similarly for $L>3/2$, there exists a point $\delta$-close to $1$, namely $x=1+\min\{\delta/2,1/6\}$, with $f(x)=x\leq 7/6$, such that $|f(x)-L|\geq 3/2-7/6=1/3$.
I wouldn't be one bit surprised however if I have messed up the negation.
Seems pretty tough for a beginner maths class lol. If you end up with an answer which doesn't satisfy $|x-x_0|<\delta \implies |f(x)-L|<\varepsilon$, then you just say 'limit doesn't exist'.
It doesn't exist because the left limit at $x = 1$ is $2$ and right limit at $x = 1$ is $1$. If a proper limit existed there, left and right limits would be equal.
I guess there's no easy way of doing this. Here's the way I would proceed:
Assume the limit exists and is some number $L$. Then for $\varepsilon = \frac12$, there is a $\delta > 0$ such that $|f(x) - L|<\frac12$ for all $0<|x-1|<\delta$.

Now let $m := \frac12\min\{\delta, 1\}>0$, and pick $x_1 = 1-m$ and $x_2 = 1+m$. Then it is clear that $0<|x_1-1|<\delta$ and $0<|x_2-1|<\delta$.

Thus, $|f(x_1) - L| < \frac12$. Since $x_1\leq 1$, we have $f(x_1)=3-x_1=2+m$, so this means $|(2+m)-L| <\frac12$.

Also, we have that $|f(x_2) - L| < \frac12$. Since $x_2>1$, we have $f(x_2)=x_2=1+m$, so this means $|(1+m)-L|<\frac12$.

But then $1=|(2+m)-(1+m)|\le |(2+m)-L|+|L-(1+m)| < \frac12+\frac12=1$, which is a contradiction. Thus, the limit does not exist. (Note that $\varepsilon=1$ would be too generous here: any $L$ strictly between $1$ and $2$ lies within distance $1$ of both one-sided limits.)
|
I was studying variational methods in theoretical physics and I got stuck with a few simple questions. I have possible answers but I cannot see clearly and rigorously if they are correct.
Suppose we have an action $S$ that depends on two fields: an antisymmetric tensor field $T_{\mu \nu}$ and the spacetime metric $g_{\mu \nu}$. Now we vary the action to obtain the equations of motion:
$$ \delta S = \int \left[ \frac{\delta S}{\delta g_{\mu \nu}}\delta g_{\mu \nu} +\frac{\delta S}{\delta T_{\mu \nu}}\delta T_{\mu \nu} \right] d^D x . \tag{1} $$
I know that, due to the symmetry and antisymmetry of $g_{\mu \nu}$ and $T_{\mu \nu}$ respectively, the equations of motion should be symmetrized/antisymmetrized properly:
$$ \frac{\delta S}{\delta g_{(\mu \nu)}}=0 , \qquad \frac{\delta S}{\delta T_{[\mu \nu]}}=0 \tag{2} $$
But I don't see these symmetrizations (in books, articles, etc.) explicitly. I have always believed that the objects $\frac{\delta S}{\delta g_{\mu \nu}} $ and $\frac{\delta S}{\delta T_{\mu \nu}}$ do not have any particular (explicit) symmetry, and the symmetrizations in the equations of motion come from the $\delta g_{\mu \nu}$ and the $\delta T_{\mu \nu}$ that are multiplying in the variation. Am I wrong? (
Question 1)
Suppose that for some reason I prefer $T_\mu {}^\nu$ to be the "fundamental" field. Then,
$$ \delta S = \int \left[ \frac{\delta S}{\delta g_{\mu \nu}}\delta g_{\mu \nu} +\frac{\delta S}{\delta T_\mu {}^\nu}\delta T_\mu {}^\nu \right] d^D x . \tag{3} $$
The antisymmetrization in the second term of $(3)$ is not explicit now. The indices of $T$ are one up and the other down and I cannot do things like:
$$ \frac{\delta S}{\delta T_\mu {}^\nu}\delta T_\mu {}^\nu = \frac{\delta S}{\delta T_{\mu \rho} } g_{\rho \nu} \delta T_\mu {}^\nu, \tag{4}$$
$$ \frac{\delta S}{\delta T_\mu {}^\nu} \delta (g^{\rho \nu} T_{\mu \rho}) = \frac{\delta S}{\delta T_\mu {}^\nu} g^{\rho \nu} \delta T_{\mu \rho} , \tag{5}$$
Because: in $(4)$ this metric would have to be affected by the partial derivatives that are within the variation $\frac{\delta S}{\delta T_\mu {}^\nu}$; and, in $(5)$, we are forgetting a term $\sim\delta g^{\rho \nu}$. Correct me if I am wrong (Question 2).
I think I should use the constraint:
$$ T_\mu {}^\nu = - T^\nu{} _\mu = - g^{\nu \rho} g_{\mu \tau} T_\rho {}^\tau .\tag{6}$$
But I do not know how. The confusing fact for me is that it depends on the other field, the metric. Any ideas? (Question 3)
Thanks!
|
Prince Rupert's Cube
Let $C$ be a unit cube. Then the largest cube that can be passed through a (suitably cut) hole in $C$ has edge length:
$\dfrac {3 \sqrt 2} 4 = 1 \cdotp 06066 \, 0 \ldots$

Proof

Source of Name
This entry was named for Prince Rupert of the Rhine.
The correct answer was determined by Pieter Nieuwland.
This provides a solution of $\sqrt 6 - \sqrt 2 \approx 1 \cdotp 03527$.
Sources

1950: D.J.E. Schrek: Prince Rupert's Problem and its Extension by Pieter Nieuwland (Scripta Math. Vol. 16: 73 – 80)
1950: D.J.E. Schrek: Prince Rupert's Problem and its Extension by Pieter Nieuwland (Scripta Math. Vol. 16: 261 – 267)
|
In more than two dimensions we use a similar definition, based on the fact that all eigenvalues of the coefficient matrix have the same sign (for an elliptic equation), have different signs (hyperbolic) or one of them is zero (parabolic). This has to do with the behavior along the characteristics, as discussed below.
Let me give a slightly more complex example
\[x^2\frac{\partial^2 u}{\partial x^2} + y^2\frac{\partial^2 u}{\partial y^2} + z^2\frac{\partial^2 u}{\partial z^2}+2 xy\frac{\partial^2 u}{\partial x \partial y}+2 xz\frac{\partial^2 u}{\partial x \partial z}+2 yz\frac{\partial^2 u}{\partial y \partial z}=0.\]
The matrix associated with this equation is \[\left(\begin{array}{lll} x^2 & xy & xz \\ xy & y^2 & yz \\ xz & yz & z^2 \end{array}\right)\]
If we evaluate its characteristic polynomial we find that it is \[\lambda^2 (x^2+y^2+z^2-\lambda)=0.\] Since this always (for all \(x,y,z\)) has two zero eigenvalues, this is a parabolic differential equation.
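This sign-of-eigenvalues test is easy to automate; here is a small sketch (the tolerance is an assumption to absorb round-off):

```python
# Classify a second-order operator at a point by the eigenvalue signs of its
# symmetric coefficient matrix.
import numpy as np

def classify(A, tol=1e-12):
    lam = np.linalg.eigvalsh(A)
    pos, neg = int(np.sum(lam > tol)), int(np.sum(lam < -tol))
    if pos + neg < len(lam):
        return "parabolic"                 # at least one zero eigenvalue
    if pos == len(lam) or neg == len(lam):
        return "elliptic"                  # all eigenvalues share one sign
    return "hyperbolic"                    # mixed signs

x, y, z = 1.0, 2.0, -1.0
A = np.outer([x, y, z], [x, y, z])         # coefficient matrix of the example
print(classify(A))                         # -> "parabolic" (two zero eigenvalues)
```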
Characteristics and Classification
A key point for classifying equations this way is not that we like the conic sections so much, but that the equations behave in very different ways if we look at the three different cases. Pick the simplest representative case for each class, and look at the lines of propagation.
|
Let us reformulate OP's question as follows:
Give a proof that a local coordinate transformation $x^{\mu} \to y^{\rho}=y^{\rho}(x)$ between two local coordinate systems (on a 3+1 dimensional Lorentzian manifold) must be affine if the metric $g_{\mu\nu}$ in both coordinate systems happen to be on constant flat Minkowski form $\eta_{\mu\nu}$.
Here we will present a proof that works both with Minkowski and Euclidean signature; in fact for any signature and for any finite non-zero number of dimensions, as long as the metric $g_{\mu\nu}$ is invertible.
1) Let us first recall the transformation property of the inverse metric $g^{\mu\nu}$, which is a contravariant $(2,0)$ symmetric tensor,
$$ \frac{\partial y^{\rho}}{\partial x^{\mu}} g^{\mu\nu}_{(x)}\frac{\partial y^{\sigma}}{\partial x^{\nu}}~=~g^{\rho\sigma}_{(y)}, $$
where $x^{\mu} \to y^{\rho}=y^{\rho}(x)$ is a local coordinate transformation. Recall that the metric $g_{\mu\nu}=\eta_{\mu\nu}$ is the flat constant metric in both coordinate systems. So we can write
$$ \frac{\partial y^{\rho}}{\partial x^{\mu}} \eta^{\mu\nu}\frac{\partial y^{\sigma}}{\partial x^{\nu}}~=~\eta^{\rho\sigma}. \qquad (1) $$
2) Let us assume that the local coordinate transformation is real analytic
$$y^{\rho} ~=~ a^{(0)\rho} + a^{(1)\rho}_{\mu} x^{\mu} + \frac{1}{2} a^{(2)\rho}_{\mu\nu}x^{\mu}x^{\nu} + \frac{1}{3!} a^{(3)\rho}_{\mu\nu\lambda}x^{\mu} x^{\nu} x^{\lambda} + \ldots. $$
By possibly performing an appropriate translation we will from now on assume without loss of generality that the constant shift $ a^{(0)\rho} =0 $ is zero.
3) To the zeroth order in $x$, the equation $(1)$ reads
$$ a^{(1)\rho}_{\mu} \eta^{\mu\nu}a^{(1)\sigma}_{\nu}~=~\eta^{\rho\sigma}, $$
which not surprisingly says that the matrix $a^{(1)\rho}_{\mu}$ is a Lorentz (or an orthogonal) matrix, respectively. By possibly performing an appropriate "rotation", we will from now on assume without loss of generality that the constant matrix
$$ a^{(1)\rho}_{\mu}~=~\delta^{\rho}_{\mu} $$
is the unit matrix.
4) In the following, it will be convenient to lower the index of the $y^{\sigma}$ coordinate as
$$y_{\rho}~:=~\eta_{\rho\sigma}y^{\sigma}.$$
Then the local coordinate transformation becomes
$$y_{\rho} ~=~ \eta_{\rho\mu} x^{\mu} + \frac{1}{2} a^{(2)}_{\rho,\mu\nu}x^{\mu}x^{\nu} + \frac{1}{3!} a^{(3)}_{\rho,\mu\nu\lambda}x^{\mu} x^{\nu} x^{\lambda}+ \ldots$$$$+\frac{1}{n!} a^{(n)}_{\rho,\mu_1\ldots\mu_n}x^{\mu_1} \cdots x^{\mu_n}+ \ldots. $$
5) To the first order in $x$, the equation $(1)$ reads
$$ a^{(2)}_{\rho,\sigma\mu}+a^{(2)}_{\sigma,\rho\mu}~=~0.$$
That is, $a^{(2)}_{\rho,\mu\nu}$ is symmetric in $\mu\leftrightarrow \nu$, but antisymmetric in $\rho\leftrightarrow \mu$. It is not hard to see (by applying the symmetry and the antisymmetry property in alternating order three times each), that the second order coefficients $a^{(2)}_{\rho,\mu\nu}=0$ must vanish.
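Explicitly, applying the two properties in alternating order gives
$$ a^{(2)}_{\rho,\mu\nu}~=~a^{(2)}_{\rho,\nu\mu}~=~-a^{(2)}_{\nu,\rho\mu}~=~-a^{(2)}_{\nu,\mu\rho}~=~a^{(2)}_{\mu,\nu\rho}~=~a^{(2)}_{\mu,\rho\nu}~=~-a^{(2)}_{\rho,\mu\nu}, $$
so that $2a^{(2)}_{\rho,\mu\nu}=0$.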
6) To the second order in $x$, the equation $(1)$ reads
$$ a^{(3)}_{\rho,\sigma\mu\nu}+a^{(3)}_{\sigma,\rho\mu\nu}~=~0.$$
That is, $a^{(3)}_{\rho,\mu\nu\lambda}$ is symmetric in $\mu\leftrightarrow \nu\leftrightarrow \lambda $, but antisymmetric in $\rho\leftrightarrow \mu$. For fixed $\lambda$, we can again reach the conclusion $a^{(3)}_{\rho,\mu\nu\lambda}=0$.
7) Similarly, we conclude inductively that the higher order coefficients $a^{(n)}_{\rho,\mu_1\ldots\mu_n}=0$ must vanish as well. So $y^{\mu}= x^{\mu}$. Q.E.D.
|
You have two classes of points. Instead of managing them in two sets, one simply assigns each point in the first class the value $-1$ and each point in the second class the value $+1$. So in fact you have point-value pairs $(x_i,y_i)$. To classify future points in a consistent way, you now want to construct a function $f(x)$ that satisfies not exactly $f(x_i)=y_i$, as in interpolation, but the still sufficient conditions $f(x_i)\le -1$ for points in the first class and $f(x_i)\ge +1$ for points in the second class. These two kinds of inequalities can be compressed into a single family of inequalities by multiplying with the sign $y_i$,
$$y_if(x_i)\ge 1$$
for all $i=1,...,N$, where $N$ is the number of training points.
"for all" has the symbolic sign $\forall$, an inverted letter "A". The inverted letter "E", $\exists$, is the symbol for "exists".
Now to find such a function, you select a parametrized class of functions $f(w,x)+b$ with some parameter vector $(w,b)$ and strive for a compromise between having a simple form of $f$ and attaining equality, $f(w,x_i)+b=y_i$ (which defines the support vectors), on as many training points as possible. Simplicity includes that the parameters in $w$ are small numbers.
So we come to the linear SVM, where $f(w,x)=w^Tx$ and minimal parameters means minimizing $\|w\|_2^2=w^Tw$.
In optimization, this task is encoded via a Lagrange function
$$L(w,b,α)=\tfrac12\|w\|_2^2-\sum_{i=1}^Nα_i(y_i(w^Tx_i+b)-1)$$
with the restriction $α_i\ge 0$.
Standard optimization techniques solve this problem via its KKT system.\begin{align}0=\frac{\partial L}{\partial w}&=w-\sum_{i=1}^Nα_iy_ix_i\\0=\frac{\partial L}{\partial b}&=-\sum_{i=1}^Nα_i y_i\\α_i&\ge 0\\y_i(w^Tx_i+b)-1&\ge 0\\α_i\,(y_i(w^Tx_i+b)-1)&=0\end{align}
The last three conditions again hold for all $i$. They can be combined using NCP functions like
$$N(u,v)=2uv-(u+v)_-^2$$
with $(u+v)_-=\min(0,u+v)$ to one condition per $i$
$$N(α_i,\, y_i(w^Tx_i+b)-1)=0.$$
This now is smooth enough so that Newton's method or quasi-Newton methods may be applied.
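As an illustrative cross-check on toy data (the points below are assumptions; scikit-learn's SVC with a linear kernel is one off-the-shelf solver for this quadratic program):

```python
# Fit a (nearly) hard-margin linear SVM on toy data and inspect w, b and the
# support vectors, i.e. the points with y_i (w.x_i + b) = 1.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1., 1.], [2., 2.5], [3., 1.],
              [-1., -1.], [-2., -2.5], [-3., -1.]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)   # large C approximates the hard margin
clf.fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
print(w, b)
print(clf.support_vectors_)
```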
|
For the 2018 version of our NMR Mandhala (working towards higher homogeneity along the cylindrical axis), I did a close reading of Soltner and Blumler’s 2010 article “Dipolar Halbach Magnet Stacks Made from Identically Shaped Permanent Magnets for Magnetic Resonance”. Below are some particularly useful findings.
Useful Information Regarding Comparing Finite Sized Magnets with Theory:
The magnetic field of the final magnet is, to a very good approximation, the sum of the fields of its pieces. (The approximation is still fairly good when you substitute each of the pieces with an ideal magnetic dipole, which is what is used for getting analytic results.)
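A toy numerical version of this dipole-sum approximation is below; the ring radius and the ideal Halbach ($k=1$) orientation pattern are assumptions for illustration, so the number it prints is not the paper's estimate quoted next.

```python
# Sum of ideal point-dipole fields for an n=8 Halbach-type ring (illustrative).
import numpy as np

mu0 = 4e-7 * np.pi
Br, side = 1.48, 0.0127                  # N52 remanence [T]; 1/2" cube edge [m]
m = Br * side**3 / mu0                   # dipole moment of one cube [A m^2]
n, r = 8, 0.03                           # ring of 8 magnets, radius 3 cm (assumed)

B = np.zeros(3)
for i in range(n):
    phi = 2 * np.pi * i / n
    pos = r * np.array([np.cos(phi), np.sin(phi), 0.0])
    mom = m * np.array([np.cos(2 * phi), np.sin(2 * phi), 0.0])  # Halbach k=1
    rhat = -pos / r                      # unit vector from dipole to the center
    B += mu0 / (4 * np.pi * r**3) * (3 * np.dot(mom, rhat) * rhat - mom)

print(np.linalg.norm(B))                 # flux density at the center [T]
```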
For n=8 N52 ½" cubes with remanent magnetization $B_r = 1.48$ T, the analytic estimate of the flux density at the center would be $0.337613 \cdot 1.48 $ T $\approx 0.49967 $ T (for cubes as close to each other as possible).
Typically the analytic and numeric results (using FEMM) differ by about 10%, since the magnetization inside the magnets is reduced by the magnetic interactions of neighboring magnetic material.
The analytic results also over-estimate because they use a dipole approximation in which the size of the magnets is neglected.
Regarding NMR Mandhala Stacks
For an NMR Mandhala stack of 2, the ideal placement of the stacks is similar to the construction of a Helmholtz coil: $s_1/r = \pm 1/\sqrt{6} \approx \pm 0.408$, where $s_1$ is the distance between the stacks (measured from the center of each stack) and $r$ is the radius of the ring measured to the center of the magnets in each stack.
For our stack of 2 n=8 N52 ½" cube Mandhalas, $r = 3$ cm, and the optimal distance would be 0.48 inches, which is smaller than the actual physical size of the magnets.
Flux density at the center of two Mandhalas $\approx 1.36 \cdot$ flux density of single Mandhala (we got closer to $1.46 \cdot$ flux density of single stack).
For multiple stacked Mandhalas, the normalized distance between stacks is $0.44 \cdot r$ for the inner stacks. The outer stacks need to be made a bit closer to make up for them being at the ends.
The width of the $\Delta B/B = 10^{-4}$ homogeneous region is $\pm 0.12 \cdot r$.
“Ultimate homogeneity value for given magnet shape is reached when two magnets touch each other. Their distance as measured from their centers cannot be smaller than their size”
For the reasons above, it made sense to try to move to longer magnets, to both boost the field of the NMR Mandhala and improve homogeneity along the cylindrical axis. We found 2"-long ½" × ½" N52 magnets and switched to those for the second version of our NMR Mandhala (and kept the center Mandhala the same, since those magnets do not come in any size longer than 1").
|
Basically 2 strings, $a>b$, go into the first box, which does division to output $b,r$ such that $a = bq + r$ and $r<b$; then you check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box..
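That box diagram is just the Euclidean algorithm; as code (a minimal sketch):

```python
# Repeated division: a = b*q + r, then feed (b, r) back in until r == 0.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b   # the "division box" plus the r == 0 check
    return a

print(gcd(252, 105))  # -> 21
```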
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
|
Using basic circuit analysis techniques we can find the voltage gain of this basic integrator as follows:
\$i_1=\frac{v_I}{R}\quad\text{and}\quad i_2=-C\frac{dv_O}{dt}\\\text{since:}\quad i_1=i_2 \ \rightarrow \ \frac{v_I}{R}=-C\frac{dv_O}{dt}\$
From this we can derive the output voltage to be: \$v_O(t)=-\frac{1}{RC}\int_{0}^{t}v_I\,dt+v_O(0)\$
If we look at it in the s domain, we can easily find the voltage gain of the circuit to be:
\$G_v=\frac{v_O}{v_I}=-\frac{1}{sRC}\$
This was easy enough. The only problem is, this is only valid if the input signal is a sine wave. Granted, the gain will approximate this value if the input signal is a square wave and it will be even closer if the input is a triangle wave, but it will not be 100% correct.
So my question: how can we modify this relationship to solve for the output voltage or gain of the circuit if the input signal is a square wave? I would think that since a square wave is composed of a sine wave at the fundamental frequency plus a number of odd-order harmonics, there must be a way to sum these contributions and solve it more accurately.
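Numerically, that harmonic-summing idea looks like the sketch below (component values and the fundamental frequency are assumed for illustration): each odd harmonic \$\frac{4}{\pi k}\sin(k\omega_0 t)\$ of a unit square wave is passed through \$G(j\omega)=-\frac{1}{j\omega RC}\$, which turns it into \$\frac{1}{k\omega_0 RC}\cos(k\omega_0 t)\$, and the partial sums converge to the expected triangle wave.

```python
# Ideal-integrator response to a square wave, built harmonic by harmonic.
# R, C and f0 are assumed values, not from the question.
import numpy as np

R, C, f0 = 10e3, 100e-9, 1e3           # 10 kOhm, 100 nF, 1 kHz fundamental
t = np.linspace(0.0, 2e-3, 2000)

v_in = np.zeros_like(t)
v_out = np.zeros_like(t)
for k in range(1, 100, 2):             # odd harmonics of the unit square wave
    w = 2 * np.pi * k * f0
    a_k = 4 / (np.pi * k)
    v_in += a_k * np.sin(w * t)
    v_out += a_k / (w * R * C) * np.cos(w * t)  # -(1/RC) * integral of the sine
# v_out approaches a triangle wave as more harmonics are included
```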
|
Problem - need help for part (ii)
Let $\vec{F} = y \vec{i} -x \vec{j} + z \vec{k}$ and let the surface $S$ be the part of the paraboloid $ z = 4 - x^2 - y^2$ with $z \geq 0 $, oriented with $\vec{n}$ upwards. Calculate the flux integral $\int_S \vec{F} \cdot d\vec{S}$ using
i) Cartesian coordinates
ii) cylindrical coordinates
My attempt for (i)
We parameterize the surface $S$ using Cartesian coordinates, $$\vec{r}(u,v) = (u,v, 4 - u^2 - v^2), $$ and let $f(x,y,z) = x^2 + y^2 + z$. The normal to the surface $S$, $\vec{n}$, is found by computing grad(f), i.e. $$\vec{n} = \nabla f(x,y,z) = (2x,2y,1) = (2u,2v,1). $$ Time to evaluate the integral, \begin{align*} \int_S \vec{F} \cdot d\vec{S} & = \int_S \vec{F}(\vec{r}(u,v)) \cdot \vec{n} dS \\ & =\int_S (v,-u,4-u^2-v^2) \cdot (2u,2v,1) dS \\ & = \int_S (4 - u^2 - v^2) dS. \tag{1} \end{align*} Since the surface is defined as a paraboloid, we will use cylindrical coordinates to find out the limits of integration, \begin{align} u &= r\cos(\theta), \\ v &= r\sin(\theta), \\ z &= z. \end{align} As $z \geq 0$, then we have $u^2 +v^2 \leq 4$ which implies that the $ 0 \leq r \leq 2$.
Now the integral becomes, $$\int_0^{2 \pi} \int_0^2 (4-r^2)r dr d\theta = \dots = 8\pi. $$
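(As a quick symbolic cross-check of this value, assuming the setup above:)

```python
import sympy as sp

r, th = sp.symbols('r theta', nonnegative=True)
flux = sp.integrate((4 - r**2) * r, (r, 0, 2), (th, 0, 2 * sp.pi))
print(flux)   # -> 8*pi
```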
A predicament - SOLVED - See first comment below.
Let's come back to $(1)$. To find the limits, I originally did this:
As $z \geq 0$, we have $u^2 + v^2 \leq 4$ which implies that $$-\sqrt{4-u^2} \leq v \leq \sqrt{4-u^2} \\ 0 \leq u \leq 2.$$ Putting this into the integral gives an answer of $4\pi$. What was wrong with this reasoning for finding the limits of integration?
For part (ii)
I'm going to skip the details, but using polar coordinates gives $$ \vec{n} = (2r\cos(\theta), 2r\sin(\theta), 1), \qquad \vec{F}(\vec{x}(r,\theta)) = (r\sin(\theta), -r\cos(\theta), 4-r^2). $$
Substitute the findings into the integral, \begin{align*} \int_S \vec{F} \cdot \vec{n} dS & = \int_S (4-r^2) dS \end{align*}
Here is the problem: what is $dS$? If I understand correctly, it was $dx\,dy$, but after the cylindrical coordinate parameterization it changes to $dr\,d\theta$. Then does that mean I'm missing the Jacobian, $\left| \dfrac{\partial (x,y)}{\partial (r,\theta)}\right| = r$, since I changed the variables?
If so, it gives the same answer. I feel that I have reasoned correctly. Could anyone please point out any errors in my reasoning or confirm that I'm correct?
Many thanks
|
The first observation of top quark production in proton-nucleus collisions is reported using proton-lead data collected by the CMS experiment at the CERN LHC at a nucleon-nucleon center-of-mass energy of $\sqrt{s_\mathrm{NN}} =$ 8.16 TeV. The measurement is performed using events with exactly one isolated electron or muon and at least four jets. The data sample corresponds to an integrated luminosity of 174 nb$^{-1}$. The significance of the $\mathrm{t}\overline{\mathrm{t}}$ signal against the background-only hypothesis is above five standard deviations. The measured cross section is $\sigma_{\mathrm{t}\overline{\mathrm{t}}} =$ 45$\pm$8 nb, consistent with predictions from perturbative quantum chromodynamics.
Measurements of two- and multi-particle angular correlations in pp collisions at $\sqrt{s} =$ 5, 7, and 13 TeV are presented as a function of charged-particle multiplicity. The data, corresponding to integrated luminosities of 1.0 pb$^{-1}$ (5 TeV), 6.2 pb$^{-1}$ (7 TeV), and 0.7 pb$^{-1}$ (13 TeV), were collected using the CMS detector at the LHC. The second-order ($v_2$) and third-order ($v_3$) azimuthal anisotropy harmonics of unidentified charged particles, as well as $v_2$ of $\mathrm{K}^0_\mathrm{S}$ and $\Lambda/\bar{\Lambda}$ particles, are extracted from long-range two-particle correlations as functions of particle multiplicity and transverse momentum. For high-multiplicity pp events, a mass ordering is observed for the $v_2$ values of charged hadrons (mostly pions), $\mathrm{K}^0_\mathrm{S}$, and $\Lambda/\bar{\Lambda}$, with lighter particle species exhibiting a stronger azimuthal anisotropy signal below $p_\mathrm{T} \approx$ 2 GeV/$c$. For 13 TeV data, the $v_2$ signals are also extracted from four- and six-particle correlations for the first time in pp collisions, with comparable magnitude to those from two-particle correlations. These observations are similar to those seen in pPb and PbPb collisions, and support the interpretation of a collective origin for the observed long-range correlations in high-multiplicity pp collisions.
Measurements are presented of the associated production of a W boson and a charm-quark jet (W + c) in pp collisions at a center-of-mass energy of 7 TeV. The analysis is conducted with a data sample corresponding to a total integrated luminosity of 5 inverse femtobarns, collected by the CMS detector at the LHC. W boson candidates are identified by their decay into a charged lepton (muon or electron) and a neutrino. The W + c measurements are performed for charm-quark jets in the kinematic region $p_T^{jet} \gt$ 25 GeV, $|\eta^{jet}| \lt$ 2.5, for two different thresholds for the transverse momentum of the lepton from the W-boson decay, and in the pseudorapidity range $|\eta^{\ell}| \lt$ 2.1. Hadronic and inclusive semileptonic decays of charm hadrons are used to measure the following total cross sections: $\sigma(pp \to W + c + X) \times B(W \to \ell \nu)$ = 107.7 +/- 3.3 (stat.) +/- 6.9 (syst.) pb ($p_T^{\ell} \gt$ 25 GeV) and $\sigma(pp \to W + c + X) \times B(W \to \ell \nu)$ = 84.1 +/- 2.0 (stat.) +/- 4.9 (syst.) pb ($p_T^{\ell} \gt$ 35 GeV), and the cross section ratios $\sigma(pp \to W^+ + \bar{c} + X)/\sigma(pp \to W^- + c + X)$ = 0.954 +/- 0.025 (stat.) +/- 0.004 (syst.) ($p_T^{\ell} \gt$ 25 GeV) and $\sigma(pp \to W^+ + \bar{c} + X)/\sigma(pp \to W^- + c + X)$ = 0.938 +/- 0.019 (stat.) +/- 0.006 (syst.) ($p_T^{\ell} \gt$ 35 GeV). Cross sections and cross section ratios are also measured differentially with respect to the absolute value of the pseudorapidity of the lepton from the W-boson decay. These are the first measurements from the LHC directly sensitive to the strange quark and antiquark content of the proton. Results are compared with theoretical predictions and are consistent with the predictions based on global fits of parton distribution functions.
A search for narrow resonances in the dijet mass spectrum is performed using data corresponding to an integrated luminosity of 2.9 inverse pb collected by the CMS experiment at the LHC. Upper limits at the 95% confidence level (CL) are presented on the product of the resonance cross section, branching fraction into dijets, and acceptance, separately for decays into quark-quark, quark-gluon, or gluon-gluon pairs. The data exclude new particles predicted in the following models at the 95% CL: string resonances, with mass less than 2.50 TeV, excited quarks, with mass less than 1.58 TeV, and axigluons, colorons, and E_6 diquarks, in specific mass intervals. This extends previously published limits on these models.
The production of jets associated to bottom quarks is measured for the first time in PbPb collisions at a center-of-mass energy of 2.76 TeV per nucleon pair. Jet spectra are reported in the transverse momentum ($p_\mathrm{T}$) range of 80-250 GeV, and within pseudorapidity $|\eta| < 2$. The nuclear modification factor ($R_\mathrm{AA}$) calculated from these spectra shows a strong suppression in the b-jet yield in PbPb collisions relative to the yield observed in pp collisions at the same energy. The suppression persists to the largest values of $p_\mathrm{T}$ studied, and is centrality dependent. The $R_\mathrm{AA}$ is about 0.4 in the most central events, similar to previous observations for inclusive jets. This implies that jet quenching does not have a strong dependence on parton mass and flavor in the jet $p_\mathrm{T}$ range studied.
A search for neutral Higgs bosons in the minimal supersymmetric extension of the standard model (MSSM) decaying to tau-lepton pairs in pp collisions is performed, using events recorded by the CMS experiment at the LHC. The dataset corresponds to an integrated luminosity of 24.6 fb$^{−1}$, with 4.9 fb$^{−1}$ at 7 TeV and 19.7 fb$^{−1}$ at 8 TeV. To enhance the sensitivity to neutral MSSM Higgs bosons, the search includes the case where the Higgs boson is produced in association with a b-quark jet. No excess is observed in the tau-lepton-pair invariant mass spectrum. Exclusion limits are presented in the MSSM parameter space for different benchmark scenarios, m$_{h}^{max}$ , m$_{h}^{mod +}$ , m$_{h}^{mod −}$ , light-stop, light-stau, τ-phobic, and low-m$_{H}$. Upper limits on the cross section times branching fraction for gluon fusion and b-quark associated Higgs boson production are also given.
Measurements of the differential production cross sections in transverse momentum and rapidity for B0 mesons produced in pp collisions at sqrt(s) = 7 TeV are presented. The dataset used was collected by the CMS experiment at the LHC and corresponds to an integrated luminosity of 40 inverse picobarns. The production cross section is measured from B0 meson decays reconstructed in the exclusive final state J/Psi K-short, with the subsequent decays J/Psi to mu^+ mu^- and K-short to pi^+ pi^-. The total cross section for pt(B0) > 5 GeV and |y(B0)| < 2.2 is measured to be 33.2 +/- 2.5 +/- 3.5 microbarns, where the first uncertainty is statistical and the second is systematic.
The Upsilon production cross section in proton-proton collisions at sqrt(s) = 7 TeV is measured using a data sample collected with the CMS detector at the LHC, corresponding to an integrated luminosity of 3.1 +/- 0.3 inverse picobarns. Integrated over the rapidity range |y|<2, we find the product of the Upsilon(1S) production cross section and branching fraction to dimuons to be sigma(pp to Upsilon(1S) X) B(Upsilon(1S) to mu+ mu-) = 7.37 +/- 0.13^{+0.61}_{-0.42}\pm 0.81 nb, where the first uncertainty is statistical, the second is systematic, and the third is associated with the estimation of the integrated luminosity of the data sample. This cross section is obtained assuming unpolarized Upsilon(1S) production. If the Upsilon(1S) production polarization is fully transverse or fully longitudinal the cross section changes by about 20%. We also report the measurement of the Upsilon(1S), Upsilon(2S), and Upsilon(3S) differential cross sections as a function of transverse momentum and rapidity.
A search for Z bosons in the mu^+mu^- decay channel has been performed in PbPb collisions at a nucleon-nucleon centre-of-mass energy of 2.76 TeV with the CMS detector at the LHC, in a 7.2 inverse microbarn data sample. The number of opposite-sign muon pairs observed in the 60--120 GeV/c2 invariant mass range is 39, corresponding to a yield per unit of rapidity (y) and per minimum bias event of (33.8 ± 5.5 (stat) ± 4.4 (syst)) x 10^{-8} in the |y|<2.0 range. The rapidity, transverse momentum, and centrality dependences are also measured. The results agree with next-to-leading-order QCD calculations, scaled by the number of incoherent nucleon-nucleon collisions.
A measurement of the J/psi and psi(2S) production cross sections in pp collisions at sqrt(s)=7 TeV with the CMS experiment at the LHC is presented. The data sample corresponds to an integrated luminosity of 37 inverse picobarns. Using a fit to the invariant mass and decay length distributions, production cross sections have been measured separately for prompt and non-prompt charmonium states, as a function of the meson transverse momentum in several rapidity ranges. In addition, cross sections restricted to the acceptance of the CMS detector are given, which are not affected by the polarization of the charmonium states. The ratio of the differential production cross sections of the two states, where systematic uncertainties largely cancel, is also determined. The branching fraction of the inclusive B to psi(2S) X decay is extracted from the ratio of the non-prompt cross sections to be: BR(B to psi(2S) X) = (3.08 +/- 0.12 (stat.+syst.) +/- 0.13 (theor.) +/- 0.42 (BR[PDG])) x 10^-3.
Isolated photon production is measured in proton-proton and lead-lead collisions at nucleon-nucleon centre-of-mass energies of 2.76 TeV in the pseudorapidity range |eta|<1.44 and transverse energies ET between 20 and 80 GeV with the CMS detector at the LHC. The measured ET spectra are found to be in good agreement with next-to-leading-order perturbative QCD predictions. The ratio of PbPb to pp isolated photon ET-differential yields, scaled by the number of incoherent nucleon-nucleon collisions, is consistent with unity for all PbPb reaction centralities.
The prompt D0 meson azimuthal anisotropy coefficients, v[2] and v[3], are measured at midrapidity (abs(y) < 1.0) in PbPb collisions at a center-of-mass energy sqrt(s[NN]) = 5.02 TeV per nucleon pair with data collected by the CMS experiment. The measurement is performed in the transverse momentum (pT) range of 1 to 40 GeV/c, for central and midcentral collisions. The v[2] coefficient is found to be positive throughout the pT range studied. The first measurement of the prompt D0 meson v[3] coefficient is performed, and values up to 0.07 are observed for pT around 4 GeV/c. Compared to measurements of charged particles, a similar pT dependence, but smaller magnitude for pT < 6 GeV/c, is found for prompt D0 meson v[2] and v[3] coefficients. The results are consistent with the presence of collective motion of charm quarks at low pT and a path length dependence of charm quark energy loss at high pT, thereby providing new constraints on the theoretical description of the interactions between charm quarks and the quark-gluon plasma.
The transverse momentum (pt) spectrum of prompt D0 mesons and their antiparticles has been measured via the hadronic decay channels D0 to K- pi+ and D0-bar to K+ pi- in pp and PbPb collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair with the CMS detector at the LHC. The measurement is performed in the D0 meson pt range of 2-100 GeV and in the rapidity range of abs(y)<1. The pp (PbPb) dataset used for this analysis corresponds to an integrated luminosity of 27.4 inverse picobarns (530 inverse microbarns). The measured D0 meson pt spectrum in pp collisions is well described by perturbative QCD calculations. The nuclear modification factor, comparing D0 meson yields in PbPb and pp collisions, was extracted for both minimum-bias and the 10% most central PbPb interactions. For central events, the D0 meson yield in the PbPb collisions is suppressed by a factor of 5-6 compared to the pp reference in the pt range of 6-10 GeV. For D0 mesons in the high-pt range of 60-100 GeV, a significantly smaller suppression is observed. The results are also compared to theoretical calculations.
A search for supersymmetry is presented based on proton-proton collision events containing identified hadronically decaying top quarks, no leptons, and an imbalance pTmiss in transverse momentum. The data were collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV, and correspond to an integrated luminosity of 35.9 fb−1. Search regions are defined in terms of the multiplicity of bottom quark jet and top quark candidates, the pTmiss, the scalar sum of jet transverse momenta, and the mT2 mass variable. No statistically significant excess of events is observed relative to the expectation from the standard model. Lower limits on the masses of supersymmetric particles are determined at 95% confidence level in the context of simplified models with top quark production. For a model with direct top squark pair production followed by the decay of each top squark to a top quark and a neutralino, top squark masses up to 1020 GeV and neutralino masses up to 430 GeV are excluded. For a model with pair production of gluinos followed by the decay of each gluino to a top quark-antiquark pair and a neutralino, gluino masses up to 2040 GeV and neutralino masses up to 1150 GeV are excluded. These limits extend previous results.
A measurement of the exclusive two-photon production of muon pairs in proton-proton collisions at sqrt(s)= 7 TeV, pp to p mu^+ mu^- p, is reported using data corresponding to an integrated luminosity of 40 inverse picobarns. For muon pairs with invariant mass greater than 11.5 GeV, transverse momentum pT(mu) > 4 GeV and pseudorapidity |eta(mu)| < 2.1, a fit to the dimuon pt(mu^+ mu^-) distribution results in a measured cross section of sigma(pp to p mu^+ mu^- p) = 3.38 [+0.58 -0.55] (stat.) +/- 0.16 (syst.) +/- 0.14 (lumi.) pb, consistent with the theoretical prediction evaluated with the event generator Lpair. The ratio to the predicted cross section is 0.83 [+0.14-0.13] (stat.) +/- 0.04 (syst.) +/- 0.03 (lumi.). The characteristic distributions of the muon pairs produced via photon-photon fusion, such as the muon acoplanarity, the muon pair invariant mass and transverse momentum agree with those from the theory.
Measurements of the differential cross sections for the production of exactly four jets in proton-proton collisions are presented as a function of the transverse momentum pt and pseudorapidity eta, together with the correlations in azimuthal angle and the pt balance among the jets. The data sample was collected in 2010 at a center-of-mass energy of 7 TeV with the CMS detector at the LHC, with an integrated luminosity of 36 inverse picobarns. The cross section for a final state with a pair of hard jets with pt > 50 GeV and another pair with pt > 20 GeV within abs(eta) < 4.7 is measured to be sigma = 330 +- 5 (stat.) +- 45 (syst.) nb. It is found that fixed-order matrix element calculations including parton showers describe the measured differential cross sections in some regions of phase space only, and that adding contributions from double parton scattering brings the Monte Carlo predictions closer to the data.
|
Newform invariants
Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form.
Basis of coefficient ring in terms of a root \(\nu\) of \(x^{3} - x^{2} - 3x + 1\):
\(\beta_{0} = 1\), \(\beta_{1} = \nu\), \(\beta_{2} = \nu^{2} - \nu - 2\)
\(1 = \beta_{0}\), \(\nu = \beta_{1}\), \(\nu^{2} = \beta_{2} + \beta_{1} + 2\)
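The second line is just the inversion of the first: substituting \(\beta_1 = \nu\) into \(\beta_2 = \nu^{2} - \nu - 2\) and solving for \(\nu^{2}\) gives
\[
\nu^{2} = \beta_{2} + \nu + 2 = \beta_{2} + \beta_{1} + 2 .
\]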
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below.
This newform does not admit any (nontrivial) inner twists.
\( p \) | Sign
\( 2 \) | \( -1 \)
\( 3 \) | \( -1 \)
\( 167 \) | \( 1 \)
This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(8016))\):
\(T_{5}^{3} + 3T_{5}^{2} - T_{5} - 1\)
\(T_{7}^{3} - 5T_{7}^{2} - T_{7} + 1\)
\(T_{11}^{3} - 28T_{11} - 52\)
\(T_{13}^{3} + 8T_{13}^{2} + 12T_{13} + 4\)
|
I have been reading a recent paper. In it, the authors performed molecular dynamics (MD) simulations of parallel-plate supercapacitors, in which liquid resides between the parallel-plate electrodes. To simplify the situation, let us suppose that the liquid between the electrodes is liquid argon.
The system has a "slab" geometry, so the authors are only interested in variations of the liquid structure along the $z$ direction. Thus, the authors compute the particle number densities averaged over $x$ and $y$: $\bar{n}_\alpha(z)$, where $\alpha$ is a solvent species. (That is, in my simplified example, $\alpha$ is argon -- an argon atom.) $\bar{n}_\alpha(z)$ has dimensions of $\frac{\text{number}}{\text{length}^3}$ or simply $\text{length}^{-3}$, I think.
The cross-section of the simulation box in the $xy$-plane is given by the inequalities $-x_0 < x < x_0$ and $-y_0 < y < y_0$. The cross-sectional area $A_0$ is thus $A_0 = 4x_0y_0$.
So, the authors define the particle number density averaged over $x$ and $y$ as follows: $$\bar{n}_\alpha(z) = A_0^{-1} \int_{-x_0}^{x_0} \int_{-y_0}^{y_0} dx^\prime dy^\prime n_\alpha(x^\prime, y^\prime, z)$$ where $A_0 = 4x_0y_0$ and $n_\alpha(x, y, z)$ is the local number density of $\alpha$ at $(x, y, z)$.
Thus, $\bar{n}_\alpha(z)$ is simply proportional to $n_\alpha$ integrated over $x$ and $y$. But my question is: what is $n_\alpha(x, y, z)$? How is $n_\alpha(x, y, z)$ determined in practice?
As far as the computer is concerned, the argon atoms are point particles; they are modeled as having zero volume (although they interact by Lennard-Jones interactions). So how is it possible to define a number density?
Do we simply "cut" the "slab" into "slices" along $z$ and then assign the particles to these slices? There might be 5 particles in the first $z$ slice, 10 in the second, 7 in the third, and so on. If I then divide 5, 10, and 7 by the volume of the respective slice, I have a sort of number density, with units of $\frac{\text{number}}{\text{length}^3}$ or simply $\text{length}^{-3}$. But how do I then integrate this $n_\alpha(x^\prime, y^\prime, z)$ over $x$ and $y$? Do I have to additionally perform binning in the $x$ and $y$ directions?
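If I understand correctly, $n_\alpha(\mathbf{r})$ is an ensemble average of delta functions, $n_\alpha(\mathbf{r}) = \langle \sum_i \delta(\mathbf{r} - \mathbf{r}_i) \rangle$, and the slicing I describe is the standard histogram estimator of its slab average. Here is a minimal numpy sketch of what I have in mind (all names are my own, not from the paper):

```python
import numpy as np

def density_profile(z, x0, y0, z_edges, n_frames):
    """Estimate the slab-averaged number density n_bar(z).

    z        : 1D array of z-coordinates of one species, pooled over all frames
    x0, y0   : box half-widths, so the cross-sectional area is A0 = 4*x0*y0
    z_edges  : bin edges along z defining the slices
    n_frames : number of MD frames pooled into z
    """
    area = 4.0 * x0 * y0                        # A0 = 4*x0*y0
    counts, _ = np.histogram(z, bins=z_edges)   # particles per slice, summed over frames
    dz = np.diff(z_edges)                       # slice thicknesses
    # counts / (A0 * dz) has units of number / length^3; dividing by n_frames
    # turns the pooled count into a time (ensemble) average.
    return counts / (area * dz * n_frames)
```

If this is right, then no additional binning in $x$ and $y$ is needed: the count in each slice already is the integral of $n_\alpha$ over $x$ and $y$, so dividing by $A_0\,\Delta z$ (and the number of frames) gives $\bar{n}_\alpha(z)$ directly.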
|
OBD Reasoner
The OBD reasoner uses definitions of transitive relations, relation hierarchies, and relation compositions to infer implicit information. These inferences are added to the OBD Phenoscape database. This section documents the inherited code, written in Perl with embedded SQL, that extracts implicit inferences from the downloaded ontologies and the annotations of ZFIN and Phenoscape phenotypes.
Contents

1 Notation
2 Implemented Relation Properties
3 Phenoscape-specific rules
4 Sweeps
5 Future directions

Notation

Classes, instances, relations
When describing rules below, we use the following notations:
A, B, C: classes (as subjects or objects). Note that relationship concepts can also appear as subject or object in an assertion.
a, b, c: instances, or individuals (as subjects or objects).
R: a relationship (predicate).
R(A, B): the functional form of the assertion A R B; for example, is_a(A, B) is equivalent to A is_a B.
Reification: assertions about assertions, i.e., A, B, ... may themselves be assertions. For example, the yellow that inheres_in a particular dorsal fin is_a yellow, which we can write formally as is_a( inheres_in(yellow, dorsal_fin), yellow).

Conjunction and Implication

The double arrow (<math>\Rightarrow</math>) is also called directional implication and can be translated into English as "it implies" or "it follows." The wedge (<math>\and</math>) is the FOL construct that specifies conjunction and can be translated as "and" in plain English.

Quantification of instances
In first order logic (FOL), it is common to assert statements about all possible instances of a concept in the real world. Let us start with the assertion, "All puppies are dogs." This can be stated as shown below in (1)
<math>\forall</math> A: instance_of(A, Puppy) <math>\Rightarrow</math> instance_of(A, Dog) -- (1)
The inverted A (<math>\forall</math>) is called the universal quantifier and can be translated as "for every" or "for all" in plain English. Similarly, the colon (:) in the FOL statement above can be read as "such that." Therefore, the sentence above translates into English as "For all A such that A is a Puppy, it follows that A is a Dog", or, more simply, "All puppies are dogs." Note this is a simple assertion of the semantics of the is_a predicate that is so common to Phenoscape. The formulation as is_a(Puppy, Dog) is a class-level abstraction from the quantified instances we have used in (1).
The FOL statement below states the transitive property of the is_a relation:

<math>\forall</math> A, B, C: is_a(A, B) <math>\and</math> is_a(B, C) <math>\Rightarrow</math> is_a(A, C) -- (2)
The statement (2) above can be translated to read:
For all A, B, and C, such that A is a B, and B is a C, it follows that A is a C
The existential quantifier (<math>\exists</math>) can be translated as "there exists" or "at least one" in plain English. Now consider the statement, "Some birds are flightless." This can be stated as shown below:

<math>\exists</math> A: instance_of(A, Bird) <math>\and</math> instance_of(A, Flightless thing) -- (3)
This can be translated into plain English as: "There exists an A such that A is a Bird and A is a Flightless thing."
These are just some of the many constructs from first order logic which find common use in the Phenoscape project. There is a more full-fledged introduction to FOL on Wikipedia.
Implemented Relation Properties

Relation Transitivity

Rule: <math>\forall</math> A, B, C, R with R transitive: R(A, B) <math>\and</math> R(B, C) <math>\Rightarrow</math> R(A, C)
Transitive relationships are the simplest inferences to be extracted and comprise the majority of new assertions added by the reasoner. When the ontologies are loaded into the database, every transitive relation is marked with a specific value of a property called is_transitive prior to loading. Transitive relationships include (ontology in brackets):

is_a, has_part, part_of, integral_part_of, has_integral_part, proper_part_of, has_proper_part, improper_part_of, has_improper_part, location_of, located_in, derives_from, derived_into, precedes, preceded_by (OBO Relations)
develops_from (Zebrafish Anatomy)
anterior_to, posterior_to, proximal_to, distal_to, dorsal_to, ventral_to, surrounds, surrounded_by, superficial_to, deep_to, left_of, right_of (Spatial Ontology)
complete_evidence_for_feature, evidence_for_feature, derives_from, member_of (Sequence Ontology)
exhibits (Phenoscape Ontology)
Transitive relations are extracted from the database by the reasoner, and transitive relationships are computed for each such relation. For example, given that is_a is a transitive relation, if the database holds A is_a B and B is_a C, then the reasoner computes A is_a C and adds this new assertion to the database. Similarly, new inferred assertions are added to the database for every transitive relation.
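The inherited code does this in Perl with embedded SQL; purely as a language-neutral illustration, here is a minimal Python sketch of a single transitivity sweep (the triple representation and all names are hypothetical):

```python
def transitivity_sweep(facts, transitive_rels):
    """One sweep of the rule R(A, B) & R(B, C) => R(A, C) for transitive R.

    facts           : set of (relation, subject, object) triples
    transitive_rels : relations marked is_transitive at ontology-load time
    Returns only the newly inferred triples.
    """
    # Index objects by (relation, subject) to join on the shared middle term B.
    by_subject = {}
    for rel, subj, obj in facts:
        by_subject.setdefault((rel, subj), set()).add(obj)

    inferred = set()
    for rel, subj, obj in facts:
        if rel in transitive_rels:
            for obj2 in by_subject.get((rel, obj), ()):
                if (rel, subj, obj2) not in facts:
                    inferred.add((rel, subj, obj2))
    return inferred
```

For example, with facts = {("is_a", "A", "B"), ("is_a", "B", "C")} and transitive_rels = {"is_a"}, the sweep returns {("is_a", "A", "C")}.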
Note

Relation transitivity is the only relation property whose definition is (indirectly) extracted by the reasoner from the loaded ontologies (using the is_transitive metadata tag) in order to compute inferences. Although definitions of many of the other relation properties (such as relation reflexivity) can be found in the ontologies as well, in the current implementation the inference mechanisms associated with these relation properties are hard coded into the reasoner.

Relation (role) compositions

Rule: <math>\forall</math> A, B, C, R: R(A, B) <math>\and</math> is_a(B, C) <math>\Rightarrow</math> R(A, C)
Rule: <math>\forall</math> A, B, C, R: is_a(A, B) <math>\and</math> R(B, C) <math>\Rightarrow</math> R(A, C)
Relation (role) compositions are of the form A R1 B, B R2 C => A (R1|R2) C. For example, given A is_a B and B part_of C, then A part_of C. The reasoner computes such inferences and adds them to the database.
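Using the same hypothetical triple representation as the transitivity sketch above, both composition rules might be sketched as:

```python
def composition_sweep(facts):
    """R(A, B) & is_a(B, C) => R(A, C), and is_a(A, B) & R(B, C) => R(A, C)."""
    inferred = set()
    for r1, a, b in facts:
        for r2, b2, c in facts:
            if b2 != b:
                continue  # join on the shared middle term B
            if r2 == "is_a" and (r1, a, c) not in facts:
                inferred.add((r1, a, c))   # R(A, B) & is_a(B, C) => R(A, C)
            if r1 == "is_a" and (r2, a, c) not in facts:
                inferred.add((r2, a, c))   # is_a(A, B) & R(B, C) => R(A, C)
    return inferred
```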
Relation Reflexivity

Rule: <math>\forall</math> A, R with R reflexive <math>\Rightarrow</math> A R A

Reflexive relations relate their arguments to themselves. A good example: "A rose is_a rose." The is_a relation is reflexive. In the database, every class, instance, or relation (having a corresponding identifier in the Node table of the database) is inferred by the reasoner to be related to itself through the is_a relation. Given a class called Siluriformes (with identifier TTO:302), the reasoner adds TTO:302 is_a TTO:302 to the database.
The subsumption (is_a) relation is the only reflexive relation that is handled by the reasoner. Other reflexive relations abound in the real world; subset_of is a good mathematical example from the domain of set theory: every set is a subset of itself. Such relations are NOT dealt with by the reasoner.

Relation Hierarchies

Rule: <math>\forall</math> A, B, R1, R2: R1(A, B) <math>\and</math> is_a(R1, R2) <math>\Rightarrow</math> R2(A, B)
An example: if A father_of B and father_of is_a parent_of, then A parent_of B.
Relation Chains

Rule: <math>\forall</math> A, B, C: inheres_in(A, B) <math>\and</math> part_of(B, C) <math>\Rightarrow</math> inheres_in_part_of(A, C)

Relation chains are a special case of relation composition. Component relations are accumulated into an assembly relation. Specifically, instances of the relation inheres_in_part_of are accumulated from instances of the relations inheres_in and part_of: IF A inheres_in B and B part_of C, THEN A inheres_in_part_of C. This relation chain is specified by a holds_over_chain property in the inheres_in_part_of stanza of the Relation Ontology. However, the actual rule is hard coded into the OBD reasoner and not derived from the ontology.
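In the same hypothetical representation, the hard-coded chain rule might be sketched as:

```python
def inheres_in_part_of_sweep(facts):
    """inheres_in(A, B) & part_of(B, C) => inheres_in_part_of(A, C)."""
    # Index part_of targets by their subject B for the join.
    part_of = {}
    for rel, b, c in facts:
        if rel == "part_of":
            part_of.setdefault(b, set()).add(c)

    inferred = set()
    for rel, a, b in facts:
        if rel == "inheres_in":
            for c in part_of.get(b, ()):
                triple = ("inheres_in_part_of", a, c)
                if triple not in facts:
                    inferred.add(triple)
    return inferred
```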
Decomposing "post-composition" relations

Rule: <math>\forall</math> Q, E: inheres_in(Q, E) <math>\Rightarrow</math> inheres_in( inheres_in(Q, E), E)
Rule: <math>\forall</math> Q, E: inheres_in(Q, E) <math>\Rightarrow</math> is_a( inheres_in(Q, E), Q)

Phenotype annotations are typically "post-composed": an entity and a quality are combined into a Compositional Description. For example, an annotation about the quality decreased size (PATO:0000587) of the entity Dorsal Fin (TAO:0001173) may be post-composed into a Compositional Description that looks like PATO:0000587^OBO_REL:inheres_in(TAO:0001173). Instances of is_a and inheres_in relations are extracted from post-compositions like this. In the above example, the reasoner extracts:

PATO:0000587^OBO_REL:inheres_in(TAO:0001173) OBO_REL:inheres_in TAO:0001173, and
PATO:0000587^OBO_REL:inheres_in(TAO:0001173) OBO_REL:is_a PATO:0000587
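Assuming the Compositional Description is available as a string of the form shown above, the decomposition could be sketched as follows (the regex and names are hypothetical, not the inherited Perl code):

```python
import re

# Matches descriptions like "PATO:0000587^OBO_REL:inheres_in(TAO:0001173)".
POST_COMPOSITION = re.compile(
    r"^(?P<quality>[^^]+)\^OBO_REL:inheres_in\((?P<entity>[^)]+)\)$")

def decompose(description):
    """Extract the inheres_in and is_a triples from a post-composed description."""
    match = POST_COMPOSITION.match(description)
    if match is None:
        return set()
    quality, entity = match.group("quality"), match.group("entity")
    return {
        ("OBO_REL:inheres_in", description, entity),  # CD inheres_in E
        ("OBO_REL:is_a", description, quality),       # CD is_a Q
    }
```

Calling decompose("PATO:0000587^OBO_REL:inheres_in(TAO:0001173)") yields exactly the two assertions listed above.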
Phenoscape-specific rules

This section describes the Phenoscape-specific rules added to the OBD reasoner.
PATO Character State relations
The Phenotypes and Traits Ontology (PATO) contains definitions of qualities, many of which are used in phenotype descriptions. These qualities are partitioned into various subsets (or slims) such as attribute slims, absent slims, and value slims. Attribute and value slims are mutually exclusive subsets. Attribute slims include qualities that correspond to Characters of anatomical entities, for example Color or Shape. Value slims include qualities that correspond to States a Character may take, for example Red and Blue for the Color character, and Curved and Round for the Shape character. These relationships are not explicitly defined in the PATO ontology but can be inferred using the relations shown below.
PATO:0000587 oboInOwl:inSubset value_slim
PATO:0000587 OBO_REL:is_a PATO:0000117
PATO:0000117 oboInOwl:inSubset attribute_slim
From these definitions, the relationship
PATO:0000587 PHENOSCAPE:value_for PATO:0000117
can be inferred by the reasoner. Ideally, the inference rule for this can be represented as
Rule: <math>\forall</math> V, A: in_subset(V, value_slim) <math>\and</math> is_a(V, A) <math>\and</math> in_subset(A, attribute_slim) <math>\Rightarrow</math> value_for(V, A)
However, not all qualities are partitioned into one of the attribute or value slim subsets. In such cases, the super quality of these qualities is discovered by the reasoner and checked to find out if it is in the attribute or value slim subsets. This process continues until a quality belonging to the attribute slim subset is found. This can be represented as
Rule: <math>\forall</math>V, A: NOT in_subset(V, value_slim) <math>\and</math> is_a(V, A) <math>\and</math> in_subset(A, attribute_slim) <math>\Rightarrow</math> value_for(V, A)
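A sketch of this lookup, including the fallback for the orphan qualities described below (the dictionaries are hypothetical stand-ins for the database tables):

```python
def find_value_for(quality, is_a_parent, subsets):
    """Walk up the is_a hierarchy from a quality until an attribute-slim
    quality is found; orphans fall back to an undefined attribute.

    is_a_parent : dict mapping a quality to its direct is_a parent (or None)
    subsets     : dict mapping a quality to its set of slim names
    """
    current = quality
    while current is not None:
        if "attribute_slim" in subsets.get(current, set()):
            return current            # i.e., value_for(quality, current)
        current = is_a_parent.get(current)
    return "unknown attribute"        # orphan quality
```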
Lastly, there are orphan qualities in PATO which are not related to any other qualities by subsumption and which do not belong to the attribute or value slim subsets. These are grouped under an unknown or undefined attribute.
Rule: <math>\forall</math> V, A: NOT in_subset(V, value_slim) <math>\and</math> NOT is_a(V, A) <math>\Rightarrow</math> value_for(V, unknown attribute)

The Balhoff rule

Rule: <math>\forall</math> A, B, x: is_a(A, B) <math>\and</math> exhibits(A, x) <math>\Rightarrow</math> exhibits(B, x)
This rule was proposed by Jim Balhoff to reason upwards in a (typically taxonomic) hierarchy using the exhibits relation. A relevant example: GIVEN THAT Danio rerio exhibits a round fin AND Danio rerio is a Danio, THEN Danio exhibits a round fin. Note that exhibits has someOf semantics, so the inference is that some members of Danio exhibit the phenotype.
This is the exact opposite of the genus differentia rule, which postulates reasoning only downwards in a hierarchy.

Sweeps
The reasoner functions over several sweeps. In each sweep, new implicit inferences are derived from the explicit annotations (as described in the previous sections) and added to the database. In the following sweep, inferences added in the previous sweep are used to extract further inferences. This process continues until no additional inferences are added in a sweep; at that point the deductive closure of the inference procedure has been reached, no further inferences are possible, and the reasoner exits.
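Tying the sketches above together, the sweep loop is a fixpoint iteration. Assuming each sweep function takes the current fact set and returns inferred triples (functions with extra arguments, such as transitivity_sweep, could be adapted with functools.partial):

```python
def reason_to_closure(facts, sweeps):
    """Apply the sweep functions repeatedly until no new inferences appear,
    i.e. until the deductive closure is reached."""
    facts = set(facts)
    while True:
        new_facts = set()
        for sweep in sweeps:
            new_facts |= sweep(facts)
        new_facts -= facts
        if not new_facts:
            return facts      # closure reached; the reasoner exits
        facts |= new_facts    # feed this sweep's inferences into the next
```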
Future directions

Possible future directions for the extension of the reasoner include adding more relation properties, as well as resolving some outstanding technical issues.
Relation Properties to be implemented
The following relation properties may be implemented in the reasoner in the future, if necessary.
Relation Symmetry

Rule: <math>\forall</math> A, B, R with R symmetric: R(A, B) <math>\Rightarrow</math> R(B, A)
An example of a symmetric relation is the neighbor relation: IF Jim neighbor_of Ryan, THEN Ryan neighbor_of Jim. A more biologically relevant example is the in_contact_with relation: IF middle_nuchal_plate in_contact_with spinelet, THEN spinelet in_contact_with middle_nuchal_plate.
NOTE: There is no direct relationship between relation symmetry and relation reflexivity.

Relation Inversion

Rule: <math>\forall</math> A, B, R1, R2: R1(A, B) <math>\and</math> inverse_of(R1, R2) <math>\Rightarrow</math> R2(B, A)
An example of relation inversion can be found in the posterior_to and anterior_to relations: IF anterior_nuchal_plate anterior_to middle_nuchal_plate AND anterior_to inverse_of posterior_to, THEN middle_nuchal_plate posterior_to anterior_nuchal_plate.
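Neither property is implemented in the inherited code; if it were added, an inversion sweep might look like the sketch below (a symmetric relation could simply be declared as its own inverse):

```python
def inversion_sweep(facts, inverse_of):
    """R1(A, B) & inverse_of(R1, R2) => R2(B, A).

    inverse_of : dict mapping a relation to its declared inverse;
                 for a symmetric relation R, set inverse_of[R] = R.
    """
    inferred = {(inverse_of[rel], obj, subj)
                for rel, subj, obj in facts if rel in inverse_of}
    return inferred - facts
```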
|