The metric tensor is a special tensor that is invariant under Lorentz transformations. This is immediate from the very definition. Under Lorentz transformations$$\eta_{\mu\nu} \to \eta_{\rho\sigma} \Lambda^\rho{}_\mu \Lambda^\sigma{}_\nu.$$But Lorentz transformations are defined precisely as the linear maps that preserve $\eta$, so the right-hand side is just $\eta_{\mu\nu}$: $$\eta_{\mu\nu} \to \eta_{\rho\sigma} \Lambda^\rho{}_\mu \Lambda^\sigma{}_\nu = \eta_{\mu\nu}.$$Thus $\eta$ is Lorentz invariant.
Similarly, the Levi-Civita tensor is invariant up to a sign. Under Lorentz transformations$$\varepsilon_{\mu_1 \cdots \mu_d} \to \varepsilon_{\nu_1 \cdots \nu_d} \Lambda^{\nu_1}{}_{\mu_1} \cdots \Lambda^{\nu_d}{}_{\mu_d}.$$But by the definition of the epsilon tensor and the definition of the determinant of a matrix (see this link),$$\varepsilon_{\nu_1 \cdots \nu_d} \Lambda^{\nu_1}{}_{\mu_1} \cdots \Lambda^{\nu_d}{}_{\mu_d} = (\det \Lambda)\,\varepsilon_{\mu_1 \cdots \mu_d}.$$However, we know that $\det \Lambda = \pm 1$. Thus, $\varepsilon$ is invariant under proper Lorentz transformations (which have $\det \Lambda = 1$) but picks up a sign under improper ones.
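Both facts are easy to verify numerically. The sketch below (an illustration, not part of the original answer; it assumes the mostly-plus metric $\eta = \mathrm{diag}(-1,1,1,1)$ and an $x$-boost) checks that $\Lambda^T \eta \Lambda = \eta$ and computes $\det \Lambda$ via the signed-permutation (Leibniz) expansion, which is exactly the $\varepsilon$-tensor identity above:

```python
import itertools, math

def matmul(A, B):
    """Plain 4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def det_leibniz(M):
    """Determinant via the Leibniz/epsilon-tensor expansion:
    det M = sum over permutations s of sign(s) * prod_i M[s(i)][i]."""
    total = 0.0
    for perm in itertools.permutations(range(4)):
        # parity of the permutation = (-1)^(number of inversions)
        inversions = sum(1 for i in range(4) for j in range(i + 1, 4)
                         if perm[i] > perm[j])
        sign = -1.0 if inversions % 2 else 1.0
        prod = 1.0
        for i in range(4):
            prod *= M[perm[i]][i]
        total += sign * prod
    return total

eta = [[-1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]

# A boost along x with rapidity 0.7
ch, sh = math.cosh(0.7), math.sinh(0.7)
L = [[ch, sh, 0, 0], [sh, ch, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]

Lt = [[L[j][i] for j in range(4)] for i in range(4)]  # transpose
eta_transformed = matmul(Lt, matmul(eta, L))          # eta_{rho sigma} L^rho_mu L^sigma_nu

ok = all(abs(eta_transformed[i][j] - eta[i][j]) < 1e-12
         for i in range(4) for j in range(4))
print(ok)                          # True: the metric is unchanged
print(round(det_leibniz(L), 9))    # 1.0: a proper transformation
```

For an improper transformation (e.g. flipping the sign of one spatial row), the same determinant routine returns $-1$, reproducing the sign flip of $\varepsilon$.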
Thus, the equation you have in the problem is invariant under proper Lorentz transformations but not under improper ones such as parity or time reversal.
PS - It looks like you got that equation while studying massive Chern–Simons theories, and the Chern–Simons action is famously parity non-invariant.
Prolog
Let $\mathcal L=\frac{1}{2}\big[(\partial\phi)^2+m^2\phi^2\big]+\frac{g}{3!}\phi^3$ be the Lagrangian for $\phi^3$ theory (real scalar field $\phi$).
The action of the theory is
$$ S=\frac{1}{2}\int \frac{\mathrm d^4k}{(2\pi)^4}\bigg[\phi(-k)(k^2+m^2)\phi(k)\bigg]+\\ +\frac{g}{3!}\int\frac{\mathrm d^4k_1}{(2\pi)^4}\frac{\mathrm d^4k_2}{(2\pi)^4}\frac{\mathrm d^4k_3}{(2\pi)^4}\, \phi(k_1)\phi(k_2)\phi(k_3)\,(2\pi)^4\delta^4(k_1+k_2+k_3) $$
The amplitude for any process can be calculated, order by order in $g$, by summing all the tree and loop diagrams, using the interaction as vertices.
On the other hand, we can define the (quantum) effective action for the theory,
$$ \Gamma=\frac{1}{2}\int\frac{\mathrm d^4k}{(2\pi)^4}\,\phi(-k)\bigg[k^2+m^2-\Pi(k^2)\bigg]\phi(k)+\\+\sum_{n=3}^\infty \frac{1}{n!}\int\frac{\mathrm d^4k_1}{(2\pi)^4}\cdots\frac{\mathrm d^4k_n}{(2\pi)^4}\, V_n(k_1,\cdots,k_n)\,\phi(k_1)\cdots \phi(k_n)\, (2\pi)^4\delta^4(k_1+\cdots+k_n) $$ where $V_n$ is the exact $n$-point vertex function (i.e., the amplitude for a process with $n$ external lines, computed from one-particle-irreducible diagrams, using the exact propagator for the internal lines and the exact three-point vertex function for the interactions).
The point of the effective quantum action is that the tree-level amplitudes of $\Gamma$ are equivalent to the tree+loop amplitudes of $S$.
My question(s)
I'm trying to find a reference of how to calculate the quantum effective action for fermions, for fixed external electromagnetic field (in QED). In other words, I'm trying to do the same as for the scalar field $\phi$, but using a fermion field $\psi$, whose interactions are mediated by a fixed electromagnetic field $A^\mu$.
I haven't been able to find a reference for what I'm trying to do, but my guess is that the effective action can be written as
$$ \Gamma\overset{?}{=}\int \frac{\mathrm d^4k}{(2\pi)^4}\,\bar\psi(-k)\bigg[\not k+m-\Sigma(\not k)\bigg]\psi(k)+\int\frac{\mathrm d^4k_1}{(2\pi)^4}\frac{\mathrm d^4k_2}{(2\pi)^4}\, \bar\psi(k_1)V_2(k_1,k_2)\psi(k_2)+\cdots $$
where $V_2=V_2(A^\mu,A^\mu_{,\nu})$, and $\cdots$ includes higher powers of $\psi$ (and derivative interactions?).
My ansatz for first correction to the action is
$$ V_2(k_1,k_2)= \bigg[A^\mu\mathcal M_\mu+\partial_{[\mu} A_{\nu]}\, \mathcal M^{\mu\nu}+\text{higher powers of $A$}\bigg] $$ where ($k_3=k_2-k_1$)
though I'm not quite sure what $\mathcal M^{\mu\nu}$ should be.
Question 1) Is there any nice reference for what I'm trying to calculate? Question 2) Is my guess for $\Gamma$ right? If so, Question 3) Is my guess for $V_2$ right? If so, Question 4) Should the external lines $k_1,k_2,k_3$ be taken on-shell, or off-shell?
Author's note: This linear systems series is a collection of notes I took from one of my professors. It's my attempt to get practice with LaTeX and an excuse to have something to blog about. You'll notice some anomalies within the syntax. This is mostly due to the binder library not supporting full LaTeX functionality. I'm looking at you, "\substack"! There are some notation shortcuts my professor wrote on the board which I'm unable to reproduce with LaTeX, so I'll default to more conventional ways to represent them. Most of it should be understandable to the reader though. In the future, some of it is expected to change. If you're curious about what failed and where workarounds were attempted, you can view here for more background.

Linearization
Basically every real-world physical system is by definition nonlinear. Some of these systems can be modeled by nonlinear equations, and some of these nonlinear equations can be "approximated" by linear equations under certain conditions. This isn't always possible, however. Some nonlinear equations, with very small differences in initial states, will generate completely different, seemingly unrelated solutions. This phenomenon is called chaos. This post will assume the nonexistence of chaos. There are several techniques one might use to linearize a system. Below we go through a few examples, including a simple mechanical SISO (single-input single-output) system, an electrical SISO system (RLC circuit), and a more complex multivariable MISO (multi-input single-output) system.
Suppose you had a system defined by two state variables
\begin{aligned} \dot x_1& = x_1 x_2 - x_1^3 \\ \dot x_2& = -5x_2 + 9e^{x_1+5x_2}-9+5 \end{aligned}
The minus 9 and plus 5 terms at the end of the second equation were changed to make the solution more user friendly. Note that this represents motion around an equilibrium point. Depending on where you choose to linearize, different solutions will appear.
Let's choose the following;
\begin{aligned} x_1 = 0 = x_{10} \\ x_2 = 0 = x_{20} \end{aligned}
figure 1 - XY Plot with origin
Now imagine figure 1 had a disturbance leading to perturbed poles - represented by the area surrounding the origin. Think of the perturbed poles as
where;
\begin{aligned} x_1 &= x_{10} + \tilde x_1 \\ x_2 &= x_{20} + \tilde x_2 \end{aligned}
are small deviations from the origin found at x_{10}, x_{20}. The way we linearize this is by taking the Taylor series expansion, thus we have
\begin{aligned} f(x_{1},x_{2}) &= f(x_{10} + \tilde x_1,x_{20} + \tilde x_2) \\ &= f(x_{10},x_{20}) + \frac{\partial f(x_{10},x_{20})}{\partial x_1} \tilde x_1 + \frac{\partial f(x_{10},x_{20})}{\partial x_2} \tilde x_2+ H.O.T \end{aligned}
No need to reach the 2nd derivatives before calling it quits on the higher order terms, unless of course you want more accuracy. Now, in order to actually solve for the partial derivatives in the 2nd and 3rd terms of the 2nd line, we need to invoke an operator known as the Jacobian. Once we invoke the almighty Jacobian, we should have something that looks like...
\begin{aligned} \frac{\partial f}{\partial x}= \begin{bmatrix} {\frac{\partial f_1}{\partial x_1}} & {\frac{\partial f_1}{\partial x_2}} \\ {\frac{\partial f_2}{\partial x_1}} & {\frac{\partial f_2}{\partial x_2}} \end{bmatrix} &= {\begin{bmatrix} {x_2 - 3x_1^2} & {x_1} \\ {9e^{x_1+5x_2}} & {45e^{x_1+5x_2}-5} \end{bmatrix}} \Big|_{x_1 = 0, x_2 = 0} \\ &= \begin{bmatrix} {0} & {0} \\ {9e^0} & {45e^0-5} \end{bmatrix} = \begin{bmatrix} {0} & {0} \\ {9} & {40} \end{bmatrix} = A \end{aligned}
With this task accomplished, we now know the Jacobian matrix A and can use it in the linearized equation mentioned above.
\begin{aligned} \dot {\tilde x}(t) = A(t)\tilde x(t) \end{aligned}
Note that we did not define an input for this system, meaning the output is defined only by the state vector x.
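As a cross-check of the Jacobian we computed by hand, the partial derivatives can be approximated numerically with central differences (a sketch with an assumed step size; the functions are the two state equations from above):

```python
import math

def f(x1, x2):
    """The two state equations: f1 = x1*x2 - x1^3, f2 = -5*x2 + 9*exp(x1+5*x2) - 9 + 5."""
    return (x1 * x2 - x1**3,
            -5 * x2 + 9 * math.exp(x1 + 5 * x2) - 9 + 5)

def jacobian(x1, x2, h=1e-6):
    """Central-difference approximation of df/dx at (x1, x2)."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j, (dx1, dx2) in enumerate([(h, 0.0), (0.0, h)]):
        f_plus = f(x1 + dx1, x2 + dx2)
        f_minus = f(x1 - dx1, x2 - dx2)
        for i in range(2):
            J[i][j] = (f_plus[i] - f_minus[i]) / (2 * h)
    return J

A = jacobian(0.0, 0.0)
print([[round(a, 3) for a in row] for row in A])  # approximately [[0, 0], [9, 40]]
```

The numerical result agrees with the hand-derived A = [[0, 0], [9, 40]] at the chosen linearization point.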
The emission spectrum of a chemical element or chemical compound is the relative intensity of each frequency of electromagnetic radiation emitted by the element's atoms or the compound's molecules when they return to a ground state.
Each element's emission spectrum is unique. Therefore, spectroscopy can be used to identify the elements in matter of unknown composition. Similarly, the emission spectra of molecules can be used in chemical analysis of substances.
Emission (Light)
In physics, emission is the process by which the energy of a photon is released by another entity, for example, by an atom whose electrons make a transition between two electronic energy levels. The emitted energy is in the form of a photon. The emittance of an object quantifies how much light is emitted by it. This may be related to other properties of the object through the Stefan–Boltzmann law. For most substances, the amount of emission varies with the temperature and the spectroscopic composition of the object, leading to the appearance of color temperature and emission lines. Precise measurements at many wavelengths allow the identification of a substance via emission spectroscopy.
Origins
When the electrons in the atom are excited, for example by being heated, the additional energy pushes the electrons to higher energy orbits. When the electrons fall back down and leave the excited state, energy is re-emitted in the form of a photon. The wavelength (or, equivalently, frequency) of the photon is determined by the difference in energy between the two states. These emitted photons form the element's emission spectrum.
The fact that only certain colors appear in an element's atomic emission spectrum means that only certain frequencies of light are emitted. Each of these frequencies is related to energy by the formula:
E_{\text{photon}} = h\nu,
where E is the energy of the photon, ν is its frequency, and h is Planck's constant. It follows that only photons having certain energies are emitted by the atom. The principle of the atomic emission spectrum explains the varied colors in neon signs, as well as the chemical flame test results mentioned above.
The frequencies of light that an atom can emit depend on the states the electrons can be in. When excited, an electron moves to a higher energy level/orbital. When the electron falls back to its ground level, the light is emitted.
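As a concrete illustration of such energy-difference transitions, the visible lines of hydrogen can be computed from the level numbers alone (this sketch uses the well-known Rydberg formula for hydrogen, which is not derived in this article):

```python
# Rydberg formula: 1/lambda = R * (1/n_f^2 - 1/n_i^2) for a transition
# from level n_i down to level n_f. R is the Rydberg constant for hydrogen.
R = 1.097e7  # m^-1

def balmer_wavelength_nm(n_i):
    """Wavelength (in nm) of the hydrogen line for the transition n_i -> 2."""
    inv_lam = R * (1 / 2**2 - 1 / n_i**2)
    return 1e9 / inv_lam

for n in (3, 4, 5):
    print(n, round(balmer_wavelength_nm(n), 1))
# n=3 gives the red H-alpha line near 656 nm,
# n=4 the blue-green H-beta line near 486 nm.
```

Each computed wavelength corresponds to one bright line in hydrogen's emission spectrum.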
Radiation from molecules
As well as the electronic transitions discussed above, the energy of a molecule can also change via rotational, vibrational, and vibronic (combined vibrational and electronic) transitions. These energy transitions often lead to closely spaced groups of many different spectral lines, known as spectral bands. Unresolved band spectra may appear as a spectral continuum.
Molecular emission is the mechanism behind the sulfur lamp and the deuterium arc lamp.
Emission spectroscopy
Light consists of electromagnetic radiation of different wavelengths. Therefore, when the elements or their compounds are heated either in a flame or by an electric arc, they emit energy in the form of light. Analysis of this light with the help of a spectroscope gives us a discontinuous spectrum. A spectroscope or spectrometer is an instrument used for separating the components of light, which have different wavelengths. The spectrum appears as a series of lines called the line spectrum. This line spectrum is also called the atomic spectrum because it originates in the element. Each element has a different atomic spectrum. The production of line spectra by the atoms of an element indicates that an atom can radiate only certain amounts of energy. This leads to the conclusion that electrons cannot have just any amount of energy but only certain amounts of energy.
The emission spectrum can be used to determine the composition of a material, since it is different for each element of the periodic table. One example is astronomical spectroscopy: identifying the composition of stars by analysing the received light. The emission spectrum characteristics of some elements are plainly visible to the naked eye when these elements are heated. For example, when platinum wire is dipped into a strontium nitrate solution and then inserted into a flame, the strontium atoms emit a red color. Similarly, when copper is inserted into a flame, the flame becomes green. These definite characteristics allow elements to be identified by their atomic emission spectrum. Not all of the emitted light is visible to the naked eye; the spectrum also includes ultraviolet rays and infrared light. An emission spectrum is formed when an excited gas is viewed directly through a spectroscope.
Emission spectroscopy is a spectroscopic technique which examines the wavelengths of photons emitted by atoms or molecules during their transition from an excited state to a lower energy state. Each element emits a characteristic set of discrete wavelengths according to its electronic structure; by observing these wavelengths the elemental composition of the sample can be determined. Emission spectroscopy developed in the late 19th century, and efforts in the theoretical explanation of atomic emission spectra eventually led to quantum mechanics.
There are many ways in which atoms can be brought to an excited state. Interaction with electromagnetic radiation is used in fluorescence spectroscopy, protons or other heavier particles in particle-induced X-ray emission, and electrons or X-ray photons in energy-dispersive X-ray spectroscopy or X-ray fluorescence. The simplest method is to heat the sample to a high temperature, after which the excitations are produced by collisions between the sample atoms. This method is used in flame emission spectroscopy, and it was also the method used by Anders Jonas Ångström when he discovered the phenomenon of discrete emission lines in the 1850s.
Although the emission lines are caused by a transition between quantized energy states and may at first look very sharp, they do have a finite width, i.e. they are composed of more than one wavelength of light. This spectral line broadening has many different causes.
Emission spectroscopy is often referred to as optical emission spectroscopy, due to the light nature of what is being emitted.
History
Emission lines from hot gases were first discovered by Ångström, and the technique was further developed by David Alter, Gustav Kirchhoff and Robert Bunsen.
See spectrum analysis for details.
Experimental technique in flame emission spectroscopy
The solution containing the relevant substance to be analysed is drawn into the burner and dispersed into the flame as a fine spray. The solvent evaporates first, leaving finely divided solid particles which move to the hottest region of the flame, where gaseous atoms and ions are produced. Here electrons are excited as described above. It is common for a monochromator to be used to allow for easy detection.
On a simple level, flame emission spectroscopy can be observed using just a Bunsen burner and samples of metals. For example, sodium metal placed in the flame will glow yellow, whilst calcium metal particles will glow red, and copper placed into the flame will create a green flame.
Absorption spectra
When light passes through a material, it is absorbed at characteristic frequencies. The absorbed photons are converted to heat, re-emitted in a different direction, or in some other way removed from the original beam of photons. A spectrum of the light after it has passed through the material will have dark lines (absence of light from the continuous spectrum) at characteristic frequencies. The pattern of dark lines is known as the absorption spectrum. Absorption lines occur at the same frequencies as emission lines for a given material, but the relative intensities of the lines differ between the emission and absorption spectrum.
Emission coefficient
The emission coefficient is a coefficient in the power output per unit time of an electromagnetic source, a calculated value in physics. It is also used as a measure of environmental emissions (by mass) per MWh of electricity generated; see: emission factor.
Scattering of light
In Thomson scattering a charged particle emits radiation under incident light. The particle may be an ordinary atomic electron, so emission coefficients have practical applications.
If X dV dΩ dλ is the energy scattered by a volume element dV into solid angle dΩ between wavelengths λ and λ+dλ per unit time, then the emission coefficient is X.
The values of X in Thomson scattering can be predicted from the incident flux, the density of the charged particles, and their Thomson differential cross section (area/solid angle).
Spontaneous Emission
A warm body emitting photons has a monochromatic emission coefficient relating its temperature and total power radiation; this is sometimes called the second "Einstein coefficient", and can be deduced from quantum mechanical theory.
Energy spectrum
An energy spectrum is a distribution of energy among a large assemblage of particles. It is a statistical representation of the wave energy as a function of the wave frequency, and an empirical estimator of the spectral function. For any given value of energy, it determines how many of the particles have that much energy.
The particles may be atoms, photons, or a flux of elementary particles.
The Schrödinger equation together with a set of boundary conditions forms an eigenvalue problem. A possible value (E) is called an eigenenergy. A non-zero solution of the wave function is called an eigenenergy state, or simply an eigenstate. The set of eigenvalues {E_j} is called the energy spectrum of the particle.
The electromagnetic spectrum can also be represented as the distribution of electromagnetic radiation according to energy. The relationship among the wavelength (usually denoted by the Greek letter λ), the frequency (usually denoted by the Greek letter ν), and the energy E is:

E = h \nu = \frac{hc}{\lambda} \,\!

where c is the speed of light and h is Planck's constant.
An example of an energy spectrum in the physical domain is oceanwaves breaking on the shore. For any given interval of time it canbe observed that some of the waves are larger than others. Plottingthe number of waves against the amplitude (height) for the intervalwill yield the energy spectrum of the set.
Optical Spectroscopy and Astrophysics Application
Energy spectra are often used in astrophysical spectroscopy. The quantity plotted, in energy units, is the wavelength times the energy per unit wavelength, and thus accurately represents the amount of energy at any wavelength. The energy per unit wavelength and the energy per unit frequency peak at significantly different wavelengths due to the reciprocal relation between frequency and wavelength. Using energy units avoids this problem, since (wavelength * flux per unit wavelength) = (frequency * flux per unit frequency).
Some modern spectrophotometers, such as the Perkin Elmer 950, include an energy scan option. This is additionally useful in cases where a reference cell is not practical or when absorbance/transmittance is off-scale.
I've been attempting this fundamental shear force diagram problem for several days, but can't seem to get the correct result. I'm trying to calculate the shear force diagram in terms of $x$, but I'm unsure about the intensity $w(x)$ of the triangular load distribution between $0m \le x \lt 3m$. I am able to calculate the correct result for the latter section $3m \lt x \le 6m$, so I'm a little confused as to what the correct intensity of the triangular load distribution is and how to calculate the correct shear force using the correct intensity $w(x)$?
Below I've attached the problem and calculated the support reactions, which are $A_y=15kN$ and $B_y=15kN$.
Now, I've attached my free body diagram of the first section between $0m \le x\lt 3m $ and indicated the positive sign convention for this beam.
I then proceeded to find the shear force in terms of $x$ as follows:
$\sum F_y=0:$
$$15-w(x)·x·\frac12 - v_1 = 0 \quad (eq\ 1)$$
Where $w(x)·x·\frac12$ is the area of the triangular load distribution.
This is where I get confused. My understanding of triangular load distribution in terms of the intensity $w(x)$ is that:
$$w(x)=\frac{w_0x}{L}$$
Where $w_0 = 10$ and $L=3$ for this problem.
But substituting these values into the intensity $w(x)$ and back into $(eq\ 1)$ gets me the wrong result of: $$v_1=15-\frac53 x^2$$
After reading multiple textbooks and watching several videos, I finally found out that if the maximum load of a triangular load distribution is at the initial point $x=0$ then the following formula should be applied:
$$w(x)=\frac{w_0x}{L}-w_0$$
I now understand this a bit, but I am wondering where I could get a good explanation as to why?
I'm struggling to find a good explanation as almost every example I've found in textbooks/videos use triangular load distributions that increase from the initial point and not decrease.
However, after utilising this formula, I still get the wrong solution. My working out is as follows:
$\sum F_y=0:$
$$15-\Bigl(w(x)·x·\frac12 \Bigr) - v_1 = 0$$ $$15-\biggl(\Bigl(\frac{10x}{3}-10\Bigr)·x·\frac12 \biggr)- v_1 = 0$$ $$15-\biggl(\Bigl(\frac{10x}{6}-\frac{10}{2}\Bigr)·x\biggr) - v_1 = 0$$ $$15-\biggl(\Bigl(\frac{5x}{3}-5\Bigr)·x \biggr)- v_1 = 0$$ $$15-\frac{5x^2}{3}+5x - v_1 = 0$$ $$\Rightarrow v_1=15-\frac{5x^2}{3}+5x$$
The actual solution is: $$v_1=15+\frac{5x^2}{3}-10x$$
So I'm not sure whether I'm using the correct intensity $w(x)$ and/or whether the triangle area has been correctly calculated using this intensity $w(x)$.
For the second section $3m\le x\lt6m$ I am able to calculate the correct shear force in terms of $x$, this solution is:
$$v_2=-15-\frac{5x^2}{3}+10x$$
Plotting a diagram of the correct shear forces $v_1$ and $v_2$ in terms of $x$ looks like the following:
For your reference, this problem (F11.6) can be found in chapter 11 of Statics and Mechanics of Materials (4th Ed. SI edition) by Hibbeler.
I'd appreciate if someone could explain intensity loads for situations similar to above and where I went wrong in my calculations.
Thank you.
Edit:
After reading a few examples, I found that if I calculate the shear force from the left end I am able to get the correct shear force using my initial intensity $w(x)=\frac{w_0x}{L}$ and not the latter intensity $w(x)=\frac{w_0x}{L}-w_0$.
However I'm unsure why I can't calculate this from the right end? Does it have something to do with the left support $A_y=15kN$ creating a discontinuity? If I calculate from the left end am I correct in changing the section's range to $0m \lt x \le 3m$ to not include the left support $A_y$?
My working out is as follows:
$\sum F_y=0:$
$$-\Bigl(w(x)·x·\frac12 \Bigr) + v_1 = 0$$ $$-\biggl(\Bigl(\frac{10}{3}(3-x)\Bigr)·(3-x)·\frac12 \biggr)+ v_1 = 0$$ $$-\biggl(\Bigl(10-\frac{10x}{3}\Bigr)·(3-x)·\frac12 \biggr)+ v_1 = 0$$ $$-\biggl(\bigl(30-10x-10x+\frac{10x^2}{3}\bigr)·\frac12 \biggr)+ v_1 = 0$$ $$-\biggl(\bigl(30-20x+\frac{10x^2}{3}\bigr)·\frac12 \biggr)+ v_1 = 0$$ $$-\bigl(15-10x+\frac{10x^2}{6}\bigr)+ v_1 = 0$$ $$-15+10x-\frac{5x^2}{3}+ v_1 = 0$$
$$\Rightarrow v_1=15+\frac{5x^2}{3}-10x$$
This is the correct solution.
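One way to see where the discrepancy comes from: with the peak $w_0=10\,kN/m$ at $x=0$ decreasing to zero at $x=L=3\,m$, the intensity is $w(x)=w_0(1-x/L)$, and the load between the left end and the cut is a trapezoid, not a triangle, so its resultant is $\int_0^x w(t)\,\mathrm dt = \frac12\big(w_0+w(x)\big)x$ rather than $\frac12 w(x)\,x$. A small exact-arithmetic sketch (illustrative only, using Python's `fractions`) reproduces the textbook answer from $v_1(x) = A_y - \int_0^x w(t)\,\mathrm dt$:

```python
from fractions import Fraction as F

w0, L, Ay = F(10), F(3), F(15)

# w(t) = w0*(1 - t/L) as polynomial coefficients [constant, t]
w = [w0, -w0 / L]

# Integrate term by term: int_0^x w(t) dt has coefficients [0, w0, -(w0/(2L))]
# in the basis [1, x, x^2]
load_resultant = [F(0), w[0], w[1] / 2]

# v1(x) = Ay - (load resultant), coefficients in the basis [1, x, x^2]
v1 = [Ay - load_resultant[0], -load_resultant[1], -load_resultant[2]]
print(v1)  # coefficients 15, -10, 5/3, i.e. v1 = 15 - 10x + (5/3)x^2
```

The result, $v_1 = 15 - 10x + \frac{5}{3}x^2$, is exactly the textbook solution $v_1 = 15 + \frac{5x^2}{3} - 10x$.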
For two arbitrary finite sequences of naturals $a_1, a_2, \cdots a_n$ and $b_1, b_2, b_3 \cdots b_n$ let
$$c = \sum_{i=1}^n (a_i)^{\frac{1}{b_i}}$$
Is there an algorithm which generates the monic polynomial $p \in \Bbb Z[X] $ of smallest degree such that $p(c) = 0$?
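For what it's worth, one standard approach is linear algebra over $\Bbb Q$: express the powers $c^0, c^1, \dots$ in a fixed basis of the number field generated by the radicals and solve for the first monic linear dependence. The sketch below (illustrative only, hard-coding the toy case $c=\sqrt2+\sqrt3$ with basis $(1,\sqrt2,\sqrt3,\sqrt6)$; the general algorithm builds the multiplication table from the given $a_i, b_i$) recovers the minimal polynomial:

```python
from fractions import Fraction as F
from itertools import product

# Work in Q(sqrt2, sqrt3) with basis (1, sqrt2, sqrt3, sqrt6).
# TABLE[i][j] = (coefficient, basis index) of basis_i * basis_j.
TABLE = [
    [(1, 0), (1, 1), (1, 2), (1, 3)],
    [(1, 1), (2, 0), (1, 3), (2, 2)],
    [(1, 2), (1, 3), (3, 0), (3, 1)],
    [(1, 3), (2, 2), (3, 1), (6, 0)],
]

def mul(u, v):
    """Product of two field elements given as coordinate vectors."""
    out = [F(0)] * 4
    for i, j in product(range(4), range(4)):
        coef, k = TABLE[i][j]
        out[k] += coef * u[i] * v[j]
    return out

c = [F(0), F(1), F(1), F(0)]          # c = sqrt2 + sqrt3
powers = [[F(1), F(0), F(0), F(0)]]   # c^0 = 1
for _ in range(4):
    powers.append(mul(powers[-1], c))  # c^1 .. c^4

# Solve sum_{k<4} a_k c^k = -c^4 by Gaussian elimination over Q.
M = [[powers[k][row] for k in range(4)] + [-powers[4][row]]
     for row in range(4)]
for col in range(4):
    piv = next(r for r in range(col, 4) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    M[col] = [x / M[col][col] for x in M[col]]
    for r in range(4):
        if r != col and M[r][col] != 0:
            M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
a = [M[r][4] for r in range(4)]
print(a)  # coefficients a0..a3 of p(x) = x^4 + a3 x^3 + a2 x^2 + a1 x + a0
```

Here the output is $(a_0,a_1,a_2,a_3)=(1,0,-10,0)$, i.e. $p(x)=x^4-10x^2+1$, which indeed kills $\sqrt2+\sqrt3$. Computer algebra systems automate this (e.g. SymPy ships a `minimal_polynomial` routine).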
Motivation: I'm interested in generalizing the result in this question.
Edit: The "motivation" has an answer, as linked in the comments. My idea was to use the rational root theorem instead. Nevertheless, I think this problem is somewhat interesting by itself. Another related problem is this.
4.1 Residual diagnostics
A good forecasting method will yield residuals with the following properties:
1. The residuals are uncorrelated. If there are correlations between residuals, then there is information left in the residuals which should be used in computing forecasts.
2. The residuals have zero mean. If the residuals have a mean other than zero, then the forecasts are biased.
Any forecasting method that does not satisfy these properties can be improved. However, that does not mean that forecasting methods that satisfy these properties cannot be improved. It is possible to have several different forecasting methods for the same data set, all of which satisfy these properties. Checking these properties is important in order to see whether a method is using all of the available information, but it is not a good way to select a forecasting method.
If either of these properties is not satisfied, then the forecasting method can be modified to give better forecasts. Adjusting for bias is easy: if the residuals have mean \(m\), then simply add \(m\) to all forecasts and the bias problem is solved. Fixing the correlation problem is harder, and we will not address it until Chapter 10.
In addition to these essential properties, it is useful (but not necessary) for the residuals to also have the following two properties.
3. The residuals have constant variance.
4. The residuals are normally distributed.
These two properties make the calculation of prediction intervals easier (see Section 3.5 for an example). However, a forecasting method that does not satisfy these properties cannot necessarily be improved. Sometimes applying a Box-Cox transformation may assist with these properties, but otherwise there is usually little that you can do to ensure that your residuals have constant variance and a normal distribution. Instead, an alternative approach to obtaining prediction intervals is necessary. Again, we will not address how to do this until later in the book.
Example: Forecasting the Google daily closing stock price
We will continue with the Google daily closing stock price example from the previous chapter. For stock market prices and indexes, the best forecasting method is often the naïve method. That is, each forecast is simply equal to the last observed value, or \(\hat{y}_{t} = y_{t-1}\). Hence, the residuals are simply equal to the difference between consecutive observations: \[ e_{t} = y_{t} - \hat{y}_{t} = y_{t} - y_{t-1}. \]
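The residual calculation itself is one line. A tiny sketch with made-up prices (illustrative only, not the Google data):

```python
prices = [742.0, 738.0, 741.0, 745.0, 747.0]

# Naive forecast: y_hat_t = y_{t-1}; residual: e_t = y_t - y_hat_t
residuals = [y - y_prev for y_prev, y in zip(prices, prices[1:])]
print(residuals)                      # [-4.0, 3.0, 4.0, 2.0]
mean_resid = sum(residuals) / len(residuals)
print(mean_resid)                     # 1.25, so these forecasts are slightly biased
```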
The following graph shows the Google daily closing stock price for trading days during 2015. The large jump corresponds to 17 July 2015 when the price jumped 16% due to unexpectedly strong second quarter results. (The google_2015 object was created in Section 3.2.)
The residuals obtained from forecasting this series using the naïve method are shown in Figure 4.2. The large positive residual is a result of the unexpected price jump in July.
These graphs show that the naïve method produces forecasts that appear to account for all available information. The mean of the residuals is close to zero and there is no significant correlation in the residuals series. The time plot of the residuals shows that the variation of the residuals stays much the same across the historical data, apart from the one outlier, and therefore the residual variance can be treated as constant. This can also be seen on the histogram of the residuals. The histogram suggests that the residuals may not be normal — the right tail seems a little too long, even when we ignore the outlier. Consequently, forecasts from this method will probably be quite good, but prediction intervals that are computed assuming a normal distribution may be inaccurate.
A convenient shortcut for producing these residual diagnostic graphs is the gg_tsresiduals() function, which will produce a time plot, ACF plot and histogram of the residuals.
Portmanteau tests for autocorrelation
In addition to looking at the ACF plot, we can also do a more formal test for autocorrelation by considering a whole set of \(r_k\) values as a group, rather than treating each one separately.
Recall that \(r_k\) is the autocorrelation for lag \(k\). When we look at the ACF plot to see whether each spike is within the required limits, we are implicitly carrying out multiple hypothesis tests, each one with a small probability of giving a false positive. When enough of these tests are done, it is likely that at least one will give a false positive, and so we may conclude that the residuals have some remaining autocorrelation, when in fact they do not.
In order to overcome this problem, we test whether the first \(h\) autocorrelations are significantly different from what would be expected from a white noise process. A test for a group of autocorrelations is called a portmanteau test, from a French word describing a suitcase containing a number of items.
One such test is the Box-Pierce test, based on the following statistic\[ Q = T \sum_{k=1}^h r_k^2,\]where \(h\) is the maximum lag being considered and \(T\) is the number of observations. If each \(r_k\) is close to zero, then \(Q\) will be small. If some \(r_k\) values are large (positive or negative), then \(Q\) will be large. We suggest using \(h=10\) for non-seasonal data and \(h=2m\) for seasonal data, where \(m\) is the period of seasonality. However, the test is not good when \(h\) is large, so if these values are larger than \(T/5\), then use \(h=T/5\).
A related (and more accurate) test is the Ljung-Box test, based on\[ Q^* = T(T+2) \sum_{k=1}^h (T-k)^{-1}r_k^2.\]
Again, large values of \(Q^*\) suggest that the autocorrelations do not come from a white noise series.
How large is too large? If the autocorrelations did come from a white noise series, then both \(Q\) and \(Q^*\) would have a \(\chi^2\) distribution with \((h - K)\) degrees of freedom, where \(K\) is the number of parameters in the model. If they are calculated from raw data (rather than the residuals from a model), then set \(K=0\).
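Both statistics are easy to compute directly from the sample autocorrelations. A small sketch on simulated white noise (illustrative only; the book's R functions do the same calculation and also report the \(\chi^2\) \(p\)-value):

```python
import random

random.seed(42)
y = [random.gauss(0, 1) for _ in range(200)]  # simulated white noise
T, h = len(y), 10
ybar = sum(y) / T
denom = sum((v - ybar) ** 2 for v in y)

def acf(k):
    """Sample autocorrelation r_k."""
    return sum((y[t] - ybar) * (y[t - k] - ybar) for t in range(k, T)) / denom

r = [acf(k) for k in range(1, h + 1)]
Q = T * sum(rk ** 2 for rk in r)                                  # Box-Pierce
Q_star = T * (T + 2) * sum(rk ** 2 / (T - k)
                           for k, rk in zip(range(1, h + 1), r))  # Ljung-Box
print(round(Q, 2), round(Q_star, 2))
# Q* is always slightly larger than Q, since (T+2)/(T-k) > 1; for white
# noise both should be unexceptional for a chi-squared with h degrees of freedom.
```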
For the Google stock price example, the naïve model has no parameters, so \(K=0\) in that case also.
# lag = h and fitdf = K
aug %>% features(.resid, box_pierce, lag = 10, dof = 0)
aug %>% features(.resid, ljung_box, lag = 10, dof = 0)
For both \(Q\) and \(Q^*\), the results are not significant (i.e., the \(p\)-values are relatively large). Thus, we can conclude that the residuals are not distinguishable from a white noise series.
First of all I am new to this topic, algebraic number theory, so I only know a decent (not great) amount of abstract algebra.
My question: given the ring $\mathbb{Z}[\sqrt{-2}]$ (the ring of integers of the imaginary quadratic field $\mathbb{Q}(\sqrt{-2})$), I want to
(1) find all irreducible elements of it, (2) show that it is a Euclidean domain, and (3) show that for an odd prime number $p$, $\exists\, x,y \in \mathbb{Z}$ s.t. $p = x^2+2y^2$ iff $p \equiv 1, 3 \pmod 8$.
I have been reading and have books but there are some things I am not getting.
(a) My attempt at finding the units (I read that there are only $\pm 1$ for this integral domain (ID)):
A unit is an element with an inverse, so for an element $p_1 \in \mathbb{Z}[\sqrt{-2}]$, there is another element $p_1'$ s.t. $p_1\,p_1' = p_1'\,p_1 = 1$ (it is an integral domain, so multiplication is commutative).
Let $p_1 := a+b\sqrt{-2}$ and $p_1':=x+y\sqrt{-2}$, so $p_1\,p_1' = 1$ becomes $(a+b\sqrt{-2})(x+y\sqrt{-2}) = 1 = 1+0\sqrt{-2}$, which yields the two equations $1=ax-2by$ and $0=ay+bx$. Solving these leads to $x=\frac{a}{a^2+2b^2}$, $y=\frac{-b}{a^2+2b^2}$, and $p_1'=\frac{a}{a^2+2b^2} + \left( \frac{-b}{a^2+2b^2} \right)\sqrt{-2}$, which can only have coordinates in $\mathbb{Z}$ if $b=0$ and $a=\pm1$.
Is there a better way of determining the units of an ID? I read that the units $\epsilon$ of a quadratic ID of the general form $R[\sqrt{d}]$, where $d$ is square-free, are determined by $\mathrm{Norm}(\epsilon) = \pm1$. Is this general? Does it hold for any square-free $d$ (though I see little difference between $d$ and $z\,d$, for $z\in \mathbb{Z}$)? For (1), I know, procedurally, how to check a given element, but I do not know a better way to do it in general. Here is my attempt:
I read that:
An element $p$ of an ID is irreducible in R if it satisfies: (i) $p \neq 0$ and $p$ is not a unit, (ii) if $p=ab$ in R, then $a$ or $b$ is a unit in R.
(Maybe, because I know the units in this ID, I can say that all other non-zero elements are irreducible?)
So if $p = ab$, with $a = m+n\sqrt{-2}$, then using the as-of-yet-unproved multiplicativity of the norm map, $N(ab) = N(a)\,N(b)$, we get $N(p=x+y\sqrt{-2}) = x^2+2y^2 = N(a)\,N(b) = (m^2+2n^2)\,N(b)$. Now, given a specific element, to determine whether it is irreducible I could work out which values of $N(a)$ and $N(b)$ are valid so that their product equals $N(p) = N(x+y\sqrt{-2})$, this latter term being an integer ($N: \mathbb{Z}[\sqrt{-2}] \to \mathbb{Z}$).
I haven't got far with the latter two questions either, but I want to get this initial question understood first.
Thanks all for your time reading my rather lengthy question! |
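None of this replaces a proof, but the two claims (the units are exactly $\pm1$, and the mod 8 condition for $p = x^2 + 2y^2$) are easy to sanity-check numerically. The following brute-force sketch assumes nothing beyond the norm $N(a + b\sqrt{-2}) = a^2 + 2b^2$:

```python
# Numerical sanity check (not a proof) of the claims in the question.
def norm(a, b):
    """N(a + b*sqrt(-2)) = a^2 + 2*b^2."""
    return a * a + 2 * b * b

# Units: elements of norm 1, searched in a small box around the origin.
units = [(a, b) for a in range(-5, 6) for b in range(-5, 6) if norm(a, b) == 1]
print(units)  # [(-1, 0), (1, 0)] -- only +1 and -1

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Odd primes p < 100: p = x^2 + 2y^2 holds exactly when p = 1, 3 (mod 8).
for p in range(3, 100, 2):
    if is_prime(p):
        rep = any(norm(x, y) == p for x in range(12) for y in range(8))
        assert rep == (p % 8 in (1, 3))
```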
I am using the implicit finite difference method to discretize the 1-D transient heat diffusion equation for solid spherical and cylindrical shapes:
$$ \frac{1}{\alpha}\frac{\partial T}{\partial t} = \frac{\partial^2 T}{\partial r^2} + \frac{p}{r} \frac{\partial T}{\partial r} \; \; \; \text{for} \; r\neq0 \\ \frac{1}{\alpha}\frac{\partial T}{\partial t} = (1+p)\frac{\partial^2 T}{\partial r^2} \; \; \; \text{for} \; r=0 \\ \text{note that }\; \; \alpha = \frac{k}{\rho C_p} $$
where $p=1$ for cylinder and $p=2$ for sphere.
The boundary conditions are: $$ \left.\begin{matrix} \frac{\partial T}{\partial r} \end{matrix}\right|_{r=0} = 0 \; \; \; \text{for center node}\\ \left.\begin{matrix} k\frac{\partial T}{\partial r} \end{matrix}\right|_{r=R} = h(T_\infty - T_S) \; \; \; \text{for surface node} $$ where $T_S$ is temperature at surface node and $R$ is outer radius of cylinder or sphere.
Using the above equations and boundary conditions I arrived at the following discretized approximations for the temperatures at radial points from the center to surface:
for the center node where $i=0$
$$ \left [ 1 + 2(1+p) Fo \right ]T_0^{\;n+1} - 2(1+p)Fo\,T_1^{\;n+1} = T_0^{\;n} $$
for the internal nodes where $i=1,2,...,M-1$
$$ -Fo\left ( 1-\frac{p}{2i} \right )T_{i-1}^{\;n+1}+(1+2Fo)T_i^{\;n+1}-Fo\left ( 1+\frac{p}{2i} \right )T_{i+1}^{\;n+1}=T_i^{\;n} $$
and finally for the surface node where $i=M$
$$ -2FoT_{M-1}^{\;n+1}+\left [ 1+2Fo \left (1+Bi+Bi\frac{p}{2M} \right ) \right ] T_M^{\;n+1} = T_M^{\;n} + 2FoBi\left ( 1+\frac{p}{2M} \right )T_\infty $$
where $n$ is the present time while $n+1$ in the next time level and $Fo=\alpha\Delta t / \Delta r^2$, $Bi=h\Delta r / k$, $\alpha = k/\rho c_p$, $h$ is heat transfer coefficient, $\rho$ is density, $k$ is thermal conductivity, $c_p$ is heat capacity.
So using the numerical equations, one can solve for the temperatures inside the sphere or cylinder by creating a system of equations in the form of $[A]\left \{ T \right \}=\left \{ B \right \}$ and solve the temperature at each node by using the Matlab operation T = A \ B
$$ \begin{bmatrix} 1+2(1+p)Fo & -2(1+p)Fo & 0 & 0 & 0\\ -Fo\left ( 1-\frac{p}{2i} \right ) & 1+2Fo & -Fo\left ( 1+\frac{p}{2i} \right ) & 0 & 0\\ 0 & -Fo\left ( 1-\frac{p}{2i} \right ) & 1+2Fo & -Fo\left ( 1+\frac{p}{2i} \right ) & 0\\ 0 & 0 & -Fo\left ( 1-\frac{p}{2i} \right ) & 1+2Fo & -Fo\left ( 1+\frac{p}{2i} \right )\\ 0 & 0 & 0 & -2Fo & 1+2Fo \left (1+Bi+Bi\frac{p}{2M} \right ) \end{bmatrix} \begin{bmatrix} T_0^{\;n+1}\\ T_1^{\;n+1}\\ T_2^{\;n+1}\\ T_3^{\;n+1}\\ T_4^{\;n+1} \end{bmatrix} = \begin{bmatrix} T_0^{\;n}\\ T_1^{\;n}\\ T_2^{\;n}\\ T_3^{\;n}\\ T_4^{\;n}+2FoBi\left ( 1+\frac{p}{2M} \right )T_\infty \end{bmatrix} $$
But how do I include kinetic reactions into this system?
I have the following reaction rates for $w$ wood and $c$ char:
$$ \frac{\partial \rho_w}{\partial t} = -(K_1+K_2)\rho_w \\ \frac{\partial \rho_{c1}}{\partial t} = K_2\rho_w \\ \frac{\partial \rho_{c2}}{\partial t} = K_3\rho_{c1} $$
The rate constants $K$ are represented by the Arrhenius equation:
$$ K=A\,e^{\frac{-E}{RT}} $$
where $A$ = pre-factor, $E$ = activation energy, $R$ = universal gas constant, and $T$ = temperature at that node.
So to try to incorporate these reaction equations into my system of equations for the temperatures I have discretized the reactions using the implicit method:
$$ \left [ 1+(K_1+K_2)\Delta t \right ] \rho_{wi}^{\;n+1} = \rho_{wi}^{\;n} \\ \rho_{c1i}^{\;n+1}-K_2\rho_{wi}^{\;n+1}\Delta t = \rho_{c1i}^{\;n} \\ \rho_{c2i}^{\;n+1}-K_3\rho_{c1i}^{\;n+1}\Delta t = \rho_{c2i}^{\;n} $$
Any suggestion on how to incorporate these kinetic reactions into the system of temperature equations?
Or should I solve for the temperatures first, then use the new temperatures in the reaction rates to update the $\rho$, then use the updated $\rho$ for the next iteration? |
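For what it is worth, the second strategy described above (solve the heat step implicitly, then update the densities with the new temperatures) is a standard operator-splitting approach, and it is usually the simplest to implement since the Arrhenius terms make the fully coupled system nonlinear. A rough Python sketch of one time step, with made-up parameter values and only the single rate $K_1$ kept for brevity:

```python
import numpy as np

# Operator-splitting sketch: implicit heat step, then density update.
# All parameter values below are placeholders, not real wood/char data.
M, p = 4, 2                        # M+1 nodes, p=2 for a sphere
Fo, Bi, Tinf, dt = 0.1, 0.5, 900.0, 0.01
A1, E1, R = 1.3e8, 1.4e5, 8.314    # hypothetical Arrhenius constants

def heat_matrix():
    """Tridiagonal [A] from the discretized equations in the question."""
    A = np.zeros((M + 1, M + 1))
    A[0, 0] = 1 + 2 * (1 + p) * Fo
    A[0, 1] = -2 * (1 + p) * Fo
    for i in range(1, M):
        A[i, i - 1] = -Fo * (1 - p / (2 * i))
        A[i, i] = 1 + 2 * Fo
        A[i, i + 1] = -Fo * (1 + p / (2 * i))
    A[M, M - 1] = -2 * Fo
    A[M, M] = 1 + 2 * Fo * (1 + Bi + Bi * p / (2 * M))
    return A

def step(T, rho_w):
    b = T.copy()
    b[M] += 2 * Fo * Bi * (1 + p / (2 * M)) * Tinf
    T_new = np.linalg.solve(heat_matrix(), b)   # MATLAB's T = A \ B
    K1 = A1 * np.exp(-E1 / (R * T_new))         # rate evaluated at the new T
    rho_new = rho_w / (1 + K1 * dt)             # implicit density update
    return T_new, rho_new

T = np.full(M + 1, 300.0)
rho = np.full(M + 1, 650.0)
for _ in range(10):
    T, rho = step(T, rho)
print(T[M], rho[M])  # surface heats toward Tinf, wood density decreases
```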
The domino shuffling algorithm first appeared in the paper "Alternating-sign matrices and domino tilings" by Elkies, Kuperberg, Larsen, and Propp.
They used this algorithm to give a fourth proof that the number of domino tilings of the $n$th aztec diamond is $2^{\frac{n(n+1)}{2}}$.
Domino shuffling has since become a powerful tool in the combinatorics of domino tilings. The algorithm is quite simple, but its correctness is subtle to establish.
Consider the infinite chessboard on the lattice $\mathbb{Z}^2$, color the cells black or white such that adjacent cells get different colors. Then any $1\times2$ domino (if placed on the chessboard) will cover exactly one white cell and one black cell.
Now suppose some dominoes (possibly infinitely many) are placed on the chessboard without overlapping each other. They cover the chessboard partially, so we call the configuration a partial tiling $T$.
A $2\times2$ square is called an odd block with respect to $T$ (following Propp's convention) if it contains exactly two parallel dominoes of $T$ and has a black cell in its upper left-hand corner. Any $2\times2$ square with a black cell in its upper left-hand corner will be called a block.
The partial tiling $T$ is called odd-deficient if it has no odd blocks and its free region (the cells not covered by $T$) can be tiled with disjoint blocks.
The domino shuffling algorithm states that, given any odd-deficient partial tiling $T$, one can produce a new partial tiling $S(T)$ which is also odd-deficient, and the mapping $T\to S(T)$ is an involution ($S(S(T))=T$).
The algorithm goes as follows: for every domino $A$ in $T$, we find the unique block $B$ containing $A$, then we move $A$ to the opposite position in $B$.
It's easy to see that $S(T)$ would not contain any odd block, because odd blocks are unchanged under the shuffling procedure, and since $T$ contains no odd blocks, $S(T)$ would not either.
But the crucial point is that the free region of $S(T)$ can also be tiled by disjoint blocks. This is a very subtle point in the algorithm; the original proof in Propp's paper did not explain much in this direction, which puzzled me for quite a long time.
In Aigner's book "A Course in Enumeration", he uses a 2-coloring method to handle this problem, but I think his way is still too involved.
My question is: is there any easy way to deduce that $S(T)$ is also odd-deficient? |
Dear MO_World,
I'm working on an ergodic theory question (about a generalization of eigenfunctions for measure-preserving transformations) and have run into a number theory question concerning cyclotomic polynomials that I'm unable to tackle.
The question is this:
Let $p$ be a prime and let $p|n$. When is it the case that $\Phi_n(e^{2\pi i/p})=\pm e^{2\pi ij/p}$ for some $j$?
Here $\Phi_n$ denotes the $n$th cyclotomic polynomial.
I've experimented with Mathematica and have found there are non-trivial cases in which the condition holds, whereas for most cases it does not seem to hold.
Letting $c(n,p)=\Phi_n(e^{2\pi i/p})$, we have $c(105,3)=1$, but $c(105,5)$ and $c(105,7)$ are not on the unit circle. None of $c(15,3)$, $c(21,3)$, $c(15,5)$, $c(35,5)$, $c(21,7)$, $c(35,7)$ are on the unit circle; $c(40,2)=1$, but $c(50,2)=5$...
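These experiments are easy to reproduce numerically. The sketch below (assuming SymPy's `cyclotomic_poly`) evaluates $\Phi_n$ at a primitive $p$-th root of unity in floating point and reports $|c(n,p)|$:

```python
import cmath
from sympy import Symbol, Poly, cyclotomic_poly

x = Symbol('x')

def c(n, p):
    """Evaluate Phi_n at e^{2*pi*i/p}, numerically, via Horner's rule."""
    coeffs = [int(a) for a in Poly(cyclotomic_poly(n, x), x).all_coeffs()]
    z = cmath.exp(2j * cmath.pi / p)
    val = 0
    for a in coeffs:      # coefficients come highest-degree first
        val = val * z + a
    return val

# Reproduce the observations above:
print(abs(c(105, 3)))  # ~1.0: on the unit circle
print(abs(c(15, 5)))   # clearly not ~1.0
print(c(50, 2).real)   # ~5.0
```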
Not surprisingly it seems to be easiest for the condition to hold for small $p$.
Also, using the relation $\Phi_{p^2n}(x)=\Phi_{pn}(x^p)$, together with the fact that $\Phi_n(1)=q$ if $n=q^k$ for some prime $q$ and integer $k\ge1$, but $\Phi_n(1)=1$ otherwise, it is not hard to see that the condition holds whenever $p^2|n$ but $n$ is not a power of $p$.
Thanks for any more systematic suggestions... |
MathModePlugin
Add math formulas to TWiki topics using LaTeX markup language
Description
This plugin allows you to include mathematics in a TWiki page, with a format very similar to LaTeX. The external program latex2html is used to generate gif (or png) images from the math markup, and the image is then included in the page. The first time a particular expression is rendered, you will notice a lag as latex2html is run on the server. Once rendered, the image is saved as an attached file for the page, so subsequent viewings will not require re-renders. When you remove a math expression from a page, its image is deleted.
Note that this plugin is called MathModePlugin, not LaTeXPlugin, because the only piece of LaTeX implemented is the rendering of images of mathematics.
Syntax Rules
<latex [attr="value"]* > formula </latex>
generates an image from the contained formula. In addition, attribute-value pairs may be specified that are passed to the resulting img html tag. The only exceptions are the following attributes, which take effect in the latex rendering pipeline:
size: the latex font size; possible values are tiny, scriptsize, footnotesize, small, normalsize, large, Large, LARGE, huge or Huge; defaults to %LATEXFONTSIZE%
color: the foreground color of the formula; defaults to %LATEXFGCOLOR%
bgcolor: the background color; defaults to %LATEXBGCOLOR%
The formula will be displayed using a math latex environment by default. If the formula contains a latex linebreak (\\) then a multline environment of amsmath is used instead. If the formula contains an alignment sequence (& = &) then an eqnarray environment is used.
Note that the old notation using %$formula$% and %\[formula\]% is still supported, but it is deprecated.
If you want to recompute the images cached for the current page, append ?refresh=on to its url, e.g. click here to refresh the formulas in the examples below.
Examples
The following will only display correctly if this plugin is installed and configured correctly.
<latex title="this is an example">
\int_{-\infty}^\infty e^{-\alpha x^2} dx = \sqrt{\frac{\pi}{\alpha}}
</latex>
<latex>
{\cal P} & = & \{f_1, f_2, \ldots, f_m\} \\
{\cal C} & = & \{c_1, c_2, \ldots, c_m\} \\
{\cal N} & = & \{n_1, n_2, \ldots, n_m\}
</latex>
<latex title="Calligraphics" color="orange">
\cal
A, B, C, D, E, F, G, H, I, J, K, L, M, \\
\cal
N, O, P, Q, R, S, T, U, V, W, X, Y, Z
</latex>
<latex>
\sum_{i_1, i_2, \ldots, i_n} \pi * i + \sigma
</latex>
Greek letters
\alpha \beta \gamma \delta \epsilon \zeta \eta \theta \iota \kappa \lambda \mu \nu \xi
Plugin Installation Instructions
Download the ZIP file MathModePlugin.zip and unzip it in your twiki installation directory. Content:
File: Description:
data/TWiki/MathModePlugin.txt
lib/TWiki/Plugins/MathModePlugin/Core.pm
lib/TWiki/Plugins/MathModePlugin.pm
pub/TWiki/MathModePlugin/latex2img
This plugin makes use of three additional tools to convert latex formulas to images: latex, dvipng, and convert. Make sure they are installed, and check the paths to these programs in the latex2img script shipped with this plugin.
Edit the file <path-to-twiki>/pub/TWiki/MathModePlugin/latex2img accordingly and set execute permission on it for your webserver.
Visit configure in your TWiki installation, and enable the plugin in the {Plugins} section.
Troubleshooting
If you get an error like "fmtutil: [some-dir]/latex.fmt does not exist", run fmtutil-sys --all on your server to recreate all latex format styles.
If the generated image of a latex formula does not show up, you probably have encoding issues. Look at the source of the <img> tag in your page's source code; non-ASCII characters in file names might cause trouble. Check the localization settings in the TWiki configure page.
Configuration
There is a set of configuration variables that can be set in different places. All of the variables below can be set in your LocalSite.cfg file like this:
$TWiki::cfg{MathModePlugin}{<Name>} = <value>;
Some of the variables can only be set this way; others may be overridden by defining the respective preference variable.
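As an illustration (the values here are examples only, not recommendations), a LocalSite.cfg fragment overriding two of the settings from the table below might look like:

```perl
# LocalSite.cfg fragment: override MathModePlugin defaults
$TWiki::cfg{MathModePlugin}{ImageType}   = 'png';   # 'gif' or 'png'
$TWiki::cfg{MathModePlugin}{ScaleFactor} = 1.4;     # enlarge rendered formulas
```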
Name | Preference Variable | Default | Description
HashCodeLength | | 32 | length of the hash code; if you switch to a different hash function, you will likely have to change this
ImagePrefix | | '_MathModePlugin_' | string to be prepended to any auto-generated image
ImageType | %LATEXIMAGETYPE% | 'png' | extension of the image type; possible values are 'gif' and 'png'
Latex2Img | | '.../TWiki/MathModePlugin/latex2img' | the script to convert a latex formula to an image
LatexPreamble | %LATEXPREAMBLE% | '\usepackage{latexsym}' | latex preamble to include additional packages (e.g. \usepackage{mathptmx} to change the math font); note that the packages amsmath and color are loaded too, as they are obligatory
ScaleFactor | %LATEXSCALEFACTOR% | 1.2 | factor to scale images
LatexFGColor | %LATEXFGCOLOR% | black | default text color
LatexBGColor | %LATEXBGCOLOR% | white | default background color
LatexFontSize | %LATEXFONTSIZE% | normalsize | default font size
Plugin Info |
L # 1
Show that
It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Last edited by krassi_holmz (2006-03-09 02:44:53)
IPBLE: Increasing Performance By Lowering Expectations.
L # 2
If
Let
log x = x', log y = y', log z = z'. Then:
x'+y'+z'=0.
Rewriting in terms of x' gives:
Well done, krassi_holmz!
L # 3
If x²y³=a and log (x/y)=b, then what is the value of (logx)/(logy)?
log a = 2 log x + 3 log y
b = log x - log y
log a + 3b = 5 log x
log a - 2b = 3 log y + 2 log y = 5 log y
log x / log y = (log a + 3b) / (log a - 2b).
Last edited by krassi_holmz (2006-03-10 20:06:29)
Very well done, krassi_holmz!
L # 4
You are not supposed to use a calculator or log tables for L # 4. Try again!
Last edited by JaneFairfax (2009-01-04 23:40:20)
No, I didn't
I remember
You still used a calculator / log table in the past to get those figures (or someone else did and showed them to you). I say again:
no calculators or log tables to be used (directly or indirectly) at all!! Last edited by JaneFairfax (2009-01-06 00:30:04)
log a = 2log x + 3log y
b = log x - log y
log a + 3 b = 5log x
loga - 2b = 3logy + 2logy = 5logy
logx / logy = (loga+3b) / (loga-2b)
Hi ganesh
for L # 1: since $\log_b a = 1/\log_a b$ and $\log_a a = 1$, we have
$\frac{1}{\log_a(abc)}+\frac{1}{\log_b(abc)}+\frac{1}{\log_c(abc)} = \log_{abc}a+\log_{abc}b+\log_{abc}c = \log_{abc}(abc) = 1.$
Best Regards Riad Zaidan
Hi ganesh
for L # 2 I think the following proof is easier: assume
$\frac{\log x}{b-c}=\frac{\log y}{c-a}=\frac{\log z}{a-b}=t.$
So $\log x=t(b-c)$, $\log y=t(c-a)$, $\log z=t(a-b)$.
So $\log x+\log y+\log z=tb-tc+tc-ta+ta-tb=0.$
So $\log(xyz)=0$, hence $xyz=1$. Q.E.D.
Best Regards Riad Zaidan
Gentlemen,
Thanks for the proofs.
Regards.
$\log_2(16) = \log_2 \left ( \frac{64}{4} \right ) = \log_2(64) - \log_2(4) = 6 - 2 = 4$
$\log_2(\sqrt[3]{4}) = \frac {1}{3} \log_2 (4) = \frac {2}{3}$
L # 4
I don't want a method that will rely on defining certain functions, taking derivatives,
noting concavity, etc.
Change of base:
Each side is positive, and multiplying by the positive denominator
keeps whatever direction of the alleged inequality the same direction:
On the right-hand side, the first factor is equal to a positive number less than 1,
while the second factor is equal to a positive number greater than 1. These facts are by inspection combined with the nature of exponents/logarithms.
Because of (log A)B = B(log A) = log(A^B), I may turn this into:
I need to show that
Then
Then 1 (on the left-hand side) will be greater than the value on the
right-hand side, and the truth of the original inequality will be established.
I want to show
Raise a base of 3 to each side:
Each side is positive, and I can square each side:
-----------------------------------------------------------------------------------
Then I want to show that when 2 is raised to a number equal to
(or less than) 1.5, then it is less than 3.
Each side is positive, and I can square each side:
Last edited by reconsideryouranswer (2011-05-27 20:05:01)
Signature line:
I wish I had a more interesting signature line.
Hi reconsideryouranswer,
This problem was posted by JaneFairfax. I think it would be appropriate she verify the solution.
Hi all,
I saw this post today and saw the probs on log. Well, they are not bad, they are good. But you can also try these problems here by me (Credit: to a book):
http://www.mathisfunforum.com/viewtopic … 93#p399193
Practice makes a man perfect.
There is no substitute to hard work All of us do not have equal talents but everybody has equal oppurtunities to build their talents.-APJ Abdul Kalam
JaneFairfax, here is a basic proof of L4:
For all real a > 1, y = a^x is a strictly increasing function.
log(base 2)3 versus log(base 3)5
2*log(base 2)3 versus 2*log(base 3)5
log(base 2)9 versus log(base 3)25
2^3 = 8 < 9, so log(base 2)9 > 3, i.e. 2^(>3) = 9.
3^3 = 27 > 25, so log(base 3)25 < 3, i.e. 3^(<3) = 25.
So the left-hand side is greater than the right-hand side: log(base 2)9 > 3 > log(base 3)25, hence log(base 2)3 > log(base 3)5.
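A quick numerical check of the conclusion (not a substitute for the algebraic argument above):

```python
import math

# log_2 9 > 3 > log_3 25, hence 2*log_2 3 > 2*log_3 5
assert math.log(9, 2) > 3 > math.log(25, 3)
assert math.log(3, 2) > math.log(5, 3)
print(math.log(3, 2), math.log(5, 3))  # ~1.585 vs ~1.465
```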
|
Note that most radical mechanisms do not explicitly feature homolysis of $\ce{C-H}$ bonds or $\ce{C-C}$ bonds. If one were to compare these two processes, say for ethane $\ce{CH3CH3}$, we would find that homolysis of $\ce{C-C}$ is favored over homolysis of $\ce{C-H}$. Obligatory bond dissociation energy references for the organics and here for the halogens.
Homolysis
$\ce{C-C}$
$$\ce{H3C-CH3 -> 2H3C.}\ \ \Delta H^\circ=377\ \text{kJ/mol}$$
$\ce{C-H}$
$$\ce{H3CCH2-H -> H3CCH2. + H.}\ \ \Delta H^\circ=423\ \text{kJ/mol}$$
However, $\ce{C-C}$ and $\ce{C-H}$ homolysis steps are not common steps in radical mechanisms of alkanes. For example, the chlorination of ethane begins with the homolysis of $\ce{Cl2}$, which is much more favorable than homolysis of $\ce{C-C}$ or $\ce{C-H}$.
$$\ce{Cl2 -> 2Cl.} \ \ \Delta H^\circ = 243\ \text{kJ/mol}$$
The chlorine radicals then react with the alkane by abstraction, meaning that bonds are being formed as well as being broken.
Abstraction
$\ce{C-C}$
$$\ce{Cl. + H3C-CH3 -> Cl-CH3 +H3C.}$$
$$\begin{array}{c|c|c|} & \text{Broken} & \text{Formed} \\ \hline\text{bond} & \ce{H3C-CH3} & \ce{Cl-CH3} \\ \hline\Delta H^\circ & 377\ \text{kJ/mol} & -350 \ \text{kJ/mol} \\ \hline\end{array}$$
$$\Delta_r H^\circ = +27 \ \text{kJ/mol}$$
$\ce{C-H}$
$$\ce{Cl. + H3CCH2-H -> Cl-H +H3CCH2.}$$
$$\begin{array}{c|c|c|} & \text{Broken} & \text{Formed} \\ \hline\text{bond} & \ce{H3CCH2-H} & \ce{Cl-H} \\ \hline\Delta H^\circ & 423\ \text{kJ/mol} & -432 \ \text{kJ/mol} \\ \hline\end{array}$$$$\Delta_r H^\circ = -9 \ \text{kJ/mol}$$
Thus, abstraction of $\ce{H}$ by $\ce{Cl.}$ is more favored because it is coupled to the formation of the $\ce{H-Cl}$ bond. |
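The bookkeeping in the two tables is just the sum of BDEs broken minus the sum of BDEs formed; a small sketch using the values quoted above:

```python
# Bond dissociation energies quoted in the text, kJ/mol
bde = {
    "H3C-CH3": 377,   # ethane C-C
    "C2H5-H": 423,    # ethane C-H
    "Cl-Cl": 243,
    "Cl-CH3": 350,
    "Cl-H": 432,
}

def reaction_enthalpy(broken, formed):
    """dH = sum(BDE of bonds broken) - sum(BDE of bonds formed)."""
    return sum(bde[b] for b in broken) - sum(bde[f] for f in formed)

print(reaction_enthalpy(["H3C-CH3"], ["Cl-CH3"]))  # +27: C-C abstraction
print(reaction_enthalpy(["C2H5-H"], ["Cl-H"]))     # -9: H abstraction, favored
```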
I want to find the equations of motion of an RRRR robot. I have studied this a bit, but I am having some confusion.
In one of the lectures I found online, the inertia matrix of a link is written $\bf{I}_i$ and computed from a constant matrix $\tilde{\bf{I}}_i$, as described below:
In conclusion, the kinetic energy of a manipulator can be determined when, for each link, the following quantities are known:
the link mass $m_i$; the inertia matrix $\bf{I}_i$, computed with respect to a frame $\mathcal{F}_i$ fixed to the center of mass in which it has a constant expression $\tilde{\bf{I}}_i$; the linear velocity $\bf{v}_{Ci}$ of the center of mass, and the rotational velocity $\omega_i$ of the link (both expressed in $\mathcal{F}_0$); the rotation matrix $\bf{R}_i$ between the frame fixed to the link and $\mathcal{F}_0$.
The kinetic energy $K_i$ of the i-th link has the form:
$$ K_i = \frac{1}{2}m_i\bf{v}_{Ci}^T\bf{v}_{Ci} + \frac{1}{2}\omega_i^T\bf{R}_i\tilde{\bf{I}}_i\bf{R}_i^T\omega_i \\ $$ It is now necessary to compute the linear and rotational velocities ($\bf{v}_{Ci}$ and $\omega_i$) as functions of the Lagrangian coordinates (i.e. the joint variables $\bf{q}$).
So $\tilde{\bf{I}}_i$ is computed with respect to a frame fixed to the centre of mass.
However, in another example below, from a different source, there is no rotation matrix multiplying $I_{C_1}$ and $I_{C_2}$ as above. Am I missing something?
$\underline{\mbox{Matrix M}}$
$$ M = m_1 J_{v_1}^TJ_{v_1} + J_{\omega_1}^TI_{C_1}J_{\omega_1} + m_2 J_{v_2}^TJ_{v_2} + J_{\omega_2}^TI_{C_2}J_{\omega_2} \\ $$
What is the significance of multiplying Rotation matrix with $I_{C_1}$ or $\tilde{\bf{I}}_i$?
I am using the former approach and getting a fairly large mass matrix. Is it normal to have such long terms inside a mass matrix? I still need to know, though, which method is correct.
The equation I used for the mass matrix is:
$$ \begin{array}{lcl} K & = & \displaystyle{\frac{1}{2}} \displaystyle{\sum_{i=1}^{n}} m_i\bf{v}_{Ci}^T \bf{v}_{Ci} + \displaystyle{\frac{1}{2}} \displaystyle{\sum_{i=1}^{n}} \omega_i^T\bf{R}_i\tilde{\bf{I}}_i\bf{R}_i^T\omega_i \\ & = & \boxed{ \frac{1}{2} \dot{\bf{q}}^T \sum_{i=1}^{n}\left[ m_i {\bf{J(\bf{q})}_{v}^{i}}^T {\bf{J(\bf{q})}_{v}^i} + {\bf{J(\bf{q})}_{\omega}^i}^T\bf{R}_i\tilde{\bf{I}}_i\bf{R}_i^T\bf{J(\bf{q})}_{\omega}^i \right] \dot{\bf{q}} } \\ & = & \displaystyle{\frac{1}{2}} \dot{\bf{q}}^T\bf{M(q)}\dot{\bf{q}} \\ & = & \displaystyle{\frac{1}{2}} \displaystyle{\sum_{i=1}^{n}} \displaystyle{\sum_{j=1}^{n}} M_{ij}(\bf{q})\dot{q}_i \dot{q}_j \\ \end{array} $$ |
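The point of the $\bf{R}_i\tilde{\bf{I}}_i\bf{R}_i^T$ factor is simply to express a constant body-frame inertia in the base frame. A small NumPy sketch (the axis and inertia values are made up):

```python
import numpy as np

def world_inertia(R, I_body):
    """Body-frame inertia expressed in the base frame: I = R * I_tilde * R^T."""
    return R @ I_body @ R.T

# Constant inertia in the link's center-of-mass frame (made-up values)
I_body = np.diag([0.1, 0.2, 0.3])

# Rotation of the link frame w.r.t. the base frame: 90 degrees about z
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

I_world = world_inertia(R, I_body)
# Kinetic energy 0.5 * w^T I_world w, with w expressed in the base frame:
w = np.array([0.0, 0.0, 1.0])     # spin about the base z-axis
print(0.5 * w @ I_world @ w)      # 0.15: the body z-inertia is unchanged
```

Without the rotation, $\tilde{\bf{I}}_i$ would be treated as constant in the base frame, which is only valid if the Jacobians and angular velocities in the second source are already expressed in link coordinates.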
In the first article in this issue, by Michael Hirschhorn, you will learn about the harmonic series
$$\sum_{k=1}^\infty \frac{1}{k}.$$
2004
The first article in this issue, by Peter Donovan, tells a fascinating story of how code breakers working at Fleet Radio Unit, Melbourne (FRUMEL) during the Second World War, were able to de-code the principal Japanese Navy operational code. |
Vishwanath, CK and Shamala, N and Easwaran, KRK and Vijayan, M (1983)
Structure of Nonactin-Calcium Perchlorate, $C_{40}H_{64}O_{12}.Ca(ClO_{4})_{2}$, and a Comparative Study of Metal-Nonactin Complexes. In: Acta Crystallographica, Section C: Crystal Structure Communications, 39 (12). pp. 1640-1643.
Abstract
$M_r = 975.9$, orthorhombic, $Pnna$, $a = 20.262\,(3)$, $b = 15.717\,(2)$, $c = 15.038\,(1)\ \AA$, $V = 4788.97\ \AA^3$, $Z = 4$, $D_x = 1.35\ \mathrm{Mg\ m^{-3}}$, $Cu\,K\alpha$ radiation, $\lambda = 1.5418\ \AA$, $\mu = 2.79\ \mathrm{mm^{-1}}$, $F(000) = 2072$, $T = 293$ K, $R = 0.08$, 3335 observed reflections. The molecular structure and the crystal packing are similar to those observed in the nonactin complexes of sodium thiocyanate and potassium thiocyanate. The eight metal-O distances are nearly the same in the potassium complex, whereas the four distances involving carbonyl O atoms are shorter than the remaining four involving the tetrahydrofuran-ring O atoms in the Na and the Ca complexes. This observation can be explained in terms of the small ionic radii of $Na^+$ and $Ca^{2+}$, and leads to a plausible structural rationale for the stronger affinity of nonactin for $K^+$ than for the other two metal ions.
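As a consistency check, the reported calculated density follows from the cell contents via $D_x = Z M_r / (N_A V)$:

```python
# Cross-check of the calculated density D_x = Z * M_r / (N_A * V)
Mr, Z = 975.9, 4                     # g/mol, formula units per cell
a, b, c = 20.262, 15.717, 15.038     # Angstrom, orthorhombic cell
V = a * b * c                        # A^3; matches the reported 4788.97
NA = 6.02214e23
Dx = Z * Mr / (NA * V * 1e-24)       # g/cm^3, numerically equal to Mg/m^3
print(round(Dx, 2))                  # 1.35
```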
Item Type: Journal Article. Additional Information: Copyright of this article belongs to the International Union of Crystallography. Department/Centre: Division of Biological Sciences > Molecular Biophysics Unit. Depositing User: Shriram Pandey. Date Deposited: 15 Jul 2008. Last Modified: 19 Sep 2010. URI: http://eprints.iisc.ac.in/id/eprint/14812
|
Difference between revisions of "Abelian group"
A group <math>G</math> is an abelian group if and only if, in the [[external direct product]] <math>G \times G</math>, the diagonal subgroup <math>\{ (g,g) \mid g \in G \}</math> is a [[normal subgroup]].
==Metaproperties==
Revision as of 00:55, 21 October 2009
History
Origin of the term
The term abelian group comes from Niels Henrik Abel, a mathematician who worked with groups even before the formal theory was laid down, in order to prove the unsolvability of the quintic.
The word abelian is usually begun with a small a. (wikinote: Some older content on the wiki uses capital A for Abelian. We're trying to update this content.)
Definition
Symbol-free definition
An abelian group is a group where any two elements commute.
Definition with symbols
A group G is termed abelian if for any elements a and b in G, ab = ba (here ab denotes the product of a and b in G).
Full definition
An abelian group is a set A equipped with an (infix) binary operation + (called the addition or group operation), an identity element 0 and a (prefix) unary operation -, called the inverse map or negation map, satisfying the following:
For any a, b, c in A, (a + b) + c = a + (b + c). This property is termed associativity.
For any a in A, a + 0 = 0 + a = a. Thus 0 plays the role of an additive identity element or neutral element.
For any a in A, a + (-a) = (-a) + a = 0. Thus -a is an inverse element to a with respect to +.
For any a, b in A, a + b = b + a. This property is termed commutativity.
Equivalent formulations
A group is abelian if its center is the whole group. A group is abelian if its commutator subgroup is trivial.
Notation
When A is an abelian group, we typically use additive notation and terminology. Thus, the group multiplication is termed addition and the product of two elements is termed the sum. The infix operator + is used for the group multiplication, so the sum of two elements a and b is denoted by a + b. The identity element is typically denoted as 0 and termed zero. The inverse of an element is termed its negative or additive inverse; the inverse of a is denoted -a. The sum of a with itself n times is denoted na, while the sum of -a with itself n times is denoted (-n)a.
This convention is typically followed in a situation where we are dealing with the abelian group in isolation, rather than as a subgroup of a possibly non-abelian group. If we are working with subgroups in a non-abelian group, we typically use multiplicative notation even if the subgroup happens to be abelian.
Examples
Some infinite examples
The additive group of integers, the additive group of rational numbers, the additive group of real numbers, the multiplicative group of nonzero rationals, and the multiplicative group of nonzero real numbers are some examples of abelian groups.
(More generally, for any field, the additive group, and the multiplicative group of nonzero elements, are abelian groups.)
Finite examples
Cyclic groups are good examples of abelian groups; the cyclic group of order n is the group of integers modulo n.
Further, any direct product of cyclic groups is also an abelian group, and every finitely generated abelian group is obtained this way. This is the famous structure theorem for finitely generated abelian groups.
The structure theorem can be used to generate a complete listing of finite abelian groups, as described here: classification of finite Abelian groups.
Non-examples
Not every group is abelian. The smallest non-abelian group is the symmetric group on three letters: the group of all permutations on three letters, under composition. Its being non-abelian hinges on the fact that the order in which permutations are performed, matters.
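The claim about the symmetric group on three letters is easy to verify exhaustively; a short sketch treating permutations as tuples:

```python
from itertools import permutations

# S3: all permutations of {0, 1, 2} under composition
def compose(p, q):
    """(p o q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

elems = list(permutations(range(3)))   # 6 elements
pairs = [(p, q) for p in elems for q in elems if compose(p, q) != compose(q, p)]
print(len(elems), len(pairs) > 0)      # order 6, and some pairs fail to commute
```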
Facts
Occurrence as subgroups
Every cyclic group is abelian. Since each group is generated by its cyclic subgroups, every group is generated by a family of abelian subgroups. A trickier question is: do there exist abelian normal subgroups? A good candidate for an abelian normal subgroup is the center, which is the collection of elements of the group that commute with every element of the group.
Occurrence as quotients
The maximal abelian quotient of any group is termed its abelianization, and this is the quotient by the commutator subgroup. A subgroup is an abelian-quotient subgroup (i.e., normal with abelian quotient group) if and only if the subgroup contains the commutator subgroup.
Formalisms
In terms of the diagonal-in-square operator
This property is obtained by applying the diagonal-in-square operator to the property: normal subgroup.
Relation with other properties
Stronger properties
Cyclic group: generated by one element. Cyclic implies abelian; abelian does not imply cyclic.
Homocyclic group: direct product of isomorphic cyclic groups.
Finite abelian group: abelian and a finite group.
Finitely generated abelian group: abelian and a finitely generated group.
- Nilpotent group: lower central series reaches the identity, upper central series reaches the whole group (abelian implies nilpotent; nilpotent not implies abelian)
- Solvable group: derived series reaches the identity; has a normal series with abelian factor groups (abelian implies solvable; solvable not implies abelian)
- Metabelian group: has an abelian normal subgroup with abelian quotient group
- Virtually abelian group: has an abelian subgroup of finite index

Metaproperties

Varietal group property

This group property is a varietal group property, in the sense that the collection of groups satisfying this property forms a variety of algebras. In other words, the collection of groups satisfying this property is closed under taking subgroups, taking quotients, and taking arbitrary direct products.

Subgroups

This group property is subgroup-closed, viz., any subgroup of a group satisfying the property also satisfies the property.
Any subgroup of an abelian group is abelian -- viz., the property of being abelian is subgroup-closed. This follows as a direct consequence of abelianness being varietal.
For full proof, refer: Abelianness is subgroup-closed
Quotients

This group property is quotient-closed, viz., any quotient of a group satisfying the property also has the property.
Any quotient of an abelian group is abelian -- viz the property of being abelian is quotient-closed. This again follows as a direct consequence of abelianness being varietal.
For full proof, refer: Abelianness is quotient-closed
Direct products

This group property is direct product-closed, viz., the direct product of an arbitrary (possibly infinite) family of groups each having the property also has the property.
A direct product of abelian groups is abelian -- viz the property of being abelian is direct product-closed. This again follows as a direct consequence of abelianness being varietal.
For full proof, refer: Abelianness is direct product-closed
Testing

The testing problem
Further information: Abelianness testing problem
GAP command

This group property can be tested using built-in functionality of Groups, Algorithms, Programming (GAP). The GAP command for this group property is IsAbelian; the class of all groups with this property can be referred to with the built-in command AbelianGroups.
To test whether a group is abelian, the GAP syntax is:
IsAbelian(group)

where group either defines the group or gives the name to a group previously defined.

Study of this notion

Under the Mathematical Subject Classification, the study of this notion comes under the class 20K.

References: textbook references

- Abstract Algebra by David S. Dummit and Richard M. Foote, ISBN 978-0471433347, Page 17 (definition as point (2) in the general definition of a group)
- Groups and Representations by Jonathan Lazare Alperin and Rowen B. Bell, ISBN 0387945261, Page 2 (definition introduced in paragraph)
- Algebra by Michael Artin, ISBN 978-0130047632, Page 42 (defined immediately after the definition of group, as a group where the composition is commutative)
- Topics in Algebra by I. N. Herstein, Page 28 (formal definition)
- A Course in the Theory of Groups by Derek J. S. Robinson, ISBN 0387944613, Page 2 (formal definition)
- Finite Group Theory (Cambridge Studies in Advanced Mathematics) by Michael Aschbacher, ISBN 0521786754, Page 1 (definition introduced in paragraph) |
I learnt a new term for an intuition I've developed for a couple of different problems recently: the kernel method (or trick; "method" sounds more philosophical).

Firstly, some background: one of my hobby projects right now is applying the hallmark algorithm behind Google to the abstract syntax trees of codebases.
Here's what a visualisation looks like currently:
which is generated from files like this:
I want to see how well it models the important parts of the codebase. Only pursuing it out of intellectual curiosity for now, but it definitely paints a nice picture of my friend's minimal Go VPN for someone new to it.
The algorithm is very simple:
1. Parse code, build the AST
2. Recurse the AST, building a graph from links between identifiers
3. Run PageRank to calculate each identifier's importance
Step #2 looks like this:
The beauty of it rests in PageRank's versatility in determining how important a node is. In more general terms, we could say PageRank is extracting a feature called importance for every node in the codebase (where a node is an identifier such as a type, function, or argument). The interesting aspect is that the data it's given is simply identifiers and their usages; nodes and edges of a graph.

PageRank
The PageRank algorithm determines a webpage's importance by the importance of sites linking in to it, which is moderated by how many outbound links they make. Very similar to being friends with a celebrity and thus being more famous yourself, except if everyone is friends with Bob Marley then saying you knew him is less impressive than saying you smoked up with Satoshi Nakamoto.
The other aspect of PageRank is a damping factor \(d\), which simply means that as a web surfer you don't have all day to spend on the Internet, and likewise not enough time to maintain relationships with everyone in Madagascar (at least, not with that attitude).
So for any node \(N_i\), let \(I_i\) be the set of nodes linking in to it and \(O_j\) the set of outbound links of a node \(N_j\). In a graph of \(n\) nodes, the PageRank \(PR\) is:

$$PR(N_i) = \frac{1 - d}{n} + d \sum_{N_j \in I_i} \frac{PR(N_j)}{|O_j|}$$
Although the definition is recursive, it can be algebraically represented and iteratively computed (indeed, Google does it batchwise).
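The iteration itself fits in a few lines of Python; this is a minimal sketch (the toy graph and its identifier names are made up for illustration), with dangling nodes handled by spreading their rank uniformly, one common convention:

```python
def pagerank(links, d=0.85, iters=100):
    """Power iteration over a dict node -> list of outbound links.

    Implements PR(i) = (1 - d)/n + d * sum over inbound j of PR(j)/outdegree(j).
    """
    nodes = list(links)
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        for src, outs in links.items():
            if not outs:                       # dangling node: spread its rank everywhere
                for v in nodes:
                    new[v] += d * pr[src] / n
            else:
                for dst in outs:
                    new[dst] += d * pr[src] / len(outs)
        pr = new
    return pr

# Toy "codebase" graph: main uses handler and Config; handler uses Config.
graph = {"main": ["handler", "Config"], "handler": ["Config"], "Config": []}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))   # "Config": linked to by everything, so most important
```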
Teaching old DAGs new (kernel) tricks
The salient characteristic of PageRank, and of many other successful machine learning algorithms, is that we define how it computes features only from relative measures. In PageRank's case, I'm referring to the 'recursive' nature of its definition: a page's importance is defined in terms of the importance of the pages that link to it [^1: Although I'll note that PageRank diverges a bit from this definition. Note that the non-recursive variables are the total number of nodes and the damping factor, so it is defined on the graph itself, if you consider its general definition.].

In computer vision

I built an image alignment algorithm this semester for aligning thermographic images of breasts (for cancer detection). Using an algorithm that extracts feature descriptors, we detected the location of nipples throughout a whole dataset of images. Instead of manually engineering a kernel to check for the nipple, we can use SIFT to construct a representation automatically. Where this kernel is relatively defined is that it doesn't measure the skin colour of the pixels or the circular shape, but how each pixel's colour changes relative to each pixel surrounding it, and the gradient and magnitude of this change (such that you get an orientation). Here's a visualisation of what these gradients represent:
So instead of calculating the exact feature itself using a manually-engineered kernel, we calculate the distance of one datum from another using a kernel function, and use this relative distance as the feature. [^There's a link to hash functions and blockchains here, I'm sure...😉].
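A minimal sketch of the idea in Python, using the Gaussian (RBF) kernel as a stand-in for whichever kernel function is chosen: nothing about either datum is measured absolutely, only a similarity derived from their relative distance.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Similarity from relative distance only: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel((0.0, 0.0), (0.0, 0.0)))         # 1.0: identical data are maximally similar
print(rbf_kernel((0.0, 0.0), (3.0, 4.0)) < 0.01)  # True: similarity decays with distance
```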
Conclusion
And so this is called the kernel trick: instead of defining a kernel explicitly, we define a kernel function that "enables them to operate in a high-dimensional, implicit feature space" simply by the nature of relative intuition. Sounds like my life in general! |
We can conclude that $a=c,\ b=d$ when $a,b,c,d \in \mathbb{N}$, since in this case the equation becomes the prime factorisation of an integer, which is unique (by the fundamental theorem of arithmetic). By the same token, the condition $a=c,\ b=d$ can be extended over both $\mathbb{Z}$ and $\mathbb{Q}$ (i.e. integers and rationals).
In the more general case we have:
$2^{a}3^{b}= 2^{c}3^{d}$
$a \ln 2 + b \ln 3 = c \ln 2 + d \ln 3$
$\ln 2 \cdot (a-c) = \ln 3 \cdot (d-b)$
$\frac{a-c}{d-b} = \frac{\ln 3}{\ln 2}$
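The last relation can be checked numerically; a quick sketch solving for $a$ given the other three parameters:

```python
import math

def solve_a(b, c, d):
    """Solve 2**a * 3**b == 2**c * 3**d for a, via a = c + (d - b) * ln(3)/ln(2)."""
    return c + (d - b) * math.log(3) / math.log(2)

a = solve_a(b=1.0, c=2.0, d=2.0)
print(abs(2 ** a * 3 ** 1.0 - 2 ** 2.0 * 3 ** 2.0) < 1e-8)  # True: both sides equal 36
```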
So for any choice of three of the parameters, one can solve for the remaining parameter (when $d \ne b$ and $a,b,c,d \in \mathbb{R}$), thus there are infinitely many solutions in $\mathbb{R}$. |
I've got roughly halfway through this question:
For (fixed) x which is an element of the real numbers, consider the series
$\sum_{n=1}^\infty \frac{x^{n-1}}{2^nn} $
For which x does this series converge? For which x is the series conditionally convergent?
So far I've managed to deduce that for x=-1, the series is in oscillating harmonic form i.e.
$\sum_{n=1}^\infty \frac{x^{n-1}}{2^nn} = \sum_{n=1}^\infty \frac{1}{2^nn} \cdot (-1)^{n-1} $
here $a_n = \frac{1}{2^nn}$, and as $n$ approaches infinity, $\frac{1}{2^nn}$ approaches 0. Therefore the series converges according to the alternating series test.
For all other values between -1 and 1, the series must be convergent as $a_n$ is decreasing.
Therefore I concluded that the series is absolutely convergent in the interval [-1,1].
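A numerical sanity check is possible because for $|x| < 2$, $x \neq 0$, the series has the closed form $-\ln(1 - x/2)/x$ (substitute $y = x/2$ into the Mercator series $\sum_{n\ge1} y^n/n = -\ln(1-y)$ and divide by $x$); a minimal sketch:

```python
import math

def partial_sum(x, terms=200):
    """Partial sum of sum_{n>=1} x**(n-1) / (2**n * n)."""
    return sum(x ** (n - 1) / (2 ** n * n) for n in range(1, terms + 1))

# For |x| < 2 (x != 0) the series sums to -ln(1 - x/2) / x:
print(abs(partial_sum(1.0) - math.log(2)) < 1e-12)     # True: sums to ln 2 at x = 1
print(abs(partial_sum(-1.0) - math.log(1.5)) < 1e-12)  # True: converges at x = -1 too
```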
That's what I've got so far, but I'm quite uncomfortable with this area. Would anyone mind correcting/helping? |
I'd like to know if the following statement is true:

If $f : (0,1) \to \mathbb{R}$ is a strictly monotonically increasing function and $f$ is differentiable at some $x \in (0,1)$, is $f^{-1}$ then differentiable at $y = f(x)$?
Yes, if $f'(x)>0$; then $(f^{-1})'(y)=1/f'(x)$. But not if $f'(x)=0$.
I am afraid that is not true: $f = (x-1/2)^2$.
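A quick numerical sketch of how differentiability fails to transfer to the inverse when $f'(x) = 0$, using the strictly increasing $f(x) = (x - 1/2)^3$:

```python
def f(x):
    """Strictly increasing on (0, 1), with f'(1/2) = 0."""
    return (x - 0.5) ** 3

def g(y):
    """Inverse of f: g(y) = cbrt(y) + 1/2 (cube root taken with the sign of y)."""
    root = abs(y) ** (1.0 / 3.0)
    return (root if y >= 0 else -root) + 0.5

# The difference quotient of g at y = f(1/2) = 0 behaves like h**(-2/3),
# so it blows up as h -> 0: g is not differentiable at 0.
for h in (1e-3, 1e-6, 1e-9):
    print((g(h) - g(0)) / h)
```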
No. $f(x)=(x−1/2)^3$ is strictly increasing and $f'(x)=3(x-1/2)^2$ and $f'(1/2)=0$, and $g(y):=\sqrt[3]{y}+1/2$ satisfies $g=f^{-1}$ but $g'(0)$ does not exist, as $g'(y)=\frac13 y^{-2/3}$, even though $0=f(1/2)$. |
$\newcommand{\Spec}{\operatorname{Spec}}$ $\newcommand{\mSpec}{\operatorname{Max}}$
This is a homework from my algebra course. I am in a situation where I think I have found a solution, though somehow there's a condition in the question that I don't need.
Important: I don't want help on the problem itself, I just want to know what's wrong with my proof!
Exercise: Let $R$ be an integral domain with quotient field $K$. Let $M \subset K$ be an $R$-submodule of $K$. Then for each prime ideal $\mathfrak{p} \subset R$ we can regard $M_\mathfrak{p} \subset K$.
Show that $M = \bigcap_{\mathfrak{p} \in \Spec(R)} M_\mathfrak{p} = \bigcap_{\mathfrak{m} \in \mSpec(R)} M_\mathfrak{m}$
as $R$-submodules of $K$.
My proof: the inclusions from left to right are obvious (since $R$ is an integral domain and $M$ is torsion-free, the inclusion $M \rightarrow M_\mathfrak{p}$, $m \mapsto \frac{m}{1}$, is injective, so $M$ can be seen as an $R$-submodule of any of the $M_\mathfrak{p}$). The inclusion $\bigcap_{\mathfrak{p} \in \Spec(R)} M_\mathfrak{p} \subset \bigcap_{\mathfrak{m} \in \mSpec(R)} M_\mathfrak{m}$ is always trivial.
Now for the inclusion $\bigcap_{\mathfrak{m} \in \mSpec(R)} M_\mathfrak{m} \subset M$:
Let $\frac{1}{s} \cdot m \in \bigcap_{\mathfrak{m} \in \mSpec(R)} M_\mathfrak{m} \Rightarrow \forall \mathfrak{m} \in \mSpec(R): \frac{1}{s} \cdot m \in M_\mathfrak{m} \Rightarrow \forall \mathfrak{m} \in \mSpec(R): s \notin \mathfrak{m}$.
Therefore $s$ is a unit in $R$ and $\frac{1}{s} \cdot m = s^{-1} \cdot m \in M$.
However, I don't see where my proof uses the fact that $M$ is a submodule of $K$ (torsionfree and integral domain would be enough). This confuses me a bit, so I am afraid to have made a mistake. It would be nice if someone could check this solution and tell me what I have done wrong, because right now I seem to be blind. Thanks a lot.
Edit: I've just realized that all the $M_\mathfrak{p}$ must be submodules of some module $P$ for the $\bigcap$ to be defined. But is this really all the problem? |
@DavidReed the notion of a "general polynomial" is a bit strange. The general polynomial over a field always has Galois group $S_n$ even if there is no polynomial over the field with Galois group $S_n$
Hey guys. Quick question. What would you call it when the period/amplitude of a cosine/sine function is given by another function? E.g. y=x^2*sin(e^x). I refer to them as variable amplitude and period but upon google search I don't see the correct sort of equation when I enter "variable period cosine"
@LucasHenrique I hate them; I tend to find algebraic proofs more elegant than ones from analysis. They are tedious. Analysis is the art of showing you can make things as small as you please. The last two characters of every proof are $< \epsilon$
I enjoyed developing the Lebesgue integral though. I thought that was cool
But since every singleton except 0 is open, and the union of open sets is open, it follows that all intervals of the form $(a,b)$, $(0,c)$, $(d,0)$ are also open. Thus we can use these 3 classes of intervals as a base, which then intersect to give the nonzero singletons?
uh wait a sec...
... I need arbitrary intersection to produce singletons from open intervals...
hmm... 0 does not even have a nbhd, since any set containing 0 is closed
I have no idea how to deal with points having empty nbhd
o wait a sec...
the open set of any topology must contain the whole set itself
so I guess the nbhd of 0 is $\Bbb{R}$
Btw, looking at this picture, I think the alternate name for this class of topologies, the British Rail topology, is quite fitting (with the help of this WfSE to interpret, of course: mathematica.stackexchange.com/questions/3410/…)
Since, as Leaky has noticed, every point is closer to 0 than to any other point, to get from A to B, go to 0. The null line is then like a railway line which connects all the points together in the shortest time
So going from a to b directly is no more efficient than go from a to 0 and then 0 to b
hmm...
$d(A \to B \to C) = d(A,B)+d(B,C) = |a|+|b|+|b|+|c|$
$d(A \to 0 \to C) = d(A,0)+d(0,C)=|a|+|c|$
so the distance of travel depends on where the starting point is. If the starting point is 0, then distance only increases linearly for every unit increase in the value of the destination
But if the starting point is nonzero, then the distance increases quadratically
Combining this with the animation in the WfSE: in such a space, if one attempts to travel directly to the destination at, say, 3 m/s, then for every metre forward the actual progress covered at 3 m/s decreases (as illustrated by the shrinking open ball of fixed radius)
only when travelling via the origin does such a quadratic penalty in travelling distance not apply
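A small Python sketch of this metric (the standard "British Rail" metric on the line: distinct points connect only through 0), confirming the two distances computed above:

```python
def rail_distance(a, b):
    """'British Rail' metric on the real line: distinct points connect only via 0."""
    return 0.0 if a == b else abs(a) + abs(b)

a, b, c = 3.0, 5.0, -2.0
# Going A -> C directly costs the same as routing through the origin:
print(rail_distance(a, c) == rail_distance(a, 0.0) + rail_distance(0.0, c))  # True
# A detour through another nonzero point B is strictly worse:
print(rail_distance(a, b) + rail_distance(b, c) > rail_distance(a, c))       # True
```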
More interesting things can be said about slight generalisations of this metric:
Hi, looking at a graph isomorphism problem from the perspective of eigenspaces of the adjacency matrix, it gets a geometrical interpretation: the question whether two sets of points differ only by rotation - e.g. 16 points in 6D, forming a very regular polyhedron ...
To test if two sets of points differ by rotation, I thought to describe them as intersection of ellipsoids, e.g. {x: x^T P x = 1} for P = P_0 + a P_1 ... then generalization of characteristic polynomial would allow to test if our sets differ by rotation ...
1D interpolation: finding a polynomial satisfying $\forall_i\ p(x_i)=y_i$ can be written as a system of linear equations, having well known Vandermonde determinant: $\det=\prod_{i<j} (x_i-x_j)$. Hence, the interpolation problem is well defined as long as the system of equations is determined ($\d...
Any alg geom guys on? I know zilch about alg geom to even start analysing this question
Meanwhile I am going to analyse the SR metric later using open balls, after the chat proceeds a bit
To add to gj255's comment: The Minkowski metric is not a metric in the sense of metric spaces but in the sense of a metric on semi-Riemannian manifolds. In particular, it can't induce a topology. Instead, the topology on Minkowski space as a manifold must be defined before one introduces the Minkowski metric on said space. — balu, Apr 13 at 18:24
grr, thought I can get some more intuition in SR by using open balls
tbf there’s actually a third equivalent statement which the author does make an argument about, but they say nothing substantive about the first two.
The first two statements go like this : Let $a,b,c\in [0,\pi].$ Then the matrix $\begin{pmatrix} 1&\cos a&\cos b \\ \cos a & 1 & \cos c \\ \cos b & \cos c & 1\end{pmatrix}$ is positive semidefinite iff there are three unit vectors with pairwise angles $a,b,c$.
And all it has in the proof is the assertion that the above is clearly true.
I've a mesh specified as an half edge data structure, more specifically I've augmented the data structure in such a way that each vertex also stores a vector tangent to the surface. Essentially this set of vectors for each vertex approximates a vector field, I was wondering if there's some well k...
Consider $a,b$ both irrational and the interval $[a,b]$
Assuming the axiom of choice and CH, I can define an $\aleph_1$ enumeration of the irrationals by labelling them with ordinals from 0 all the way to $\omega_1$
It would seemed we could have a cover $\bigcup_{\alpha < \omega_1} (r_{\alpha},r_{\alpha+1})$. However the rationals are countable, thus we cannot have uncountably many disjoint open intervals, which means this union is not disjoint
This means, we can only have countably many disjoint open intervals such that some irrationals were not in the union, but uncountably many of them will
If I consider an open cover of the rationals in [0,1], the sum of whose length is less than $\epsilon$, and then I now consider [0,1] with every set in that cover excluded, I now have a set with no rationals, and no intervals.One way for an irrational number $\alpha$ to be in this new set is b...
Suppose you take an open interval I of length 1, divide it into countable sub-intervals (I/2, I/4, etc.), and cover each rational with one of the sub-intervals.Since all the rationals are covered, then it seems that sub-intervals (if they don't overlap) are separated by at most a single irrat...
(For ease of construction of enumerations, WLOG, the interval [-1,1] will be used in the proofs) Let $\lambda^*$ be the Lebesgue outer measure We previously proved that $\lambda^*(\{x\})=0$ where $x \in [-1,1]$ by covering it with the open cover $(-a,a)$ for some $a \in [0,1]$ and then noting there are nested open intervals with infimum tends to zero.
We also knew that by using the union $[a,b] = \{a\} \cup (a,b) \cup \{b\}$ for some $a,b \in [-1,1]$ and countable subadditivity, we can prove $\lambda^*([a,b]) = b-a$. Alternately, by using the theorem that $[a,b]$ is compact, we can construct a finite cover consists of overlapping open intervals, then subtract away the overlapping open intervals to avoid double counting, or we can take the interval $(a,b)$ where $a<-1<1<b$ as an open cover and then consider the infimum of this interval such that $[-1,1]$ is still covered. Regardless of which route you take, the result is a finite sum whi…
We also knew that one way to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ is to take the union of all singletons that are rationals. Since there are only countably many of them, by countable subadditivity this gives us $\lambda^*(\Bbb{Q}\cap [-1,1]) = 0$. We also knew that one way to compute $\lambda^*(\Bbb{I}\cap [-1,1])$ is to use $\lambda^*(\Bbb{Q}\cap [-1,1])+\lambda^*(\Bbb{I}\cap [-1,1]) = \lambda^*([-1,1])$, thus deducing $\lambda^*(\Bbb{I}\cap [-1,1]) = 2$
However, what I am interested here is to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ and $\lambda^*(\Bbb{I}\cap [-1,1])$ directly using open covers of these two sets. This then becomes the focus of the investigation to be written out below:
We first attempt to construct an open cover $C$ for $\Bbb{I}\cap [-1,1]$ in stages:
First denote an enumeration of the rationals as follows:
$\frac{1}{2},-\frac{1}{2},\frac{1}{3},-\frac{1}{3},\frac{2}{3},-\frac{2}{3}, \frac{1}{4},-\frac{1}{4},\frac{3}{4},-\frac{3}{4},\frac{1}{5},-\frac{1}{5}, \frac{2}{5},-\frac{2}{5},\frac{3}{5},-\frac{3}{5},\frac{4}{5},-\frac{4}{5},...$ or in short:
Actually wait: since, as the sequence grows, any rational of the form $\frac{p}{q}$ where $|p-q| > 1$ will lie somewhere in between two consecutive terms of the sequence $\{\frac{n+1}{n+2}-\frac{n}{n+1}\}$, and the latter does tend to zero as $n \to \infty$, it follows that all intervals will have an infimum of zero
However, any interval must contain uncountably many irrationals, so (somehow) the infimum of the union of them all is nonzero. Need to figure out how this works...
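The ε-cover idea from earlier can be made concrete: give the k-th rational in the enumeration an interval of length ε/2^(k+1), so the total length is a geometric series staying below ε. A small Python sketch (the enumeration helper is my own, following the ±p/q listing above):

```python
from fractions import Fraction

def enumerate_rationals(limit):
    """First `limit` rationals in (-1, 1), in the +-p/q order listed above."""
    seen, out = set(), []
    q = 2
    while True:
        for p in range(1, q):
            for r in (Fraction(p, q), Fraction(-p, q)):
                if r not in seen:          # Fraction normalises, so 2/4 == 1/2 is skipped
                    seen.add(r)
                    out.append(r)
                    if len(out) == limit:
                        return out
        q += 1

# Cover the k-th rational with an open interval of length eps / 2**(k+1);
# the lengths form a geometric series, so the total stays below eps.
eps = 0.01
rats = enumerate_rationals(1000)
total = sum(eps / 2 ** (k + 1) for k in range(len(rats)))
print(total < eps)  # True: countably many rationals fit inside total length < eps
```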
Let's say that for $N$ clients, Lotta will take $d_N$ days to retire.
For $N+1$ clients, clearly Lotta will have to make sure all of the first $N$ clients don't feel mistreated. Therefore, she'll take the $d_N$ days to make sure they are not mistreated. Then she visits client $N+1$. Obviously that client won't feel mistreated anymore. But now the first $N$ clients are mistreated again, and therefore she'll start her algorithm once more and take (by supposition) $d_N$ days to make sure all of them are not mistreated. Therefore we have the recurrence $d_{N+1} = 2d_N + 1$
where $d_1 = 1$.
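The recurrence unrolls to the closed form $d_N = 2^N - 1$; a few lines of Python to check:

```python
def days(n):
    """Days for Lotta to retire with n clients: d_1 = 1, d_{N+1} = 2*d_N + 1."""
    d = 1
    for _ in range(n - 1):
        d = 2 * d + 1
    return d

print([days(n) for n in range(1, 6)])                    # [1, 3, 7, 15, 31]
print(all(days(n) == 2 ** n - 1 for n in range(1, 20)))  # True: closed form 2**n - 1
```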
Yet we have $1 \to 2 \to 1$, which has $3 = d_2 \neq 2^2$ steps. |
First, note that hybridisation is a mathematical concept which can be applied to interpret a bonding situation. It has no physical meaning whatsoever. Instead, it helps us to understand the direction of bonds better.
Second, note that the second period usually behaves quite differently from the remaining elements in a group. So in a way, ammonia behaves unnaturally, or anomalously.
If you compare nitrogen with phosphorus, you will note that the former is much smaller than the latter, i.e. van der Waals radii $r(\ce{N})=155~\mathrm{pm};\ r(\ce{P})=180~\mathrm{pm}$ (ref. Wikipedia), covalent radii $r(\ce{N})=71~\mathrm{pm};\ r(\ce{P})=107~\mathrm{pm}$ (ref. Wikipedia). The orbitals of nitrogen are therefore also smaller, and the $\ce{s}$ and $\ce{p}$ orbitals will occupy more of the same space than in phosphorus. As a result, the $\ce{N-H}$ bond distance will naturally also be shorter.
A lone pair is usually most stable in an orbital that has high $\ce{s}$ character. Bonds will most likely be formed with the higher lying $\ce{p}$ orbitals. The orientation of these towards each other is exactly $90^\circ$.
In ammonia this would lead to very close $\ce{H\cdots{}H}$ contacts, which are repulsive and therefore the hydrogen atoms are pushed away from each other. This is possible since in the second period the $\ce{s-p}$ splitting is still very small and the nitrogen $\ce{s}$ orbital is accessible for the hydrogen atoms. This will ultimately result in mixing $\ce{s}$ and $\ce{p}$ orbitals for nitrogen in the respective molecular orbitals. This phenomenon can be referred to as hybridisation - the linear combination of orbitals from the same atom. This term is therefore somewhat independent from its most common usage.
It is also very important to know that the molecular wavefunction of a molecule has to reflect its overall symmetry. In this case it is $C_{3v}$, which means there is a threefold rotational axis and three vertical mirror planes (the axis is an element of these planes). This also gives rise to degenerate orbitals. A canonical orbital picture has to reflect this property (BP86/cc-pVDZ; valence orbitals are ordered with increasing energy from left to right).
Note that the lowest lying valence molecular orbital is formed only from $\ce{s}$ orbitals (there is one additional $\ce{1s^2-N}$ core orbital). Now Natural Bond Orbital (NBO) theory can be used to transform these delocalised molecular orbitals into a more common and familiar bonding picture, making use of atomic hybrid orbitals. This method is called localising orbitals, but it comes at the expense of losing the energy eigenvalue that may be assigned to canonical orbitals (NBO@BP86/cc-pVDZ; valence NBO cannot be ordered by energy levels). In this theory you will find three equivalent $\ce{N-H}$ bonds, composed of $32\%~\ce{1s-H}$ and $68\%~\ce{s^{$0.87$}p^3-N}\approx\ce{sp^3-N}$ orbitals. Note that the lone pair orbital at nitrogen has a slightly higher $\ce{s}$ orbital contribution, i.e. $\ce{s^{1.42}p^3-N}\approx\ce{sp^3-N}$.
So the thermodynamically most favoured angle is found to be $107^\circ$ due to a compromise between optimal orbital overlap and least internuclear repulsion.
The canonical bonding picture in phosphine is very similar to that of ammonia, only the orbitals are larger. Even in this case it would be wrong to assume that there is no hybridisation present at all. However, the biggest contribution to the molecular orbitals stems from the $\ce{p}$ orbitals at phosphorus.
Applying the localisation scheme, one ends up with a different bonding picture. There are three equivalent $\ce{P-H}$ bonds, composed of $48\%~\ce{1s-H}$ and $52\%~\ce{s^{$0.5$}p^3-P}$ orbitals. The lone pair at phosphorus is composed of $57\%\ce{s} + 43\%\ce{p}$ orbitals.
One can also see the difference between the molecules in their inversion barriers: while for ammonia the inversion is readily accessible at room temperature, $\Delta E \approx 6~\mathrm{kcal/mol}$, it is very slow for phosphine, $\Delta E \approx 40~\mathrm{kcal/mol}$.
This is mostly due to the fact that the nitrogen-hydrogen bonds already have a significant $\ce{s}$ orbital contribution, which can easily be increased to form the planar molecule with formally $\ce{sp^2}$ hybrids. |
To prove the theorem we are going to need the following intermediate result.
Theorem. Let $\mathbf{y} \sim N_n \left(0, \sigma^2 \mathbf{I}_n \right)$ and let $ Q = \sigma^{-2} \mathbf{y}^{\prime} \mathbf{A} \mathbf{y}$ for a symmetric matrix $\mathbf{A}$ of rank $r$. Then if $\mathbf{A}$ is idempotent, Q has a $\chi^2 (r)$ distribution.
The theorem extends to the other direction as well but we only need the sufficiency so we will just prove this and we will do so using the eigenvalue-eigenvector decomposition of an idempotent matrix.
It is important to note that we first consider the case of mean zero. We will relax this assumption afterwards. But for now recall that for a square matrix of rank r, say $\mathbf{A}$
$$\mathbf{A} = \sum_{i=1}^{r} \lambda_i \mathbf{c}_i \mathbf{c}_i^{\prime}$$
where the lambdas are the eigenvalues and the $\mathbf{c}_i$s the corresponding eigenvectors. All pretty standard so far. Now if we additionally restrict $\mathbf{A}$ to be symmetric two things happen:
The eigenvalues are real-valued
Eigenvectors corresponding to different eigenvalues are
orthogonal
These are consequences of the so-called Spectral Theorem of linear algebra and you can consult any good textbook for a proof. What does that do for us? You are about to see why symmetry is required. Let's write down the decomposition of our $\mathbf{A}$ in our quadratic form and see what happens.
$$\sigma^{-2} \mathbf{y}^{\prime} \mathbf{A} \mathbf{y} = \sigma^{-2} \mathbf{y}^{\prime} \left( \sum_{i=1}^r \lambda_i \mathbf{c}_i \mathbf{c}_i^{\prime} \right) \mathbf{y} = \sum_{i=1}^r \lambda_i \left( \sigma^{-1} \mathbf{c}_i^{\prime} \mathbf{y} \right)^2 \tag{1}$$
We have written our quadratic form as weighted squared projections onto orthogonal axes. Let's now investigate the distribution of $\mathbf{c}_i ^{\prime} \mathbf{y}$ and $ \mathbf{c}_j ^{\prime} \mathbf{y}$, $i \neq j$. By basic rules
$$\sigma^{-1} \mathbf{c}_i ^{\prime} \mathbf{y} \sim N(0, \underbrace{\mathbf{c}_i^{\prime} \mathbf{c}_i}_{=1} ) $$
as the eigenvectors are not unique and therefore can be rescaled without loss of generality to have length one. Next,
$$Cov\left(\sigma^{-1} \mathbf{c}_i ^{\prime} \mathbf{y} , \sigma^{-1} \mathbf{c}_j ^{\prime} \mathbf{y} \right) = \sigma^{-2}\mathbf{c}_i ^{\prime} \left(\sigma^2 \mathbf{I}_n\right) \mathbf{c}_j = \mathbf{c}_i^{\prime} \mathbf{c}_j = 0, \ \ i \neq j $$
by the second implication of the Spectral Theorem. Thus our summands are uncorrelated. By the normality they are also independent. It is easy to see then that the sum consists of weighted $\chi^2$ random variables (weighted by the eigenvalues). In this thread it was asked whether this follows the $\chi^2$ distribution regardless. The answer is no of course.
Enter the idempotence. It easily follows from the definition of an idempotent matrix and the eigenvalue/eigenvector problem that an idempotent matrix has eigenvalues equal to either one or zero. Since by assumption the matrix $\mathbf{A}$ has rank $r$, there are $r$ eigenvalues equal to one. Therefore
$$\sigma^{-2} \mathbf{y}^{\prime} \mathbf{A} \mathbf{y} = \sum_{i=1}^r \left( \sigma^{-1} \mathbf{c}_i^{\prime} \mathbf{y} \right)^2 \sim \chi^2 (r) $$
And this completes the proof.
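A quick Monte Carlo sanity check of this theorem (my own sketch): take $\mathbf{A}$ to be the projection onto the first $r$ coordinates, which is symmetric, idempotent and of rank $r$, so $Q$ reduces to a sum of $r$ squared standard normals, and a $\chi^2(r)$ variable has mean $r$ and variance $2r$.

```python
import random
import statistics

random.seed(0)

# A = projection onto the first r of n coordinates: symmetric, idempotent,
# rank r. Then Q = y'Ay collapses to y_1^2 + ... + y_r^2 for y ~ N(0, I_n)
# (sigma = 1 here), so we can sample Q without forming A explicitly.
r, trials = 3, 20000
qs = [sum(random.gauss(0, 1) ** 2 for _ in range(r)) for _ in range(trials)]

print(round(statistics.mean(qs), 1))    # close to r = 3
print(round(statistics.variance(qs)))   # close to 2r = 6
```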
What happens now if the vector $y$ has nonzero mean? No harm done, we will just use the definition of the non-central $\chi^2$ distribution to conclude that if
$$\mathbf{y} \sim N_n \left( \boldsymbol{\mu}, \sigma^2 \mathbf{I}_n \right)$$
then
$$\sigma^{-2} \mathbf{y}^{\prime} \mathbf{A} \mathbf{y} \sim \chi^2 \left(r,\ \sigma^{-2} \boldsymbol{\mu}^{\prime} \mathbf{A} \boldsymbol{\mu} \right) $$
where the second term indicates the non-centrality parameter. (Due to force of habit, I will skip the division by $2$ but you can just do it your way).
We are now ready to prove the required result.
Theorem. Suppose $\Sigma$ is a $n \times n$ positive definite and symmetric matrix, $A$ is a $n \times n$ symmetric matrix with rank $r$, and $(A\Sigma)^2 = A\Sigma$.
Then $\mathbf{y} \sim N(\boldsymbol{\mu}, \Sigma) \implies Q = \mathbf{y}^{\prime}A\mathbf{y} \sim \chi^2 \left(r, \boldsymbol{\mu}^{\prime} \mathbf{A}\boldsymbol{\mu} \right)\text{.}$
We are given that
$$\mathbf{A\Sigma A \Sigma} = \mathbf{A\Sigma}$$
from which it follows that
$$\mathbf{A\Sigma A} = \mathbf{A}$$
and hence we may rewrite our quadratic form as
$$Q = \left( \boldsymbol{\Sigma}^{-1/2} \mathbf{y} \right)^{\prime} \boldsymbol{\Sigma}^{1/2} \mathbf{A} \boldsymbol{\Sigma}^{1/2} \left( \boldsymbol{\Sigma}^{-1/2} \mathbf{y} \right) \tag{2} $$
Since by assumption $\boldsymbol{\Sigma}$ is a positive definite matrix, its square root is always well-defined and you can check that using the eigenvalue/eigenvector decomposition (You can also check that equation ($2$) is equivalent to equation ($1$) !)
Of course you would agree that $\boldsymbol{\Sigma}^{-1/2} \mathbf{y} \sim N_n \left( \boldsymbol{\Sigma}^{-1/2} \boldsymbol{\mu} , \mathbf{I}_n \right)$. If we temporarily assume that $\boldsymbol{\mu}=0$ we would be in the situation of the first theorem, right? Well, not exactly - we still have to show that the middle matrix is idempotent but that is easy enough under our assumptions.
$$\boldsymbol{\Sigma}^{1/2} \mathbf{A} \boldsymbol{\Sigma}^{1/2} \boldsymbol{\Sigma}^{1/2} \mathbf{A} \boldsymbol{\Sigma}^{1/2} = \boldsymbol{\Sigma}^{1/2} \mathbf{A} \boldsymbol{\Sigma}^{1/2}$$
Therefore if $\mathbf{y} \sim N_n \left(\mathbf{0}, \boldsymbol{\Sigma} \right)$, $Q \sim \chi^2 \left(r \right)$ which implies that if we now switch back to the situation of nonzero mean, we would have
$$ Q = \mathbf{y}^{\prime}\mathbf{A}\mathbf{y} \sim \chi^2 \left(r, \boldsymbol{\mu}^{\prime} \mathbf{A}\boldsymbol{\mu} \right)$$
as required. $\square$ |
Speaker
Mr Jakkree Boonlakhorn (Materials Science and Nanotechnology Program, Faculty of Science, Khon Kaen University, Khon Kaen, THAILAND 40002)
Description
In this research work, the giant dielectric response of Ca${}_{1-3x/2}$Yb${}_x$Cu${}_3$Ti${}_4$O${}_{12}$ ($x$ = 0, 0.05, 0.15) ceramics, prepared by a modified sol-gel method and sintered at 1100 ${}^{\circ}$C for 6 and 12 h, was investigated as a function of temperature and frequency. A single phase of CaCu${}_3$Ti${}_4$O${}_{12}$ was obtained in all ceramic samples. Grain growth of Ca${}_{1-3x/2}$Yb${}_x$Cu${}_3$Ti${}_4$O${}_{12}$ ceramics was effectively inhibited by Yb${}^{3+}$ doping ions, which can be attributed to the solute-drag effect of the Yb${}^{3+}$ dopant ions. High dielectric permittivity ($\sim$10${}^4$) and very low loss tangent ($\sim$0.01$-$0.02) at 1 kHz, with good temperature stability of ${\varepsilon}^{\prime}$ over the range -55 to 125 ${}^{\circ}$C, were achieved in the Ca${}_{0.925}$Yb${}_{0.05}$Cu${}_3$Ti${}_4$O${}_{12}$ ceramic. Furthermore, the dielectric permittivity was found to be nearly independent of frequency (10${}^2 -$10${}^6$ Hz) and dc bias voltage (0$-$40 V). Interestingly, the grain-boundary resistances of Ca${}_{1-3x/2}$Yb${}_x$Cu${}_3$Ti${}_4$O${}_{12}$ ceramics at room temperature were calculated from the activation energies and found to be $\sim$0.7$-$12.5 G$\Omega\cdot$cm. The effect of annealing in an O${}_2$ atmosphere on the dielectric properties was also investigated. It was suggested that the variations in dielectric properties of Ca${}_{1-3x/2}$Yb${}_x$Cu${}_3$Ti${}_4$O${}_{12}$ ceramics due to Yb${}^{3+}$ substitution and annealing were associated with the electrical response at the grain boundaries.
Primary author
Mr Jakkree Boonlakhorn (Materials Science and Nanotechnology Program, Faculty of Science, Khon Kaen University, Khon Kaen, THAILAND 40002)
Co-authors
Mr Bundit Putasaeng (National Metal and Materials Technology Center (MTEC), Thailand Science Park, Pathumthani, THAILAND 12120) Dr Prasit Thongbai (Department of Physics, Faculty of Science, Khon Kaen University, Khon Kaen, THAILAND 40002) Prof. Santi Maensiri (School of Physics, Institute of Science, Suranaree Universiy, Nakhon Ratchasima, THAILAND 30000) Dr Teerapon Yamwong (National Metal and Materials Technology Center (MTEC), Thailand Science Park, Pathumthani, THAILAND 12120) |
Dark Energy Survey Year 1 Results: Cross-correlation between DES Y1 galaxy weak lensing and SPT+Planck CMB weak lensing
Abstract
We cross-correlate galaxy weak lensing measurements from the Dark Energy Survey (DES) year-one (Y1) data with a cosmic microwave background (CMB) weak lensing map derived from South Pole Telescope (SPT) and Planck data, with an effective overlapping area of 1289 deg$^{2}$. With the combined measurements from four source galaxy redshift bins, we reject the hypothesis of no lensing with a significance of $10.8\sigma$. When employing angular scale cuts, this significance is reduced to $6.8\sigma$, which remains the highest signal-to-noise measurement of its kind to date. We fit the amplitude of the correlation functions while fixing the cosmological parameters to a fiducial $\Lambda$CDM model, finding $A = 0.99 \pm 0.17$. We additionally use the correlation function measurements to constrain shear calibration bias, obtaining constraints that are consistent with previous DES analyses. Finally, when performing a cosmological analysis under the $\Lambda$CDM model, we obtain the marginalized constraints of $\Omega_{\rm m}=0.261^{+0.070}_{-0.051}$ and $S_{8}\equiv \sigma_{8}\sqrt{\Omega_{\rm m}/0.3} = 0.660^{+0.085}_{-0.100}$. These measurements are used in a companion work that presents cosmological constraints from the joint analysis of two-point functions among galaxies, galaxy shears, and CMB lensing using DES, SPT and Planck data.
Research Org.: Argonne National Lab. (ANL), Argonne, IL (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brookhaven National Lab. (BNL), Upton, NY (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
Sponsoring Org.: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25)
Contributing Org.: DES; SPT
OSTI Identifier: 1487048
Report Number(s): arXiv:1810.02441; FERMILAB-PUB-18-513-AE
DOE Contract Number: AC02-07CH11359
Resource Type: Journal Article
Journal Name: TBD
Country of Publication: United States
Language: English
Subject: 79 ASTRONOMY AND ASTROPHYSICS
Omori, Y., et al. "Dark Energy Survey Year 1 Results: Cross-correlation between DES Y1 galaxy weak lensing and SPT+Planck CMB weak lensing." United States, 2018. https://www.osti.gov/servlets/purl/1487048.
[Figure: $s(z)$ for the 4 tomographic bins for Metacalibration. The black line shows the CMB lensing kernel.]
The values below are relative permittivity \(\epsilon_r \triangleq \epsilon/\epsilon_0\) for a few materials that are commonly encountered in electrical engineering applications, and for which permittivity emerges as a consideration. Note that "relative permittivity" is sometimes referred to as the dielectric constant.
Here we consider only the physical (real-valued) permittivity, which is the real part of the complex permittivity (typically indicated as \(\epsilon'\) or \(\epsilon_r'\)) for materials exhibiting significant loss.
Permittivity varies significantly as a function of frequency. The values below are representative of frequencies from a few kHz to about 1 GHz. The values given are also representative of optical frequencies for materials such as silica that are used in optical applications. Permittivity also varies as a function of temperature. In applications where precision better than about 10% is required, primary references accounting for frequency and temperature should be consulted. The values presented here are gathered from a variety of references, including those indicated in “Additional References.”
Free Space (vacuum): \(\epsilon_r \triangleq 1\)
Material — \(\epsilon_r\) — common uses:
- Styrofoam\(^1\): 1.1
- Teflon\(^2\): 2.1
- Polyethylene: 2.3 (coaxial cable)
- Polypropylene: 2.3
- Silica: 2.4 (optical fiber\(^3\))
- Polystyrene: 2.6
- Polycarbonate: 2.8
- Rogers RO3003: 3.0 (PCB substrate)
- FR4 (glass epoxy laminate): 4.5 (PCB substrate)
\(^1\) Properly known as extruded polystyrene foam (XPS).
\(^2\) Properly known as polytetrafluoroethylene (PTFE).
\(^3\) Typically doped with small amounts of other materials to slightly raise or lower the index of refraction (\(=\sqrt{\epsilon_r}\)).
Non-conducting spacing materials used in discrete capacitors exhibit \(\epsilon_r\) ranging from about 5 to 50.
Semiconductors commonly appearing in electronics – including carbon, silicon, germanium, indium phosphide, and so on – typically exhibit \(\epsilon_r\) in the range 5–15.
Glass exhibits \(\epsilon_r\) in the range 4–10, depending on composition.
Gases, including air, typically exhibit \(\epsilon_r\cong 1\) to within a tiny fraction of a percent.
Liquid water typically exhibits \(\epsilon_r\) in the range 72–81. Distilled water exhibits \(\epsilon_r \approx 81\) at room temperature, whereas sea water tends to be at the lower end of the range.
Other liquids typically exhibit \(\epsilon_r\) in the range 10–90, with considerable variation as a function of temperature and frequency. Animal flesh and blood consists primarily of liquid matter and so also exhibits permittivity in this range.
Soil typically exhibits \(\epsilon_r\) in the range 2.5–3.5 when dry and higher when wet. The permittivity of soil varies considerably depending on composition.
Contributors
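Footnote 3's relation \(n=\sqrt{\epsilon_r}\) can be spot-checked against the table's silica entry:

```python
import math

eps_r_silica = 2.4                    # silica value from the table above
n_silica = math.sqrt(eps_r_silica)    # index of refraction n = sqrt(eps_r)
# n_silica is about 1.55
```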
Ellingson, Steven W. (2018) Electromagnetics, Vol. 1. Blacksburg, VA: VT Publishing. https://doi.org/10.21061/electromagnetics-vol-1 Licensed with CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0. Report adoption of this book here. If you are a professor reviewing, adopting, or adapting this textbook please help us understand a little more about your use by filling out this form. |
Generally speaking, if you have two or three sources of noise, you are still going to be much better off pricing American options on a lattice than via LSMC. Too often, LSMC becomes the refuge of academics lacking the patience to learn proper lattice techniques.
Now, you can frequently reduce the difficulty of pricing American options by considering the American exercise premium $P$, defined as the difference in value between an American-exercise option and its European-exercise equivalent
$$P = A - E$$
If you have some complicated stochastic model, but enjoy a technique $f(\cdot)$ for pricing European-exercise options
$$\tilde{E} = f(x_E;\vec\mu)$$
and you can define some much simpler model $g(\cdot)$ that is good enough for estimating the premium
$$\tilde{P} \approx g(x_A; \vec\nu) - g(x_E; \vec\nu)$$
then your American option price can be estimated as
$$\tilde{A} \approx \tilde{E} + \tilde{P}$$
If the American exercise premium is large, then the relative error in $\tilde{P}$ will be important and this trick will not work as well.
Also, if the exercise probability is large, or exercise is likely to happen long before the option tenor, then the trick will fail, since we have introduced a dependency on $\vec\mu$ at (European) timescales well past the timescales relevant to the actual American option.
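As a sketch of this decomposition (everything here is an illustrative assumption: Black-Scholes stands in for the "good" European pricer $f(\cdot)$, and a CRR binomial tree plays the role of the simple model $g(\cdot)$, priced under both exercise styles so the premium estimate is internally consistent):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_put(S, K, r, sigma, T):
    # Stand-in for f(.): plain Black-Scholes European put.
    d1 = (math.log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

def crr_put(S, K, r, sigma, T, steps=200, american=False):
    # Stand-in for g(.): CRR binomial tree, either exercise style.
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    vals = [max(K - S * u ** j * d ** (steps - j), 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * ((1 - p) * vals[j] + p * vals[j + 1])
            if american:
                cont = max(cont, K - S * u ** j * d ** (i - j))
            vals[j] = cont
    return vals[0]

S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
# P ~ g(A) - g(E), then A ~ f(E) + P
P_tilde = crr_put(S, K, r, sigma, T, american=True) - crr_put(S, K, r, sigma, T)
A_tilde = bs_put(S, K, r, sigma, T) + P_tilde
```

For these at-the-money parameters the premium is small relative to the option value, which is exactly the regime where the decomposition is useful.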
CentralityBin ()
CentralityBin (const char *name, Float_t low, Float_t high)
CentralityBin (const CentralityBin &other)
virtual ~CentralityBin ()
CentralityBin & operator= (const CentralityBin &other)
Bool_t IsAllBin () const
Bool_t IsInclusiveBin () const
const char * GetListName () const
virtual void CreateOutputObjects (TList *dir, Int_t mask)
virtual Bool_t ProcessEvent (const AliAODForwardMult *forward, UInt_t triggerMask, Bool_t isZero, Double_t vzMin, Double_t vzMax, const TH2D *data, const TH2D *mc, UInt_t filter, Double_t weight)
virtual Double_t Normalization (const TH1I &t, UShort_t scheme, Double_t trgEff, Double_t &ntotal, TString *text) const
virtual void MakeResult (const TH2D *sum, const char *postfix, bool rootProj, bool corrEmpty, Double_t scaler, Int_t marker, Int_t color, TList *mclist, TList *truthlist)
virtual bool End (TList *sums, TList *results, UShort_t scheme, Double_t trigEff, Double_t trigEff0, Bool_t rootProj, Bool_t corrEmpty, Int_t triggerMask, Int_t marker, Int_t color, TList *mclist, TList *truthlist)
Int_t GetColor (Int_t fallback=kRed+2) const
void SetColor (Color_t colour)
TList * GetResults () const
const char * GetResultName (const char *postfix="") const
TH1 * GetResult (const char *postfix="", Bool_t verbose=true) const
void SetDebugLevel (Int_t lvl)
void SetSatelliteVertices (Bool_t satVtx)
virtual void Print (Option_t *option="") const
const Sum * GetSum (Bool_t mc=false) const
Sum * GetSum (Bool_t mc=false)
const TH1I * GetTriggers () const
TH1I * GetTriggers ()
const TH1I * GetStatus () const
TH1I * GetStatus ()
Calculations done per centrality. These objects are only used internally and are never streamed. We do not make dictionaries for this class (and derived classes) as they are constructed on the fly.
Definition at line 701 of file AliBasedNdetaTask.h.
Calculate the Event-Level normalization.
The full event level normalization for trigger \(X\) is given by
\begin{eqnarray*} N &=& \frac{1}{\epsilon_X} \left(N_A+\frac{N_A}{N_V}(N_{-V}-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{1}{N_V}(N_T-N_V-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{N_T}{N_V}-1-\frac{\beta}{N_V}\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(\frac{1}{\epsilon_V}-\frac{\beta}{N_V}\right) \end{eqnarray*}
where
- \(\epsilon_X=\frac{N_{T,X}}{N_X}\) is the trigger efficiency evaluated in simulation.
- \(\epsilon_V=\frac{N_V}{N_T}\) is the vertex efficiency evaluated from the data.
- \(N_X\) is the Monte-Carlo truth number of events of type \(X\).
- \(N_{T,X}\) is the Monte-Carlo truth number of events of type \(X\) which were also triggered as such.
- \(N_T\) is the number of data events that were triggered as type \(X\) and had a collision trigger (CINT1B).
- \(N_V\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex.
- \(N_{-V}\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), but no vertex.
- \(N_A\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex in the selected range.
- \(\beta=N_a+N_c-N_e\) is the number of control triggers that were also triggered as type \(X\), where
  - \(N_a\) is the number of beam-empty events also triggered as type \(X\) (CINT1-A or CINT1-AC),
  - \(N_c\) is the number of empty-beam events also triggered as type \(X\) (CINT1-C), and
  - \(N_e\) is the number of empty-empty events also triggered as type \(X\) (CINT1-E).
Note that if \( \beta \ll N_A\), the last term can be ignored and the expression simplifies to
\[ N = \frac{1}{\epsilon_X}\frac{1}{\epsilon_V}N_A \]
Parameters
t: Histogram of triggers.
scheme: Normalisation scheme.
trgEff: Trigger efficiency.
ntotal: On return, the total number of events to normalise to.
text: If non-null, filled with the normalization calculation.
Returns: \(N_A/N\), or a negative number in case of errors.
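As a numerical cross-check of the full and simplified expressions above (the event counts here are illustrative, not real data):

```python
def normalization_full(N_A, N_V, N_T, beta, eps_X):
    # N = (1/eps_X) * N_A * (1/eps_V - beta/N_V), with eps_V = N_V / N_T
    eps_V = N_V / N_T
    return (N_A / eps_X) * (1.0 / eps_V - beta / N_V)

def normalization_simplified(N_A, N_V, N_T, eps_X):
    # N = N_A / (eps_X * eps_V), valid when beta << N_A
    eps_V = N_V / N_T
    return N_A / (eps_X * eps_V)

# Illustrative counts only:
full = normalization_full(9000.0, 9500.0, 10000.0, 50.0, 0.9)
simple = normalization_simplified(9000.0, 9500.0, 10000.0, 0.9)
# The two agree exactly when beta = 0, and the beta term lowers N slightly.
```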
Definition at line 1784 of file AliBasedNdetaTask.cxx.
Referenced by End(). |
Basically 2 strings, $a>b$, which go into the first box, which does division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box..
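In code, the two boxes compose like this (a sketch; `division_box` and `gcd_boxes` are made-up names for the two boxes):

```python
def division_box(a, b):
    # Outputs (q, r) with a = b*q + r and 0 <= r < b.
    return divmod(a, b)

def gcd_boxes(a, b):
    q, r = division_box(a, b)
    if r == 0:              # the check box: done, the answer is b
        return b
    return gcd_boxes(b, r)  # feed (b, r) back into the division box
```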
There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of row?
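Not a proof, but the claim can at least be sanity-checked numerically; a quick sketch (the helper `det_expand` is mine, written for illustration):

```python
def det_expand(A, row):
    # Cofactor expansion of det(A) along the given row.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete the chosen row and column j.
        minor = [r[:j] + r[j + 1:] for k, r in enumerate(A) if k != row]
        total += (-1) ** (row + j) * A[row][j] * det_expand(minor, 0)
    return total

A = [[2, 1, 3], [0, 4, 1], [5, 2, 2]]
# Expanding along any row gives the same value (-43 for this A).
```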
Let $M$ and $N$ be $\mathbb{Z}$-modules and let $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, given that $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
Yes, this is true. The case for Lie groups is pretty easy (and I explained it here). For the general case, one can prove this using the fact that compact groups are inverse limits of Lie groups.
Let $G$ be a compact abelian group, and denote $A=G/G^0$. By a corollary of Peter-Weyl, every identity neighbourhood contains a subgroup which is co-Lie (i.e. the quotient by it is a Lie group). Thus we can form a net of subgroups $N_\alpha$, ordered by reverse inclusion (so $\alpha \geq \beta$ if and only if $N_\beta \subseteq N_\alpha$), such that $N_\alpha\subseteq G^0$ for all $\alpha$ (since $G^0$ is open) and such that $\bigcap N_\alpha=\{1\}$. Denote $G_\alpha=G/N_\alpha$ and denote by $p_\alpha:G\to G_\alpha$ the natural quotient map.
Since each $p_\alpha$ is open, continuous and surjective, the subgroup $p_\alpha(G^0)$ is open and connected, and hence is the connected component of $G_\alpha$. Since $N_\alpha\subseteq G^0$, we have by the third isomorphism theorem (for topological groups) that $G_\alpha/G_\alpha^0=(G/N_\alpha)/(G^0/N_\alpha)\cong G/G^0 = A$. Thus, by the case for Lie groups, we know $G_\alpha \cong G_\alpha^0\times A_\alpha$ for $A_\alpha\cong A$.
Denote $A'_\alpha = p_\alpha^{-1}(A_\alpha)$ and $A'=\bigcap A'_\alpha$. I will show that $A'$ is mapped isomorphically onto $A$ via the quotient $q:G\to G/G^0$, which shows the sequence does indeed split.
First let us see that it is mapped injectively into $A$ via $q$. We have, for every $\alpha$, $$p_\alpha(A'_\alpha\cap G^0)\subseteq p_\alpha(A'_\alpha)\cap p_\alpha(G^0)=A_\alpha\cap G_\alpha^0=\{1\}.$$ Thus $A'_\alpha\cap G^0\subseteq N_\alpha$ for every $\alpha$, so $\bigcap A'_\alpha\cap G^0\subseteq\bigcap N_\alpha=\{1\}$. This precisely means that the intersection of $A'=\bigcap A'_\alpha$ with $\ker q=G^0$ is trivial, i.e. $q$ maps $A'$ injectively into $A$.
It remains to show $q(A')=A$. I think this is pretty clear. For every $\alpha$ we know $q(A'_\alpha)=A$. To see this, first note that $p_\alpha(A'_\alpha\cdot G^0)$ contains both $p_\alpha(A'_\alpha)=A_\alpha$ and $p_\alpha(G^0)=G_\alpha^0$ and hence is all of $G_\alpha=G_\alpha^0\times A_\alpha$; therefore $A'_\alpha\cdot G^0\cdot \ker p_\alpha = G$, but $\ker p_\alpha = N_\alpha \subseteq G^0$, so $A'_\alpha\cdot G^0=G$. Since $\ker q= G^0$ we have $q(A'_\alpha)=A$. But by compactness this implies $q(A')=A$ (since if $a\in A$ then for every $\alpha$ there is $a_\alpha\in A'_\alpha$ such that $q(a_\alpha)=a$; this net has some subnet converging to some $a'\in A'$, and by continuity $q(a')=a$).
This answer turned out a lot longer than I expected... Using the language of inverse limits it would have been a lot shorter, I think, if you know how to prove $G\cong \varprojlim G_\alpha$ in a natural way and not just that every neighbourhood contains a co-Lie subgroup. |
Let $f:D:=\{z\in\mathbb{C}:\Re(z)<0\}\to\mathbb{C}$ defined by $z\mapsto\int_0^\infty{e^{zt}\over t+1}dt$. Show that $f$ is holomorphic.
In my solution I am not using that the domain of $f$ satisfies $\Re(z)<0$, which makes me doubt the solution. My question is: where is the mistake in the solution, and what is the right way? Thanks.
Attempt:
Let $\Gamma$ be the boundary of a rectangle $M\subset D$. $$ \int_\Gamma f(z)dz=\int_\Gamma\int_0^\infty{e^{tz}\over t+1}\,dt\,dz \\ =\int_0^\infty{1\over t+1}\left(\int_\Gamma e^{tz}dz\right)dt=\int_0^\infty{1\over t+1}\cdot 0\,dt=0 $$ By Morera's theorem, $f$ is analytic in $D$.
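Not a full answer, but one place $\Re(z)<0$ is genuinely used is in the convergence of the defining integral (and in justifying the interchange of the two integrals). A rough numerical illustration, with arbitrarily chosen truncation points:

```python
import cmath

def f_truncated(z, T, n=20000):
    # Trapezoidal approximation of the integral of e^{zt}/(t+1) over [0, T].
    h = T / n
    s = 0.5 * (1.0 + cmath.exp(z * T) / (T + 1.0))
    for k in range(1, n):
        t = k * h
        s += cmath.exp(z * t) / (t + 1.0)
    return h * s

# Re(z) < 0: the integrand decays like e^{Re(z) t}, so enlarging T changes
# essentially nothing -- the improper integral converges.
a, b = f_truncated(-1.0, 30.0), f_truncated(-1.0, 60.0)

# Re(z) >= 0: the truncated integral blows up as T grows -- no convergence.
c, d = abs(f_truncated(0.5, 30.0)), abs(f_truncated(0.5, 60.0))
```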
Votes cast (19)
all time by type month 19 up 5 question 1 0 down 14 answer
8
If the cardinality of $f^{-1}$ is at most $f(x)^2$ then $f$ is differentiable almost everywhere.
8
$\int \frac{1}{f’(x)}$ diverges
6
If $f$ takes every value at most $k$ times, then f is differentiable almost everywhere.
5
Zeroes of $\sin(z)-z^2$
4
Radon-Nikodym derivative of a finitely additive measure
4
How to solve this sequence $165,195,255,285,345,x$
3
What is the difference between an inner product space and an Algebra over a field?
2
$\sigma$-algebra generated by all countable and co-countable sets
2
let $M$ be a Hermitian matrix of order $n\times n$ with rank $k (\neq n)$
1
Proof of Euclidian Algorithim from Terrence Tao's Analysis 1
Ultimately, you'll need a mathematical proof of correctness. I'll get to some proof techniques for that below, but first, before diving into that, let me save you some time: before you look for a proof, try random testing.

Random testing

As a first step, I recommend you use random testing to test your algorithm. It's amazing how effective this is: in my ...
It's difficult to answer the question "how often". But as with all "underlying structures", the benefit comes from recognizing that the underlying problem one is trying to solve has a matroid (or greedoid) structure. It's not just matroid problems. The matroid intersection problem has a specific model (bipartite matching). Nick Harvey did his Ph.D. thesis ...
I will use the following simple sorting algorithm as an example:

repeat:
    if there are adjacent items in the wrong order:
        pick one such pair and swap
    else:
        break

To prove the correctness I use two steps. First I show that the algorithm always terminates. Then I show that the solution where it terminates is the one I want. For the first point,...
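A runnable Python version of the sketch above (the left-to-right scan order is an assumed detail; any rule for picking an out-of-order pair works):

```python
def adjacent_swap_sort(a):
    # Each swap removes exactly one inversion, so the loop terminates after
    # at most (number of inversions) swaps; at termination no adjacent pair
    # is out of order, i.e. the list is sorted.
    a = list(a)
    while True:
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                break
        else:
            return a
```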
A coin system is canonical if the number of coins given in change by the greedy algorithm is optimal for all amounts. The paper D. Pearson, "A Polynomial-time Algorithm for the Change-Making Problem", Operations Research Letters, 33(3):231-234, 2005, offers an $O(n^3)$ algorithm for deciding whether a coin system is canonical, where $n$ is the number of ...
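To see non-canonicity concretely, here is a small sketch comparing greedy change-making with a DP optimum; the coin system {1, 3, 4} is a standard non-canonical example:

```python
def greedy_change(coins, amount):
    # Number of coins the greedy algorithm uses (largest coin first).
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def optimal_change(coins, amount):
    # Minimum number of coins, by dynamic programming over amounts.
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

# {1, 3, 4} is not canonical: for 6, greedy gives 4+1+1 but 3+3 is optimal.
```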
We first observe the following: There is an optimal cover $C$, and no leaf is in $C$. This is true since in any optimal cover $X$ you can replace all leaves in $X$ with their parents, and you get a vertex cover which is not larger than $X$. Now take any optimal cover $C$ that does not contain leaves. Since no leaf is selected, all parents of the leaves ...
Let's start with the following observation:

Let $max$ denote the maximum of the sequence $a_1,...,a_n$, and let $min$ denote its minimum. If $a_1=max$, then choosing $b_1=b_2=...=b_n=\lfloor(max+min)/2\rfloor$ is optimal.

Why is this the case? Well, since the sequence starts with the maximum, either we choose $b_1$ large, and suffer a large deviation ...
It would be nice if you stated the problem. I assume that you have $n$ items $x_i$, each having profit $p_i$ and weight $w_i$. You want to maximize your profit under the constraint that the total weight is at most $W$. For each item $x_i$, you are allowed to put any fraction $\theta \in [0,1]$ of it, which will give you profit $\theta p_i$ and weight $\theta ...
Your algorithm makes the wrong choice between the following two paths:

- 5 channels with a reliability of 50% each (combined reliability 3.125%), weight $5 \cdot {1 \over 0.50} = 10$.
- A single channel with a reliability of 8%, weight ${1 \over 0.08} = 12.5$.
The graph of overlapping jobs is an interval graph. Interval graphs are perfect graphs. So what you are trying to do is find a maximum weight independent set (i.e., no two overlap) in a perfect graph. This can be solved in polynomial time. The algorithm is given in "Polynomial Algorithms for Perfect Graphs", by M. Grötschel, L. Lovász, and A. Schrijver....
In simple words, an algorithm is normally considered "greedy" if:

- it builds solutions step by step without backtracking, and
- in each step it picks what's best in the current state.

To learn more about it, check this pdf out. The animated gif above illustrates a greedy algorithm for finding the path that adds up to the biggest number. It does so by choosing ...
A greedy algorithm can't help in that case. And it can't be compared with either the fractional or the 0-1 knapsack problem: the first can be solved by a greedy algorithm in O(n), while the second is NP-hard. The problem you have could be brute-forced in O(2^n), but you can optimize it using dynamic programming.

1) Sort intervals by start time.
2) Initialize int[] ...
The running time of your algorithm is at most $N (N-1) (N-2) \cdots (N-K+1)$, i.e., $N!/(N-K)!$. This is $O(N^K)$, i.e., exponential in $K$.

Justification: There are $N$ possible choices for what you put into the first blank, and in the worst case you might have to explore each. There are $N-1$ choices for the second blank, and so on. You can draw a tree ...
Here is a simple definition of a greedy algorithm. There are many greedy algorithms for different problems, and in order to understand them you must also know the subject of the problem well. This question on stackoverflow gives some examples of greedy algorithm usage.

EDIT: After you made your question more clear I will try to sketch the algorithms by ...
The connection is that if you can represent the structure underlying your optimisation problem as a matroid, you can use the canonical greedy algorithm to optimise the sum of any positive weight function. If your optimisation goal fits this paradigm, you can solve your problem with the greedy approach.

Example

Consider the minimum spanning tree problem ...
Yes, this is the idea of greedy algorithms, also known as myopic algorithms. There is still a lot of freedom in deciding what the myopic choice is based on. Allan Borodin has developed a theory of priority algorithms formalizing the notion of a greedy algorithm. Such a theory can be used to analyze what greedy algorithms cannot do. Sometimes greedy algorithms ...
Overview of the problem

If you take teenagers as the vertices of a graph, and add an edge whenever two teenagers are compatible, this gives you an undirected graph, and what you need is a Hamiltonian path in this graph (a path that contains every node exactly once). Maybe searching the web on this abstract version of the problem will yield more...
It's unclear why you single out the greedy algorithm; there are many different algorithms for combinatorial optimization, the greedy algorithm (or rather, greedy-like algorithms, also known as myopic algorithms) being only one of them. That said, I have a positive answer and a negative answer for you.

Positive answer. Consider the problem of maximizing a ...
You don't state why you think that your algorithm is correct. In fact, it is incorrect. Here is an example. Consider the problem of computing the product of matrices of dimensions $2\times 1$, $1\times 2$, $2 \times 5$. Your algorithm first multiplies the first two at a cost of $4$, and then multiplies the remaining matrices at a cost of $20$, to a total of $...
The starting point is the trivial random algorithm that chooses $S$ completely at random. Each directed edge is cut with probability $1/4$ (why?), and so in expectation, this random algorithm gives a $1/4$ approximation.

We can derandomize this algorithm using the method of conditional expectations. Arrange the points in order: $1,\ldots,n$. At step $i$, we ...
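The $1/4$ claim can be verified exactly on a toy instance by enumerating all subsets $S$ (the small example graph here is my own):

```python
from itertools import product

edges = [(0, 1), (1, 2), (2, 0), (0, 2)]   # directed edges, toy instance
n = 3

total_cut = 0
for bits in product([0, 1], repeat=n):
    # A directed edge (u, v) is cut when u is in S and v is not;
    # over a uniformly random S that event has probability 1/4.
    total_cut += sum(1 for u, v in edges if bits[u] == 1 and bits[v] == 0)

avg_cut = total_cut / 2 ** n   # expected cut size of a uniformly random S
```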
Reduction from 3-SAT: a variable in 3-SAT becomes a character in your problem and is paired with its negation. Each clause becomes a word. E.g., for the 3-SAT instance (a,b,-c) && (-b,c):

pairs: (a,-a), (b,-b), (c,-c)
words: (a,b,-c), (-b,c)

Selecting a character in your problem means setting that literal to true in the 3-SAT instance. The corresponding ...
There is no such thing as the correct generalization of the greedy selection technique, because it's an informal technique. That said, there has been some effort at modeling the greedy heuristic, with a view toward understanding its limitations. This study has been initiated by Borodin, Nielsen and Rackoff, (Incremental) priority algorithms, and continued ...
So this question has been bothering me: why a cactus, if there's already a linear-time algorithm for a more general class?The primal problem is known as the fractional matching problem, and, unsurprisingly, it has been studied as well. Balinski (whose result was made known to me via Schrijver's book Combinatorial Optimization) characterized the extreme ...
One could implement this in O(n log n).

Steps:
1. Sort the intervals based on end time.
2. Define p(i) for each interval, giving the biggest end point which is smaller than the start point of the i-th interval. Use binary search to obtain the n log n bound.
3. Define d[i] = max(w(i) + d[p(i)], d[i-1]).
4. Initialize d[0] = 0.

The result will be in d[n], with n the number of intervals....
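A sketch of these steps in Python (the names `p` and `d` mirror the outline; details such as treating an interval ending exactly at another's start as compatible are my assumptions):

```python
import bisect

def max_weight_schedule(intervals):
    # intervals: list of (start, end, weight) tuples.
    intervals = sorted(intervals, key=lambda iv: iv[1])   # 1) sort by end time
    ends = [e for _, e, _ in intervals]
    d = [0.0] * (len(intervals) + 1)                      # 4) d[0] = 0
    for i, (s, e, w) in enumerate(intervals, start=1):
        p = bisect.bisect_right(ends, s)   # 2) binary search: last end <= s
        d[i] = max(w + d[p], d[i - 1])     # 3) the recurrence
    return d[-1]
```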
Here is some Python code that should implement Greedy Set Cover in linear time (warning: it empties the input sets during the processing!):

from collections import defaultdict

F = [set([1,2,3]), set([3,4,5,6]), set([2])]

# First prepare a list of all sets where each element appears
D = defaultdict(list)
for y, S in enumerate(F):
    for a in S:
        ...
The idea of the backtracking algorithm is simple, though somewhat cumbersome to express. Perhaps it's easiest to explain it working through the example in the question. We start by putting $T_1$ on chair 1. We then put $T_2$ on chair 2. Then we put $T_3$ on chair 3, and we discover a conflict. So we backtrack, replacing $T_3$ with the next available student. ...
Your understanding is completely wrong: what you describe is known as hill climbing or gradient descent in the continuous case, and local search in the discrete case.The best way to understand what greedy algorithms are is by an example. Consider the following optimization problem:Given a set $S$ of positive integers and a number $n$, choose a subset ...
Greedy algorithms can be used whenever you can think of the solution to the problem being reached in steps. The strategy is then just to choose the next step that looks best in some (usually simple, "local") sense, without ever undoing a step and trying an alternative path.The classical example is finding a minimal cost spanning tree of a graph. One ...
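The MST example can be sketched with Kruskal's algorithm, the textbook instance of this greedy strategy (the graph below is a made-up example):

```python
def kruskal(n, edges):
    """Greedy MST: scan edges cheapest-first, keep an edge iff it joins two
    different components (tracked with union-find, path halving)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, used = 0, 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
            used += 1
    return total if used == n - 1 else None   # None: graph is disconnected

# a 4-cycle with one chord: the greedy keeps the edges of weight 1, 2, 3
example = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]
```

No chosen edge is ever undone, which is exactly the "never undoing a step" property described above.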
If I am right, the configuration below leads to a 7 blocks greedy solution (on the left). By symmetry, all four directions.But there is an 8 blocks solution (on the right).The problem with a greedy approach is that "consuming" a block can destroy two other possible blocks, and have a negative impact.Repeating the search in different directions will not ...
Suppose that numbers are $x_1, \ldots x_{2n}$, and let us rename them as $a_1, \ldots a_n, b_1, \ldots, b_n$, where $a_i \geq b_j$ for any $i, j$, $a_1 \geq a_2 \geq \ldots \geq a_n$, and $b_1 \leq b_2 \leq \ldots \leq b_n$. In this notation, the suggested optimum solution is $(a_1, b_1), \ldots (a_n, b_n)$.Given some arbitrary pairing, let us show we can ... |
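The exchange argument's conclusion — that pairing the $k$-th largest with the $k$-th smallest minimizes the maximum pair sum — can be checked by brute force on small inputs (assuming that minimizing the maximum pair sum is indeed the objective here):

```python
from itertools import permutations

def suggested_max_pair_sum(xs):
    """Pair the k-th largest with the k-th smallest; return the max pair sum."""
    xs = sorted(xs)
    n = len(xs) // 2
    return max(xs[i] + xs[2 * n - 1 - i] for i in range(n))

def brute_force_min_max(xs):
    """Minimum over all perfect pairings of the maximum pair sum
    (enumerated naively; fine for small inputs)."""
    n = len(xs) // 2
    best = float('inf')
    for perm in permutations(xs):
        pairs = [(perm[2 * i], perm[2 * i + 1]) for i in range(n)]
        best = min(best, max(a + b for a, b in pairs))
    return best
```

On `[1, 5, 2, 9, 3, 7]` both give 10: the element 9 must be paired with something, so the pairing (9,1), (7,2), (5,3) is optimal.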
Say we are in a BS world where the (conditional on t) price of a call is given by the usual
$$V(S_t)=V(S_t;K,r,\sigma,T|F_t) = \Phi(d_1)S_t - \Phi(d_2)Ke^{-r(T-t)}$$
Now, what about the unconditional (or actually conditional at an earlier time $s < t$, say $s=0$) expectation of this price? That is, what does the following equal:
$$E[V(S_t)|F_0] = \int_{0}^{\infty}V(S)f_{S}(S)\,dS = ?$$ where $f_{S}$ is the density of the log-normal rv
$$S_t=S_0e^{(\mu - 0.5\sigma^2)t+\sigma\sqrt{t}Z}$$
And what about $$E[S_tV(S_t)|F_0] = ?$$ and $$E[S_t^2V(S_t)|F_0] = ?$$
Also, the price is computed as the expectation, say
$$V(t)=e^{-r(T-t)}E[V(T)|F_t]$$,
but what about other moments? What is, for example the variance $$Var[V(T)|F_t] = E[V^2(T)|F_t] - (E[V(T)|F_t])^2 = ?$$
For a call I get to this
$$\begin{aligned} E[V^2(T)|F_t] &= E[(S(T)-K)^2\mid F_t,S(T)>K]\,P(S(T)>K|F_t) \\ &= \left(E[S^2(T)|\cdots]- 2KE[S(T)|\cdots]+ K^2 \right)P(S(T)>K|F_t) \\ &= \left(E[S^2(T)|\cdots] - KE[S(T)|\cdots] \right)P(S(T)>K|F_t) - K\left(E[S(T)|\cdots] - K \right)P(S(T)>K|F_t) \\ &= \left(E[S^2(T)|\cdots] - KE[S(T)|\cdots] \right)P(S(T)>K|F_t) - KV_C(t)\end{aligned}$$
but then it needs a conditional expectation of a square of log-normal RV $E[S^2(T)|F_t,S(T)>K]$ which I haven't been able to work out so far. I think it could be solved by writing it as
$$S^2(T) = S^2(t)e^{2(\mu - 0.5\sigma^2)(T-t)+2\sigma\sqrt{T-t}Z}$$
and so it is also log-normal, with twice the location and scale parameters.
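The "doubled location and scale" observation can be sanity-checked by simulation. The sketch below (parameter values are arbitrary) compares Monte Carlo estimates against the standard lognormal full and partial second moments — the partial moment being exactly the missing piece $E[S^2(T);\,S(T)>K]$:

```python
import numpy as np
from math import erf, sqrt, log, exp

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# assumed parameters; ln S_T ~ N(m, s^2)
S0, mu, sigma, T, K = 100.0, 0.05, 0.2, 1.0, 100.0
m = log(S0) + (mu - 0.5 * sigma**2) * T
s = sigma * sqrt(T)

rng = np.random.default_rng(42)
ST = np.exp(m + s * rng.standard_normal(2_000_000))

# full second moment: E[S_T^2] = exp(2m + 2 s^2) = S0^2 exp((2 mu + sigma^2) T)
mc_full = (ST**2).mean()
cf_full = exp(2 * m + 2 * s**2)

# truncated second moment (standard lognormal partial moment):
# E[S_T^2 ; S_T > K] = exp(2m + 2 s^2) * Phi((m + 2 s^2 - ln K) / s)
mc_trunc = np.where(ST > K, ST**2, 0.0).mean()
cf_trunc = cf_full * Phi((m + 2 * s**2 - log(K)) / s)
```

Dividing the partial moment by $P(S_T>K)=\Phi((m-\ln K)/s)$ gives the conditional expectation $E[S^2(T)\mid S(T)>K]$ needed above.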
Add 1
Since for European calls under BS we have
$$\frac{\partial V_C(t)}{\partial S(t)} = e^{-q\tau}\Phi(d_1)$$
we have that the density function of the value of the European call option is
$$f_C(v) = \frac{e^{q\tau}}{\Phi(d_1(s))}f_S(s)$$
where
$$ f_{S}(s; \mu, \sigma, t) = \frac{1}{\sqrt{2 \pi}}\, \frac{1}{s \sigma \sqrt{t}}\, \exp \left( -\frac{ \left( \ln s - \ln S_0 - \left( r - q - \frac{1}{2} \sigma^2 \right) t \right)^2}{2\sigma^2 t} \right).$$
Since the price of a European call is monotonic in $S(t)$, "all we have to do" is to find the inverse $s = V^{-1}_C(v)$ and then we get complete information on the distribution of $V_C$, not just its expectation as we do at the moment. Unfortunately, I have not been able to find that inverse and perhaps there is no expression for it in terms of the "common" functions. However, it seems to me that the distribution of $V_C$ should have been already derived by someone somewhere, but I haven't been able to find such literature.
Add 2
If we consider the distribution at the expiry, then we have for a European call under the BS framework
$$f_C(v;T|v>0) = \frac{1}{\sqrt{2 \pi \sigma^2 T}}\, \frac{1}{v+K}\, \exp \left( -\frac{ \left( \ln(v+K) - \ln S_0 - \left( r - q - \frac{1}{2} \sigma^2 \right) T \right)^2}{2\sigma^2 T} \right)$$
and
$$f_C(v;T|v=0) = \delta(v)P(S_T<K).$$
Thus, since we know the terminal density, I would have thought that it is possible to apply the backwards diffusion equation to derive it for times $t<T$. |
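Following Add 2, the terminal density can be checked numerically: integrating $v\,f_C(v;T)$ over $v>0$ should reproduce the undiscounted Black-Scholes call value, and the continuous mass plus the atom at $v=0$ should sum to one. A sketch under assumed parameter values:

```python
import numpy as np
from math import erf, sqrt, log, exp, pi

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# assumed parameters (risk-neutral drift r - q, as in the density above)
S0, K, r, q, sigma, T = 100.0, 95.0, 0.03, 0.0, 0.25, 1.0
m = log(S0) + (r - q - 0.5 * sigma**2) * T
sd = sigma * sqrt(T)

def f_C(v):
    """Terminal call-value density on {v > 0}: simply f_S(v + K)."""
    sT = v + K
    return np.exp(-(np.log(sT) - m) ** 2 / (2 * sd**2)) / (sT * sd * sqrt(2 * pi))

# trapezoid rule on a fine grid; the tail beyond v = 1500 is negligible here
v = np.linspace(0.0, 1500.0, 1_500_001)
y = f_C(v)
dv = v[1] - v[0]
mass = dv * (y[:-1] + y[1:]).sum() / 2               # = P(S_T > K)
mean = dv * ((v * y)[:-1] + (v * y)[1:]).sum() / 2   # = E[(S_T - K)^+]

d1 = (log(S0 / K) + (r - q + 0.5 * sigma**2) * T) / sd
d2 = d1 - sd
bs_forward = S0 * exp((r - q) * T) * Phi(d1) - K * Phi(d2)
```

Here `mass` matches $\Phi(d_2)$ and `mean` matches the undiscounted Black-Scholes value, consistent with the two-piece density above.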
$$\int_{-\infty}^{0} \log (n+e^{x})\,dx, \qquad n\in \mathbb{R}$$
Does any value of $n$ exist such that the above integral equals $0$?
Note by Akash Shukla 3 years, 4 months ago
The integral doesn't exist if $n \neq 1$. If the integral exists when $n=1$, then the integral is positive, as the integrand is always positive.
Hence, no value of $n$ exists such that the integral vanishes.
Edit:
$$\int_{-\infty}^0 \ln (1+e^x)\, dx = \frac{\pi^2}{12}$$
I didn't get it. Why doesn't the integral exist for $0<n<1$? You said the integral exists only for $n=1$. Thank you.
If $n \neq 1$, then $\lim_{x \to -\infty} \log (n+e^x) \neq 0$ and hence the integral diverges.
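Both claims here can be checked numerically: substituting $u=e^x$ turns the $n=1$ integral into $\int_0^1 \ln(1+u)/u\,du = \pi^2/12$, while for $n\neq 1$ the partial integrals over $[-M,0]$ grow linearly in $M$. A quick sketch (grid sizes are arbitrary):

```python
import numpy as np

# n = 1: substitute u = e^x, so the integral becomes ∫_0^1 ln(1+u)/u du
u = np.linspace(0.0, 1.0, 1_000_001)
y = np.ones_like(u)
y[1:] = np.log1p(u[1:]) / u[1:]                 # the integrand tends to 1 as u -> 0
integral = np.diff(u) @ (y[:-1] + y[1:]) / 2    # trapezoid rule, ≈ pi^2 / 12

# n != 1: the integrand tends to log(n) != 0 as x -> -inf, so the area over
# [-M, 0] grows roughly like M log(n): the improper integral diverges
def partial(n, M, N=200_000):
    x = np.linspace(-M, 0.0, N)
    fx = np.log(n + np.exp(x))
    return np.diff(x) @ (fx[:-1] + fx[1:]) / 2
```

For $n=2$, doubling $M$ roughly doubles the partial integral, as expected for linear growth.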
@Deeparaj Bhat – I have used the series of $\log$ to solve it.
$\log(n+e^{x}) = \log[1-(1-n-e^{x})]$,
Let $1-n-e^{x} = A$,
So, $\int_{-\infty}^{0} \log (1-A) = \dfrac{A^2}{1\cdot 2}+\dfrac{A^3}{2\cdot 3}+\dfrac{A^4}{3\cdot 4}\cdots$,
Here it can be seen that it will converge for some values of $n$ which are near $1$.
@Akash Shukla – You should have $-1\leq A <1$.
Plus, you didn't integrate in the right way. If you did, you'd end up with infinity right from the first term.
@Deeparaj Bhat – Yes. As $A=1-n-e^x$, so $0<n<1$ and $0<e^x<1$ (because we have $-\infty<x<0$) will satisfy the condition.
@Akash Shukla – You didn't integrate properly. If you did, you'd have got infinity a lot of times.
@Deeparaj Bhat – I got it. But not for values $0<n<1$. Is there any mistake in the method I have shown?
@Akash Shukla – A basic test of convergence is that if $\lim_{x \to -\infty} f(x) \neq 0$, then $\int_{-\infty}^0 f(x)\, dx$ diverges.
I think you've not done the integration correctly.
Hi,
I would like to define two types (if I think properly, even more) of epsilon tensors which would not interfere with each other when running epsilon_to_delta. I thought that if I define each epsilon tensor with its own delta (like \delta and \bar{\delta} respectively) it would work, but that's actually not true. Is there a way to do this?
Thanks,
Andrei
Here is the notebook I used
\dalpha::LaTeXForm("\dot{\alpha}").
\dbeta::LaTeXForm("\dot{\beta}").
\bdelta{#}::LaTeXForm("\bar{\delta}").
{\dot{#}, \bar{#}}::Symbol;
{\alpha, \beta, \gamma}::Indices(chiral, position=fixed);
{\dalpha, \dbeta}::Indices(antichiral, position=fixed);
{\alpha, \beta, \gamma, \delta}::Integer(1..2);
{\dalpha, \dbeta}::Integer(1..2);
\delta{#}::KroneckerDelta(chiral);
\bdelta{#}::KroneckerDelta(antichiral);
\epsilon_{\alpha \beta}::EpsilonTensor(delta=\delta);
\epsilon^{\dalpha \dbeta}::EpsilonTensor(delta=\bdelta);
And now the epsilons
ex:=\epsilon_{\alpha \beta} \epsilon^{\dalpha \dbeta};
epsilon_to_delta(_);
which yields a mixed-index \delta.
Appendix 1: all pairs $(u,v)$ in the tree depicted satisfy $u \geq 2v.$ As a result, $$ k = u^2 - 2 v^2 \geq 4 v^2 - 2v^2 = 2 v^2, $$so $$2 v^2 \leq k$$and $$ \color{blue}{ v \leq \sqrt {\frac{k}{2}}}. $$
Appendix 2: we may demand $$ v \leq \frac{u}{2}. $$ Therefore $$ 2 v^2 \leq \frac{u^2}{2}, $$ $$ -2 v^2 \geq - \frac{u^2}{2}, $$ $$ k = u^2 -2 v^2 \geq u^2 - \frac{u^2}{2} = \frac{u^2}{2}, $$ $$ u^2 \leq 2 k, $$ $$ \color{blue}{ u \leq \sqrt {2k}}. $$
preliminary: I already think you are roughly correct. The Conway topograph method deals most directly with $u+v$ when both are positive. The largest variables come from$$ u = 2n + 1, \; \; v = n, \; \; u^2 - 2 v^2 = 2 n^2 + 4 n + 1 $$ Note that this "branch" of the tree illustrates both inequalities well, $ u \leq \sqrt {2k} $ and $ v \leq \sqrt {\frac{k}{2}}. $
I have answered several questions with these diagrams; the book describing the method is at CONWAY. The point is that any (positive) number represented occurs in the first tree on the positive side of the river:
Just noticed the absolute values in the original question. If you are willing to represent $-k$ instead of $k,$ you get the bounds you wanted. This happens in an upside-down tree below the river where we have $u^2 - 2 v^2 = -k$ for positive $k,$ and with $u,v > 0$ and $v \geq u.$ Similar arguments to the above give your desired bounds, $$ u \leq \sqrt k, \; \; \; v \leq \sqrt k. $$ |
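The bounds from both appendices are easy to confirm by brute force: whenever $u^2-2v^2=k$ has any nonnegative solution, it already has one with $v\le\sqrt{k/2}$ (and hence $u\le\sqrt{2k}$). A small search sketch (the search windows are chosen ad hoc):

```python
from math import isqrt

def small_representation(k):
    """Look for u, v >= 0 with u^2 - 2 v^2 = k inside the claimed window
    v <= sqrt(k/2); the bound u <= sqrt(2k) then holds automatically,
    since u^2 = k + 2 v^2 <= 2k."""
    for v in range(isqrt(k // 2) + 1):
        u2 = k + 2 * v * v
        u = isqrt(u2)
        if u * u == u2:
            return u, v
    return None

# every k that is representable at all (searched in a much wider window)
# already has a representation inside the claimed bounds
wide = {u * u - 2 * v * v for u in range(200) for v in range(150)}
for k in range(1, 500):
    if k in wide:
        rep = small_representation(k)
        assert rep is not None
        u, v = rep
        assert u * u - 2 * v * v == k and u * u <= 2 * k and 2 * v * v <= k
```

For instance, $k=7$ is found at $(u,v)=(3,1)$, well inside $u\le\sqrt{14}$, while $k=3$ is (correctly) not representable at all.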
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever."
Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field.
"You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. "
So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug that has 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle so that the weight force of the bottle equals the buoyancy force.
For the buoyancy do I: density of water * volume of water displaced * gravity acceleration?
so: mass of bottle * gravity = volume of water displaced * density of water * gravity?
@EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_C\,dC = 1?$$
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including:ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room.An altern...
You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer.
Though as it happens I have to go now - lunch time! :-)
@JohnRennie It's possible to do it using the energy method. Just we need to carefully write down the potential function which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth.
Anonymous
Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P
I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic without increased reviewing, or something else; I'm not sure.
Not sure about that, but the converse is certainly false :P
Derrida has received a lot of criticism from the experts on the fields he tried to comment on
I personally do not know much about postmodernist philosophy, so I shall not comment on it myself
I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger.
I can see why a man of that generation would lean towards that idea. I do too.
Let $G$ be a multipartite directed weighted graph with $k$ independent sets (we will call them "layers"). We select exactly one node from each layer and form the induced subgraph $H_k$. That is, $H_k$ has exactly $k$ nodes (one from each layer) and contains all edges from $G$ that have both endpoints in $H_k$.
Our goal is to find $H_k$ such that the total weight of all of its edges is minimized: $$\min_{H_k \subset G}\sum_{e \in H_k} \mathrm{weight}(e)$$
(you can assume that the graph is connected, so a solution always exists)
Case #1: Graph is flat
To better illustrate the problem I will give some examples. Consider a special case where all edges in $G$ are from layer $i$ to layer $i+1$:
This problem can be easily solved by adding two new nodes, entry and exit, to $G$. Then we add edges with weight $0$ from entry to every node in layer #1 and from every node in layer #$k$ to exit. Finally, the solution to our problem is the shortest path from entry to exit.
In our example, the minimum weight 4-induced subgraph will be: $A_3, B_1, C_1, D_1$, with total weight $20$.
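The entry/exit construction for Case #1 boils down to a forward DP over the layers. A sketch on a made-up graph (the figure's actual weights are not reproduced here); the 0-weight edges from entry and into exit correspond to the initialization and the final minimum:

```python
def min_induced_path(layers, weight):
    """Forward DP: best[y] = cheapest way to pick one node per layer up to y.
    layers: list of lists of node names, one list per layer.
    weight: dict mapping (x, y) -> weight of the edge between consecutive
    layers (missing edges are treated as infinitely heavy)."""
    INF = float('inf')
    best = {x: 0 for x in layers[0]}          # 0-weight edges from 'entry'
    back = {}
    for prev, cur in zip(layers, layers[1:]):
        for y in cur:
            best[y], back[y] = min(
                (best[x] + weight.get((x, y), INF), x) for x in prev)
    end = min(layers[-1], key=lambda y: best[y])   # 0-weight edges to 'exit'
    path, node = [end], end
    while node in back:
        node = back[node]
        path.append(node)
    return best[end], path[::-1]

# made-up example weights (not the figure's)
layers = [['A1', 'A2'], ['B1', 'B2'], ['C1', 'C2']]
w = {('A1', 'B1'): 1, ('A1', 'B2'): 5, ('A2', 'B1'): 4, ('A2', 'B2'): 1,
     ('B1', 'C1'): 7, ('B1', 'C2'): 2, ('B2', 'C1'): 3, ('B2', 'C2'): 6}
# min_induced_path(layers, w) -> (3, ['A1', 'B1', 'C2'])
```

This only works because every edge goes from layer $i$ to layer $i+1$; it is exactly the structure the backward edges of Case #2 break.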
Case #2: Graph has backward edges
In this case, we allow a layer to have backward edges; that is, a layer $i$ can have edges to any layer $j$ as long as $i \ne j$. For instance, consider the graph from the previous example, but this time add some backward edges (in blue):
Unfortunately, the previous approach does not work anymore: it will still give us the solution $A_3, B_1, C_1, D_1$, now with a total weight of $70$, while the minimum subgraph is $A_3, B_2, C_1, D_2$ with total weight $34$.
Case #3: Re-define the problem
Clearly, the introduction of "layers" can make the analysis hard. So, we can redefine the problem without requiring $G$ to be multipartite. That is, instead of having layers, we add an edge with $\infty$ weight between every pair on the same layer. Then the minimum weight k-induced subgraph $H_k$, cannot have two nodes from the same layer, as this would imply that $H_k$ contains an edge with $\infty$ weight. Back in our example, the previous graph becomes:
The case #3 is NP-hard
Unfortunately, in the general case this problem is NP-hard, as there is a reduction from k-clique:
Let $R$ be an undirected unweighted graph that we want to check for a $k$-clique. That is, we want to check whether $clique(R,k)$ is True or not. Thus, we create a new directed graph $R'$ as follows:
$R'$ contains all the nodes from $R$
$\forall$ edge $(u,v)\in R$, we add the edges $(u,v)$ and $(v,u)$ in $R'$ with $weight = 1$
$\forall$ edge $(u,v)\notin R$, we add the edges $(u,v)$ and $(v,u)$ in $R'$ with $weight = \infty$
Then we find the minimum weight k-induced subgraph $H_k$ in $R'$. It is true that:
$$\sum_{e \in H_k} weight(e) < \infty \Leftrightarrow clique(R,k) = True$$ $\Rightarrow$: If the total edge weight of $H_k$ is not $\infty$, this implies that for every pair of nodes in $H_k$ there is an edge with weight $1$ in $R'$, and thus an edge in $R$. This by definition means that the nodes of $H_k$ form a k-clique in $R$. Otherwise (the total edge weight of $H_k$ is $\infty$), there does not exist a set of $k$ nodes in $R'$ whose edge weights are all $< \infty$.
$\Leftarrow$: If $R$ has a k-clique, then there is a set of $k$ nodes that are fully connected. This set of nodes has no edge with $\infty$ weight in $R'$. Thus, these nodes form an induced subgraph of $R'$ and the total weight will be smaller than $\infty$. (The proof is not formal; I just describe the general idea.)
The Question
Although the problem that I described is NP-hard (assuming that my analysis is correct), I want to find an approximation algorithm (along with a proof) that can give me a solution that is at most $n$ times worse than optimal (obviously we want $n$ to be as small as possible).
There is also a paper that solves a similar problem, but I do not know if that helps.
I try to reduce my problem to a more general statement from which I want to know whether this is true in general.
I have a sequence of continuous-time stochastic processes $X_t^{(n)}, t \geq 0$ with values in some Polish space $E$ for which I know that they all are stochastically continuous and jointly measurable. In particular, the paths are Borel measurable. As $n \to \infty$ this sequence converges in distribution to a stochastic process $Y_t$ which is not necessarily stochastically continuous anymore.
1. Is the limit process $Y_t$ jointly measurable?
2. If 1. is not true, is it then at least true that $Y_t$ (or some version) has Borel measurable sample paths (or Lebesgue measurable)?
In general, there are of course processes $Y_t$ such that $Y_t$ has non-measurable sample paths, e.g. taking $Y_t \in \{ 0, 1 \}$ uniformly distributed and independent for each $t$. Moreover, this process is not jointly measurable. However, I have a process $Y_t$ that arises as a limit of processes with nice properties.
I think I have found some suggestions for an answer to question 2 in "Probability With a View Towards Statistics" by Hoffmann-Jørgensen, Exc. 9.3-9.6:
(i) A set $A \subseteq E^{[0, \infty)}$ is called thick if $E^{[0, \infty)}$ is the only measurable set in the product $\sigma$-algebra $\mathscr{B}(E)^{\otimes [0, \infty)}$ that contains $A$.
(ii) The set $M([0, \infty), E) := \{ \omega : [0, \infty) \to E \ | \ \omega \text{ measurable} \}$ is thick (and also the set of non-measurable paths is thick).
(iii) If $A$ is a thick set then every stochastic process has a version with paths in $A$. In particular, every stochastic process has a version with measurable sample paths (and also a version with non-measurable sample paths).
So, it only remains then to check whether 1. is true in general. |
Help talk:Displaying a formula
Please add new topics to the bottom of this page
Contents
1 LaTeX symbols
2 Converting LaTeX to SVG
3 some vandalism
4 Results of a formula
5 Long arrows
6 For reference: TeX to SVG via DVI and EPS
7 please, add support for comparision symbols!
8 description for \left| and \right| ???
9 Displaying ampersand (&)
10 Integer
11 Html????
12 ????
13 Translation system
14 Mediawiki math markup interpretation upgrade: generate MathML (not PNG) and automatically embed hyperlinks for each symbol
15 Math symbol code
16 Sections screwed up
17 Displaying formulas in arabic and other RTL languages
LaTeX symbols[edit]
Hello. I'd like to add an alternative layout for the LaTeX symbols. There is a sketch at en:Table of LaTeX symbols and ia:Wikipedia:LaTeX symbols. I think that this layout would be very helpful for many people. What is your opinion? --Julian 00:53, 12 November 2008 (UTC)
Very nice layout! Helder 01:23, 12 November 2008 (UTC)
I posted this suggestion at the talk page on Wikipedia. If you have any suggestions, please respond there. Ryan Reich 00:11, 14 January 2009 (UTC)
some vandalism[edit]
There is some spam/vandalism with some "kpss"-links on this help page, which sometimes replaced old links. Example: [1]. I currently don't have time to search all original link targets. --95.208.4.121 21:25, 17 May 2009 (UTC)
Results of a formula[edit]
I am working on Tables for a Wiki that I am wondering if i could get the results to display. this article appears to work for displaying the formula itself. I am building a Template and when an editor/creator fills in the field of the template, it will populate a cell in the table with the value. --Christopher.perkins 18:07, 11 September 2009 (UTC)
For doing computations see Help:Calculation and mw:Help:Extension:ParserFunctions. The syntax of the formulas is different from that for displaying a formula. --Patrick (talk) 21:02, 11 September 2009 (UTC)
Long arrows[edit]
How can that ugly rendering of the long arrows be fixed? -- 79.217.232.40 20:01, 24 November 2009 (UTC)
For reference: TeX to SVG via DVI and EPS[edit]
Here’s an alternative way of going from TeX to SVG, of some use in commutative diagrams. It’s not useful for most purposes, hence I’m removing it from the main page (as I was the one who put it there), but is of some reference value, so I include it here:
latex comm.tex
dvips -E -y 2500 -o comm.eps comm.dvi
eps2eps -dNOCACHE comm.eps comm2.eps
pstoedit -f sk comm2.eps comm.sk
inkscape -z -f comm.sk -l comm.svg
# pstoedit -f svg comm2.eps comm.svg # this may also work
“These produce a DVI file, convert it to EPS (rescaling by 2.5x), convert fonts to outlines, and convert to SVG via Sketch. One advantage is that rescaling makes smaller or more complex diagrams more legible, as xy-pic normally sizes for printed matter.” Nbarth 11:06, 2 December 2009 (UTC)
please, add support for comparision symbols![edit]
those symbols are in unicode and a way much easier to input than use google to find some indirect way to math-syntax help! symbols are "≤" and "≥" Tex was designed for old-time punch cards, that lacked maaaany symbols. Now that symbols are given, please make use of them! —The preceding unsigned comment was added by 93.157.184.200 (talk) 15 December 2009
Even if texvc supported those characters in "math" code (which it apparently doesn't), many people would still not be able to generate those characters on their keyboards... - dcljr (talk) 08:53, 28 April 2012 (UTC)
description for \left| and \right| ???[edit]
The absolute value function is implemented using the vertical bar (|). It can be used with \left and \right. See "Bars and double bars" under Parenthesizing big expressions, brackets, bars. I've added a parenthetical remark on the help page. - dcljr (talk) 08:46, 28 April 2012 (UTC)
Displaying ampersand (&)[edit]
When I try \&, I get a can't parse error. How does one get a displayed ampersand?
Thanks. Russ Abbott 04:01, 17 May 2011 (UTC)
Integer[edit]
For computations see Help:Calculation. You can use round or trunc. --Patrick (talk) 08:02, 27 February 2013 (UTC)
Html????[edit]
Newbie reader here, quite confused by the use of "html" in this article. Near as I can tell, what's being called "html" looks like a wiki macro called math. (Certainly when I copy the "html" into a notepad file, save it as html and open it with my browser.... well, it doesn't look at all like a formula should. If there are straight html tags that one can use, even when the math macro isn't available, I'd sure like to know about them.) And... in the meantime, maybe the "html" coding, should be renamed wiki-macro coding. I doubt it works anywhere except on wikimedia sites. 173.206.176.77 22:43, 3 April 2013 (UTC)
The article distinguishes between html and the special TeX code between math tags.--Patrick (talk) 23:17, 3 April 2013 (UTC) ????[edit]
The formula displays well in MathJax but badly in PNG. Here is the source of the formula:
\begin{matrix}\mbox{trilelea} &=& 100 \underbrace{????…????} \\& & 99 \underbrace{????…????} \\& & 98 \underbrace{????…????} \\& & \,\,\qquad\vdots\qquad\,\, \\& & 4 \underbrace{????…????}\\& & 3?? \\\end{matrix}
Translation system[edit]
What do you think about enabling the Translate extension on this page? It is a lot easier to update the translations using it... Helder.wiki 13:12, 20 November 2013 (UTC)
I think now we need some translation admin to mark the current version of the page for translation. @Nemo bis: do you know if that is right? Or maybe I need to do something else to implement the feature here? Helder.wiki 14:41, 7 June 2014 (UTC)
Needs much more work than this, sorry. Main needs: add newlines after headers, don't put entire lists in one unit (I suggest to make the "Pro" sections use bullets), exclude non-translatable bits from translation, especially when they include tables and they are on their own line, mark as no longer outdated once it's up to date. For more, see the manual. --Nemo 14:50, 7 June 2014 (UTC)
Mediawiki math markup interpretation upgrade: generate MathML (not PNG) and automatically embed hyperlinks for each symbol[edit]
Probably the most unhelpful part of Wikipedia is its implementation of mathematical formulae (the generation of static images from math markup prevents user interaction). For a general user encountering a new equation, they are often shown a) various undefined operators/functions, and b) various undefined variables/constants. I suggest the following updates to Mediawiki/Wikipedia to extend its high level of usability to its interpretation of math markup;
a) i. Latex format should be retained for user equation editing, however the equation renderer must be updated substantially. The Latex code should be converted by the Mediawiki server to MathML before being sent to the client's browser. Users without native MathML support should be encouraged to update to a W3C standards compliant browser asap (e.g. Firefox). ii. Wikipedia should automatically generate MathML hyperlinks for every operator/operation/function (e.g. <msqrt href="en.wikipedia.org/wiki/Square_root">) and variable/constant (e.g. <mi href="en.wikipedia.org/wiki/Euclidean_vector">x</mi>) in the formula.
b) Consider creating a new Wikipedia Template specific to ambiguous mathematical formulae (with one or more undefined variable/constant) called "Template:Definition needed".
Hello community, I really like the simple but very clear version of Julian Mendez; the link is in the first or second row of this discussion. Is anybody out there who has the possibility to complete, or better said fulfill, his version, and maybe finally give a better findable heading to it, so it's easier to find on Google? Something like "Tex help" or "overview Tex" or something like that. I think it is a really helpful tool for the beginners and the pros, also. Thanks!
Math symbol code[edit]
Hi. I wonder how come the Greek alphabet letters are not independently coded like the regular coding of the Anglo-Latin alphabet? Why write \alpha and not α? These letters are used so much.
And the same with symbols like ∅ instead of {\empty, \emptyset, \varnothing}, ℕ instead of \N, and so on.
In any case there is no consistency in many codings: \N is defined, but \A isn't and is left undefined.
Sections screwed up[edit]
Sorry if I got something wrong, I am just a wiki noob who tries to understand some of this magic here. If I edit a single section, what I see is not what I expected: I tried to edit Formatting issues, but where I end up editing instead is this:
===Integrals=== <!--T:109--> <math>\int_a^x \!\!\!\int_a^s f(y)\,dy\,ds = \int_a^x f(y)(x-y)\,dy</math> <math>\int_a^x \!\!\!\int_a^s f(y)\,dy\,ds = \int_a^x f(y)(x-y)\,dy</math>
If you try to edit a section way down this article, e. g. Integrals, you end up with an error message:
"You tried to edit a section that does not exist. It may have been moved or deleted while you were viewing the page." Maybe an issue with the translation plugin? -15:20, 20 April 2017 (UTC) Displaying formulas in arabic and other RTL languages[edit]
I seem unable to figure out how to display formulas in RTL properly. The \text{} command does allow non-Latin letters, but the RTL logic is otherwise missing, or the symbols needed for it are missing (e.g. a right-to-left sigma). This is getting more problematic with articles being translated to Arabic and showing equations and formulas to readers in a form and language they are not familiar with, leaving behind most of the readers up to high school in Arabic-speaking countries. --Uwe a (talk) 18:57, 2 July 2017 (UTC)
@DavidReed the notion of a "general polynomial" is a bit strange. The general polynomial over a field always has Galois group $S_n$, even if there is no polynomial over the field with Galois group $S_n$.
Hey guys. Quick question. What would you call it when the period/amplitude of a cosine/sine function is given by another function? E.g. y=x^2*sin(e^x). I refer to them as variable amplitude and period but upon google search I don't see the correct sort of equation when I enter "variable period cosine"
@LucasHenrique I hate them; I tend to find algebraic proofs more elegant than ones from analysis. They are tedious. Analysis is the art of showing you can make things as small as you please. The last two characters of every proof are $< \epsilon$
I enjoyed developing the lebesgue integral though. I thought that was cool
But since every singleton except 0 is open, and the union of open sets is open, it follows all intervals of the form $(a,b)$, $(0,c)$, $(d,0)$ are also open. Thus we can use these 3 classes of intervals as a base, which then intersect to give the nonzero singletons?
uh wait a sec...
... I need arbitrary intersection to produce singletons from open intervals...
hmm... 0 does not even have a nbhd, since any set containing 0 is closed
I have no idea how to deal with points having empty nbhd
o wait a sec...
the open set of any topology must contain the whole set itself
so I guess the nbhd of 0 is $\Bbb{R}$
Btw, looking at this picture, I think the alternate name for these class of topologies called British rail topology is quite fitting (with the help of this WfSE to interpret of course mathematica.stackexchange.com/questions/3410/…)
Since as Leaky have noticed, every point is closest to 0 other than itself, therefore to get from A to B, go to 0. The null line is then like a railway line which connects all the points together in the shortest time
So going from a to b directly is no more efficient than go from a to 0 and then 0 to b
hmm...
$d(A \to B \to C) = d(A,B)+d(B,C) = |a|+|b|+|b|+|c|$
$d(A \to 0 \to C) = d(A,0)+d(0,C)=|a|+|c|$
so the distance of travel depends on where the starting point is. If the starting point is 0, then distance only increases linearly for every unit increase in the value of the destination
But if the starting point is nonzero, then the distance increases quadratically
Combining with the animation in the WfSE, it means that in such a space, if one attempts to travel directly to the destination at a fixed speed of, say, 3 m/s, then for every meter forward the actual distance covered at that speed decreases (as illustrated by the shrinking open ball of fixed radius)
only when travelling via the origin does such a quadratic penalty in travelling distance not apply
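The arithmetic above is easy to sanity-check in a few lines; here is a minimal sketch of the metric being discussed (the helper name `d` and the sample values are mine, chosen for illustration):

```python
# British rail / post office metric on the real line:
# every journey between distinct points passes through the origin,
# so d(a, b) = |a| + |b| when a != b, and 0 when a == b.
def d(a, b):
    return 0.0 if a == b else abs(a) + abs(b)

# travelling A -> B -> C versus A -> 0 -> C, as in the formulas above
a, b, c = 2.0, 5.0, 3.0
via_b = d(a, b) + d(b, c)      # |a| + |b| + |b| + |c|
via_0 = d(a, 0.0) + d(0.0, c)  # |a| + |c|
```

Routing through any intermediate point other than the origin only adds the detour's distance twice, which matches the observation that the null line acts like a railway hub.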
More interesting things can be said about slight generalisations of this metric:
Hi, looking a graph isomorphism problem from perspective of eigenspaces of adjacency matrix, it gets geometrical interpretation: question if two sets of points differ only by rotation - e.g. 16 points in 6D, forming a very regular polyhedron ...
To test if two sets of points differ by rotation, I thought to describe them as intersection of ellipsoids, e.g. {x: x^T P x = 1} for P = P_0 + a P_1 ... then generalization of characteristic polynomial would allow to test if our sets differ by rotation ...
1D interpolation: finding a polynomial satisfying $\forall_i\ p(x_i)=y_i$ can be written as a system of linear equations, having well known Vandermonde determinant: $\det=\prod_{i<j} (x_i-x_j)$. Hence, the interpolation problem is well defined as long as the system of equations is determined ($\d...
Any alg geom guys on? I know zilch about alg geom to even start analysing this question
Meanwhile I am going to analyse the SR metric later using open balls after the chat proceeds a bit
To add to gj255's comment: The Minkowski metric is not a metric in the sense of metric spaces but in the sense of a metric of Semi-Riemannian manifolds. In particular, it can't induce a topology. Instead, the topology on Minkowski space as a manifold must be defined before one introduces the Minkowski metric on said space. — balu, Apr 13 at 18:24
grr, thought I can get some more intuition in SR by using open balls
tbf there’s actually a third equivalent statement which the author does make an argument about, but they say nothing substantive about the first two.
The first two statements go like this : Let $a,b,c\in [0,\pi].$ Then the matrix $\begin{pmatrix} 1&\cos a&\cos b \\ \cos a & 1 & \cos c \\ \cos b & \cos c & 1\end{pmatrix}$ is positive semidefinite iff there are three unit vectors with pairwise angles $a,b,c$.
And all it has in the proof is the assertion that the above is clearly true.
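One direction of that "clearly true" assertion can at least be checked numerically: the matrix of pairwise cosines of unit vectors is exactly their Gram matrix, and Gram matrices are always positive semidefinite. A small sketch (the vectors are random, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# pick three random unit vectors in R^3
V = rng.standard_normal((3, 3))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# G[i, j] = v_i . v_j = cos(angle between v_i and v_j); diagonal entries are 1
G = V @ V.T

# a Gram matrix x^T G x = |V^T x|^2 >= 0 is positive semidefinite
eigs = np.linalg.eigvalsh(G)
```

The converse (a PSD matrix with unit diagonal is a Gram matrix of unit vectors) follows from a Cholesky or spectral factorization, which is presumably what the author considered obvious.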
I've a mesh specified as an half edge data structure, more specifically I've augmented the data structure in such a way that each vertex also stores a vector tangent to the surface. Essentially this set of vectors for each vertex approximates a vector field, I was wondering if there's some well k...
Consider $a,b$ both irrational and the interval $[a,b]$
Assuming axiom of choice and CH, I can define an $\aleph_1$ enumeration of the irrationals by labelling them with ordinals from 0 all the way to $\omega_1$
It would seem we could have a cover $\bigcup_{\alpha < \omega_1} (r_{\alpha},r_{\alpha+1})$. However the rationals are countable, thus we cannot have uncountably many disjoint open intervals, which means this union is not disjoint
This means we can only have countably many disjoint open intervals, so some irrationals will not be in the union, but uncountably many of them will be
If I consider an open cover of the rationals in [0,1], the sum of whose lengths is less than $\epsilon$, and then I now consider [0,1] with every set in that cover excluded, I now have a set with no rationals, and no intervals. One way for an irrational number $\alpha$ to be in this new set is b...
Suppose you take an open interval I of length 1, divide it into countably many sub-intervals (I/2, I/4, etc.), and cover each rational with one of the sub-intervals. Since all the rationals are covered, then it seems that sub-intervals (if they don't overlap) are separated by at most a single irrat...
(For ease of construction of enumerations, WLOG, the interval [-1,1] will be used in the proofs) Let $\lambda^*$ be the Lebesgue outer measure. We previously proved that $\lambda^*(\{x\})=0$ where $x \in [-1,1]$ by covering it with the open cover $(-a,a)$ for some $a \in [0,1]$ and then noting there are nested open intervals whose lengths tend to zero.
We also knew that by using the union $[a,b] = \{a\} \cup (a,b) \cup \{b\}$ for some $a,b \in [-1,1]$ and countable subadditivity, we can prove $\lambda^*([a,b]) = b-a$. Alternately, by using the theorem that $[a,b]$ is compact, we can construct a finite cover consists of overlapping open intervals, then subtract away the overlapping open intervals to avoid double counting, or we can take the interval $(a,b)$ where $a<-1<1<b$ as an open cover and then consider the infimum of this interval such that $[-1,1]$ is still covered. Regardless of which route you take, the result is a finite sum whi…
We also knew that one way to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ is to take the union of all singletons that are rationals. Since there are only countably many of them, by countable subadditivity this gives us $\lambda^*(\Bbb{Q}\cap [-1,1]) = 0$. We also knew that one way to compute $\lambda^*(\Bbb{I}\cap [-1,1])$ is to use $\lambda^*(\Bbb{Q}\cap [-1,1])+\lambda^*(\Bbb{I}\cap [-1,1]) = \lambda^*([-1,1])$ and thus deducing $\lambda^*(\Bbb{I}\cap [-1,1]) = 2$
However, what I am interested here is to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ and $\lambda^*(\Bbb{I}\cap [-1,1])$ directly using open covers of these two sets. This then becomes the focus of the investigation to be written out below:
We first attempt to construct an open cover $C$ for $\Bbb{I}\cap [-1,1]$ in stages:
First denote an enumeration of the rationals as follows:
$\frac{1}{2},-\frac{1}{2},\frac{1}{3},-\frac{1}{3},\frac{2}{3},-\frac{2}{3}, \frac{1}{4},-\frac{1}{4},\frac{3}{4},-\frac{3}{4},\frac{1}{5},-\frac{1}{5}, \frac{2}{5},-\frac{2}{5},\frac{3}{5},-\frac{3}{5},\frac{4}{5},-\frac{4}{5},...$ or in short:
Actually wait, since as the sequence grows, any rational of the form $\frac{p}{q}$ where $|p-q| > 1$ will be somewhere in between two consecutive terms of the sequence $\{\frac{n+1}{n+2}-\frac{n}{n+1}\}$, and the latter tends to zero as $n \to \infty$, it follows that all intervals will have an infimum of zero
However, any interval must contain uncountably many irrationals, so (somehow) the infimum of the union of them all is nonzero. Need to figure out how this works...
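The classic device for covering all the rationals with arbitrarily small total length, which underlies the discussion above, can be sketched concretely: put an interval of length $\epsilon/2^n$ around the $n$-th rational, so the total length is at most $\epsilon$. A minimal sketch (the enumeration helper `rationals` is mine, mimicking the listing above):

```python
from fractions import Fraction

def rationals():
    # enumerate the nonzero rationals p/q in (-1, 1) in lowest terms,
    # roughly in the order written out above
    q = 2
    while True:
        for p in range(1, q):
            r = Fraction(p, q)
            if r.denominator == q:  # skip fractions not in lowest terms
                yield r
                yield -r
        q += 1

eps = Fraction(1, 100)
total = Fraction(0)
for n, r in enumerate(rationals(), start=1):
    if n > 1000:
        break
    total += eps / 2 ** n  # length of the open interval around the n-th rational
# however many rationals we cover, the total length stays below eps
```

This is why $\lambda^*(\Bbb{Q}\cap[-1,1])=0$: the cover's total length is $\epsilon\sum_n 2^{-n}\le\epsilon$, yet every interval in it still contains uncountably many irrationals, which is exactly the tension noted above.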
Let's say that for $N$ clients, Lotta will take $d_N$ days to retire.
For $N+1$ clients, clearly Lotta will have to make sure all the first $N$ clients don't feel mistreated. Therefore, she'll take the $d_N$ days to make sure they are not mistreated. Then she visits client $N+1$. Obviously the client won't feel mistreated anymore. But all the first $N$ clients are mistreated and, therefore, she'll start her algorithm once again and take (by supposition) $d_N$ days to make sure all of them are not mistreated. And therefore we have the recurrence $d_{N+1} = 2d_N + 1$
Where $d_1$ = 1.
Yet we have $1 \to 2 \to 1$, which takes $3 = d_2 \neq 2^2$ steps.
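The recurrence above is easy to unroll; a minimal sketch (the function name `d` is mine) confirms the closed form $d_N = 2^N - 1$, which agrees with $d_2 = 3$ rather than $2^2$:

```python
def d(N):
    # recurrence d_{N+1} = 2 * d_N + 1 with base case d_1 = 1
    val = 1
    for _ in range(N - 1):
        val = 2 * val + 1
    return val

# closed form: d_N = 2**N - 1 (e.g. d(2) == 3, d(3) == 7)
```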
I am struggling with the following question:
Let $\kappa$ be a regular uncountable cardinal. Show that $Fn(\kappa \times \omega , \omega)$ has the countable chain condition. Where $Fn(I,\omega)$ is the partial order of all finite partial functions $p: I \rightarrow \omega$ with extension relation superset. (For an infinite index set $I$)
Suppose that $\{p_i\mid i\in I\}$ is an uncountable family of conditions, by the $\Delta$-system lemma, there is an uncountable $J\subseteq I$ such that $\{\operatorname{dom} p_j\mid j\in J\}$ form a $\Delta$-system.
Suppose that the root of the system is $A$ which is a finite subset of $\kappa\times\omega$. There are only countably many functions from $A$ to $\omega$, so there is an uncountable $J'\subseteq J$ such that $\{p_j\mid j\in J'\}$ all agree on their common domain. And of those, any two are compatible.
In fact, the proof shows that this is not only ccc, but in fact Knaster: every uncountable family of conditions has an uncountable subfamily such that any two are compatible.
Question:
In noisy factory environments, it's possible to use a loudspeaker to cancel persistent low-frequency machine noise at the position of one worker. The details of practical systems are complex, but we can present a simple example that gives you the idea. Suppose a machine 6.0 m away from a worker emits a persistent 90 Hz hum. To cancel the sound at the worker's location with a speaker that exactly duplicates the machine's hum, how far from the worker should the speaker be placed? Assume a sound speed of 340 m/s.
The Speed of Sound
Sound events comprise periodic compressions and rarefactions of the material that the sound event is traveling through. This implies that the density of this material (i.e. the coupling between neighboring atoms) plays a key role in determining the speed of these sound events. For instance, sound travels much faster in solids than in liquids and faster in liquids than in gases. Examples for the speed of sound are:
Granite: 5950 m/s
Water: 1484 m/s
Air: 343 m/s
The given values are approximate and depend on temperature (strongly in gases) and (in fluids) on pressure.
Answer and Explanation:
At a frequency of 90 Hz, the humming sound has a wavelength of $$\lambda = \frac{c}{f} = \rm \frac{340~m/s}{90~Hz} = 3.8~m~. $$ To cancel the sound of the machine, the identical sound coming from the speaker has to have a phase shift of half a wavelength, which is 1.9 m. Therefore, two possible locations for the loudspeaker are:
6.0 m - 1.9 m = 4.1 m
6.0 m + 1.9 m = 7.9 m
The first option is better insofar as the speaker can be driven at a lower amplitude, since it is closer to the worker's ears.
Note that the speaker has to be placed on the line defined by the position of the worker and the position of the machine to yield the largest possible quiet zone. Another option would be to co-locate the speaker with the machine and drive the speaker with a signal that is phase-shifted by $180^\circ$.
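The computation above can be reproduced in a few lines (variable names are mine):

```python
c, f, d_machine = 340.0, 90.0, 6.0   # speed of sound, hum frequency, machine-worker distance

lam = c / f      # wavelength of the 90 Hz hum, about 3.8 m
half = lam / 2   # extra path length needed for destructive interference, about 1.9 m

near = d_machine - half   # speaker placed between machine and worker
far = d_machine + half    # speaker placed beyond the machine's distance
```

Any placement whose path difference is an odd multiple of half a wavelength also works, which is why the two listed positions are only the nearest options.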
Question
Verify that the units of $\dfrac{\Delta \Phi}{\Delta t}$ are volts. That is, show that $1 \textrm{T} \cdot \textrm{m}^2 \textrm{/s} = 1 \textrm{V}$
Final Answer
See the video solution for dimensional analysis.
Video Transcript
This is College Physics Answers with Shaun Dychko. Our job is to show that Tesla meters squared per second is the same as volts. So, the rate of change of flux, in other words, we're trying to show that the units of that are volts. So, we know that flux has units of Tesla meters squared because flux is magnetic field strength multiplied by area. So this is Tesla for the magnetic field strength, and meters squared for area. So that's where this comes from. And then, of course, change in time has units of seconds. And then, for the next step, I want to replace Tesla with some more base units. This is one formula that I remember that includes magnetic field. So, you could've chosen other formulas that include magnetic field. But I have that the force on a moving charge, in the presence of a magnetic field, equals a charge multiplied by its speed times the magnetic field strength. And we can solve this for
B by dividing both sides by qv. And you get B is F over qv, which means that Tesla is the same as Newtons from the force, divided by Coulombs, q, and meters per second from v. And then, we simplify this a little bit. So, this thing here ends up on... we can multiply top and bottom by seconds. So multiply the Newtons by seconds, and multiply this denominator by seconds, in which case, it cancels there. So we're left with Newtons, times seconds, times meters squared on the top. And then, multiply this whole top by Coulomb meters, and then multiply the bottom by Coulomb meters, and these Coulomb meters cancel, and then we're left with seconds times Coulomb times meters in the bottom. And then, the seconds cancel completely. And meters squared divided by meters is meters. So we have Newtons times meters, divided by Coulombs. And Newtons times meters is Joules, because the definition of work is force multiplied by displacement. And so we have Newtons times meters, force times displacement, and that makes work, and work is units of Joules. And Joules per Coulomb is the definition of volts. And so, quod erat demonstrandum. Quite easily done. We're finished there.
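The transcript's chain of substitutions, written out symbolically:

```latex
\begin{align*}
1~\mathrm{T}\cdot\mathrm{m}^2/\mathrm{s}
  &= 1~\frac{\mathrm{N}}{\mathrm{C}\cdot\mathrm{m/s}}\cdot\frac{\mathrm{m}^2}{\mathrm{s}}
     && F = qvB \;\Rightarrow\; \mathrm{T} = \mathrm{N}/(\mathrm{C}\cdot\mathrm{m/s})\\
  &= 1~\frac{\mathrm{N}\cdot\mathrm{m}}{\mathrm{C}}
     && \text{cancel the seconds and one factor of meters}\\
  &= 1~\frac{\mathrm{J}}{\mathrm{C}} = 1~\mathrm{V}
     && W = Fd \text{ gives } \mathrm{N\cdot m = J},\ \mathrm{V} = \mathrm{J/C}
\end{align*}
```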
Making your proof more rigorous
The way I read your proof, you use the $=$ sign to denote what you intend to prove, not what you have already established. You might make that clearer by writing $\overset?=$ instead. But I'll leave that notational aspect to other answers, and concentrate on what I believe you are trying to say.
You could improve your derivation by stating clearly what set of axioms you assume, what theorems you already derived from these axioms (as far as they are or at least might be relevant), and which of these axioms or theorems you use at each step.
If you look at e.g. the definition of a ring in Wikipedia, and compare that with your proof, you could end up with something like this:
1. Let $a$ be an arbitrary element of the ring. You could simplify the proof by specifically choosing $a=0$ or $a=1$, but I'll follow your approach. The important fact is that the underlying set cannot be empty, which is guaranteed by the existence of the additive and multiplicative identities (which might be the same).
2. Let $-a$ be the additive inverse of $a$, so $a+(-a)=0$. Writing this as $a-a$ is a syntactic simplification which can obscure what exactly you may assume at any given point, so I'd not do this here.
3. Substitute that into both occurrences of $0$ in $0\cdot 0$, to obtain $\bigl(a+(-a)\bigr)\bigl(a+(-a)\bigr)$. It might be enough to substitute one, since in fact $0\cdot a=0$ for any $a$, but again I follow your approach.
4. Use left distributivity to split the right paren: $\bigl(a+(-a)\bigr)a + \bigl(a+(-a)\bigr)(-a)$.
5. Use right distributivity twice to obtain $\bigl(a^2 + (-a)a\bigr) + \bigl(a(-a)+(-a)^2\bigr)$.
6. Next you essentially simplify $(-a)a$ to $-(a^2)$. But how do you know that's valid? You might be tempted to derive that from $a^2+\bigl(-(a^2)\bigr)=0=0\cdot a=\bigl(a+(-a)\bigr)a=a^2+(-a)a$. But at the second $=$ you'd need to show that $0=0\cdot a$, which is a more general version of what you're about to prove.
At this point you can see that your proof is flawed, and can either look for ways to fix it, or start in a different direction. And I hope you will notice how shorthand notation like the use of minus as a binary operator can lead to oversights when dealing with axiomatic systems at such a low level.
A working example of a pretty rigorous proof
If you apply the same notation to a variant of the proof Mirko suggested, you get
1. Consider $0\cdot a=b$, which includes the special case of $a=0$.
2. Since $0$ is the additive identity, you have $0+0=0$.
3. Substitute 2. into 1. to get $(0+0)\cdot a=b$.
4. Use right distributivity to obtain $0\cdot a+0\cdot a=b$.
5. Substitute 1. into this to obtain $b+b=b$.
6. Add $-b$, the additive inverse of $b$, to both sides: $(b+b)+(-b)=b+(-b)$.
7. Apply associativity on the left hand side to obtain $b+\bigl(b+(-b)\bigr)=b+(-b)$.
8. Use the fact that $b+(-b)=0$, since $-b$ was chosen to be the additive inverse. So you get $b+0=0$.
9. And since $0$ is the additive identity, this simplifies to $b=0$.
Taking everything together you conclude that $\forall a:0\cdot a=0$. Using essentially the same steps, you can show that $\forall a:a\cdot 0=0$. Either of these will include $0\cdot0=0$ as a special case.
If you don't like the use of $b$ as an abbreviation for $0\cdot a$, or if you prefer dealing with terms instead of equations, you can also write this whole thing as a sequence of such term transformations:
\begin{align*}0\cdot a&= 0\cdot a + 0&&\textbf{additive identity}\\&= 0\cdot a + \Bigl(0\cdot a + \bigl(-(0\cdot a)\bigr)\Bigr)&&\textbf{additive inverse}\\&= (0\cdot a + 0\cdot a) + \bigl(-(0\cdot a)\bigr)&&\textbf{associativity}\\&= (0 + 0)\cdot a + \bigl(-(0\cdot a)\bigr)&&\textbf{right distributivity}\\&= 0\cdot a + \bigl(-(0\cdot a)\bigr)&&\textbf{additive identity}\\&= 0&&\textbf{additive inverse}\end{align*}
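For readers who like their axiom-level proofs machine-checked, the same argument can be sketched in Lean 4 with mathlib. This assumes mathlib's `Ring` typeclass and the lemma names `add_mul`, `add_zero`, and `add_left_cancel`; names may differ between mathlib versions, and mathlib of course already proves this as `zero_mul`.

```lean
import Mathlib.Algebra.Ring.Basic

-- 0 * a = 0, derived only from distributivity, the additive identity,
-- and cancellation (which encodes the additive inverse step above)
example {R : Type*} [Ring R] (a : R) : 0 * a = 0 := by
  have h : 0 * a + 0 * a = 0 * a + 0 := by
    rw [← add_mul, add_zero, add_zero]
  exact add_left_cancel h
```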
Alternate subject areas
If you are not talking about rings, then what else are you talking about?
- If you are talking about natural numbers, there are several ways to define those – or rather the multiplication operation on these. One could define them using Peano arithmetic, and as CommonerG pointed out, $a\cdot0=0$ is part of the definition of multiplication there. One could define them as the cardinals of finite sets, with multiplication defined the way Martín-Blas Pérez Pinilla used it. Other set-theoretic definitions represent the numbers themselves as sets. I'm no expert on how multiplication is defined in each of these formalisms, but I guess it will likely boil down to Peano arithmetic again.
- You might be talking about integers, rationals, reals or complex numbers. Each of these is usually either defined axiomatically in a way that includes the ring axioms, or constructed (ℤ, ℚ, ℝ, ℂ) in a way that eventually builds on natural numbers, so you'd prove it there and then use the details of the construction to propagate that fact.
- Transfinite cardinal arithmetic is a generalization of this cardinal-based set-theoretic definition to infinite cardinals. Doesn't make a difference for $0$.
- Transfinite ordinal arithmetic has different definitions, so be precise which one you use.
Regarding the proof by Tony Padilla and Ed Copeland that $1+2+3+\dotsb=-\frac{1}{12}$ popularized by a recent Numberphile video, most seem to agree that the proof is incorrect, or at least, is not sufficiently rigorous. Can the proof be repaired to become rigorously justifiable? If the proof is wrong, why does the result it computes agree with other more rigorous methods?
Critiques of the proof seem to fall into two classes.
One class of responses is to appeal to higher math for justification. For example, appeal to zeta function regularization, as is done by Edward Frenkel in a followup video for Numberphile and a more recent video by Mathologer, or to use an exponential regulator, as shown by Luboš Motl here on M.SE, or a smooth cutoff regulator as in Terry Tao's wonderful blog post on the subject. These are great, but don't really address what's wrong with the naive computation.
Another response is that the sum is infinite, the series is divergent, and the manipulations are wrong, because manipulations of divergent series can lead to inconsistent results. See for example the answer by robjohn at this question, where he uses similar manipulations to show that the sum must also be $-\frac{7}{12}$. A similar contradiction is shown at the beginning of Tao's post. And Wikipedia's article on the series has a subheading showing that any stable and linear summation method which sums the series implies 0=1. See also Hagen von Eitzen's answer here for some excellent discussion. A reference to the Riemann series theorem may also be appropriate here. These responses are perhaps too dismissive, since there are a variety of rigorous ways to assign finite numbers to divergent sums. People say you have to be careful with divergent series, but few seem to be willing to say what steps the careful observer may take.
Proofs of the type given in that first Numberphile video can be valid if one is careful. For comparison, without specifying a summation method, but just manipulating series in a similar fashion, you can show that the geometric series $1+x+x^2+\dotsb$ sums to $\frac{1}{1-x}$, which is valid for any value of $x$ for which there exists any stable linear summation method. So for example once we know that $1+2+4+8+\dotsb$ converges 2-adically, without any further information we know that its sum must be $-1$, even though classical summation cannot sum the divergent series.
With that in mind, let's reexamine the computation in the Numberphile video (which I've pasted below in its entirety, copied from Kenny LJ at this question with some edits for understandability and rigor). He uses the Grandi series $1-1+1-1+\dotsb = \frac{1}{2}$ and the series $1-2+3-4+\dotsb=\frac{1}{4}$ to derive his result. These series are Cesàro summable, and Cesàro summation is stable and linear, which allows the manipulations to be justified. I think this first half of the computation is completely justifiable. All we lack is a proof that the two series are Cesàro summable, which is not hard.
However the third series $1+2+3+\dotsb$ is not Cesàro summable, nor indeed can any stable linear method sum it, as already mentioned. Zeta function regularization can sum it. Given a series $\sum a_n$, we may perform analytic continuation of $\sum \dfrac{1}{a_n^s}$ to $s=-1$. This is stable, but not linear. Alternatively the Dirichlet series can sum it. That is analytic continuation of $\sum \dfrac{a_n}{n^s}$ to $s=0$, which is linear but not stable. Either method can sum $1+2+3+\dotsb$ and in fact the two methods coincide for this series.
So I think this is the error in the Numberphile video, where they write $(0 + 4 + 0 + 8 + 0 + 12 + \dotsb) = 4+8+12+\dotsb$. If you want to use a linear summation method, you should not also assume stability.
Can the calculation be saved? Can we show by a naive computation using only linearity but not stability, or vice versa only stability but not linearity, that $1+2+3+\dotsb=-\frac{1}{12}$, without an explicit choice of a summation method?
This question is mostly a duplicate of What mistakes, if any, were made in Numberphile's proof that $1+2+3+\cdots=-1/12$?, or perhaps What consistent rules can we use to compute sums like 1 + 2 + 3 + ...? or What can be computed by axiomatic summation? which I am reposting with more context, because I feel that the answers there did not completely engage or address the question.
Numberphile's Proof of $1+2+3+\dotsb=-\frac{1}{12}$.
$S_1 = 1 - 1 + 1 - 1 + 1 - 1 + \dotsb$
$S_1 = 1 + (-1 + 1 - 1\dotsb)$ using stability
$-1 + 1 - 1\dotsb = -S_1$ using linearity
hence $S_1=1-S_1$ and $S_1=\frac{1}{2}$.
$S_2 = 1 - 2 + 3 - 4 + \dotsb $
$S_2' = 0 + 1 - 2 + 3 - 4 + \dotsb = 0 + S_2 = S_2$ by stability
$S_2 + S_2' = 1 - 1 + 1 - 1 \dotsb = 2S_2$ by linearity
hence $2S_2 = S_1 = \frac{1}{2}$ and $S_2=\frac{1}{4}$.
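The two sums used so far can be checked numerically. The Cesàro means of Grandi's series converge to $\frac12$; for $1-2+3-4+\dotsb$ the sketch below uses Abel summation (evaluating the power series near $x=1$) rather than plain Cesàro averaging, since the first-order Cesàro means of that series oscillate. Variable names are mine:

```python
# Cesàro means of Grandi's series 1 - 1 + 1 - 1 + ... converge to 1/2
partial_sums = []
s = 0
for n in range(10000):
    s += (-1) ** n
    partial_sums.append(s)
cesaro = sum(partial_sums) / len(partial_sums)

# Abel summation of 1 - 2 + 3 - 4 + ...: evaluate sum (-1)^n (n+1) x^n
# for x just below 1; analytically the series equals 1/(1+x)^2 -> 1/4
x = 0.99
abel = sum((-1) ** n * (n + 1) * x ** n for n in range(5000))
```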
$S_3 = 1 + 2 + 3 + 4 + \cdots $
Finally, take
\begin{align} S_3 - S_2 & = 1 + 2 + 3 + 4 + \cdots \\ & - (1 - 2 + 3 - 4 + \cdots) \\ & = 0 + 4 + 0 + 8 + \cdots \\ & = 4 + 8 + 12 + \cdots \\ & = 4S_3 \end{align} here we used linearity to get to line 3, stability to get to line 4, and linearity to get to line 5.
Hence $-S_2=3S_3$ or $-\frac{1}{4}=3S_3$.
And so $S_3=-\frac{1}{12}$. $\blacksquare$
Wikipedia's proof that stable linear summation methods cannot sum $1+2+3+\dotsb$
$S_3=1 + 2 + 3 + \dotsb$
$S_3' = 0 + 1 + 2 + 3 + \dotsb = 0 + S_3 = S_3$ by stability.
$S_4 = S_3 - S_3' = 1 + 1 + 1 + \dotsb = S_3 - S_3 = 0$
$S_4' = 0 + 1 + 1 + 1 + \dotsb = 0 = S_4$ by stability again,
and $S_4 - S_4' = 1 + 0 + 0 + \dotsb = 1$ by linearity, which is a contradiction.
Jupyter Notebook here: https://git.io/fjRjL PDF Article here: http://dx.doi.org/10.13140/RG.2.2.17472.58886 For an explicit compressible algorithm the maximum timestep one can take is dictated by both the advective and acoustic speeds in the flow. This is given by the famous CFL condition \begin{equation} V \frac{\delta t}{\Delta x} \leq 1 \end{equation} where $V$ is the maximum speed in the …
If you’ve worked in computational fluid dynamics, then you’re probably aware of the Taylor-Green vortex – at least the two-dimensional case. A simple google search will land you on this wikipedia page. The classic solution there is presented in the form \begin{equation} u = \cos x \sin y F(t);\quad v = -\sin x \cos y …
LaTeX subequations produce a spurious space or indent immediately after. To get rid of this space, do one of two things: place your subequations label at the beginning of the subequations environment, or, if you insist on placing the label at the end of the subequations environment, place a % sign after it.
Kronecker products can be used to efficiently and easily create 2D and 3D finite difference (and other) operators based on simple 1D operators for derivatives. Here’s a Jupyter notebook that shows you how to do this. You can find this notebook on nbviewer here.
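The Kronecker-product construction mentioned above can be sketched in a few lines: build a 1D second-difference operator, then combine it with identities via `np.kron` to get the 2D Laplacian. This is a minimal illustration, not the notebook's code; the helper name `d2_1d` is mine:

```python
import numpy as np

def d2_1d(n, h=1.0):
    # dense 1D second-difference operator with Dirichlet boundaries
    return (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / h**2

n = 4
D, I = d2_1d(n), np.eye(n)

# 2D Laplacian on an n-by-n grid: d^2/dx^2 acts within rows,
# d^2/dy^2 acts across rows, each lifted by a Kronecker product
L2 = np.kron(I, D) + np.kron(D, I)
```

The same pattern extends to 3D with three Kronecker terms, and to other 1D stencils (first derivatives, higher-order differences) without rewriting any 2D/3D indexing logic.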
This is from a short talk I gave at our group meeting last week.
In a recent article, I discussed how to sync your zotero library with your box account. However, as many have informed me, there seems to be a problem in setting up the “initial” folder. In fact, I just encountered this same exact problem today on a new computer that I am setting up. Today I …
For faster video speeds use: ffmpeg -i input.mov -filter:v "setpts=0.5*PTS" output.mov
For slower video speeds use: ffmpeg -i input.mov -filter:v "setpts=2*PTS" output.mov
Note the factor multiplying PTS. If that factor is less than 1, then you get a faster video. The opposite otherwise. Thanks to: Modify Video Speed with ffmpeg
May all the best that time and chance have to offer be yours – Jeff Bendock: friend, mentor, and role model. Today (Aug 10) my wife and I went through an extraordinary experience. But what we experienced was not unique. We’re not the first or last people to experience it by any means. There was …
Here’s a neat trick I learned today to display all matplotlib plots as vector format rather than raster in Jupyter notebooks:
You can view your (public) jupyter notebooks hosted by gitlab using nbviewer by changing the “blob” word in the hyperlink to “raw”.
Convert and display in place: jupyter-nbconvert --to slides mynotebook.ipynb --reveal-prefix=reveal.js --post serve
Convert to html: jupyter-nbconvert --to slides mynotebook.ipynb --reveal-prefix=reveal.js
As always, try: jupyter-nbconvert --help
This issue appears to have showed up on Sierra. Here’s a simple fix: Edit your bash profile (emacs ~/.bash_profile) Add export BROWSER=open Close your shell or source it (source ~/.bash_profile) Things should get back to normal now. Ref: https://github.com/conda/conda/issues/5408
make sure you don’t fall into the trap of copying pointers in Python: import numpy as np a = np.zeros(2) print("a = ", a) b = np.zeros(2) print("b = ", b) b = a # this assignment is simply a pointer copy – b points to the same data pointed to by a a[0]=33.33 # …
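The aliasing trap described above, and the fix (an explicit copy), in a runnable sketch:

```python
import numpy as np

a = np.zeros(2)
b = a          # alias: b refers to the same underlying data as a
a[0] = 33.33
# b[0] is now 33.33 as well, because a and b share storage

c = a.copy()   # an independent copy of the data
a[0] = -1.0
# c keeps its own values; only a (and its alias b) changed
```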
First install a utility called highlight. It is available through macports and homebrew. sudo port install highlight Now that highlight is installed, you can “apply” it to a file and pipe it to the clipboard: highlight -O rtf MyCode.cpp | pbcopy Now go to Keynote and simply paste from clipboard (or command + v). Highlight …
Suppose $k$ is a local field with ring of integers $\mathfrak o$. Let $\mathfrak m \subseteq \mathfrak o$ denote the unique maximal ideal, and define $q = [\mathfrak o : \mathfrak m]$. The maximal ideal is principal and we fix a generator, or uniformizer, $\varpi$ so that $\mathfrak m = \varpi \mathfrak o$. There is a unique absolute value $| \cdot |$ on $k$ with $$ | \varpi | = \frac{1}{q}. $$ In this context $\mathfrak o = \{ \alpha : |\alpha| \leq 1\}$ and $\mathfrak m = \{\alpha : |\alpha| < 1\}$. As a locally compact abelian group, $k$ has a Haar measure $\mu$ which can be made unique by specifying $\mu(\mathfrak o) = 1$.
Imagine two charged particles, $p$-adic electrons if you will, identified with $\alpha, \alpha' \in \mathfrak o$ and whose interaction energy is given by
$$ E(\alpha, \alpha') = -\log|\alpha' - \alpha|. $$ This vaguely corresponds with our intuition as to how electrons ($p$-adic or not) should behave in the sense that the energy is minimized when the distance between $\alpha$ and $\alpha'$ is maximal, and the energy is infinite if the electrons are on top of each other, that is when $\alpha = \alpha'$. Since we have restricted $\alpha$ and $\alpha'$ to $\mathfrak o$, the farthest apart they can be is $|\alpha - \alpha'| = 1$, and in this case the interaction energy is 0. This observation is central in understanding how these particles behave when we introduce thermal fluctuations to the mix.
Moving from $2$ particles to multiple particles, there are three main systems or ensembles we will consider:

The Microcanonical Ensemble: The system contains $N$ particles at a specified energy $E$.
The Canonical Ensemble: The system contains $N$ particles at a specified temperature $T$. Energy can be exchanged with a heat bath, and is now variable.
The Grand Canonical Ensemble: The system is now in contact with a heat bath and a particle reservoir so that $E$ and $N$ are both variable. The temperature $T$ and a new quantity, the chemical potential, are fixed and control the average energy and particle number.

The Microcanonical Ensemble
In this setting, the energy of the $N$ particles located at the coordinates of $\boldsymbol \alpha \in \mf o^N$ is given by
$$ E(\boldsymbol \alpha) = -\sum_{m < n} \log|\alpha_n - \alpha_m|. $$ All states with the same energy are assumed to be equally probable, and the main problem for the microcanonical ensemble is the determination of the measure of the set with prescribed energy.
For reasons that will become apparent, it is sometimes useful to deal with the exponentiated energy
$$ e^{-E(\boldsymbol \alpha)} = \prod_{m < n} |\alpha_n - \alpha_m|. $$
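For concreteness, the energy above can be evaluated numerically for integer points of $\mathbb Z_p$ (taking $k=\mathbb Q_p$, so $q=p$). The helper names `vp`, `abs_p`, and `energy` are mine, not from the post:

```python
from math import log

def vp(n, p):
    # p-adic valuation of a nonzero integer
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def abs_p(n, p):
    # p-adic absolute value |n|_p = p^(-v_p(n))
    return p ** -vp(n, p)

def energy(points, p):
    # E(alpha) = -sum over pairs m < n of log |alpha_n - alpha_m|_p
    return -sum(log(abs_p(points[j] - points[i], p))
                for i in range(len(points))
                for j in range(i + 1, len(points)))

# three 5-adic "electrons": 0 and 5 are 5-adically close (|5|_5 = 1/5),
# while 1 is at maximal distance 1 from both, contributing zero energy
E = energy([0, 1, 5], 5)
```

Only the pair $(0,5)$ contributes, with energy $-\log(1/5)=\log 5$, illustrating the point above that pairs at maximal distance $1$ cost nothing.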
You are separating the spatial and spin parts of your two wavefunctions. I don't think that is a good idea here, because it only works for one of the two configurations.
The wavefunction of a configuration is given by a Slater determinant built from spin orbitals. In the following I will neglect the normalization constant of $\frac{1}{\sqrt2}$ for simplicity. The wavefunction for the first configuration ($\uparrow - \downarrow$) is
\begin{align} \Psi_1 &= \begin{vmatrix} p_-(1)\alpha(1) & p_+(1)\beta(1)\\ p_-(2)\alpha(2) & p_+(2)\beta(2)\\ \end{vmatrix}\\ &= p_-(1)\alpha(1)p_+(2)\beta(2) - p_+(1)\beta(1)p_-(2)\alpha(2)\end{align}
Note that this differs from your suggested wavefunction by the spin part of the orbitals, and there is no way to separate them out.
And for the other configuration ($-\uparrow\downarrow -$) we have\begin{align} \Psi_2 &= \begin{vmatrix} p_0(1)\alpha(1) & p_0(1)\beta(1)\\ p_0(2)\alpha(2) & p_0(2)\beta(2)\\ \end{vmatrix}\\ &= p_0(1)\alpha(1)p_0(2)\beta(2) - p_0(1)\beta(1)p_0(2)\alpha(2)\end{align}
Both $\Psi_1$ and $\Psi_2$ look quite similar now. However, since the spatial part of both orbitals in $\Psi_2$ is the same, we can factor it out
\begin{equation} \Psi_2 = p_0(1)p_0(2)\underbrace{[\alpha(1)\beta(2)-\beta(1)\alpha(2)]}_{\chi}\end{equation}
And we get what you have suggested for this configuration. But this only works for $\Psi_2$, not for $\Psi_1$!
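The factorization can be verified symbolically. Here is a small sketch with sympy (the symbol names `p01`, `p02` for the spatial orbital $p_0$ on electrons 1 and 2, and `a`/`b` for the $\alpha$/$\beta$ spin functions, are my own):

```python
import sympy as sp

# p0_i = spatial orbital p0 for electron i; a_i / b_i = alpha / beta spin
p01, p02, a1, a2, b1, b2 = sp.symbols('p01 p02 a1 a2 b1 b2')

# Slater determinant for the (- updown -) configuration, normalization dropped
Psi2 = sp.Matrix([[p01 * a1, p01 * b1],
                  [p02 * a2, p02 * b2]]).det()

# the common spatial factor p0(1)p0(2) separates from the singlet spin part
factored = sp.factor(Psi2)
```

Running the same construction for $\Psi_1$ with two different spatial orbitals $p_-$ and $p_+$ produces a determinant with no common factor, confirming that the space-spin separation fails there.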
Now, why is $\Psi_1$ assigned to $L=2$, while $\Psi_2$ is $L=0$?
I assume your confusion arises because if you add up the orbital quantum numbers $m_l$ you get $0$ in both cases ($1-1=0$ and $0+0=0$). However, those sums give you $M_L$, not $L$. The many-electron quantum number $L$ has $2L+1$ components with quantum numbers $M_L=-L,-L+1,\dots 0, \dots L-1,L$. So for $L=2$ we have $M_L=-2,-1,0,1,2$, much like there are three orbitals $p_-$, $p_0$ and $p_+$ for $l=1$.
So $\Psi_1$ would be the $M_L=0$ component of $L=2$, while $\Psi_2$ is the $M_L=0$ component of $L=0$.
But there is one issue with such assignments: You cannot really do it, as the configurations get mixed together. The point is that the atom as a whole needs to be spherically symmetric (unless there is some external electric or magnetic field). But the individual $p$ orbitals are not. For example $p_0$ is (in the usual convention) aligned along the $z$-axis and zero in the $xy$ plane.
Mathematically speaking, the configurations form a basis (of a Hilbert space) in which the actual electronic states are expanded. This is known as the configuration interaction method.
So for example the electronic state with $L=0$ would be\begin{equation}\Psi_2 = \frac{1}{\sqrt3}(|\uparrow\downarrow - -\rangle + |-\uparrow\downarrow - \rangle + |--\uparrow\downarrow \rangle)\end{equation}
where each $|\dots\rangle$ denotes a Slater determinant. |
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get? (a) $A$ (b) $C^{-1}A^{-1}BC^{-1}AC^2$ (c) $B$ (d) $C^2$ (e) $C^{-1}BC$ (f) $C$
Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less. Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*}
(a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$.
(b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$.
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be a vector space and $B$ be a basis for $V$. Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
(a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5. Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason.
Problem 6. Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
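Not part of the exam, but Problem 6 can be sanity-checked numerically. This sketch computes the $2\times 2$ inverse by the adjugate formula and applies it to $\mathbf{b}$, using exact rational arithmetic:

```python
from fractions import Fraction as F

# Check of Problem 6: invert the 2x2 coefficient matrix A = [[3, 2], [5, 3]]
# via the adjugate formula and apply A^{-1} to b = (1, 2).
a, b, c, d = F(3), F(2), F(5), F(3)
det = a * d - b * c                       # det(A) = -1, so A is invertible
Ainv = [[d / det, -b / det], [-c / det, a / det]]
b1, b2 = F(1), F(2)
x1 = Ainv[0][0] * b1 + Ainv[0][1] * b2
x2 = Ainv[1][0] * b1 + Ainv[1][1] * b2
print(x1, x2)  # x1 = 1, x2 = -1; indeed 3*1 + 2*(-1) = 1 and 5*1 + 3*(-1) = 2
```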
(Linear Algebra Midterm Exam 1, the Ohio State University) |
Notice this :
$1\times9+2 = 11$
$12\times9+3 = 111$
$123\times9+4 = 1111$
$1234\times9+5 = 11111$
$12345\times9+6 = 111111$
$123456\times9+7 = 1111111$
$1234567\times9+8 = 11111111$
$12345678\times9+9 = 111111111$
$123456789\times9+10 = 1111111111$
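A quick machine check of the observed pattern (this is only a numerical sanity check, not the requested proof):

```python
# Sanity check of the pattern: for each n from 1 to 9, the number 12...n
# times 9, plus (n + 1), equals the repunit consisting of n + 1 ones.
for n in range(1, 10):
    num = int("".join(str(d) for d in range(1, n + 1)))  # 1, 12, 123, ...
    repunit = int("1" * (n + 1))                          # 11, 111, 1111, ...
    assert num * 9 + (n + 1) == repunit, n
print("pattern verified for n = 1..9")
```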
Magic of mathematics
Can anyone prove the cause of the sequence?
PS: without actual multiplication and without induction
Note by Parth Lohomi 4 years, 9 months ago
Proof: Math is awesome. And we are done.
LOLLOlLOLOLOLOLOL!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Hint: Mathematical Induction!
Will you elaborate.
Sure. I used induction on the number of digits in the number $\overline{123\ldots n}$, which is $1$ less than the number of ones, which, in turn, is the number with which we multiply $\overline{123\ldots n}$ by, after adding $9$. Firstly, I proved it true for the base case, that is, $n=1$, then let it be true for $1,2,3,\ldots,k$, and then consecutively proved it right for $n=k+1$. I've recently learnt induction, and am new to it, so I may have made a mistake. Please correct me if I have! Cheers :)
@Satvik Golechha nice!! My teacher also told me, but can anyone prove it without induction?
seems like division algorithm
Isn't it awesome!!
A question the human brain can't answer (#powerofmaths): 0/0 = ?
|
I have found some pretty complete lists (I think) of mathematical symbols here and here, but I don't see a symbol for the word "and" on either list. A person could easily just write the word "and" or use an ampersand, but I was wondering if there was an actual mathematical symbol for the word "and". Also, if anyone knows any lists that are more complete than the ones I have linked to please provide a link.
The logical "and" is $\wedge$ (and the corresponding "or" is $\vee$).
I'll also add that, perversely, the comma can mean either "and" or "or", depending on context. For example, in classical sequent calculus, $\{ P, Q \} \vdash \{ R, S \}$ means $P \land Q \vdash R \lor S$. Also, in set-builder notation $\{ \ldots : \ldots \}$, in a certain sense, commas in the left half are disjunctions and commas in the right half are conjunctions... which is the exact opposite of $\vdash$.
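For what it's worth, on booleans $\wedge$ and $\vee$ behave exactly like Python's `and` / `or`; a two-line truth table makes the meanings concrete:

```python
# Truth tables for conjunction (P ∧ Q) and disjunction (P ∨ Q).
print("P     Q     P∧Q   P∨Q")
for P in (False, True):
    for Q in (False, True):
        print(P, Q, P and Q, P or Q)
```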
The ampersand & is unmistakeable and just about right in semi-formal statements where "and" would be too wordy and a comma would be not very clear. The notation $\land$ is appropriate for formal logic, but isn't used much in general mathematics. |
As we go to press, media around the world have been reporting the latest round of awards of the coveted Fields Medal (popularly called the "Nobel Prize for Mathematics") which are awarded every four years.
We sometimes see in newspapers or on television situations where a straight line is drawn so as to approximately fit some data points. This can always be done by eye, using human judgment, but the results would then tend to vary depending on the person drawing the line.
Two separate events happily combined to suggest the topic for this issue’s column. In the first place, I devoted my previous column to a somewhat controversial attempt to apply Mathematics to the "softer sciences" such as Biology and Linguistics.
Support vector machines emerged in the mid-1990s as a flexible and powerful means of classification. Classification is a very old problem in Statistics but, in our increasingly data-rich age, remains as important as ever.
Problem 1. An American football field is 100 yards long, and its width is half the average of its length and its diagonal. Find its area.

Prize Winners – Senior Division: First Prize, Graham Robert White, James Ruse Agricultural High School.

Q1211. Solve $$ (2+\sqrt{2})^{\sin^2x} - (2+\sqrt{2})^{\cos^2x} + (2-\sqrt{2})^{\cos2x} = \left(1+\frac{1}{\sqrt{2}}\right)^{\cos 2x}$$
Category: Group Theory
Group Theory Problems and Solutions.
Popular posts in Group Theory are:
Problem 625
Let $G$ be a group and let $H_1, H_2$ be subgroups of $G$ such that $H_1 \not \subset H_2$ and $H_2 \not \subset H_1$.
(a) Prove that the union $H_1 \cup H_2$ is never a subgroup in $G$.
(b) Prove that a group cannot be written as the union of two proper subgroups.

Problem 616
Suppose that $p$ is a prime number greater than $3$.
Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$. (a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$. (b) Determine the index $[G : S]$.
(c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$.

Problem 613
Let $m$ and $n$ be positive integers such that $m \mid n$.
(a) Prove that the map $\phi:\Zmod{n} \to \Zmod{m}$ sending $a+n\Z$ to $a+m\Z$ for any $a\in \Z$ is well-defined. (b) Prove that $\phi$ is a group homomorphism. (c) Prove that $\phi$ is surjective.
(d) Determine the group structure of the kernel of $\phi$.

If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order

Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Problem 497
Let $G$ be an abelian group.
Let $a$ and $b$ be elements in $G$ of order $m$ and $n$, respectively. Prove that there exists an element $c$ in $G$ such that the order of $c$ is the least common multiple of $m$ and $n$.
Also determine whether the statement is true if $G$ is a non-abelian group. |
After coming across the assertion that given
$$ x^2 = y^2 \pmod n \\ x \neq \pm y \pmod n $$
we can then conclude that n factors into
$$ n = \mathrm{gcd}(n, x-y) \mathrm{gcd}(n, x+y). $$
in this article, I attempted my own example but it doesn't seem to work. I think there's a missing hypothesis, but more importantly, I'm interested in understanding why this would work under the correct conditions.
First, the example given was
$$ \begin{align*} &6^2 = 1^2 \pmod {35} \\ &x+y = 7 \\ &x-y = 5 \end{align*} $$
So applying the gcd with $35$ simply retains the values, yielding the correct factorization
$$35 = 7 \times 5.$$
However, my test example was
$$ \begin{align*} &8^2 = 4^2 \pmod {24} \\ &x+y = 12 \\ &x-y = 4 \end{align*} $$ If we just blindly applied the stated result, we would get
$$24 = 12 \times 4.$$
I would guess the problem is that my selected factors are not coprime. Does anyone have any intuition for why this method works (when it works) or any hints towards a proof?
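As a concrete sketch (Python, using the article's example and the failing test case above): given $x^2 \equiv y^2 \pmod n$ with $x \not\equiv \pm y \pmod n$, both gcds are guaranteed to be *nontrivial* factors of $n$ — since $n \mid (x-y)(x+y)$ while $n$ divides neither factor, each gcd is strictly between $1$ and $n$ — but their product equals $n$ only in favourable cases, e.g. when the two factors are coprime:

```python
import math

# Congruence-of-squares factor split: returns gcd(n, x-y) and gcd(n, x+y).
# Both are nontrivial factors of n under the stated hypotheses, but their
# product need not equal n when the two gcds share a common factor.
def split(n, x, y):
    assert (x * x - y * y) % n == 0       # x^2 ≡ y^2 (mod n)
    return math.gcd(n, x - y), math.gcd(n, x + y)

print(split(35, 6, 1))   # (5, 7): the clean case, 5 * 7 = 35
print(split(24, 8, 4))   # (4, 12): nontrivial factors, but 4 * 12 != 24
```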
After some brief pondering, I've noticed it certainly works if the values $x+y$ and $x-y$ are coprime. Their product divides $n$ yet they contain no overlapping factors, so by collecting the factors of $x+y$ and $x-y$ which are also factors of $n$, we will end up with precisely all the factors of $n$ and nothing extra. |
We owe Paul Dirac two excellent mathematical jokes. I have amended them with a few lesser known variations.
A.
Square root of the Laplacian: we want $\Delta$ to be $D^2$ for some first order differential operator (for example, because it is easier to solve first order partial differential equations than second order PDEs). Writing it out,
$$\sum_{k=1}^n \frac{\partial^2}{\partial x_k^2}=\left(\sum_{i=1}^n \gamma_i \frac{\partial}{\partial x_i}\right)\left(\sum_{j=1}^n \gamma_j \frac{\partial}{\partial x_j}\right) = \sum_{i,j}\gamma_i\gamma_j \frac{\partial^2}{\partial x_i x_j},$$
and equating the coefficients, we get that this is indeed true if
$$D=\sum_{i=1}^n \gamma_i \frac{\partial}{\partial x_i}\quad\text{and}\quad \gamma_i\gamma_j+\gamma_j\gamma_i=2\delta_{ij}.$$
It remains to come up with the right $\gamma_i$'s. Dirac realized how to accomplish it with $4\times 4$ matrices when $n=4$; but a neat follow-up joke is to simply define them to be the elements $\gamma_1,\ldots,\gamma_n$ of
$$\mathbb{R}\langle\gamma_1,\ldots,\gamma_n\rangle/(\gamma_i\gamma_j+\gamma_j\gamma_i - 2\delta_{ij}).$$
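For $n=3$ the Pauli matrices give a concrete realization of these relations (in the common normalization $\gamma_i\gamma_j+\gamma_j\gamma_i=2\delta_{ij}$, which makes each $\gamma_i$ square to the identity). A stdlib-only check:

```python
# Verify the Clifford anticommutation relations for the Pauli matrices.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

sigma = [
    [[0, 1], [1, 0]],       # sigma_x
    [[0, -1j], [1j, 0]],    # sigma_y
    [[1, 0], [0, -1]],      # sigma_z
]
for i in range(3):
    for j in range(3):
        anti = add(matmul(sigma[i], sigma[j]), matmul(sigma[j], sigma[i]))
        expected = [[(2 if (i == j and r == c) else 0) for c in range(2)]
                    for r in range(2)]
        assert anti == expected, (i, j)
print("sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij verified")
```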
Using symmetry considerations, it is easy to conclude that the commutator of the $n$-dimensional Laplace operator $\Delta$ and the multiplication by $r^2=x_1^2+\cdots+x_n^2$ is equal to $aE+b$, where $$E=x_1\frac{\partial}{\partial x_1}+\cdots+x_n\frac{\partial}{\partial x_n}$$ is the Euler vector field. A boring way to confirm this and to determine the coefficients $a$ and $b$ is to expand $[\Delta,r^2]$ and simplify using the commutation relations between $x$'s and $\partial$'s. A more exciting way is to act on $x_1^\lambda$, where $\lambda$ is a formal variable:
$$[\Delta,r^2]x_1^{\lambda}=((\lambda+2)(\lambda+1)+2(n-1)-\lambda(\lambda-1))x_1^{\lambda}=(4\lambda+2n)x_1^{\lambda}.$$
Since $x_1^{\lambda}$ is an eigenvector of the Euler operator $E$ with eigenvalue $\lambda$, we conclude that
$$[\Delta,r^2]=4E+2n.$$
B.
Dirac delta function: if we can write
$$g(x)=\int g(y)\delta(x-y)dy$$
then instead of solving an inhomogeneous linear differential equation $Lf=g$ for each $g$, we can solve the equations $Lf=\delta(x-y)$ for each real $y$, where a linear differential operator $L$ acts on the variable $x,$ and combine the answers with different $y$ weighted by $g(y)$. Clearly, there are fewer real numbers than functions, and if $L$ has constant coefficients, using translation invariance the set of right hand sides is further reduced to just one, $\delta(x)$. In this form, the joke goes back to Laplace and Poisson.
What happens if instead of the ordinary geometric series we consider a doubly infinite one? Since
$$z(\cdots + z^{-n-1} + z^{-n} + \cdots + 1 + \cdots + z^n + \cdots)= \cdots + z^{-n} + z^{-n+1} + \cdots + z + \cdots + z^{n+1} + \cdots,$$
the expression in the parenthesis is annihilated by the multiplication by $z-1$, hence it is equal to $\delta(z-1)$. Homogenizing, we get
$$\sum_{n\in\mathbb{Z}}\left(\frac{z}{w}\right)^n=\delta(z-w)$$
This identity plays an important role in conformal field theory and the theory of vertex operator algebras.
Pushing infinite geometric series in a different direction,
$$\cdots + z^{-n-1} + z^{-n} + \cdots + 1=-\frac{z}{1-z} \quad\text{and}\quad 1 + z + \cdots + z^n + \cdots = \frac{1}{1-z},$$
which add up to $1$. This time, the sum of the doubly infinite geometric series is zero! Thus the point $0\in\mathbb{Z}$ is the sum of all lattice points on the non-positive half-line and all lattice points on the non-negative half-line:
$$0=[\ldots,-2,-1,0] + [0,1,2,\ldots] $$
A vast generalization is given by Brion's formula for the generating function for the lattice points in a convex lattice polytope $\Delta\subset\mathbb{R}^N$ with vertices $v\in{\mathbb{Z}}^N$ and closed inner vertex cones $C_v\subset\mathbb{R}^N$:
$$\sum_{P\in \Delta\cap{\mathbb{Z}}^N} z^P = \sum_v\left(\sum_{Q\in C_v\cap{\mathbb{Z}}^N} z^Q\right),$$
where the inner sums in the right hand side need to be interpreted as rational functions in $z_1,\ldots,z_N$.
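Brion's formula can be sanity-checked numerically in the simplest case, a segment $[0,N]\subset\mathbb{R}$ (my own toy example): the two vertex-cone generating functions are $1/(1-z)$ (cone at $0$) and $z^N/(1-1/z)$ (cone at $N$), and as rational functions they sum to the honest lattice-point sum $1+z+\cdots+z^N$:

```python
# Numeric check of Brion's formula for the segment [0, N].
N = 5
z = 0.37                                  # any 0 < z < 1 works here
cone_at_0 = 1 / (1 - z)                   # sum over the cone {0, 1, 2, ...}
cone_at_N = z**N / (1 - 1 / z)            # sum over the cone {N, N-1, N-2, ...}
direct = sum(z**k for k in range(N + 1))  # honest finite sum
assert abs((cone_at_0 + cone_at_N) - direct) < 1e-9
print("Brion check passed:", direct)
```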
Another great joke based on infinite series is the Eilenberg swindle, but I am too exhausted by fighting the math preview to do it justice. |
Here's an argument essentially due to Bondi.
It is physically motivated by radar measurements.
First, an introduction to
Bondi's k-calculus. (This is based on a diagram from Bondi's "E=mc2: An Introduction to Relativity" (http://www.worldcat.org/title/emc2-an-introduction-to-relativity/oclc/156217827), which accompanied Bondi's series of lectures "E=mc2: Thinking Relativity Through", a series of ten lectures on BBC TV running from Oct 5 to Dec 7, 1963. It had a typo that I corrected.)
Two inertial observers (Bondi will call) Alfred and Brian meet at event O.
Alfred performs a
radar measurement to assign coordinates to event P on Brian's worldline.
After a time $T$ on Alfred's wristwatch, he sends a light signal to Brian. Brian receives the signal at a time $kT$ on his own watch (event P), where $k$ is a proportionality constant (independent of $T$). [This $k$ turns out to be the Doppler factor.]
When this light-signal is reflected by Brian's worldline (at event P), the reflected signal arrives back at Alfred's worldline when Alfred's watch reads $k(kT)$, where the same factor of $k$ is used because of the Principle of Relativity. (We've also used that the speed of light is the same for these observers.)
[Side note: These two triangles, with two timelike legs and one lightlike leg, are similar in Minkowski spacetime.]
So, Alfred can assign a time-coordinate and a space-coordinate to the distant event P (displacements from event O):$$\Delta t_{P}=(\mbox{half of the elapsed time})=\frac{t_{rec}+t_{send}}{2}=\frac{k^2T+T}{2}$$$$\Delta x_{P}=(\mbox{half of the roundtrip distance})=c\frac{t_{rec}-t_{send}}{2}=c\frac{k^2T-T}{2}.$$
By division, one can get $\quad v_{BA}=\displaystyle\frac{\Delta x_P}{\Delta t_P}=\frac{k^2-1}{k^2+1}\quad$ (independent of $T$),
which can be solved for $k$ to get the Doppler formula.
Note that
by addition: $\quad \Delta t_{P}+(1/c)\Delta x_{P}=t_{rec}\quad$, and by subtraction: $\Delta t_{P}-(1/c)\Delta x_{P}=t_{send}$.
Now consider
two inertial observers making radar measurements, assigning coordinates to a distant event (call it Q).
Each observer sends a light-signal and waits for its echo to be received, noting his wristwatch reading at these two events on his worldline.(Geometrically, we have the light-cone of Q intersecting the two inertial worldlines that met at event O.)
[Side note: Although not necessary, event Q could be on the worldline of a third observer (call her Carol). Then these radar measurements would involve $k_{CB}$ and $k_{CA}$, relating Carol and Brian and Carol and Alfred.
The "$k$" used above in the first part and in the part below could be called $k_{BA}$ to relate Brian and Alfred.]
(The diagram is from Bondi's "Relativity and Common Sense".)
Their wristwatch readings are related by$$\left( \Delta t_Q' - \frac{\Delta x_Q'}{c}\right) = k\left( \Delta t_Q - \frac{\Delta x_Q}{c} \right)$$and$$\left( \Delta t_Q' + \frac{\Delta x_Q'}{c}\right) = \frac{1}{k}\left( \Delta t_Q + \frac{\Delta x_Q}{c} \right)$$
By multiplication, we get the following equation:$$\left({\bf \mbox{invariant square interval}}\right)=\left( \Delta t_Q'^2 - \frac{\Delta x_Q'^2}{c^2}\right) =\left( \Delta t_Q^2 - \frac{\Delta x_Q^2}{c^2}\right),$$ with its
minus-sign in front of the spatial coordinate. (Calling this "the invariant square interval" and not "minus the invariant square interval" is the choice of sign convention.)
(Side note: By addition and subtraction, one gets the Lorentz transformations.)
The reason why this method works is that we are working in the eigenbasis of the Lorentz transformation, where the lightlike directions are the eigenvectors and the Doppler factor and its reciprocal are the eigenvalues.
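To make the bookkeeping concrete, here is a small numerical sketch (units with $c=1$; the value of $v$ is my own illustrative choice):

```python
import math

# k-calculus bookkeeping: for relative speed v, the Doppler factor is
# k = sqrt((1+v)/(1-v)); radar coordinates built from the emission and
# reception times give an invariant t^2 - x^2.
v = 0.6
k = math.sqrt((1 + v) / (1 - v))          # = 2 for v = 0.6

T = 1.0                                   # Alfred's emission time
t_send, t_rec = T, k * k * T              # echo returns at k^2 T
t = (t_rec + t_send) / 2                  # Alfred's coordinates of event P
x = (t_rec - t_send) / 2
assert abs(x / t - v) < 1e-12             # recovers v = (k^2 - 1)/(k^2 + 1)

# Invariance: Brian's null coordinates are t - x and t + x rescaled by k, 1/k.
t_minus = k * (t - x)                     # (t' - x')
t_plus = (1 / k) * (t + x)                # (t' + x')
assert abs(t_minus * t_plus - (t**2 - x**2)) < 1e-12
print("interval invariant:", t**2 - x**2)
```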
This is based on a blog entry that I contributed here
https://www.physicsforums.com/insights/relativity-using-bondi-k-calculus/ |
Solutions – Colligative Properties

Colligative properties:
(1) The properties of dilute solutions that depend on the number of solute particles, irrespective of their nature.
(2) Colligative properties are classified into four types:
a. Relative lowering of vapour pressure
b. Elevation of boiling point
c. Depression of freezing point
d. Osmotic pressure
(3) Normal colligative properties (when neither association nor dissociation of the solute particles takes place):
(i) Relative lowering of vapour pressure: \tt \frac{P^{o} - P}{P^{o}} = X_{solute}
(ii) Elevation of boiling point: ΔT_b = k_b m
(iii) Depression of freezing point: ΔT_f = k_f m
(iv) Osmotic pressure: π = CRT (or) π = CST
(i) Relative lowering of vapour pressure (RLVP):
\tt \frac{P^{o} - P}{P^{o}} = X_{solute} = \frac{n}{n + N}
Trick:
(a) For a dilute solution (mass/mass % ≤ 5): \tt \frac{P^{o} - P}{P^{o}} \approx \frac{n}{N}
(b) For a concentrated solution (mass/mass % > 5): \tt \frac{P^{o} - P}{P^{o}} = \frac{n}{n + N}
(c) To find the molecular mass of the solute, for either type of solution (dilute or concentrated) we can use \tt \frac{P^{o} - P}{P} = \frac{n}{N}
(d) Molality (m) = \tt \frac{P^{o} - P}{P} \times \frac{1000}{M(in \ g \ mol^{-1})}
where n = number of moles of solute, N = number of moles of solvent, M = molecular mass of solvent, P^{o} = vapour pressure of pure solvent, P = vapour pressure of solution.
(b) Ostwald–Walker method:
Loss in weight of solution container ∝ P
Loss in weight of solvent container ∝ (P⁰ − P)
Gain in weight of dehydrating agent ∝ P⁰
\tt \frac{P^{o} - P}{P^{0}} = \frac{Loss \ in \ weight \ of \ solvent}{Gain \ in \ weight \ of \ dehydrating \ agent}
(ii) Elevation in boiling point:
(a) ΔT_b = k_b m, where \tt \Delta T_{b} = T_{b} - T_{b}^{0}; T_b = boiling point of solution; \tt T_{b}^{0} = boiling point of the pure liquid (solvent); k_b = boiling-point elevation (ebullioscopic) constant; m = molality of solution.
(b) \tt k_{b} = \frac{R(T_{b}^{0})^{2}}{1000 \ L_{v}}, where L_v = latent heat of vaporization per gram.
(c) \tt k_{b} = \frac{MR(T_{b}^{0})^{2}}{1000 \ \Delta H_{vap}}, where ΔH_vap = enthalpy of vaporization per mole and M = molar mass of solvent (in g/mol).
(iii) Depression in freezing point:
(a) ΔT_f = k_f m, where \tt \Delta T_{f} = T_{f}^{0} - T_{f}; \tt T_{f}^{0} = freezing point of the pure solvent; k_f = freezing-point depression (cryoscopic) constant.
(b) \tt k_{f} = \frac{R(T_{f}^{0})^{2}}{1000 \ L_{f}}, where L_f = latent heat of fusion per gram.
(c) \tt k_{f} = \frac{MR(T_{f}^{0})^{2}}{1000 \ \Delta H_{fus}}, where ΔH_fus = enthalpy of fusion per mole and M = molar mass of solvent (in g/mol).
(iv) Osmotic pressure (π):
(a) The hydrostatic pressure built up on the solution which just stops osmosis. In other words, "the pressure which must be applied on the concentrated-solution side to just stop osmosis".
(b) For dilute solutions, π = CRT = hρg, where C = concentration of the solution (in molarity), R = the solution constant (equivalent to the universal gas constant), h = height of the column of the concentrated solution, and ρ = density of the solution in the column.
(c) On the basis of osmotic pressure, solutions can be classified into three classes.
(i) Isotonic solutions: Two solutions having the same osmotic pressure are called isotonic solutions ⇒ C₁ = C₂ at a given T.
(ii) Hypertonic and hypotonic solutions: When two solutions are compared, the solution with the higher osmotic pressure is termed hypertonic; the solution with the lower osmotic pressure is termed hypotonic.
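As a worked illustration of the formulas in these notes (all numbers here are my own assumed values; $K_b$ and $K_f$ are the common textbook constants for water):

```python
# Example: 0.5 mol of a non-volatile, non-electrolyte solute in 500 g water.
Kb, Kf = 0.52, 1.86          # K kg mol^-1, typical values for water (assumed)
moles_solute = 0.5
kg_solvent = 0.5
m = moles_solute / kg_solvent             # molality = 1.0 mol/kg

dTb = Kb * m                              # boiling-point elevation, K
dTf = Kf * m                              # freezing-point depression, K

R, T = 0.0821, 300.0                      # L atm mol^-1 K^-1, K (assumed T)
C = 0.1                                   # molarity, mol/L (assumed)
pi_osm = C * R * T                        # osmotic pressure, atm

print(f"dTb = {dTb:.2f} K, dTf = {dTf:.2f} K, pi = {pi_osm:.2f} atm")
```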
1. The relative lowering of vapour pressure is
\tt \frac{p_1^\star-p_1}{p_1^\star}=\frac{p_1^\star-x_{1}p_1^\star}{p_1^\star}=1-x_{1}=x_{2}\ or\ -\frac{\Delta p_{1}}{p_1^\star}=x_{2} (where \tt \Delta p_{1}=p_{1}-p_1^\star)
2. \tt \Delta T_{b}=K_{b}m, where K_b is known as the boiling-point elevation constant.
3. \tt \Delta T_{f}=K_{f}m, where K_f is known as the freezing-point depression constant.
4. Osmotic pressure π = cRT |
Let $\hat{u}$ and $\hat{v}$ be unit vectors and $\vec{w}$ be a vector such that $\vec{w}+(\vec{w}\times \hat{u}) = \hat{v}$.
The angle in degrees between $\hat{u}$ and $\hat{v}$ such that $|(\hat{u}\times\hat{v})\cdot\vec{w}|$ is maximized is $\theta$, and the maximum value of $|(\hat{u}\times\hat{v})\cdot\vec{w}|$ is $M$. Find the value of $\theta+M$.
Image credit: Wikipedia
|
The first thing you should note is that if $\tau, \sigma \in S_n$, then $\tau \circ \sigma \in S_n$. This means that if you have two permutations, then their product is also a permutation of the same permutation group.
You also know that $S_n$ has exactly $n!$ elements.
Now imagine I give you a set $A$ where
all of its elements are permutations from $S_n$. Mathematically speaking this means that $A \subset S_n$.
Now let's think about how many elements $A$ can have.
If $A = S_n$, then $A$ contains all permutations from $S_n$. So how many elements does $A$ have? Exactly $n!$, since there are exactly $n!$ permutations in $S_n$.
What if $A$ contains exactly $n!$ permutations from $S_n$? Then $A$ must contain all permutations from $S_n$, since there are only $n!$ elements in $S_n$. So $A = S_n$.
What I have shown is that any set $A$ that only contains permutations from $S_n$ has exactly $n!$ elements if and only if $A = S_n$.
The key point here is that $A$ only has permutations from $S_n$ as its elements. And if it has $n!$ permutations (that is, all permutations) as its elements, then it must be equal to $S_n$ (and vice versa).
This is essentially all the proposition says.
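The proposition is also easy to check mechanically for a small case; a sketch for $n=3$ (the tuple representation and the `compose` helper are my own conventions):

```python
from itertools import permutations

# For every fixed tau in S_3, composing each sigma with tau on either side
# just permutes S_3, so both "cosets" equal S_3 itself.
def compose(s, t):
    # (s ∘ t)(i) = s(t(i)); permutations are tuples acting on 0..n-1
    return tuple(s[t[i]] for i in range(len(s)))

S3 = set(permutations(range(3)))
assert len(S3) == 6                       # n! elements
for tau in S3:
    assert {compose(sigma, tau) for sigma in S3} == S3
    assert {compose(tau, sigma) for sigma in S3} == S3
print("both composition sets equal S_3 for every tau")
```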
The set $\left\{\sigma \circ \tau : \sigma \in S_n\right\}$ is a subset of $S_n$, that is it only contains permutations from $S_n$. Why? See my first statement at the top of this post.
So $\left\{\sigma \circ \tau : \sigma \in S_n\right\} = S_n$ is equivalent to saying that $\left\{\sigma \circ \tau : \sigma \in S_n\right\}$ has exactly $n!$ elements.
The same holds for $\left\{\tau \circ \sigma: \sigma \in S_n\right\} = S_n$ |
$\require{cancel}$
I am trying to do an exercise from Scattering Amplitudes By Elvang (Exercise 2.9) which states:
Show that $A_5(f^-\bar{f}^-\phi\phi\phi) = g^3\frac{[12][34]^2}{[13][14][23][24]} + 3\leftrightarrow 5 + 4\leftrightarrow 5$ in Yukawa theory
So, I draw the Feynman diagram, which I think looks something like this (the interaction term is $L_i = g\phi\psi\bar{\psi}$):
Is this diagram correct? Using the Feynman rules for Yukawa theory (in the Massless Spinor Helicity formalism) I evaluate this to be:
$$ A_5(f^-\bar{f}^-\phi\phi\phi) = g^3\langle2|\frac{(\cancel{p_1} + \cancel{p_2})}{(p_1 + p_2)^2}\frac{(\cancel{p_1} + \cancel{p_2} + \cancel{p_3})}{(p_1 + p_2 + p_3)^2}|5\rangle \\~~~\\+ ~1\leftrightarrow 3 + ~1\leftrightarrow 4 + ~3\leftrightarrow 4 $$
My strategy thus far has been calculate the first term then simply do the permutations at the very end. In general, is this a good strategy to take with diagrams like this?
Doing this, I end up with the following for the first term:
$$ A_5^{(1)} = g^3\langle2|\frac{s_{13}}{s_{12}(s_{12} + s_{13} + s_{23})}|5\rangle $$
Where $s_{ij} = -(p_i + p_j)^2 = 2p_i\cdot p_j$ and I have used the Weyl equation $\langle 2|p_2 = 0$.
I can go further, using the fact that $s_{ij} = \langle ij\rangle[ij]$, to end up with:
$$ A_5^{(1)} = g^3\langle2|\frac{\langle 13\rangle[13]}{\langle 12\rangle[12](\langle 12\rangle[12] + \langle 13\rangle[13] + \langle 23\rangle[23])}|5\rangle $$
I can't seem to simplify this further. Am I going the right away about solving this? Are there any tricks I am missing? |
Answer
C=20$\pi$$\approx$63 m A=100$\pi$$\approx$314 m$^2$
Work Step by Step
2r=d, 2r=20, r=$\frac{20}{2}$, r=10. C=d$\pi$, C=20$\pi$$\approx$63 m. A=$\pi$r$^2$, A=$\pi$10$^2$, A=$\pi$(100), A=100$\pi$$\approx$314 m$^2$
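For completeness, the same arithmetic in Python (using the given diameter of 20 m):

```python
import math

# Circumference and area of a circle with diameter d = 20 m.
d = 20
r = d / 2                  # r = 10 m
C = math.pi * d            # circumference: 20*pi
A = math.pi * r ** 2       # area: 100*pi
print(round(C), round(A))  # 63 314
```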
|
Electrostatic Potential and Capacitance Potential Energy of a System of Charges and in an External Field
The potential energy of a system of two point charges
q₁ and q₂ separated by a distance r is given by
U = \frac{1}{4 \pi \varepsilon_{0}} \frac{q_{1}q_{2}}{r}
Three point charge system
U = \frac{1}{4 \pi \varepsilon_{0}} \cdot \left[\frac{q_{1}q_{2}}{r_{12}} + \frac{q_{2}q_{3}}{r_{23}} + \frac{q_{3}q_{1}}{r_{31}}\right]
1. Electric potential energy of a system of two charges is
U = \frac{1}{4\pi \varepsilon_{0}} \frac{q_{1}q_{2}}{r_{12}}
2. Electric field at the surface of a charged conductor
\overrightarrow{E} = \frac{\sigma}{\varepsilon_{0}}\hat{n}
3. Electric potential energy of a system of $n$ point charges
$$U = \frac{1}{4 \pi \varepsilon_{0}} \sum_{\text{all pairs}} \frac{q_{j}q_{k}}{r_{jk}}$$
4. This work is stored as the potential energy of the system
$$U(\theta) = pE\left(\cos\frac{\pi}{2} - \cos \theta\right) = -pE \cos \theta = -\vec{p}\cdot\vec{E}$$
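The formulas above translate directly into a short script. The following Python sketch (the charge values, positions, and dipole parameters used in the tests are made-up illustration inputs, not from these notes) computes the pair, n-charge, and dipole energies:

```python
import math

# Illustrative sketch of the formulas above.
EPS0 = 8.8541878128e-12           # vacuum permittivity (F/m)
K = 1 / (4 * math.pi * EPS0)      # Coulomb constant, ~8.99e9 N*m^2/C^2

def pair_energy(q1, q2, r):
    """U = (1/(4*pi*eps0)) * q1*q2 / r for one pair of point charges."""
    return K * q1 * q2 / r

def system_energy(charges, positions):
    """n-charge formula: sum of pair energies over all pairs (j, k)."""
    U = 0.0
    for j in range(len(charges)):
        for k in range(j + 1, len(charges)):
            U += pair_energy(charges[j], charges[k],
                             math.dist(positions[j], positions[k]))
    return U

def dipole_energy(p, E, theta):
    """U(theta) = -p E cos(theta) for a dipole in a uniform external field."""
    return -p * E * math.cos(theta)
```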
No, this is very false in general. For example, if $X=Y=S^n$ ($n>0$), the connected components of the space $Y^X$ of continuous maps from $X$ to $Y$ are in bijection with $\mathbb{Z}$ (the integer corresponding to a map $f:X\to Y$ is known as the degree of $f$). For $n>1$, $X$ and $Y$ are simply connected. In general, determining the connected components of such spaces $Y^X$ (when $Y$ and $X$ are reasonably nice, at least) is a deep geometric problem and is one of the central problems of the entire field of algebraic topology. Just as an example, classifying all of the connected components of $Y^X$ in the case $Y=S^m$ and $X=S^n$ for arbitrary values of $m$ and $n$ is fantastically difficult and the answer is so complex we will probably never have any satisfactory complete description of it (see https://en.wikipedia.org/wiki/Homotopy_groups_of_spheres for an overview of the problem).
One case where it is true is if $X$ or $Y$ is contractible: we say $X$ is contractible if there exists a continuous map $H:X\times [0,1]\to X$ such that $H(x,0)=x$ for all $x$ and $H(x,1)$ is constant (i.e., the same for all $x$). For example, if $X=\mathbb{R}^n$, then $X$ is contractible via the map $H(x,t)=(1-t)x$.
If $X$ is contractible, then for any $f\in Y^X$, there is a continuous map $F:[0,1]\to Y^X$ given by $F(t)(x)=f(H(x,t))$. Note that $F(0)=f$, and $F(1)$ is the constant function with value $f(H(-,1))$. It follows that $Y^X$ is path-connected, since every element of $Y^X$ can be connected to a constant function by a path, and all constant functions can be connected by paths since you have assumed $Y$ is simply connected (in particular, path-connected). A similar argument shows that when $Y$ is contractible, $Y^X$ is path-connected (in fact, more strongly, $Y^X$ is contractible, without needing any hypothesis like path-connectedness on $X$).
A version of the argument above also works if you are only considering continuous linear maps between topological vector spaces, since the contraction $H(x,t)=(1-t)x$ only passes through linear maps.
I'm having trouble proving the following inequality:
$$\forall p>1 \quad \forall m\geq 0 \quad \dfrac{m^2\Gamma(\dfrac{2m}{p})\Gamma(\dfrac{2m}{q})}{\Gamma(\dfrac{2m+2}{p})\Gamma(\dfrac{2m+2}{q})}\geq\dfrac{1}{4}p^2(p-1)^{\frac{2}{p}-2},$$ where as usual $q=\dfrac{p}{p-1}$. In fact, it seems clear from Mathematica that for a fixed $p$, the LHS is a decreasing function of $m$ (strictly unless $p=2$, in which case it's constant). The RHS can be seen to be the limit as $m\to \infty$. I actually only care about integer $m\geq 0$, but I don't find that helpful.
I have tried both a direct approach (three known inequalities that are nice enough to apply here, but lead to wrong inequalities) and working with the derivative, which naturally involves instances of the digamma function. Proving that the LHS is decreasing is equivalent to the following inequality: $$\forall p>1 \quad \forall m\geq 0 \quad \dfrac{1}{m}+\dfrac{1}{p}(\psi(\dfrac{2m}{p})-\psi(\dfrac{2m+2}{p}))+\dfrac{1}{q}(\psi(\dfrac{2m}{q})-\psi(\dfrac{2m+2}{q}))\leq0,$$ which again seems to be correct (if you're wondering, the limit as $m\to 0$ is negative for $p\neq2$). Much like before, I tried using two inequalities (for the digamma function), as well as the series representation. They seemed promising at first, but the inequalities gave me positive upper bounds, while the series converges too slowly to be useful (I suspect that any partial sum is positive for large enough $m$).
Any advice would be much appreciated. I'll be glad to explain more about the inequalities I've tried if requested.
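The inequality is easy to spot-check numerically. The following Python sketch (using `math.lgamma` for stability; not a proof, just evidence for the claimed monotonicity, sampled at $p = 3$ and integer $m$) evaluates both sides:

```python
import math

# Numerical spot-check (not a proof) of the inequality: for fixed p > 1 the
# left-hand side appears to decrease in m toward the right-hand side limit.
def lhs(p, m):
    q = p / (p - 1)
    # evaluate in log-space with lgamma for numerical stability (m >= 1 here)
    log_val = (2 * math.log(m)
               + math.lgamma(2 * m / p) + math.lgamma(2 * m / q)
               - math.lgamma((2 * m + 2) / p) - math.lgamma((2 * m + 2) / q))
    return math.exp(log_val)

def rhs(p):
    return 0.25 * p * p * (p - 1) ** (2 / p - 2)

vals = [lhs(3, m) for m in range(1, 15)]   # sample p = 3, m = 1..14
```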
I was wondering about important/famous mathematical constants, like $e$, $\pi$, $\gamma$, and obviously the golden ratio $\phi$. The first three ones are really well known, and there are lots of integrals and series whose results are simply those constants. For example:
$$ \pi = 2 e \int\limits_0^{+\infty} \frac{\cos(x)}{x^2+1}\ \text{d}x$$
$$ e = \sum_{k = 0}^{+\infty} \frac{1}{k!}$$
$$ \gamma = -\int\limits_{-\infty}^{+\infty} x\ e^{x - e^{x}}\ \text{d}x$$
Is there an interesting integral (or some series) whose result is simply $\phi$?
Interesting integral means that things like
$$\int\limits_0^{+\infty} e^{-\frac{x}{\phi}}\ \text{d}x$$
are not a good answer to my question.
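For what it's worth, the displayed identities are easy to check numerically. The Python sketch below (illustration only; it does not answer the question) verifies the series for $e$ and the "uninteresting" integral $\int_0^{+\infty} e^{-x/\phi}\,\text{d}x = \phi$ by crude quadrature:

```python
import math

# Quick numerical check of the displayed constants (illustration only).
phi = (1 + math.sqrt(5)) / 2

# e as the series sum_{k>=0} 1/k!
e_approx = sum(1 / math.factorial(k) for k in range(20))

# int_0^inf exp(-x/phi) dx = phi, by trapezoidal quadrature on [0, 60]
# (the neglected tail is of order e^{-60/phi}, negligible here)
h = 0.001
ys = [math.exp(-(i * h) / phi) for i in range(int(60 / h) + 1)]
integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
```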
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
A concept of solution and numerical experiments for forward-backward diffusion equations
1. Dipartimento di Matematica, Università di Roma 'Tor Vergata', 00133, Roma
2. Dipartimento di Matematica Pura e Applicata, Università de L'Aquila, I-67100 L'Aquila, Italy
3. Dipartimento di Matematica Pura e Applicata, Università de L'Aquila, I-67100 L'Aquila
non convex, and with its singular perturbation $F_\phi^\varepsilon(u) := \frac{1}{2}\int_I (\varepsilon^2 (u_{x x})^2 + \phi(u_x))\,dx$. We discuss, with the support of numerical simulations, various aspects of the global dynamics of solutions $u^\varepsilon$ of the singularly perturbed equation $u_t = - \varepsilon^2 u_{x x x x} + \frac{1}{2} \phi''(u_x)u_{x x}$ for small values of $\varepsilon>0$. Our analysis leads to a reinterpretation of the unperturbed equation $u_t = \frac{1}{2} (\phi'(u_x))_x$, and to a well-defined notion of a solution. We also examine the conjecture that this solution coincides with the limit of $u^\varepsilon$ as $\varepsilon\to 0^+$.
Keywords: singular perturbations, nonconvex functionals, forward-backward parabolic equations, fourth order regularization, microstructures, stiff problems.
Mathematics Subject Classification: Primary: 35B25; Secondary: 47J06, 35K9.
Citation: G. Bellettini, Giorgio Fusco, Nicola Guglielmi. A concept of solution and numerical experiments for forward-backward diffusion equations. Discrete & Continuous Dynamical Systems - A, 2006, 16 (4): 783-842. doi: 10.3934/dcds.2006.16.783
I have this code:
S = (Sqrt[2]/2)*{{1 + Conjugate[δ], 0}, {0, 1 - Conjugate[δ]}} (* Suppose a+b=1 and δ=((a-b)/(a+b))\[Conjugate] *)
k = (1/Sqrt[2])*{{S[[1, 1]] + S[[2, 2]]}, {S[[1, 1]] - S[[2, 2]]}, {2 S[[1, 2]]}} // Simplify
Subscript[T, 0] = Dot[k, ConjugateTranspose[k]]
Subscript[T, 0] // MatrixForm
Subscript[T, 0] // TraditionalForm
$$\left( \begin{array}{ccc} 1 & \delta & 0 \\ \delta ^* & \delta \delta ^* & 0 \\ 0 & 0 & 0 \\ \end{array} \right)$$
As you see at the end the product of $\delta$ and $\delta^*$ is not printed as $|\delta|^2$ but as $\delta\delta^*$
Someone told me in one of my questions that this is because:
It seems that you did not instruct Mma that δ∗ is a conjugated value of δ. Using simply a conjugate symbol is not enough. You should use Conjugate[δ] instead and then apply ComplexExpand
So far I have tried several ways, like using the UpsetDelayed operator at the beginning of the code:
δ\[Conjugate] ^:= Conjugate[δ]
or using:
ComplexExpand[Subscript[T,0], δ, TargetFunctions -> {Abs, Conjugate}]
But I couldn't change anything.
Following the first answer posted to the question, I wrote:
FullSimplify[Subscript[T, 0]] // TraditionalForm
$$\left(\begin{array}{ccc} 1 & \delta & 0 \\ \delta ^* & \left| \delta \right| ^2 & 0 \\ 0 & 0 & 0 \\\end{array}\right)$$
But when I continue the code and apply the same trick on another matrix, the trick doesn't work!
R[ψ_] := {{1, 0, 0}, {0, Cos[2 ψ], Sin[2 ψ]}, {0, -Sin[2 ψ], Cos[2 ψ]}}
T[ψ_] := Dot[R[ψ], Subscript[T, 0], Transpose[R[ψ]]]
FullSimplify[T[ψ]] // TraditionalForm
$$\left( \begin{array}{ccc} 1 & \delta (\cos (2 \psi )) & -\delta (\sin (2 \psi )) \\ \delta ^* (\cos (2 \psi )) & \delta \delta ^* \left(\cos ^2 (2 \psi )\right) & -\frac{1}{2} \delta \delta ^* (\sin (4 \psi )) \\ -\delta ^* (\sin (2 \psi )) & -\frac{1}{2} \delta \delta ^* (\sin (4 \psi )) & \delta \delta ^* \left(\sin ^2 (2 \psi )\right) \\ \end{array} \right)$$
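Independently of how Mathematica displays it, one can confirm numerically that the entries of $T(\psi)$ really are $|\delta|^2$ times trigonometric factors. The following Python sketch (the test values for $\delta$ and $\psi$ are made up) mirrors the construction above:

```python
import math

# Cross-check, outside Mathematica, that the entries of T(psi) have the
# |delta|^2 * trig form that FullSimplify is being asked to display.
delta = 0.3 + 0.4j   # arbitrary made-up test value
psi = 0.7            # arbitrary made-up test value

s = math.sqrt(2) / 2
S = [[s * (1 + delta.conjugate()), 0j], [0j, s * (1 - delta.conjugate())]]
k = [(S[0][0] + S[1][1]) / math.sqrt(2),
     (S[0][0] - S[1][1]) / math.sqrt(2),
     2 * S[0][1] / math.sqrt(2)]
T0 = [[a * b.conjugate() for b in k] for a in k]   # T0 = k . k^dagger

def R(p):
    c, v = math.cos(2 * p), math.sin(2 * p)
    return [[1, 0, 0], [0, c, v], [0, -v, c]]

def matmul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(3)) for j in range(3)]
            for i in range(3)]

Rp = R(psi)
RpT = [[Rp[j][i] for j in range(3)] for i in range(3)]
T = matmul(matmul(Rp, T0), RpT)                    # T = R . T0 . R^T
```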
In this post I want to describe in short how to write formulas on Codeforces. In fact, it is a short introduction to the TeX-based markup language used on Codeforces.
Three important rules.
Foremost rule: formulas are enclosed in dollar signs ($), as if in brackets.
Another important rule: if you want to apply some operation to a group of symbols, you must form a block using curly braces. For instance, 2^x+y gives $2^x + y$, but 2^{x+y} gives $2^{x+y}$.
Third rule — for perfectionists. To save traffic, Codeforces prints simple formulas as plain text. Sometimes this is not very pretty: C_{x_i+y_i-2}^{x_i-1} may come out as scattered plain text. In this case you can add the command \relax at the beginning of the formula. Then the formula is guaranteed to be beautiful: \relax C_{x_i+y_i-2}^{x_i-1} gives $C_{x_i+y_i-2}^{x_i-1}$.
Arithmetic operations.
Addition and subtraction are written with the ordinary symbols + and -. Multiplication is usually indicated by no symbol at all ($xy$ is the product of the numbers $x$ and $y$) or by a dot (\cdot). If it is necessary to multiply two complex expressions, or when both factors are important and not just the value of the product, use the symbol ×, which is obtained with the command \times.
Division is somewhat more complicated. In mathematics, division is usually not written on one line, but the desire not to write a full fraction out of nowhere is also understandable. In that case you can always write : or / ($x:y$, $x/y$).
If you do want to write a fraction, there are two similar commands: \frac and \dfrac. After either of these commands you write a block for the numerator and a block for the denominator, for example \frac{1}{4} gives $\frac{1}{4}$. With \frac you get small fractions, suitable mainly for simple fractions. If you want to write a serious big fraction, you will need \dfrac: \dfrac{x+y}{x^2+y^2} gives $\dfrac{x+y}{x^2+y^2}$. If the numerator and the denominator are single characters, they need not be enclosed in braces, for example \frac14x (but only if the numerator is not a letter).
The upper and lower indices.
If you want to write a lower index, the symbol _ will help you; for an upper index (basically, an exponent), use the symbol ^: (x_i + y_i) ^ 2 gives $(x_i + y_i)^2$. Just as with fractions, a block can be placed in the lower or upper index, but a single-character index does not need braces.
Other useful tips and special characters
— (---) — a dash, used in text, not in formulas. This is a dash, not a hyphen (it does not work without surrounding text).
… (\dots) — the three-dots (ellipsis) symbol.
∞ (\infty) — the infinity symbol.
→ (\to) — right arrow, in expressions such as $x \to 0$.
Many well-known mathematical functions can be typed with a backslash; then they will look like part of the formula rather than simply text (\tg, \ln, \lim and so on).
If you want the indices placed above and below rather than at the top-right and bottom-right, use the command \limits: \sum_{k = 0}^n x^k gives $\sum_{k = 0}^n x^k$, while \sum\limits_{k = 0}^n x^k gives $\sum\limits_{k = 0}^n x^k$.
If the brackets around a large expression are too small, you can make them a suitable size by writing the command \left before the left bracket and \right before the right bracket. For instance: \left( \dfrac{x+y}{x^2+y^2} \right) gives $\left( \dfrac{x+y}{x^2+y^2} \right)$.
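As an illustration, the commands above can be combined in a single formula (this example is mine, not from the original post):

```latex
% \relax forces pretty rendering; \dfrac gives a display-size fraction;
% \sum\limits puts the bounds above and below; \left/\right size the brackets.
\relax \sum\limits_{k=0}^{n} \left( \dfrac{x+y}{x^2+y^2} \right)^{k+1}
```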
Thank you for your attention!
I want to prove following statement in Folland: If $f_n \to f$ almost uniformly, then $f_n \to f$ a.e. and in measure.
This is what I did: For $k \in \mathbb{N}$, we choose $F_k$ s.t. $\mu(F_k) < \frac{1}{k}$ and $f_n \to f$ uniformly on $F_k^c$ (can I do this? How can I verify that such $F_k$ exist?). Then, take $E = \bigcup_1^{\infty} F_k$. Then $f_n \to f$ on $E^c$ (is it uniform or not?), and I try to verify $\mu(E) = 0$ (but have problems here as well), so if this is valid, things are OK for a.e. convergence.
What can I further do for convergence in measure?
This is a homework question, so if you give reasonable hints, I will be very happy. Thanks!
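Not a solution, but a concrete picture may help with the definitions: the classic example $f_n(x) = x^n$ on $[0,1]$ converges to $0$ a.e. but not uniformly, while removing an interval of measure $\delta$ restores uniform convergence, which is exactly almost uniform convergence. A Python sketch of this (my illustration, not part of the exercise):

```python
# Numerical picture of the definitions using f_n(x) = x^n on [0, 1]: it
# converges to 0 a.e. (everywhere except x = 1) but not uniformly, while on
# [0, 1 - delta] (after removing a set of measure delta) it converges uniformly.
def sup_on_interval(n, b, samples=10_000):
    """sup of x^n over [0, b], computed on a grid (the sup is attained at b)."""
    return max((i * b / samples) ** n for i in range(samples + 1))

delta = 0.1
# sup over [0, 1 - delta] is (1 - delta)^n -> 0: uniform convergence there,
# while the sup over all of [0, 1] stays 1 for every n.
sups = [sup_on_interval(n, 1 - delta) for n in (1, 5, 25, 125)]
```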
If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order
Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
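Before attempting the proof, one can sanity-check the statement on a concrete family: for odd $n$, the dihedral group of order $2n$ realizes exactly this decomposition, with the $n$ reflections as $S$ and the $n$ rotations as $H$. A Python sketch (illustration only, not a proof):

```python
# Concrete instance of the statement (a sanity check, not the proof): for odd
# n, the dihedral group of order 2n splits into n reflections, all of order 2,
# and n rotations forming an abelian normal subgroup of odd order.
n = 7  # any odd n works here

def mul(x, y):
    # element (a, b) stands for r^a s^b; r^a s^b r^c s^d = r^(a +/- c) s^(b+d)
    (a, b), (c, d) = x, y
    return ((a + (c if b == 0 else -c)) % n, (b + d) % 2)

identity = (0, 0)
G = [(a, b) for a in range(n) for b in (0, 1)]

def order(g):
    p, k = g, 1
    while p != identity:
        p, k = mul(p, g), k + 1
    return k

def inv(g):
    return next(x for x in G if mul(g, x) == identity)

S = [g for g in G if order(g) == 2]   # the reflections
H = [g for g in G if g not in S]      # the rotations
```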
Problem 455
Let $G$ be a finite group.
The centralizer of an element $a$ of $G$ is defined to be \[C_G(a)=\{g\in G \mid ga=ag\}.\]
A conjugacy class is a set of the form \[\Cl(a)=\{bab^{-1} \mid b\in G\}\] for some $a\in G$.
(a) Prove that the centralizer of an element $a$ in $G$ is a subgroup of the group $G$.
(b) Prove that the order (the number of elements) of every conjugacy class in $G$ divides the order of the group $G$.
Problem 420
In this post, we study the Fundamental Theorem of Finitely Generated Abelian Groups, and as an application we solve the following problem.
Problem. Let $G$ be a finite abelian group of order $n$. If $n$ is the product of distinct prime numbers, then prove that $G$ is isomorphic to the cyclic group $Z_n=\Zmod{n}$ of order $n$.
Problem 302
Let $R$ be a commutative ring with $1$ and let $G$ be a finite group with identity element $e$. Let $RG$ be the group ring. Then the map $\epsilon: RG \to R$ defined by \[\epsilon(\sum_{i=1}^na_i g_i)=\sum_{i=1}^na_i,\] where $a_i\in R$ and $G=\{g_i\}_{i=1}^n$, is a ring homomorphism, called the augmentation map, and the kernel of $\epsilon$ is called the augmentation ideal.
(a) Prove that the augmentation ideal in the group ring $RG$ is generated by $\{g-e \mid g\in G\}$.
(b) Prove that if $G=\langle g\rangle$ is a finite cyclic group generated by $g$, then the augmentation ideal is generated by $g-e$.
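The augmentation map is easy to experiment with computationally. The following Python sketch (my illustration; it checks examples, it proves nothing) models $\mathbb{Z}[\mathbb{Z}/m\mathbb{Z}]$ with coefficient lists and cyclic convolution:

```python
# Small computational sanity check (not the proof) for the cyclic case: the
# group ring Z[Z/mZ] with elements stored as length-m coefficient lists and
# multiplication given by cyclic convolution.
m = 5

def gmul(x, y):
    """Product in Z[Z/mZ]: cyclic convolution of coefficient lists."""
    out = [0] * m
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            out[(i + j) % m] += a * b
    return out

def eps(x):
    """Augmentation map: the sum of the coefficients."""
    return sum(x)

# g - e, where g is the generator (index 1) and e the identity (index 0)
g_minus_e = [-1, 1] + [0] * (m - 2)

x = [1, 2, 0, -1, 3]   # arbitrary test elements
y = [2, -1, 1, 0, 4]
```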
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and $\bar{p}$ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at $\sqrt{s}$ = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Letter to the Editor | Open Access
Exploring the potential confounder of nitrogen fertilizers in the relationship between pesticide exposures and risk of leukemia: a Poisson regression with two-way fixed-effects analysis
Chinese Journal of Cancer, volume 36, Article number: 58 (2017)
Dear Editor,
Research discussing potential environmental toxins that may be related to the etiology of childhood leukemia has been growing. The suspected environmental contaminants include solvents, air pollutants, pesticides, and tobacco smoke. Exposure to various pesticides has come under particular scrutiny, with positive associations with childhood leukemia [1]. Poynter et al. [2] conducted a population-based study assessing the association between self-reported chemical exposures and odds of acute myeloid leukemia (AML) and myelodysplastic syndromes (MDS). The authors found no clear association between pesticides and AML or MDS; however, they did report significant associations of AML and MDS with other chemicals, including benzene, vinyl chloride, and fertilizers [2]. While it is certainly important to identify exposure to pesticides as a potential etiological factor in leukemia onset, it is also important to address confounding variables, such as fertilizers, that may be both strongly associated with pesticide use and empirically associated with leukemic outcomes.
We have previously identified environmental exposure to nitrous oxide (N2O), an agricultural and combustion pollutant, as a likely effect modifier to the proposed relationship between the use of the herbicide, glyphosate, and neurodevelopmental outcomes like attention-deficit hyperactivity disorder (ADHD) [3]. We found that use of glyphosate was closely tied to the use of nitrogen fertilizers in agriculture at a county urbanization level [3]. Therefore, it is possible that pesticide exposures may act as a proxy for air pollutants (i.e., N2O) directly related to the use of anthropogenic nitrogen in agriculture. Prior studies have identified pre-morbid ADHD and other developmental abnormality in children newly diagnosed with leukemia [4]. Therefore, if environmental N2O is a trigger for neurodevelopmental disorders, as we have suggested, and developmental abnormalities may precede childhood leukemia, we hypothesize that chronic exposure to N2O in the environment, and not necessarily pesticide exposures, may foster both neurodevelopmental and hematologic abnormalities.
To investigate the possible association between farm use of nitrogen fertilizers—as the most relevant environmental proxy for N2O emissions—and hospitalization for blood-related cancers, we have replicated our earlier work [3] using the database from the Healthcare Cost and Utilization Project (HCUP). We conducted a Poisson regression analysis using a two-way fixed-effects model. This approach minimizes the likelihood of omitted variable bias due to unobserved or unmeasured factors that influence the outcome. Briefly, a random variable Y is said to have a Poisson distribution with parameter μ if it takes integer values y = 0, 1, 2, … with probability \(\Pr(Y = y) = \frac{e^{-\mu}\mu^{y}}{y!}\) for μ > 0. The mean and variance of this distribution can be shown to be E(Y) = var(Y) = μ. We have a sample of n observations of discharges related to blood-related cancers, \(y_1, y_2, \ldots, y_n\), with \(Y_{ij} \sim P(\mu_{ij})\), where i represents a state and j an observation year. We let the logarithm of the mean depend on a vector of time-varying explanatory variables, \(x_{ij}\), such that the log-linear model is the following: \(\log (\mu_{ij} ) = x_{ij}^{\prime} \beta_{1}\). We have a multiplicative model for the mean discharges by exponentiation: \(\mu_{ij} = \exp \{ x_{ij}^{\prime} \beta_{1} \}\). In each case, the exponentiated regression coefficient \(\exp\{\beta_{1k}\}\) yields an incidence rate ratio (IRR), which represents a multiplicative effect of the kth predictor on the mean: a one log-unit increase in \(x_k\) multiplies the mean by \(\exp\{\beta_{1k}\}\).
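To make the model concrete, here is a minimal pure-Python sketch of a Poisson log-linear fit on synthetic data (an illustrative sketch only: one covariate fit by Newton's method, not the two-way fixed-effects model on HCUP data used here, which this does not reproduce). The exponentiated slope is the IRR:

```python
import math
import random

# Minimal Poisson log-linear regression on synthetic data. The exponentiated
# slope is the incidence rate ratio (IRR).
def fit_poisson(xs, ys, iters=50):
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mus = [math.exp(b0 + b1 * x) for x in xs]
        # Newton step: solve (X'WX) delta = X'(y - mu), W = diag(mu)
        s00 = sum(mus)
        s01 = sum(m * x for m, x in zip(mus, xs))
        s11 = sum(m * x * x for m, x in zip(mus, xs))
        g0 = sum(y - m for y, m in zip(ys, mus))
        g1 = sum((y - m) * x for y, m, x in zip(ys, mus, xs))
        det = s00 * s11 - s01 * s01
        b0 += (s11 * g0 - s01 * g1) / det
        b1 += (s00 * g1 - s01 * g0) / det
    return b0, b1

def rpois(mu):
    """Poisson sampler (Knuth's multiplication method; fine for small mu)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

random.seed(1)
xs = [random.uniform(0, 4) for _ in range(5000)]
ys = [rpois(math.exp(0.5 - 0.3 * x)) for x in xs]   # true IRR = exp(-0.3)

b0, b1 = fit_poisson(xs, ys)
irr = math.exp(b1)   # estimated IRR, close to exp(-0.3) ~ 0.74
```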
Table 1 shows the results of our Poisson regression analysis using a two-way fixed-effects estimator. The unadjusted model indicates a reduced IRR for hospitalization for leukemia for every one log-unit increase in farm use of nitrogen fertilizers. No other blood cancer (non-Hodgkin’s lymphoma or multiple myeloma) was significant in the unadjusted models. When accounting for non-farm use of nitrogen fertilizers as well as the use of pesticides, including atrazine, cypermethrin, dicamba, 2,4-D, glyphosate, and 2-methyl-4-chlorophenoxyacetic acid (MCPA), a one log-unit increase in farm use of nitrogen fertilizers was protective against hospitalization for all three blood cancers. Data for many agricultural states in the USA (such as Iowa, Kansas, and Minnesota) were not available. A third adjusted model excluding the relatively rarely used pesticides, cypermethrin and MCPA, confirms the initial unadjusted model, suggesting that the extreme variability in these two pesticides, in particular, may have contributed to spurious results. However, the reduced risk of hospitalization for leukemia remained statistically significant for all models, lessening the likelihood this result can be attributed to chance. Similar results were noted when using a specific diagnosis of AML in the International Classification of Disease, 9th Revision–Clinical Modification (ICD-9-CM) as the dependent variable.
Including an interaction term of time with farm use of fertilizer heightens the significance of the main effect considerably, while the interaction itself significantly increases annual hospitalization risk for leukemia by 12% (results not shown). These data suggest that a 1-year lagged indicator of farm use of nitrogen fertilizer protects against hospitalization for leukemia, but that compensatory biological mechanisms may be induced over the longer term, soon increasing risk of hospitalizations for leukemia, as supported by Poynter et al. [2].
Biological evidence indicates that N2O may be protective against leukemic cell growth via the role of N2O in oxidizing the cobalt ion within cobalamin, inactivating vitamin B12 [5]. Also, N2O exposure may increase leukocyte DNA damage in patients who underwent surgery, as evidenced by a two-fold increase in the percentage of DNA intensity in the comet tail using digital fluorescence microscopy [6]. The extent of DNA damage was also positively correlated with the duration of N2O exposure [6]. Therefore, the current epidemiological finding of a significantly protective effect of farm use of nitrogen fertilizers (an environmental proxy of N2O emissions and exposures) against hospitalization for leukemia is consistent with the prevailing biological evidence. It is interesting to note, however, that elevated blood cobalamin has been acknowledged as a diagnostic marker for leukemia [7], although the significance of this metabolic abnormality in blood cancers is not well characterized. Moreover, the use of methylphenidate, a psychostimulant used to manage premorbid neuropsychiatric conditions like ADHD, has been found to be associated with increased risk of leukemia [8], although evidence of cytogenetic damage attributed to the use of methylphenidate is not consistent [9], suggesting that factors related to the use of psychostimulants and neurodevelopmental impairment, including environmental N2O exposure, may increase risk of leukemia. These data point to both endogenous and pharmacologic compensatory mechanisms, including leukemic outcomes and methylphenidate use respectively, which may reverse the hematologic and neuropsychiatric effects of chronic N2O exposure.
The present longitudinal findings provide a more nuanced perspective regarding the significantly increased odds of AML from self-reported exposure to fertilizer [2]. We suggest here that leukemic outcomes, including elevated blood cobalamin levels, may reflect compensatory mechanisms to reverse the hematologic depression from chronic environmental exposure to N2O emissions associated with nitrogen fertilizer use [10]. Additional investigations are therefore warranted to better characterize the metabolic role of increased serum vitamin B12 in hematologic cancers. As confirmed by Poynter et al. [2], future investigations exploring links between pesticides and leukemia should consider the associated use of nitrogen fertilizers and chronic exposure to related air pollutants, such as N2O, which has been empirically shown to affect not only leukemic outcomes but also neurodevelopmental abnormality that may often precede a leukemia diagnosis. Therefore, we think that nitrogen fertilizers and their influence on nitrous oxide emissions should be considered.
Abbreviations
ADHD:
attention-deficit hyperactivity disorder
AML:
acute myeloid leukemia
CCS:
Clinical Classification Software
HCUP:
Healthcare Cost and Utilization Project
IRR:
incidence rate ratio
ICD-9-CM:
International Classification of Disease, 9th Revision–Clinical Modification
MDS:
myelodysplastic syndromes
MM:
multiple myeloma
N2O:
nitrous oxide
NHL:
non-Hodgkin’s lymphoma
References
1.
Bailey HD, Infante-Rivard C, Metayer C, Clavel J, Lightfoot T, Kaatsch P, et al. Home pesticide exposures and risk of childhood leukemia: findings from the Childhood Leukemia International Consortium. Int J Cancer. 2015;137(11):2644–63. doi:10.1002/ijc.29631.
2.
Poynter JN, Richardson M, Roesler M, Blair CK, Hirsch B, Nguyen P, et al. Chemical exposures and risk of acute myeloid leukemia and myelodysplastic syndromes in a population-based study. Int J Cancer. 2017;140(1):23–33. doi:10.1002/ijc.30420.
3.
Fluegge K, Fluegge K. Exposure to ambient PM10 and nitrogen dioxide and ADHD risk: a reply to Min and Min (2017). Environ Int. 2017;103:109–10. doi:10.1016/j.envint.2017.02.012.
4.
Janzen LA, David D, Walker D, Hitzler J, Zupanec S, Jones H, et al. Pre-morbid developmental vulnerabilities in children with newly diagnosed acute lymphoblastic leukemia (ALL). Pediatr Blood Cancer. 2015;62:2183–8. doi:10.1002/pbc.25692.
5.
Abels J, Kroes AC, Ermens AA, van Kapel J, Schoester M, Spijkers LJ, et al. Anti-leukemic potential of methyl-cobalamin inactivation by nitrous oxide. Am J Hematol. 1990;34:128–31.
6.
Chen Y, Liu X, Cheng CH, Gin T, Leslie K, Myles P, et al. Leukocyte DNA damage and wound infection after nitrous oxide administration: a randomized controlled trial. Anesthesiology. 2013;118:1322–31. doi:10.1097/ALN.0b013e31829107b8.
7.
Ermens AA, Vlasveld LT, Lindemans J. Significance of elevated cobalamin (vitamin B12) levels in blood. Clin Biochem. 2003;36:585–90.
8.
Oestreicher N, Friedman GD, Jiang SF, Chan J, Quesenberry C Jr, Habel LA. Methylphenidate use in children and risk of cancer at 18 sites: results of surveillance analyses. Pharmacoepidemiol Drug Saf. 2007;16:1268–72.
9.
Witt KL, Shelby MD, Itchon-Ramos N, Faircloth M, Kissling GE, Chrisman AK, et al. Methylphenidate and amphetamine do not induce cytogenetic damage in lymphocytes of children with ADHD. J Am Acad Child Adolesc Psychiatry. 2008;47:1375–83. doi:10.1097/CHI.0b013e3181893620.
10.
Kano Y, Sakamoto S, Sakuraya K, Kubota T, Kasahara T, Hida K, et al. Effects of nitrous oxide on human cell lines. Cancer Res. 1983;43:1493–6.
Authors’ contributions
KF gathered the data and analyzed the data using appropriate statistical software. Both authors contributed to the writing of the manuscript. Both authors read and approved the final manuscript.
Acknowledgements
The authors acknowledge Maddie Fluegge for her contributions to the article.
Competing interests
The authors declare that they have no competing interests.
Availability of data and materials
The datasets generated and/or analyzed during the current study are publicly available in the HealthCare Cost and Utilization Project repository and the United States Geological Survey [persistent weblinks to datasets can be found in the cited prior investigations].
For people like me who study algorithms for a living, the 21st-century standard model of computation is the
integer RAM. The model is intended to reflect the behavior of real computers more accurately than the Turing machine model. Real-world computers process multiple-bit integers in constant time using parallel hardware; not arbitrary integers, but (because word sizes grow steadily over time) not fixed-size integers either.
The model depends on a single parameter $w$, called the
word size. Each memory address holds a single $w$-bit integer, or word. In this model, the input size $n$ is the number of words in the input, and the running time of an algorithm is the number of operations on words. Standard arithmetic operations (addition, subtraction, multiplication, integer division, remainder, comparison) and boolean operations (bitwise and, or, xor, shift, rotate) on words require $O(1)$ time by definition.
Formally,
the word size $w$ is NOT a constant for purposes of analyzing algorithms in this model. To make the model consistent with intuition, we require $w \ge \log_2 n$, since otherwise we cannot even store the integer $n$ in a single word. Nevertheless, for most non-numerical algorithms, the running time is actually independent of $w$, because those algorithms don't care about the underlying binary representation of their input. Mergesort and heapsort both run in $O(n\log n)$ time; median-of-3-quicksort runs in $O(n^2)$ time in the worst case. One notable exception is binary radix sort, which runs in $O(nw)$ time.
Setting $w = \Theta(\log n)$ gives us the traditional logarithmic-cost RAM model. But some integer RAM algorithms are designed for larger word sizes, like the linear-time integer sorting algorithm of Andersson et al., which requires $w = \Omega(\log^{2+\varepsilon} n)$.
For many algorithms that arise in practice, the word size $w$ is simply not an issue, and we can (and do) fall back on the far simpler uniform-cost RAM model. The only serious difficulty comes from nested multiplication, which can be used to build
very large integers very quickly. If we could perform arithmetic on arbitrary integers in constant time, we could solve any problem in PSPACE in polynomial time.
Update: I should also mention that there are exceptions to the "standard model", like Fürer's integer multiplication algorithm, which uses multitape Turing machines (or equivalently, the "bit RAM"), and most geometric algorithms, which are analyzed in a theoretically clean but idealized "real RAM" model.
Yes, this is a can of worms. |
Two issues. First, notations & terms:
rv: random variable
iid: independent and identically distributed
pdf: probability density function
jpdf: joint pdf
$X_1,X_2,...,X_n$: $n$ iid rvs
$x_1,x_2,...,x_n$: an observation
$\theta$: the wanted parameter(s)
$\Theta$: the parametric space (i.e., all the values $\theta$ may take)
Fact: the parameter ($\theta$) is a number but unknown, not a rv.
ISSUE 1: MLE
The key to understanding MLE is to view the same thing (the jpdf) from the other side: the PARAMETRIC SPACE side.
Namely, you take the PARAMETRIC SPACE as the domain of the jpdf.
Then, we denote this jpdf as $L(\theta;x_1,...,x_n), \theta\in\Theta.$
That is, for each possible $\theta_0$ there is a corresponding value $L(\theta_0;x_1,...,x_n)$, which is a
number: on the one hand $x_1,...,x_n$ are given, and on the other hand $\theta$ is fixed at $\theta_0$.
You already know that the next step is maximizing this function.
But wait a minute. What is
maximizing? It means that you choose, among a bunch of cases (each $\theta$ in $\Theta$ representing a case), the one with the maximum possible value of $L(\theta)$.
Therefore, the cost function you choose is
legitimate if maximizing it is equivalent to maximizing $L(\theta;x_1,...,x_n)$ over the cases $\theta\in\Theta$.
In fact, we often choose the cost function as $\ln L(\theta;x_1,...,x_n)$ because this is convenient for many distributions.
Theorem. Maximizing $L(\theta)$ is equivalent to maximizing $\ln L(\theta)$, among all $\theta\in\Theta$.
Hints: It is obvious that $\ln x$ is monotone increasing in $x$ for $x>0$.
Meanwhile, $L(\theta)>0, \theta\in\Theta$.
Take $\theta_0\in\Theta$, s.t. $L(\theta_0)=\max_{\theta\in\Theta}L(\theta).$ Then $\ln L(\theta_0)=\max_{\theta\in\Theta}\ln L(\theta)$.
In fact, the MLE method is not the BEST. Consider the following example:
Consider $(X_1,..,X_n)\stackrel{iid}{\sim}N(\mu,\sigma^2)$.
One can calculate that the MLE of $\sigma^2$ is $\hat\sigma^2=\dfrac1n\sum\limits_{i=1}^n(X_i-\bar X)^2$, which is biased: its expectation is $\frac{n-1}{n}\sigma^2$, not $\sigma^2$.
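A quick Monte Carlo sanity check (an illustrative sketch with arbitrary parameters, not part of the original argument): averaging the MLE $\hat\sigma^2$ over many samples lands near $\frac{n-1}{n}\sigma^2$ rather than $\sigma^2$.

```python
import random

# Monte Carlo check that the MLE of sigma^2 is biased: its mean is
# ((n-1)/n) * sigma^2, not sigma^2. The parameters here are arbitrary.
random.seed(0)
n, sigma2, trials = 5, 4.0, 20000
total = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    total += sum((x - xbar) ** 2 for x in xs) / n   # the MLE for this sample
estimate = total / trials
print(estimate)   # near (n-1)/n * sigma2 = 3.2, noticeably below 4.0
```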
ISSUE 2: The best way to estimate parameters of a distribution
For the parameters of a distribution, there are two kinds of estimation: point estimation and interval estimation.
What you mentioned is point estimation, in which the BEST estimator is called the
UMVUE (uniformly minimum variance unbiased estimator).
The MLE method is not an approach that yields the UMVUE directly (as we've seen, the MLE is sometimes biased), but it is an intuitive one.
On the UMVUE there is a large body of theory. You may google the
Lehmann–Scheffé theorem (a way to find the UMVUE) for more details. |
One form of Jensen's inequality for the finite case, tells us that
$$ \sum_{x \in X} p(x) \log q(x) \leq \log\sum_{x \in X} p(x) \cdot q(x) $$
for positive $p(x)$ with $\sum_{x \in X} p(x) = 1$, $q(x)$ positive real, and $X$ finite. I am using the $\log$, but any concave function could be substituted.
or the probabilistic version:
$$ \mathbb{E}( \log X) \leq \log \mathbb{E}(X)$$ Where $\mathbb{E}$ is the expectation of $X$.
However, is this inequality true for countable $X$? The book I'm reading (elements of information theory, 2006), seems to prove it for the finite case, but uses the countable case without mentioning it.
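For what it's worth, the finite case is easy to check numerically (the distribution $p$ and positive values $q$ below are made up for illustration):

```python
import math

# Finite-case Jensen check: sum_x p(x)*log q(x) <= log(sum_x p(x)*q(x)).
p = [0.2, 0.3, 0.5]   # a probability distribution
q = [1.0, 4.0, 9.0]   # arbitrary positive values
lhs = sum(pi * math.log(qi) for pi, qi in zip(p, q))
rhs = math.log(sum(pi * qi for pi, qi in zip(p, q)))
print(lhs <= rhs)  # True
```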
Also, on Wikipedia it seems the first inequality in my post is stated only for the finite case, whereas the probabilistic version makes no mention of the cardinality of the probability space. |
The setup is as in this question:
Given a norm $N$ over ${\bf M}_n(\mathbb C)$, it is a natural question to find the best constant $C_N$ such that $$N([A,B])\le C_N N(A)N(B),\qquad\forall A,B\in{\bf M}_n(\mathbb C).$$
Equivalently, $C_N$ is the maximum of $N(AB-BA)$ provided that $N(A)=N(B)=1$.
Known examples of $C_N$ are:
$C_N=\sqrt{2}$ if $N$ is the Frobenius norm;
$C_N=2$ if $N$ is the operator norm $\| \cdot\|_2$;
$C_N=4$ if $N$ is the numerical radius $r(A)=\sup\limits_{x\ne0}\dfrac{|x^*Ax|}{\|x\|^2}$ (see this answer to an MO question).
if $N$ is the induced $p$-norm, defined for $1\le p\le\infty$ by $\|A\| _p = \sup \limits _{x \ne 0} \frac{\| A x\| _p}{\|x\|_p}$, we have $C_N=2$ for $p=\infty$ (with $\|A\|_\infty $ being just the maximum absolute row sum of the matrix). Indeed, the lower bound $2$ for $\|\cdot\|_\infty $ is obtained by taking e.g. $A=\begin{pmatrix} 1&0\\1&0\end{pmatrix}$ and $B=\begin{pmatrix} 0&1\\0&-1\end{pmatrix}$, and it should be easy to prove that $2$ is also the general upper bound for $\|\cdot\|_\infty $.
Similarly, $C_N=2$ for $p=1$ (with $\|A\|_1 $ being the maximum absolute column sum of the matrix).
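For instance, the Frobenius bound $\sqrt{2}$ is attained by the matrix units $A=E_{12}$, $B=E_{21}$, since $[A,B]=\operatorname{diag}(1,-1)$. A small pure-Python sanity check:

```python
# Numeric check of the Frobenius-norm constant C_N = sqrt(2),
# attained by A = E12, B = E21 (so [A,B] = diag(1, -1)).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def frob(X):
    return sum(x * x for row in X for x in row) ** 0.5

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
AB, BA = matmul(A, B), matmul(B, A)
C = [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(AB, BA)]
print(frob(C) / (frob(A) * frob(B)))  # 1.414... = sqrt(2)
```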
Knowing that $C_N\equiv2$ for $p=1,2,\infty$, is it true that the same holds for the induced $p$-norms for all $p\ge1$?
If $N$ runs over all possible matrix norms, what is the range of $C_N$? In particular, is it bounded below and/or above?
(To avoid trivialities, let's keep it homogeneous by only considering "normalized" norms, i.e. require $N(I_n)=1$. This does not seem to be part of the standard definition of a norm.) |
A homogeneous transformation matrix $H$ is often used as a matrix to perform transformations from one frame to another frame,
expressed in the former frame. The translation vector thus includes [x,y(,z)] coordinates of the latter frame expressed in the former. Perhaps that this already answers your question, but below is a more elaborate explanation.
The transformation matrix contains information about both rotation and translation and belongs to the special Euclidean group $SE(n)$ in $n$-D. It consists of a rotation matrix $R$ and a translation vector $r$. If we permit no shear, the rotation matrix contains only information about the rotation and belongs to the special orthogonal group $SO(n)$. We have:
$$H=\begin{bmatrix} R & r \\ \bar{0} & 1 \end{bmatrix}$$
Let's define $H^a_b$ the transformation matrix that expresses coordinate frame $\Phi_b$ in $\Phi_a$, expressed in $\Phi_a$. $\Phi_a$ can be your origin, but it can also be an other frame.
You can use the transformation matrix to express a point $p=[p_x\ p_y]^\top$ in another frame:
$$P_a = H^a_b\,P_b, \qquad P_b = H^b_c\,P_c$$
with
$$P = \begin{bmatrix} p \\ 1 \end{bmatrix}$$
The best part is that you can stack them as follows:
$$P_a = H^a_b H^b_c\,P_c = H^a_c\,P_c$$
Here is a small 2-D example. Consider a frame $\Phi_b$ translated by $[3\ 2]^\top$ and rotated $90^\circ$ with respect to $\Phi_a$:
$$H^a_b = \begin{bmatrix}\cos(90^\circ) & -\sin(90^\circ) & 3 \\ \sin(90^\circ) & \cos(90^\circ) & 2 \\ 0 & 0 & 1 \end{bmatrix}=\begin{bmatrix}0 & -1 & 3 \\ 1 & 0 & 2 \\ 0 & 0 & 1 \end{bmatrix}$$
A point $p_b=[3\ 4]^\top$ expressed in frame $\Phi_b$ becomes, in frame $\Phi_a$:
$$\begin{bmatrix}p_{a,x} \\ p_{a,y} \\ 1 \end{bmatrix} = \begin{bmatrix}0 & -1 & 3 \\ 1 & 0 & 2 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}3 \\ 4 \\ 1 \end{bmatrix}=\begin{bmatrix}-1 \\5 \\1 \end{bmatrix} \to p_a = \begin{bmatrix}-1\\5\end{bmatrix}$$
Try to make a drawing to improve your understanding. |
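The same 2-D worked example, sketched in plain Python (no libraries; the result matches the hand calculation up to floating-point error):

```python
import math

# Frame b is translated [3, 2] and rotated 90 degrees with respect to frame a.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
H_ab = [[c, -s, 3],
        [s,  c, 2],
        [0,  0, 1]]

p_b = [3, 4, 1]  # the point [3, 4] in homogeneous coordinates
p_a = [sum(H_ab[i][j] * p_b[j] for j in range(3)) for i in range(3)]
print(p_a)  # approximately [-1, 5, 1]
```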
In intuitionistic logic the law of excluded middle does not hold; in particular, for a general type, \[\forall a,b,~ a = b \lor a \ne b\] does not hold.
However, if $a$ and $b$ are given a concrete type, and it is the sort of ordinary thing we deal with in everyday life, then this does hold as usual.
(* Excluded Middle Law (EML) of concrete types *)
Theorem eml_bool : forall a b : bool, a = b \/ a <> b.
Proof.
  case; case; (by left) + (by right).
Qed.

Theorem eml_nat : forall a b : nat, a = b \/ a <> b.
Proof.
  induction a as [|a' IHa].
  case. by left. by right.
  case. by right.
  intro b'. move: (IHa b'). case.
  - by left; apply f_equal.
  - by right; injection.
Qed.

Theorem eml_nat_list : forall (a b : list nat), a = b \/ a <> b.
Proof.
  induction a as [|x xs IH].
  case. by left. by right.
  case. by right.
  move => y ys.
  move: (IH ys); case => [E|E];
  move: (eml_nat x y); case => [F|F];
  (rewrite E; rewrite F; by left) + (by right; injection).
Qed. |
John's answer is a good one; I just wanted to add some equations and additional thoughts. Let me start here:
Heating is really only significant when you get a shock wave i.e. above the speed of sound.
The question asks specifically about a $200^{\circ} C$ increase in temperature in the atmosphere. This qualifies as "significant" heating, and the hypothesis that this would only happen at supersonic speeds is valid, which I'll show here.
When something moves through a fluid, both the object and the air are heated. Trivially, the total net heating is $F d$, the drag force times the distance traveled. The problem is that we don't know how this heating is divided between the object and the air. This dichotomy is rather odd, because consider that in steady-state movement
all of the heating goes to the air. The object will heat up, and if it continues to move at the same speed (falling at terminal velocity for instance), it is cooled by the air the exact same amount it is heated by the air.
When considering the exact heating mechanisms, there is heating from boundary-layer friction on the surface of the object, and there are form losses from eddies that are ultimately dissipated by viscous heating. After thinking about it, I must admit I think John's suggestion is the most compelling - that the compression of the air itself is what matters most. Since a $1 m$ ball in air is specified, this should be a fairly high Reynolds number, and the skin friction shouldn't matter quite as much as the heating due to stagnation on the leading edge.
Now, the exact amount of pressure increase at the stagnation point may not be exactly $1/2 \rho v^2$, but it's close to that. Detailed calculations for drag should give an accurate number, but I don't have those, so I'll use that expression. We have air, at $1 atm$, with the prior assumption the size of the sphere doesn't matter, I'll say that air ambient is at $293 K$, and the density is $1.3 kg/m^3$. We'll have to look at this as an adiabatic compression of a diatomic gas, giving:
$$\frac{T_2}{T_1} = \left( \frac{P_2}{P_1} \right)^{\frac{\gamma-1}{\gamma}}$$
Diatomic gases have:
$$\gamma=\frac{7}{5}$$
Employ the stagnation pressure expression to get:
$$\frac{P_2}{P_1} = \frac{P_1+\frac{1}{2} \rho v^2}{P_1} = 1+\frac{\rho v^2}{2 P_1} $$
Put these together to get:
$$\frac{T_2}{T_1} = \left( 1+\frac{\rho v^2}{2 P_1} \right)^{2/7}$$
Now, our requirement is that $T_2/T_1\approx (293+200)/293 \approx 1.7$. I get this in the above expression by plugging in a velocity of about $2000$ mph. At that point, however, there might be more complicated physics due to the supersonic flow. To elaborate, the compression process at supersonic speeds might dissipate more energy than an ideal adiabatic compression. I'm not an expert in supersonic flow; the calculations here assumed subsonic flow, and the result illustrates that this is not a reasonable assumption.
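Inverting the last expression for $v$, with the sea-level values quoted earlier (a back-of-the-envelope sketch, same assumptions as above):

```python
# Solve (1 + rho*v^2/(2*P1))**(2/7) = T2/T1 for v.
rho = 1.3          # kg/m^3, ambient air density
P1 = 101325.0      # Pa, 1 atm
ratio = (293.0 + 200.0) / 293.0          # target T2/T1 for a 200 C rise
v = (2.0 * P1 / rho * (ratio ** 3.5 - 1.0)) ** 0.5
print(v, v * 2.23694)   # ~900 m/s, i.e. about 2000 mph: supersonic
```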
addition:
The Concorde could fly at about Mach 2. The ambient temperature at altitude is much lower than room temperature, but the heat-up compared to ambient was about $182 K$ for the skin and $153 K$ for the nose. This is interesting because it points to boundary-layer skin friction playing a bigger role than I suspected, but that is also wrapped up in the physics of the sonic wavefront, which I haven't particularly studied.
You have to ask yourself, what pressure is the nose at and what pressure is the skin at. The flow separates (going under or above the craft) at some point, and that should be the highest pressure, but maybe it's not the highest temperature, and I can't really explain why. We've pretty much reached the limit of the back-of-the-envelope calculations.
(note: I messed up the $\gamma$ value at first and then changed it after a comment. This caused the value to go from 1000 mph to 2000 mph. This is actually much more consistent with the Concorde example since it gets <200 K heating at Mach 2.) |
I still think this is off-topic, but it seems I need more space than a comment to show (answer?) why that is so.
You are starting from some performance specifications and are looking to get to a set of features you need in your camera.
Here is a post from NI about stereo vision that gives a formula for depth resolution:
$$\Delta z = \frac{z^2}{fb}\Delta d \\$$
where $z$ is the depth of the object from the stereo system, $\Delta z$ is the depth resolution, $f$ is the focal length of the camera, $b$ is the baseline, and $\Delta d$ is the image disparity resolution.
So, you want 1% depth resolution at 100 meters, or a depth resolution of 1 meter. A focal length of 8 millimeters, or 0.008 m, and a baseline of 0.5 m.
Rearranging the equation, it looks like you'll need a camera capable of registering a disparity of:
$$\Delta d = \Delta z \frac{fb}{z^2} \\ \Delta d = 1 \cdot \frac{(0.008)(0.5)}{100^2} \\ \Delta d = 4 \times 10^{-7}\ \mathrm{m} \\ \Delta d = 0.4\ \mu \mathrm{m} \\$$
Assuming pixel accuracy (not sub-pixel accuracy), you'll want one pixel to be 0.4 $\mu$m or smaller, so the 0.4 $\mu$m disparity is registered as a one pixel shift between cameras.
Here's a list of sensor formats and sizes. I'm assuming these cameras all do "full HD", at a resolution of 1920x1080. Looking at the 2/3" format, the sensor width is 8.8 mm.
You need to register 0.4 $\mu$m, how does that compare to the 2/3" format? Well, at a width of 0.0088 m, with 1920 pixels across that width, the 2/3" format has a pixel width of $0.0088/1920 = 4.58\mu m$. So, off by a factor of 10. You need the pixel width to be about 11 times smaller.
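The arithmetic above, as a short sketch (values copied from the post; pixel-level accuracy assumed):

```python
# Required disparity resolution: dd = dz * f * b / z^2
z, dz = 100.0, 1.0          # depth and desired depth resolution (m)
f, b = 0.008, 0.5           # focal length and baseline (m)
dd = dz * f * b / z ** 2
print(dd)                   # 4e-07 m, i.e. 0.4 micrometers

# Pixel pitch of a full-HD 2/3" sensor (8.8 mm wide, 1920 pixels):
pitch = 0.0088 / 1920.0
print(pitch / dd)           # ~11: pixels are about 11 times too large
```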
So let's look at the 1/3" format - as in the iPhone 6. There the width is 4.8mm, so about half as wide, meaning you still need the pixels to be about 5-6 times smaller than the camera sensor in the iPhone.
This is also assuming you want to use every pixel in a full HD format - this will result in a high computation time. Most of the stereo vision projects I've seen have used cameras with lower resolutions or downsampled the image to a format like 640x480, but of course that means that the pixels are much (3x) larger.
You ask if "IP Cameras" are "proper," but IP cameras come in lots of styles.
Hopefully this will help you as a guide for your iterations. Plainly speaking, I don't think you'll ever find anything (that is reasonably affordable) that would do the depth resolution at the baseline you're talking about. I would imagine the baseline would be more on the range of 5-10 meters to get what you need. At 10 meters, fyi, the pixel size becomes 8 $\mu$m. At that point, most/all of the HD cameras should be able to do what you want, but again HD is computationally expensive because there are so many pixels to correlate.
This will be an iterative process. Work forward and backward and forward and backward until you get the design that meets your needs. You'll find you need to make tradeoffs along the way, and that's the core of engineering - finding the "optimal" balance of specifications. Cost, performance, size, cost, weight, interface, lead time, cost, cost, cost. |
ISSN:
1531-3492
eISSN:
1553-524X
All Issues
Discrete & Continuous Dynamical Systems - B
July 2016 , Volume 21 , Issue 5
Special issue dedicated to Lishang Jiang on his 80th birthday
Abstract:
We dedicate this volume of the Journal of Discrete and Continuous Dynamical Systems-B to Professor Lishang Jiang on his 80th birthday. Professor Lishang Jiang was born in Shanghai in 1935. His family had migrated there from Suzhou. He graduated from the Department of Mathematics, Peking University, in 1954. After teaching at Beijing Aviation College, in 1957 he returned to Peking University as a graduate student of partial differential equations under the supervision of Professor Yulin Zhou. Later, as a professor, a researcher and an administrator, he worked at Peking University, Suzhou University and Tongji University at different points of his career. From 1989 to 1996, Professor Jiang was the President of Suzhou University. From 2001 to 2005, he was the Chairman of the Shanghai Mathematical Society.
For more information please click the “Full Text” above.
Abstract:
This paper investigates positive solutions of a second-order linear elliptic equation in an unbounded cylinder with zero boundary condition. We prove that there exist two special positive solutions, with exponential growth at one end and exponential decay at the other, and that all positive solutions are linear combinations of these two.
Abstract:
In this paper we discuss the optimal liquidation over a finite time horizon until the exit time. The drift and diffusion terms of the asset price are general functions depending on all variables including control and market regime. There is also a local nonlinear transaction cost associated to the liquidation. The model deals with both the permanent impact and the temporary impact in a regime switching framework. The problem can be solved with the dynamic programming principle. The optimal value function is the unique continuous viscosity solution to the HJB equation and can be computed with the finite difference method.
Abstract:
The following type of parabolic Barenblatt equations
$$\min\left\{\partial_t V - \mathcal{L}_1 V,\ \partial_t V-\mathcal{L}_2 V\right\} = 0$$
is studied, where $\mathcal{L}_1$ and $\mathcal{L}_2$ are different elliptic operators of second order. The (unknown) free boundary of the problem is a divisional curve, which is the optimal insured boundary in our stochastic control problem. It will be proved that the free boundary is a differentiable curve.
To the best of our knowledge, this is the first result on free boundary for Barenblatt Equation. We will establish the model and verification theorem by the use of stochastic analysis. The existence of classical solution to the HJB equation and the differentiability of free boundary are obtained by PDE techniques.
Abstract:
Based on the optimal estimate of convergence rate $O(\Delta x)$ of the value function of an explicit finite difference scheme for the American put option problem in [6], an $O(\sqrt{\Delta x})$ rate of convergence of the free boundary resulting from a general compatible numerical scheme to the true free boundary is proven. A new criterion for the compatibility of a generic numerical scheme to the PDE problem is presented. A numerical example is also included.
Abstract:
In this note, we remove the technical assumption $\gamma>0$ imposed by Dai et al. [SIAM J. Control Optim., 48 (2009), pp. 1134-1154], who consider the optimal investment and consumption decision of a CRRA investor facing proportional transaction costs and a finite time horizon. Moreover, we present an estimate on the resulting optimal consumption.
Abstract:
Recent years have seen a dramatic increase in the number and variety of new mathematical models describing biological processes. Some of these models are formulated as free boundary problems for systems of PDEs. Relevant biological questions give rise to interesting mathematical questions regarding properties of the solutions. In this review we focus on models whose formulation includes Stokes equations. They arise in describing the evolution of tumors, both at the macroscopic and molecular levels, in wound healing of cutaneous wounds, and in biofilms. We state recent results and formulate some open problems.
Abstract:
To capture the impact of the spatial heterogeneity of the environment and the available resources of the public health system on the persistence and extinction of infectious disease, a simplified spatial SIS reaction-diffusion model with allocation and use efficiency of medical resources is proposed. A nonlinear, space-dependent recovery rate is introduced to model the impact of available public health resources on the transmission dynamics of the disease. The basic reproduction numbers associated with the disease in the spatial setting are defined, and the low, moderate and high risks of the environment are then classified. Our results show that complicated dynamical behaviors of the system are induced by variation in the use efficiency of medical resources, which suggests that maintaining an appropriate level of public health resources and good management are important to control and prevent the temporal-spatial spreading of the infectious disease. Numerical simulations are presented to illustrate the impact of the use efficiency of medical resources on the control of the spread of infectious disease.
Abstract:
This paper introduces a new class of optimal switching problems, where the player is allowed to switch at a sequence of exogenous Poisson arrival times, and the underlying switching system is governed by an infinite horizon backward stochastic differential equation system. The value function and the optimal switching strategy are characterized by the solution of the underlying switching system. In a Markovian setting, the paper gives a complete description of the structure of switching regions by means of the comparison principle.
Abstract:
This paper is concerned with a coupled Navier-Stokes/Allen-Cahn system describing a diffuse interface model for two-phase flow of viscous incompressible fluids with different densities in a bounded domain $\Omega\subset\mathbb R^N$($N=2,3$). We establish a criterion for possible break down of such solutions at finite time in terms of the temporal integral of both the maximum norm of the deformation tensor of velocity gradient and the square of maximum norm of gradient of phase field variable in 2D. In 3D, the temporal integral of the square of maximum norm of velocity is also needed. Here, we suppose the initial density function $\rho_0$ has a positive lower bound.
Abstract:
We show that solutions of equations of the form \[ -u_t+D_{11}u+(x^1)D_{22}u = f \] (and also more general equations in any number of dimensions) satisfy simple Hölder estimates involving their derivatives. We also examine some pointwise properties for these solutions. Our results generalize those of Daskalopoulos and Lee, and Hong and Huang.
Abstract:
In this paper we consider the following equation $$ u_t=(u^m)_{xx}+(u^n)_x, \ \ (x, t)\in \mathbb{R}\times(0, \infty) $$ with a Dirac measure as initial data, i.e., $u(x, 0)=\delta(x)$. The solution of such a Cauchy problem is known as a source-type solution. In the recent work [11] the author studied the existence and uniqueness of such singular solutions and proved that there exists a number $n_0=m+2$ such that the equation has a unique source-type solution when $0 \leq n < n_0$. Here our attention is focused on nonexistence and on the asymptotic behavior near the origin for short times. We prove that $n_0$ is also a critical number in that no source-type solution exists when $n \geq n_0$, and we describe the short-time asymptotic behavior of the source-type solution when $0 \leq n < n_0$. Our result shows that, in the case of existence and for short times, the source-type solution behaves like the fundamental solution of the standard porous medium equation when $0 \leq n < m+1$; a unique self-similar source-type solution exists when $n = m+1$; and the solution behaves like the nonnegative fundamental entropy solution of the conservation law when $m+1 < n < n_0$. In the case of nonexistence, $n \geq n_0$, the singularity gradually disappears: the mass cannot concentrate for a short time and no such singular solution exists. The results of the previous work [11] and this paper together give a complete answer to this line of research.
Abstract:
In this paper we introduce, for a convex domain $K$ in the Euclidean plane, a function $\Omega_{n}(K, \theta)$ which we call the biwidth of $K$, and then try to find the least-area convex domain among all convex domains with the same constant biwidth $\Lambda$. When $n$ is an odd integer, it is proved that our problem is just that of Blaschke-Lebesgue, and when $n$ is an even number, we give a lower bound on the area of such constant-biwidth domains.
Abstract:
We apply the general theory of pricing in incomplete markets, due to the author, to the problem of pricing bonds for the Hull-White stochastic interest rate model. As pricing in incomplete markets involves more market parameters than the classical theory, and as the derived risk premium is time-dependent, the proposed methodology might offer a better way of replicating different shapes of the empirically observed yield curves. For example, the so-called humped yield curve can be obtained from a normal yield curve by only increasing the investor's risk aversion.
Abstract:
In this paper, we consider the compressible magnetohydrodynamic equations with nonnegative thermal conductivity and electric conductivity. The coefficients of the viscosity, heat conductivity and magnetic diffusivity depend on density and temperature. Inspired by the framework of [11], [13] and [15], we use the maximal regularity and contraction mapping argument to prove the existence and uniqueness of local strong solutions with positive initial density in the bounded domain for any dimension.
Abstract:
In this paper we present a new proof for the interior $C^{1,\alpha}$ regularity of weak solutions for a class of quasilinear elliptic equations, whose prototype is the $p$-Laplace equation.
Abstract:
An efficient parallelization method for numerically solving Lagrangian radiation hydrodynamic problems with three-temperature modeling on structural quadrilateral grids is presented. The three-temperature heat conduction equations are discretized by an implicit scheme, and their computational cost is very high. Thus a parallel iterative method for the three-temperature system of equations is constructed, which is based on domain decomposition of physical space, combined with fixed-point (Picard) nonlinear iteration to solve the sub-domain problems. It avoids global communication and can be naturally implemented on massively parallel computers. The space discretization of the heat conduction equations uses the well-known local support operator method (LSOM). Numerical experiments show that the parallel iterative method preserves the same accuracy as the fully implicit scheme, and has high parallel efficiency and good stability, so it provides an effective solution procedure for numerical simulation of radiation hydrodynamic problems on parallel computers.
|
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of:
We can always detect uniform motion with respect to a medium by a positive result to a Michelson-Morley experiment that is confined to a region of spacetime small enough to be "flat". The same experiment in the same region does not detect such motion in freespace. In such a small enough region, light in freespace is always observed to travel at $c$;
Another important difference is that mediums and gravitational lenses are fundamentally different in their effect on the polarisation of light: I talk further about this below;
A related way of saying the same as in 1. is that a photon in curved space still always behaves as a massless particle. Photons propagating through a (non-freespace) medium are not pure photons, unless they are not interacting with the medium, in which case the medium is equivalent to freespace for this discussion. In a medium, the photon becomes a quantum superposition of pure photon and excited matter states, as discussed in my answer here. It is therefore a quasiparticle (called a polariton, or plasmon, or exciton, depending on the exact kind of medium and interaction spoken about) that always has a nonzero rest mass if you insist on thinking of it as a quasi-particle. The hot topic of "slow light" belongs in this picture, and says nothing contradicting the masslessness of light in freespace;
In quantum optics, almost always the interactions involve absorptions and re-emissions of different photons: quantum and classical optics are simply different approximations for the same kind of light-matter interaction, as categorised in my answer here. The cyclic "absorption-re-emission" picture is, albeit in different language, an equivalent way of thinking about the matter-light interaction as the "polariton" quasi-particle, the photon-matter-state quantum superposition picture. The "polariton" picture diagonalises the Schrödinger picture so we talk in terms of eigenmodes; the absorption-re-emission picture keeps pure photons and matter states separate: since these are no longer eigenstates, the Schrödinger picture paints the situation as an oscillation back and forth between these two.
On the other hand, gravitational optics is the propagation of light in "curved", but "empty" spacetime. Right now we would tend to describe a photon being gravitationally lensed by the free, but "curved" space Maxwell equations:
$$ {A^{ a }}_{ ; a } = 0;\;\Box A^{a} = {R^{ a }}_{ b } A^{ b }$$
where $A$ is the generalised Lorenz gauged four-potential and $R_{ a b } \ \stackrel{\mathrm{def}}{=}\ {R^{ s }}_{ a s b }$ is the Ricci curvature tensor, which you get from the solution of the vacuum Einstein field equations that prevails around the lensing object(s). The photon is here interacting locally with spacetime,
not with the lensing "matter". Einstein's big gig (in GTR at least) was "locality": the notion that all physics is local and that there is no instantaneous action at a distance.
At the equation level, a
crucial difference between Maxwell's equations in curved spacetime as opposed to Maxwell's equations in a (potentially light bending) medium is that gravitational lensing manifests itself as a variation from place to place in the lightspeed $1/\sqrt{\mu\,\epsilon}$, but the "characteristic impedance" $\sqrt{\mu/\epsilon}$ stays constant everywhere (recall we're solving for geodesics over a wide region, so that, from a distant, nonlocal standpoint, lightspeed can vary from place to place - this is different from, and altogether consistent with, the generalised Equivalence Principle saying that spacetime is always locally Minkowskian with the same $c$). A light-bending medium on the other hand, unless it is very special, changes both $1/\sqrt{\mu\,\epsilon}$ and the "characteristic impedance" $\sqrt{\mu/\epsilon}$. Physically what this means is that the left and right handed polarised photon eigenstates in general couple together in a medium, whereas in gravitational lensing they never couple together no matter how "severe" the gravitational lensing may be. Pure left/right handed polarised light stays left/right handed polarised in any gravitational lensing: inhomogeneous, light bending mediums almost always mix left and right handed components. See:
Iwo Bialynicki-Birula, "Photon Wave Function", in "Progress in Optics" Vol. XXXVI, E. Wolf (Ed.), 1996,
particularly §11 of this work. I have often wondered about "simulating" gravitational lensing with inhomogeneous metamaterials whose characteristic impedance is constant, but I'm not even sure it is theoretically possible to make these.
Another key difference from "quantum optics" is that the photon propagates in curved spacetime and is not thought of as being absorbed and re-emitted as with its interactions with matter. So here the picture is much simpler (conceptually) than quantum optics (of course it's quite involved and tedious in actual calculation).
However (although I am well out of my depth here for details), it's possible that a future quantum theory of gravity will come up with a "graviton field", so that we would probably then be thinking of the photon's repeated absorption and re-emission by the graviton field, which would be a picture that is more like our conception today of quantum optics.
See also some pithy discussions of some of the other differences between medium bending of light and gravitational lensing here: as Jitter amusingly puts it "Could I burn space ants?". The answer, to the space ants' collective relief, is no.
BTW I don't like that diagram of diffuse transmission, with "photons" bouncing all over the place like bullets. It seems to be quite fashionable nowadays to talk about "ballistic" photons (which I don't really understand): see my answer here for some idea of why I don't like the diagram. Diffuse transmission is still a version of what I talk about there, only a bit more complicated because the distribution of matter is more "jagged". |
The Annals of Probability, Volume 16, Number 1 (1988), 375-396.

Boundary Crossing Problems for Sample Means

Abstract
Motivated by several classical sequential decision problems, we study herein the following type of boundary crossing problems for certain nonlinear functions of sample means. Let $X_1, X_2,\ldots$ be i.i.d. random vectors whose common density belongs to the $k$-dimensional exponential family $h_\theta(x) = \exp\{\theta'x - \psi(\theta)\}$ with respect to some nondegenerate measure $\nu$. Let $\bar{X}_n = (X_1 + \cdots + X_n)/n$, $\hat\theta_n = (\nabla\psi)^{-1}(\bar{X}_n)$, and let $I(\theta, \lambda) = E_\theta\log\{h_\theta(X_1)/h_\lambda(X_1)\}$ ( = Kullback-Leibler information number). Consider stopping times of the form $T_c(\lambda) = \inf\{n: I(\hat\theta_n, \lambda) \geq n^{-1}g(cn)\}, c > 0$, where $g$ is a positive function such that $g(t) \sim \alpha \log t^{-1}$ as $t \rightarrow 0$. We obtain asymptotic approximations to the moments $E_\theta T^r_c(\lambda)$ as $c \rightarrow 0$ that are uniform in $\theta$ and $\lambda$ with $|\lambda - \theta|^2/c \rightarrow \infty$. We also study the probability that $\bar{X}_{T_c(\lambda)}$ lies in certain cones with vertex $\nabla\psi(\lambda)$. In particular, in the one-dimensional case with $\lambda > \theta$, we consider boundary crossing probabilities of the form $P_\theta\{\hat\theta_n \geq \lambda \text{ and } I(\hat\theta_n, \lambda) \geq n^{-1} g(cn) \text{ for some } n\}$. Asymptotic approximations (as $c \rightarrow 0$) to these boundary crossing probabilities are obtained that are uniform in $\theta$ and $\lambda$ with $|\lambda - \theta|^2/c \rightarrow \infty$.
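For intuition, here is a small Monte Carlo sketch of the stopping time in the simplest one-dimensional Gaussian case, where $h_\theta$ is the $N(\theta,1)$ family, so $\hat\theta_n = \bar X_n$ and $I(\hat\theta_n,\lambda) = (\bar X_n - \lambda)^2/2$. The boundary function $g(t) = \alpha\log(1/t)$ (set to $0$ once $t \ge 1$) is a hypothetical choice consistent with $g(t) \sim \alpha \log t^{-1}$, not the paper's exact specification:

```python
import math
import random

random.seed(2)

def stopping_time(theta, lam, c, alpha=1.0, nmax=10**6):
    """T_c(lam) = inf{n : I(theta_hat_n, lam) >= g(c n)/n} for N(theta, 1) data."""
    s = 0.0
    for n in range(1, nmax + 1):
        s += random.gauss(theta, 1.0)
        xbar = s / n
        g = alpha * math.log(1.0 / (c * n)) if c * n < 1.0 else 0.0
        if n * (xbar - lam) ** 2 / 2.0 >= g:   # I(theta_hat_n, lam) >= g(c n)/n
            return n
    return nmax

# The farther lam is from the true theta (larger KL number I(theta, lam)),
# the sooner the boundary is crossed, in line with E_theta T ~ g/I heuristics.
t_far = sum(stopping_time(0.0, 2.0, 1e-3) for _ in range(50)) / 50
t_near = sum(stopping_time(0.0, 0.5, 1e-3) for _ in range(50)) / 50
print(t_far, t_near)
```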
Article information

Source: Ann. Probab., Volume 16, Number 1 (1988), 375-396.
First available in Project Euclid: 19 April 2007
Permanent link: https://projecteuclid.org/euclid.aop/1176991909
Digital Object Identifier: doi:10.1214/aop/1176991909
Mathematical Reviews number (MathSciNet): MR920279
Zentralblatt MATH identifier: 0642.60018
JSTOR: links.jstor.org
Subjects: Primary 60F10 (Large deviations); Secondary 60G40 (Stopping times; optimal stopping problems; gambling theory), 62L05 (Sequential design), 62L15 (Optimal stopping)

Citation
Lai, Tze Leung. Boundary Crossing Problems for Sample Means. Ann. Probab. 16 (1988), no. 1, 375--396. doi:10.1214/aop/1176991909. https://projecteuclid.org/euclid.aop/1176991909 |
I think that tests for normality can be useful as companions to graphical examinations. They have to be used in the right way, though. In my opinion, this means that many popular tests, such as the Shapiro-Wilk, Anderson-Darling and Jarque-Bera tests, should never be used.
Before I explain my standpoint, let me make a few remarks:
- In an interesting recent paper, Rochon et al. studied the impact of the Shapiro-Wilk test on the two-sample t-test. The two-step procedure of testing for normality before carrying out for instance a t-test is not without problems. Then again, neither is the two-step procedure of graphically investigating normality before carrying out a t-test. The difference is that the impact of the latter is much more difficult to investigate (as it would require a statistician to graphically investigate normality $100,000$ or so times...).
- It is useful to quantify non-normality, for instance by computing the sample skewness, even if you don't want to perform a formal test.
- Multivariate normality can be difficult to assess graphically and convergence to asymptotic distributions can be slow for multivariate statistics. Tests for normality are therefore more useful in a multivariate setting.
- Tests for normality are perhaps especially useful for practitioners who use statistics as a set of black-box methods. When normality is rejected, the practitioner should be alarmed and, rather than carrying out a standard procedure based on the assumption of normality, consider using a nonparametric procedure, applying a transformation or consulting a more experienced statistician.
- As has been pointed out by others, if $n$ is large enough, the CLT usually saves the day. However, what is "large enough" differs for different classes of distributions.
(In my definition) a test for normality is directed against a class of alternatives if it is sensitive to alternatives from that class, but not sensitive to alternatives from other classes. Typical examples are tests that are directed towards skew or kurtotic alternatives. The simplest examples use the sample skewness and kurtosis as test statistics.
Directed tests of normality are arguably often preferable to omnibus tests (such as the Shapiro-Wilk and Jarque-Bera tests) since
it is common that only some types of non-normality are of concern for a particular inferential procedure.
Let's consider Student's t-test as an example. Assume that we have an i.i.d. sample from a distribution with skewness $\gamma=\frac{E(X-\mu)^3}{\sigma^3}$ and (excess) kurtosis $\kappa=\frac{E(X-\mu)^4}{\sigma^4}-3.$ If $X$ is symmetric about its mean, $\gamma=0$. Both $\gamma$ and $\kappa$ are 0 for the normal distribution.
Under regularity assumptions, we obtain the following asymptotic expansion for the cdf of the test statistic $T_n$:$$P(T_n\leq x)=\Phi(x)+n^{-1/2}\frac{1}{6}\gamma(2x^2+1)\phi(x)-n^{-1}x\Big(\frac{1}{12}\kappa (x^2-3)-\frac{1}{18}\gamma^2(x^4+2x^2-3)-\frac{1}{4}(x^2+3)\Big)\phi(x)+o(n^{-1}),$$
where $\Phi(\cdot)$ is the cdf and $\phi(\cdot)$ is the pdf of the standard normal distribution.
$\gamma$ appears for the first time in the $n^{-1/2}$ term, whereas $\kappa$ appears in the $n^{-1}$ term. The
asymptotic performance of $T_n$ is much more sensitive to deviations from normality in the form of skewness than in the form of kurtosis.
It can be verified using simulations that this is true for small $n$ as well. Thus Student's t-test is sensitive to skewness but relatively robust against heavy tails, and
it is reasonable to use a test for normality that is directed towards skew alternatives before applying the t-test.
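As a concrete illustration, here is a minimal directed test against skew alternatives, using the sample skewness $g_1$ as the test statistic with its large-sample null standard error $\sqrt{6/n}$ (a textbook approximation; a production implementation such as D'Agostino's skewness test uses refined finite-sample moments):

```python
import math
import random

def sample_skewness(x):
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    return m3 / m2 ** 1.5

def skew_test(x, z_crit=1.96):
    """Directed test: reject normality if |g1| / sqrt(6/n) exceeds z_crit."""
    z = sample_skewness(x) / math.sqrt(6.0 / len(x))
    return z, abs(z) > z_crit

random.seed(0)
normal = [random.gauss(0, 1) for _ in range(2000)]
skewed = [random.expovariate(1.0) for _ in range(2000)]  # population skewness = 2

print(skew_test(normal))  # |z| typically small: no evidence against normality
print(skew_test(skewed))  # z very large: the skewness is detected
```

Note that this test is, by construction, nearly blind to symmetric heavy-tailed alternatives, which is exactly the point: those are the deviations the t-test tolerates.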
As a rule of thumb (not a law of nature), inference about means is sensitive to skewness and inference about variances is sensitive to kurtosis.
Using a directed test for normality has the benefit of getting higher power against ''dangerous'' alternatives and lower power against alternatives that are less ''dangerous'', meaning that we are less likely to reject normality because of deviations from normality that won't affect the performance of our inferential procedure.
The non-normality is quantified in a way that is relevant to the problem at hand. This is not always easy to do graphically.
As $n$ gets larger, skewness and kurtosis become less important - and directed tests are likely to detect if these quantities deviate from 0 even by a small amount. In such cases, it seems reasonable to, for instance, test whether $|\gamma|\leq 1$ or (looking at the first term of the expansion above) $$|n^{-1/2}\frac{1}{6}\gamma(2z_{\alpha/2}^2+1)\phi(z_{\alpha/2})|\leq 0.01$$ rather than whether $\gamma=0$. This takes care of some of the problems that we otherwise face as $n$ gets larger. |
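To make the suggested threshold test concrete, one can invert the first term of the expansion to find the largest $|\gamma|$ whose leading-order effect on a nominal two-sided level test stays below $0.01$ (a sketch of the arithmetic only, not a calibrated testing procedure):

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def gamma_bound(n, z=1.96, tol=0.01):
    """Largest |gamma| with |n^(-1/2) * (1/6) * gamma * (2z^2+1) * phi(z)| <= tol."""
    return tol * math.sqrt(n) / ((2 * z * z + 1) * phi(z) / 6)

for n in (20, 100, 1000):
    print(n, round(gamma_bound(n), 3))
# The tolerated skewness grows like sqrt(n): for large n, testing the point
# hypothesis gamma = 0 exactly becomes needlessly strict.
```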
Under the auspices of the Computational Complexity Foundation (CCF)
In this paper we prove two results about $AC^0[\oplus]$ circuits.
We show that for $d(N) = o(\sqrt{\log N/\log \log N})$ and $N \leq s(N) \leq 2^{dN^{1/d^2}}$ there is an explicit family of functions $\{f_N:\{0,1\}^N\rightarrow \{0,1\}\}$ such that
$f_N$ has uniform $AC^0$ formulas of depth $d$ and size at ... more >>>
We show that there is a randomized algorithm that, when given a small constant-depth Boolean circuit $C$ made up of gates that compute constant-degree Polynomial Threshold functions or PTFs (i.e., Boolean functions that compute signs of constant-degree polynomials), counts the number of satisfying assignments to $C$ in significantly better than ... more >>>
The $\delta$-Coin Problem is the computational problem of distinguishing between coins that are heads with probability $(1+\delta)/2$ or $(1-\delta)/2,$ where $\delta$ is a parameter that is going to $0$. We study the complexity of this problem in the model of constant-depth Boolean circuits and prove the following results.
1. Upper ... more >>>
We present polynomial families complete for the well-studied algebraic complexity classes VF, VBP, VP, and VNP. The polynomial families are based on the homomorphism polynomials studied in the recent works of Durand et al. (2014) and Mahajan et al. (2016). We consider three different variants of graph homomorphisms, namely injective ... more >>>
We study the size blow-up that is necessary to convert an algebraic circuit of product-depth $\Delta+1$ to one of product-depth $\Delta$ in the multilinear setting.
We show that for every positive $\Delta = \Delta(n) = o(\log n/\log \log n),$ there is an explicit multilinear polynomial $P^{(\Delta)}$ on $n$ variables that ... more >>>
The complexity of Iterated Matrix Multiplication is a central theme in Computational Complexity theory, as the problem is closely related to the problem of separating various complexity classes within $\mathrm{P}$. In this paper, we study the algebraic formula complexity of multiplying $d$ many $2\times 2$ matrices, denoted $\mathrm{IMM}_{d}$, and show ... more >>>
We investigate the power of Non-commutative Arithmetic Circuits, which compute polynomials over the free non-commutative polynomial ring $\mathbb{F}\langle x_1,\dots,x_N \rangle$, where variables do not commute. We consider circuits that are restricted in the ways in which they can compute monomials: this can be seen as restricting the families of parse ... more >>>
In this work we consider the term evaluation problem which involves, given a term over some algebra and a valid input to the term, computing the value of the term on that input. This is a classical problem studied under many names such as formula evaluation problem, formula value problem ... more >>>
In this work we study the problem of efficiently isolating witnesses for the complexity classes NL and LogCFL, which are two well-studied complexity classes contained in P. We prove that if there is a L/poly randomized procedure with success probability at least 2/3 for isolating an s-t path in a ... more >>>
In this note, we prove that there is an explicit polynomial in VP such that any $\Sigma\Pi\Sigma$ arithmetic circuit computing it must have size at least $n^{3-o(1)}$. Up to $n^{o(1)}$ factors, this strengthens a recent result of Kayal, Saha and Tavenas (ICALP 2016) which gives a polynomial in VNP with ... more >>>
We continue the study of the shifted partial derivative measure, introduced by Kayal (ECCC 2012), which has been used to prove many strong depth-4 circuit lower bounds starting from the work of Kayal, and that of Gupta et al. (CCC 2013).
We show a strong lower bound on the dimension ... more >>>
Nisan (STOC 1991) exhibited a polynomial which is computable by linear sized non-commutative circuits but requires exponential sized non-commutative algebraic branching programs. Nisan's hard polynomial is in fact computable by linear sized skew circuits (skew circuits are circuits where every multiplication gate has the property that all but one of ... more >>>
A celebrated result of Barrington (1985) proved that polynomial size, width-5 branching programs (BP) are equivalent in power to a restricted form of branching programs -- polynomial sized width-5 permutation branching programs (PBP), which in turn capture all of NC1. On the other hand it is known that width-3 PBPs ... more >>>
SUBSET SUM is a well known NP-complete problem:
given $t \in Z^{+}$ and a set $S$ of $m$ positive integers, output YES if and only if there is a subset $S^\prime \subseteq S$ such that the sum of all numbers in $S^\prime$ equals $t$. The problem and its search ... more >>>
We show here a $2^{\Omega(\sqrt{d} \cdot \log N)}$ size lower bound for homogeneous depth four arithmetic formulas. That is, we give
an explicit family of polynomials of degree $d$ on $N$ variables (with $N = d^3$ in our case) with $0, 1$-coefficients such that for any representation of ... more >>>
A proof system for a language $L$ is a function $f$ such that Range$(f)$ is exactly $L$. In this paper, we look at proofsystems from a circuit complexity point of view and study proof systems that are computationally very restricted. The restriction we study is: they can be computed by ... more >>>
We study the arithmetic complexity of iterated matrix multiplication. We show that any multilinear homogeneous depth $4$ arithmetic formula computing the product of $d$ generic matrices of size $n \times n$, IMM$_{n,d}$, has size $n^{\Omega(\sqrt{d})}$ as long as $d \leq n^{1/10}$. This improves the result of Nisan and Wigderson (Computational ... more >>>
We define DLOGTIME proof systems, DLTPS, which generalize NC0 proof systems.
It is known that functions such as Exact-k and Majority do not have NC0 proof systems. Here, we give a DLTPS for Exact-k (and therefore for Majority) and also for other natural functions such as Reach and k-Clique. Though ... more >>>
We give a \#NC$^1$ upper bound for the problem of counting accepting paths in any fixed visibly pushdown automaton. Our algorithm involves a non-trivial adaptation of the arithmetic formula evaluation algorithm of Buss, Cook, Gupta, Ramachandran (BCGR: SICOMP 21(4), 1992). We also show that the problem is \#NC$^1$ hard. Our ... more >>>
In this paper, we give streaming algorithms for some problems which are known to be in deterministic log-space, when the number of passes made on the input is unbounded. If the input data is massive,
the conventional deterministic log-space algorithms may not run efficiently. We study the complexity of the ... more >>>
Graph Isomorphism is the prime example of a computational problem with a wide difference between the best known lower and upper bounds on its complexity. There is a significant gap between extant lower and upper bounds for planar graphs as well. We bridge the gap for this natural and ... more >>>
The parallel complexity class NC^1 has many equivalent models such as
polynomial size formulae and bounded width branching programs. Caussinus et al. \cite{CMTV} considered arithmetizations of two of these classes, #NC^1 and #BWBP. We further this study to include arithmetization of other classes. In particular, we show that counting paths ... more >>>
We re-examine the complexity of evaluating monotone planar circuits
MPCVP, with special attention to circuits with cylindrical embeddings. MPCVP is known to be in NC^3, and for the special case of upward stratified circuits, it is known to be in LogDCFL. We characterize cylindricality, which ... more >>> |
Noether's theorem relates symmetries to conserved quantities. For a central potential $V \propto \frac{1}{r}$, the Laplace-Runge-Lenz vector is conserved. What is the symmetry associated with the conservation of this vector?
1) Hamiltonian Problem. The Kepler problem has Hamiltonian
$$ H~=~T+V, \qquad T~:=~ \frac{p^2}{2m}, \qquad V~:=~- \frac{k}{q}, \tag{1} $$
where $m$ is the 2-body reduced mass. The Laplace–Runge–Lenz vector is (up to an irrelevant normalization)
$$ A^j ~:=~a^j + km\frac{q^j}{q}, \qquad a^j~:=~({\bf L} \times {\bf p})^j~=~{\bf q}\cdot{\bf p}~p^j- p^2~q^j,\qquad {\bf L}~:=~ {\bf q} \times {\bf p}.\tag{2}$$
2) Action. The Hamiltonian Lagrangian is
$$ L_H~:=~ \dot{\bf q}\cdot{\bf p} - H,\tag{3} $$
and the action is
$$ S[{\bf q},{\bf p}]~=~ \int {\rm d}t~L_H .\tag{4}$$
The non-zero fundamental canonical Poisson brackets are
$$ \{ q^i , p^j\}~=~ \delta^{ij}. \tag{5}$$
3) Inverse Noether's Theorem. Quite generally in the Hamiltonian formulation, given a constant of motion $Q$, the infinitesimal variation
$$\delta~=~ -\varepsilon \{Q,\cdot\}\tag{6}$$
is a global off-shell symmetry of the action $S$ (modulo boundary terms). Here $\varepsilon$ is an infinitesimal global parameter, and $X_Q=\{Q,\cdot\}$ is a Hamiltonian vector field with Hamiltonian generator $Q$. The full Noether charge is $Q$, see e.g. my answer to this question. (The words
on-shell and off-shell refer to whether the equations of motion are satisfied or not. The minus is conventional.)
4) Variation. Let us check that the three Laplace–Runge–Lenz components $A^j$ are Hamiltonian generators of three continuous global off-shell symmetries of the action $S$. In detail, the infinitesimal variations $\delta= \varepsilon_j \{A^j,\cdot\}$ read
$$ \delta q^i ~=~ \varepsilon_j \{A^j,q^i\} , \qquad \{A^j,q^i\} ~=~ 2 p^i q^j - q^i p^j - {\bf q}\cdot{\bf p}~\delta^{ij}, $$ $$ \delta p^i ~=~ \varepsilon_j \{A^j,p^i\} , \qquad \{A^j,p^i\}~ =~ p^i p^j - p^2~\delta^{ij} +km\left(\frac{\delta^{ij}}{q}- \frac{q^i q^j}{q^3}\right), $$ $$ \delta t ~=~0,\tag{7}$$
where $\varepsilon_j$ are three infinitesimal parameters.
5) Notice for later that
$$ {\bf q}\cdot\delta {\bf q}~=~\varepsilon_j({\bf q}\cdot{\bf p}~q^j - q^2~p^j), \tag{8} $$
$$ {\bf p}\cdot\delta {\bf p} ~=~\varepsilon_j km(\frac{p^j}{q}-\frac{{\bf q}\cdot{\bf p}~q^j}{q^3})~=~- \frac{km}{q^3}{\bf q}\cdot\delta {\bf q}, \tag{9} $$
$$ {\bf q}\cdot\delta {\bf p}~=~\varepsilon_j({\bf q}\cdot{\bf p}~p^j - p^2~q^j )~=~\varepsilon_j a^j, \tag{10} $$
$$ {\bf p}\cdot\delta {\bf q}~=~2\varepsilon_j( p^2~q^j - {\bf q}\cdot{\bf p}~p^j)~=~-2\varepsilon_j a^j~. \tag{11} $$
6) The Hamiltonian is invariant
$$ \delta H ~=~ \frac{1}{m}{\bf p}\cdot\delta {\bf p} + \frac{k}{q^3}{\bf q}\cdot\delta {\bf q}~=~0, \tag{12}$$
showing that the Laplace–Runge–Lenz vector $A^j$ is classically a constant of motion
$$\frac{dA^j}{dt} ~\approx~ \{ A^j, H\}+\frac{\partial A^j}{\partial t} ~=~ 0.\tag{13}$$
(We will use the $\approx$ sign to stress that an equation is an on-shell equation.)
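Equation (13) can be checked numerically: integrating the Kepler equations of motion with a standard RK4 stepper (a sketch in units $m=k=1$, with an arbitrarily chosen bound initial condition) shows the components of $A^j$ from eq. (2) staying constant along the orbit:

```python
import numpy as np

m = k = 1.0

def deriv(y):
    q, p = y[:3], y[3:]
    r = np.linalg.norm(q)
    return np.concatenate([p / m, -k * q / r**3])   # qdot = p/m, pdot = -k q/r^3

def lrl(y):
    q, p = y[:3], y[3:]
    a = np.dot(q, p) * p - np.dot(p, p) * q         # a = (L x p), as in eq. (2)
    return a + k * m * q / np.linalg.norm(q)

y = np.array([1.0, 0.0, 0.0, 0.0, 1.2, 0.0])        # E = 0.72 - 1 < 0: bound orbit
A0 = lrl(y)

dt, steps = 1e-3, 20000                             # integrate to t = 20 (about a period)
for _ in range(steps):                              # classical RK4
    k1 = deriv(y)
    k2 = deriv(y + 0.5 * dt * k1)
    k3 = deriv(y + 0.5 * dt * k2)
    k4 = deriv(y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

drift = np.max(np.abs(lrl(y) - A0))
print(drift)   # numerically tiny: A^j is a constant of motion
```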
7) The variation of the Hamiltonian Lagrangian $L_H$ is a total time derivative
$$ \delta L_H~=~ \delta (\dot{\bf q}\cdot{\bf p})~=~ \dot{\bf q}\cdot\delta {\bf p} - \dot{\bf p}\cdot\delta {\bf q} + \frac{d({\bf p}\cdot\delta {\bf q})}{dt} $$ $$ =~ \varepsilon_j\left( \dot{\bf q}\cdot{\bf p}~p^j - p^2~\dot{q}^j + km\left( \frac{\dot{q}^j}{q} - \frac{{\bf q} \cdot \dot{\bf q}~q^j}{q^3}\right)\right) $$ $$- \varepsilon_j\left(2 \dot{\bf p}\cdot{\bf p}~q^j - \dot{\bf p}\cdot{\bf q}~p^j- {\bf p}\cdot{\bf q}~\dot{p}^j \right) - 2\varepsilon_j\frac{da^j}{dt}$$ $$ =~\varepsilon_j\frac{df^j}{dt}, \qquad f^j ~:=~ A^j-2a^j, \tag{14}$$
and hence the action $S$ is invariant off-shell up to boundary terms.
8) Noether charge. The bare Noether charge $Q_{(0)}^j$ is
$$Q_{(0)}^j~:=~ \frac{\partial L_H}{\partial \dot{q}^i} \{A^j,q^i\}+\frac{\partial L_H}{\partial \dot{p}^i} \{A^j,p^i\} ~=~ p^i\{A^j,q^i\}~=~ -2a^j. \tag{15}$$
The full Noether charge $Q^j$ (which takes the total time-derivative into account) becomes (minus) the Laplace–Runge–Lenz vector
$$ Q^j~:=~Q_{(0)}^j-f^j~=~ -2a^j-(A^j-2a^j)~=~ -A^j.\tag{16}$$
$Q^j$ is conserved on-shell
$$\frac{dQ^j}{dt} ~\approx~ 0,\tag{17}$$
due to Noether's first Theorem. Here $j$ is an index that labels the three symmetries.
9) Lagrangian Problem. The Kepler problem has Lagrangian
$$ L~=~T-V, \qquad T~:=~ \frac{m}{2}\dot{q}^2, \qquad V~:=~- \frac{k}{q}. \tag{18} $$
The Lagrangian momentum is
$$ {\bf p}~:=~\frac{\partial L}{\partial \dot{\bf q}}~=~m\dot{\bf q} \tag{19} . $$
Let us project the infinitesimal symmetry transformation (7) to the Lagrangian configuration space
$$ \delta q^i ~=~ \varepsilon_j m \left( 2 \dot{q}^i q^j - q^i \dot{q}^j - {\bf q}\cdot\dot{\bf q}~\delta^{ij}\right), \qquad\delta t ~=~0.\tag{20}$$
It would have been difficult to guess the infinitesimal symmetry transformation (20) without using the corresponding Hamiltonian formulation (7). But once we know it we can proceed within the Lagrangian formalism. The variation of the Lagrangian is a total time derivative
$$ \delta L~=~\varepsilon_j\frac{df^j}{dt}, \qquad f^j~:=~ m\left(m\dot{q}^2q^j- m{\bf q}\cdot\dot{\bf q}~\dot{q}^j +k \frac{q^j}{q}\right)~=~A^j-2 a^j . \tag{21}$$
The bare Noether charge $Q_{(0)}^j$ is again
$$Q_{(0)}^j~:=~2m^2\left(\dot{q}^2q^j- {\bf q}\cdot\dot{\bf q}~\dot{q}^j\right) ~=~-2a^j . \tag{22}$$
The full Noether charge $Q^j$ becomes (minus) the Laplace–Runge–Lenz vector
$$ Q^j~:=~Q_{(0)}^j-f^j~=~ -2a^j-(A^j-2a^j)~=~ -A^j,\tag{23}$$
similar to the Hamiltonian formulation (16).
While Kepler's second law is simply a statement of the conservation of angular momentum (and as such it holds for all systems described by central forces), the first and the third laws are special and are linked with the unique form of the newtonian potential $-k/r$. In particular, Bertrand's theorem ensures that
only the newtonian potential and the harmonic potential $kr^2$ give rise to closed orbits (no precession). It is natural to think that this must be due to some kind of symmetry of the problem. In fact, the particular symmetry of the newtonian potential is described exactly by the conservation of the RL vector (it can be shown that the RL vector is conserved iff the potential is central and newtonian). This, in turn, is due to a more general symmetry: if conservation of angular momentum is linked to the group of special orthogonal transformations in 3-dimensional space $SO(3)$, conservation of the RL vector must be linked to a 6-dimensional group of symmetries, since in this case there are apparently six conserved quantities (3 components of $L$ and 3 components of $\mathcal A$). In the case of bound orbits, this group is $SO(4)$, the group of rotations in 4-dimensional space.
Just to fix the notation, the RL vector is:
\begin{equation} \mathcal{A}=\textbf{p}\times\textbf{L}-\frac{km}{r}\textbf{x} \end{equation}
Calculate its total derivative:
\begin{equation}\frac{d\mathcal{A}}{dt}=-\nabla U\times(\textbf{x}\times\textbf{p})+\textbf{p}\times\frac{d\textbf{L}}{dt}-\frac{k\textbf{p}}{r}+\frac{k(\textbf{p}\cdot \textbf{x})}{r^3}\textbf{x} \end{equation}
Make use of Levi-Civita symbol to develop the cross terms:
\begin{equation}\epsilon_{sjk}\epsilon_{sil}=\delta_{ji}\delta_{kl}-\delta_{jl}\delta_{ki} \end{equation}
Finally:
\begin{equation} \frac{d\mathcal{A}}{dt}=\left(\textbf{x}\cdot\nabla U-\frac{k}{r}\right)\textbf{p}+\left[(\textbf{p}\cdot\textbf{x})\frac{k}{r^3}-2\textbf{p}\cdot\nabla U\right]\textbf{x}+(\textbf{p}\cdot\textbf{x})\nabla U \end{equation}
Now, if the potential $U=U(r)$ is central:
\begin{equation} (\nabla U)_j=\frac{\partial U}{\partial x_j}=\frac{dU}{dr}\frac{\partial r}{\partial x_j}=\frac{dU}{dr}\frac{x_j}{r} \end{equation}
so
\begin{equation} \nabla U=\frac{dU}{dr}\frac{\textbf{x}}{r}\end{equation}
Substituting back:
\begin{equation}\frac{d\mathcal A}{dt}=\frac{1}{r}\left(\frac{dU}{dr}-\frac{k}{r^2}\right)[r^2\textbf{p}-(\textbf{x}\cdot\textbf{p})\textbf{x}]\end{equation}
Now, you see that if $U$ has
exactly the newtonian form then the first parenthesis is zero and so the RL vector is conserved.
Maybe there's some slicker way to see it (Poisson brackets?), but this works anyway.
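The Poisson-bracket route amounts to checking $\{\mathcal A^j, H\} = 0$. As a quick sanity check (a numerical sketch, with $m=k=1$ and the sign convention of this answer), one can evaluate the brackets by central finite differences at a random phase-space point:

```python
import numpy as np

m = k = 1.0

def H(q, p):
    return np.dot(p, p) / (2 * m) - k / np.linalg.norm(q)

def A(q, p, j):
    # RL vector A = p x L - (k m / r) x, with L = x x p
    L = np.cross(q, p)
    return (np.cross(p, L) - k * m * q / np.linalg.norm(q))[j]

def poisson(f, g, q, p, h=1e-6):
    """{f, g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i), by central differences."""
    total = 0.0
    for i in range(3):
        e = np.zeros(3); e[i] = h
        dfdq = (f(q + e, p) - f(q - e, p)) / (2 * h)
        dfdp = (f(q, p + e) - f(q, p - e)) / (2 * h)
        dgdq = (g(q + e, p) - g(q - e, p)) / (2 * h)
        dgdp = (g(q, p + e) - g(q, p - e)) / (2 * h)
        total += dfdq * dgdp - dfdp * dgdq
    return total

q0 = np.array([0.8, 0.3, -0.5])
p0 = np.array([0.2, 1.1, 0.4])
brackets = [poisson(lambda q, p, j=j: A(q, p, j), H, q0, p0) for j in range(3)]
print(brackets)   # all three vanish up to finite-difference noise
```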
The symmetry is an example of an open symmetry, i.e. a symmetry group which varies from group action orbit to orbit. For bound trajectories, it's SO(4). For parabolic ones, it's SE(3). For hyperbolic ones, it's SO(3,1). Such cases are better handled by groupoids.
Conservation of the Runge-Lenz vector does not correspond to a symmetry of the Lagrangian itself. It arises from an invariance of the integral of the Lagrangian with respect to time, the classical action integral. Some time ago I wrote up a derivation of the conserved vector for any spherically symmetric potential:
The derivation is at the level of Goldstein and is meant to fill in the gap left by its omission from graduate-level classical mechanics texts.
(This post may be old, but we can add some details.) The conservation of the RL vector is not trivial: it relies on the force being central and, more specifically, on the potential being the newtonian $\frac{1}{r}$, which is invariant under rotations (as is any $\frac{1}{r^n}$, but the RL vector is conserved only for $n=1$, as shown by @quark1245).
Therefore we have the symmetry group SO(3), which has not 6 conserved quantities as said before but 3: the 3 generators of the symmetry, $J_i$, $i=1,2,3$, such that the symmetry transformation under an infinitesimal change $x \rightarrow x + \epsilon$ is given in the canonical formalism by $$ \delta_i X = \{X, J_i(\epsilon) \} $$ and the algebra is $$ \{ J_i, J_j \} = \epsilon_{ij}^k J_k. $$ They are conserved because, at least for the Kepler problem, the system is invariant under time translation, so the Hamiltonian is also conserved, and the calculations show that $$ \{H,J_i\}= 0. $$
Before their redefinition as shown on Wikipedia to see that the previous algebra is fulfilled, the generators of the rotations are : one is the angular momentum $L$ which shows that the movement is planar, therefore invariant under rotation around $L$, one is the RL vector which is in the plan, therefore perpendicular to $L$ and parallel to the major axis of the ellipse, and the third one has a name I don't remember, but is parallel to the minor axis.
We can see that there are only 3 degrees of freedom if we work in the reference frame such that $\vec{J}_1 = \vec{L} = (0,0,L_z)$; then the planar generators are $A = (A_x,0,0)$ and $B = (0,B_y,0)$.
It has been shown that they can be constructed from the Killing-Yano tensors (which encode symmetries), and this works also in dimensions greater than 3. A nice review of the LRL vector derivation can be found in HeckmanVanHaalten.
Looking at https://arxiv.org/pdf/1207.5001.pdf one gets a very nice solution. If one is not very keen on the mathematics, their basic idea is to use the infinitesimal transformation $$\delta x^i=\epsilon L^{ik}$$ where $L^{ik}=\dot{x}^ix^k-x^i\dot{x}^k$. Since angular momentum is conserved, kinetic energy won't change. On the other hand, the potential changes up to order $\epsilon^2$ like $$\frac{k}{r+\delta r}=\frac{k}{((x^i+\delta x^i)(x_i+\delta x_i))^{1/2}}=\frac{k}{r}\left(1-\frac{x_i\delta x^i}{r^2}\right)=\frac{k}{r}-\epsilon\frac{kx_iL^{ik}}{r^3}=\frac{k}{r}-\epsilon\frac{d}{dt}\left(\frac{kx^k}{r}\right).$$
Therefore, the change in the action is $$\delta S=\left[m\dot{x}_i\delta x^i\right]_{t_1}^{t_2}=\epsilon\left[m\dot{x}_iL^{ik}\right]_{t_1}^{t_2}=\epsilon\left[\frac{kx^k}{r}\right]_{t_1}^{t_2}.$$ This gives the conservation of the vector $$m\dot{x}_iL^{ik}-\frac{kx^k}{r},$$ which can be easily shown to be the Runge-Lenz vector.
Tagged: Determinant
August 5, 2019 at 10:19 am #32406, Aniruddha Bardhan (Participant)
See the attachment.
My approach: $B=\mathrm{adj}(A)$ and $C=\mathrm{adj}(B)$, so $\det(B)=\det(A)^2=2^2$ and $\det(C)=\det(B)^2=2^4$.
So $\det(2AB^TC)= 2^8$. I did not get any option with $2^8$.
Maybe I am wrong; kindly check.
August 28, 2019 at 11:46 am #34638, Jatin Kr Dey (Participant)
I think you should have to apply the following property :
$$ \displaystyle \det(cA) = c^{order(A)} \det(A) $$

August 28, 2019 at 1:18 pm #34646, Jatin Kr Dey (Participant)
Given, $$ cofactors(b_{ij})= c_{ij} \Rightarrow Adj(B)= C^T $$
$$ cofactors(a_{ij})= b_{ij} \Rightarrow Adj(A)= B^T $$
$$ det(A) = 2 $$
and the order of the matrices is 3 .
We have to apply the following:
$$ |Adj(A)| = |A|^{order(A) - 1} $$
$$ |A| = |A^T| $$
$$ |ABC| = |A| |B| |C| $$
$$ |cA| = c^{order(A)}|A| $$
So now $$ |Adj(A)| = 2^{3-1} = 2^2 = |B^T| = |B| $$
$$ |Adj(B)| = (2^{3-1})^{3-1}=|C^T|=|C| = 2^4$$
$$ |2A| = 2^{3}|A| = 2^4 $$
Therefore, $$ \displaystyle |2AB^TC| = |2A| \cdot |B^T| \cdot |C| = 2^4 \cdot 2^2 \cdot 2^4 = 2^{10} = \sum_{r=1}^{11} {10 \choose r-1} $$
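The computation can be sanity-checked numerically: build a random $3\times 3$ matrix rescaled so that $\det A = 2$, form the cofactor matrices $B$ and $C$ as in the problem statement, and evaluate $|2AB^TC|$ (the helper `cofactor` below is an illustrative construction via the adjugate):

```python
import numpy as np

rng = np.random.default_rng(0)

def cofactor(M):
    # cofactor matrix = adj(M)^T, with adj(M) = det(M) * inv(M)
    return (np.linalg.det(M) * np.linalg.inv(M)).T

A = rng.normal(size=(3, 3))
A *= np.cbrt(2.0 / np.linalg.det(A))   # rescale so det(A) = 2

B = cofactor(A)                        # b_ij = cofactor of a_ij, so Adj(A) = B^T
C = cofactor(B)                        # c_ij = cofactor of b_ij, so Adj(B) = C^T

value = np.linalg.det(2.0 * A @ B.T @ C)
print(round(value))   # 1024 = 2^10, matching the working above
```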
Defining parameters
Level: \( N = 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \)
Weight: \( k = 1 \)
Character orbit: \( [\chi] \) = 3600.bv (of order \(6\) and degree \(2\))
Character conductor: \( \operatorname{cond}(\chi) = 180 \)
Character field: \( \Q(\zeta_{6}) \)
Newforms: \( 0 \)
Sturm bound: \( 720 \)
Trace bound: \( 0 \)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{1}(3600, [\chi])\).
| | Total | New | Old |
| --- | --- | --- | --- |
| Modular forms | 104 | 0 | 104 |
| Cusp forms | 32 | 0 | 32 |
| Eisenstein series | 72 | 0 | 72 |
The following table gives the dimensions of subspaces with specified projective image type.
| | \(D_n\) | \(A_4\) | \(S_4\) | \(A_5\) |
| --- | --- | --- | --- | --- |
| Dimension | 0 | 0 | 0 | 0 |
In this work we analyze a direct product test in which each of two provers receives a subset of size n of a ground set U,
and the two subsets intersect in about (1-\delta)n elements. We show that if each of the provers provides labels to the n ... more >>>
Given a function $f:[N]^k\rightarrow[M]^k$, the Z-test is a three query test for checking if a function $f$ is a direct product, namely if there are functions $g_1,\dots g_k:[N]\to[M]$ such that $f(x_1,\ldots,x_k)=(g_1(x_1),\dots g_k(x_k))$ for every input $x\in [N]^k$.
This test was introduced by Impagliazzo et. al. (SICOMP 2012), who ... more >>>
Agreement tests are a generalization of low degree tests that capture a local-to-global phenomenon, which forms the combinatorial backbone of most PCP constructions. In an agreement test, a function is given by an ensemble of local restrictions. The agreement test checks that the restrictions agree when they overlap, and the ... more >>>
A function $f:[n_1] \times \cdots \times [n_d] \to F$ is a direct sum if it is of the form $f(a_1,\ldots,a_d) = f_1(a_1) + \cdots + f_d(a_d) \pmod 2$ for some $d$ functions $f_i:[n_i] \to F_i$ for all $i=1,\ldots,d$. We present a 4-query test which distinguishes between direct sums and functions that are ... more >>>
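To make the tested object concrete, here is a toy 4-query consistency check in the Boolean case (an illustrative test in the spirit of the abstract; the paper's actual test and its analysis may differ): query $f$ at two random points and at the two "hybrids" obtained by swapping a random set of coordinates, and check that the mod-2 sums agree. An exact direct sum always passes, since swapping coordinates merely redistributes the same terms $f_i(a_i)$:

```python
import random

random.seed(1)
d, n = 4, 5                                        # d coordinates, alphabet [n]
tables = [[random.randrange(2) for _ in range(n)] for _ in range(d)]

def f(a):
    """An exact direct sum: f(a) = f_1(a_1) + ... + f_d(a_d) mod 2."""
    return sum(tables[i][a[i]] for i in range(d)) % 2

def direct_sum_test(func, trials=200):
    for _ in range(trials):
        a = [random.randrange(n) for _ in range(d)]
        b = [random.randrange(n) for _ in range(d)]
        S = {i for i in range(d) if random.randrange(2)}   # random swap set
        hy1 = [b[i] if i in S else a[i] for i in range(d)]
        hy2 = [a[i] if i in S else b[i] for i in range(d)]
        if (func(a) + func(b)) % 2 != (func(hy1) + func(hy2)) % 2:
            return False
    return True

print(direct_sum_test(f))          # True: a direct sum passes every trial
g = lambda a: (a[0] * a[1]) % 2    # not a direct sum; rejected with high probability
print(direct_sum_test(g))
```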
All you have to do is find an integer $a$ so that $a\ne 15x-11$ for any integer $x$. In other words, find an example where $a+11$ is not divisible by $15$. So long as $a$ divided by $15$ has any remainder other than $4$, there is no $x$ with $f(x)= a$, so $0,1,2,3,5,6,7,8,\ldots$ are all numbers that $f$ does not map to.
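The residue-class claim is easy to check by brute force (a two-line sketch): every value $15x-11$ leaves remainder $4$ on division by $15$, so those are exactly the integers the map hits.

```python
# Values f(x) = 15x - 11 for a range of integers x, reduced mod 15.
hit = {(15 * x - 11) % 15 for x in range(-1000, 1000)}
print(hit)       # {4}: only remainder 4 is ever produced

# So any a with a % 15 != 4 (e.g. 0, 1, 2, 3, 5, ...) witnesses non-surjectivity.
missed = [a for a in range(15) if a % 15 != 4]
print(missed)
```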
==== more stuff====
Just go to definitions.
A function $f:A\to B$ is onto if for every $b \in B$ there is an $x \in A$ so that $f(x) = b$.
Or in other words, every element of $B$ gets mapped to by some element of $A$ (or, informally, "gets hit"). So: onto means every element gets mapped to; not onto means there are elements that do not get mapped to.
So to prove onto: you take an arbitrary element $b \in B$ and try to show there is some $x_b \in A$ so that $f(x_b) = b$.
For example, take $f:\mathbb R \to \mathbb R$ via $f(x) = x^3$. For $b\in \mathbb R$, must there be an $x_b$ so that $f(x_b) = x_b^3 = b$? Well, if $x_b^3 = b$ then $x_b = \sqrt[3]{b}$. Does $x_b = \sqrt[3]{b}$ exist for all $b$? Yes, yes it does. So $f$ is onto.
But suppose instead we had $g:\mathbb Z \to \mathbb Z$ via $g(x) = x^3$. Is $g$ onto? Well, let $b$ be an arbitrary integer. Must there be an $x_b\in \mathbb Z$ so that $g(x_b) = x_b^3 = b$? Again, if $x_b^3 = b$ then $x_b = \sqrt[3]{b}$. Does $x_b = \sqrt[3]{b} \in \mathbb Z$ exist for all integers $b$? No, it does not. It exists only if $b$ is a perfect cube, and not all integers are perfect cubes.
Watch out that your domains and codomains are correct, though. Consider $h:\mathbb R \to \mathbb Z$ via $h(x) = x^3$. Is $h$ onto? Well, again, for any arbitrary $b \in \mathbb Z$, does there exist an $x_b\in \mathbb R$ so that $h(x_b) = x_b^3 = b$? If so, that would mean $x_b = \sqrt[3]{b}$. For every integer $b$, does there exist a real $\sqrt[3]{b}$? The answer to that is yes, yes it does. So $h$ is onto.
To prove something is not onto:
1) It's valid to provide a simple counter example:
Is $g:\mathbb Z \to \mathbb Z$ via $g(x) = x^3$ onto? Well, no: if $b= 31 \in \mathbb Z$ and $x^3 = 31$, then $27 < x^3 = 31 < 64$, so $3 < x < 4$ and there is no integer $x$; that is, $31$ is not a perfect cube. (That was probably overkill.)
2) If you don't know and aren't sure if there is a counter example, just follow through and see what being onto implies. And see whether that is or is not airtight.
Is $g$ onto? Let $b\in \mathbb Z$ be an arbitrary integer, and suppose $x_b$ is an integer so that $x_b^3 = b$, i.e. $x_b = \sqrt[3]{b}$. Is that true for every $b$? If so, then $b = x_b\cdot x_b\cdot x_b$, so every integer must be of that form. What about $b + 1$? That would mean $b+1 = x_c^3$ with $x_c > x_b$, so $x_c\ge x_b + 1$ and hence $x_c^3 \ge x_b^3 + 3x_b^2 + 3x_b + 1 > x_b^3 + 1 = b+1$. So $b$ and $b+1$ can't both be mapped to (unless $x_b$ is $0$ or $-1$, but $b$ was arbitrary, so we can assume otherwise).
.........
So: is $f:\mathbb Z\to \mathbb Z$ via $f(x)= 15x -11$ onto?
If $b$ is an arbitrary integer and $x_b$ is such that $15x_b -11 = b$, then $x_b = \frac {b+11}{15}=\frac b{15} + \frac {11}{15}$ must be an integer. And it doesn't have to be.
A simple counterexample such as $b = 7$ shows it doesn't have to be an integer. Furthermore, we can show that $\frac {b+11}{15}$ is an integer only if $b$ has a remainder of $4$ when divided by $15$.
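The residue argument above can be double-checked with a short script (a sketch; the search range is arbitrary):

```python
# f(x) = 15x - 11 only produces integers congruent to 4 mod 15,
# so it misses every other residue class and is not onto Z.
hit = {(15 * x - 11) % 15 for x in range(-1000, 1000)}
print(hit)   # {4}

# Concrete counterexample b = 7: (b + 11)/15 is not an integer.
assert (7 + 11) % 15 != 0
```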
After taking some measurement, how can a qudit be "unmeasured"? Is unmeasurement (i.e. reversing a quantum computation) possible?
I am not really sure what you mean by "unmeasuring" a qubit, but if you mean recovering the measured qubit by manipulating the post-measurement state, then I am afraid the answer is no. When a quantum state is measured, its superposition collapses to one of the possible outcomes of the measurement, and so the original qubit is lost.
The third postulate of quantum mechanics describes measurements in the quantum world, and it says the following:
Quantum measurements are described by a collection $\{M_m\}$ of measurement operators. These are operators acting on the state space of the system being measured. The index $m$ refers to the measurement outcomes that may occur in the experiment. If the state of the quantum system is $|\psi\rangle$ immediately before the measurement, then the probability that result $m$ occurs is given by \begin{equation} p(m)=\langle\psi|M_m^\dagger M_m|\psi\rangle, \end{equation} and the state of the system after the measurement is \begin{equation} \frac{M_m|\psi\rangle}{\sqrt{\langle\psi|M_m^\dagger M_m|\psi\rangle}}. \end{equation}
So the state collapses into the post-measurement state defined by postulate 3, and the previous quantum state is lost irreversibly. See also the Wikipedia entry on wave function collapse, which explains the collapse of quantum states after measurement.
Consequently, to perform the same measurement again, the quantum state must be prepared anew before the measurement so that the experiment can be repeated.
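A minimal numerical sketch of postulate 3 with NumPy; the state and the computational-basis measurement operators here are my own example, not from the question:

```python
import numpy as np

# Projective measurement of alpha|0> + beta|1> in the computational basis.
psi = np.array([0.6, 0.8])                        # alpha = 0.6, beta = 0.8
M = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]    # M_0 = |0><0|, M_1 = |1><1|

# p(m) = <psi| M_m^dagger M_m |psi>
probs = [psi.conj() @ Mm.conj().T @ Mm @ psi for Mm in M]
print(probs)                                      # p(0) ~ 0.36, p(1) ~ 0.64

# Collapsed (post-measurement) state given outcome 0:
post0 = (M[0] @ psi) / np.sqrt(probs[0])
print(post0)                                      # ~ [1, 0]: alpha and beta are gone

# M_0 is a rank-1 projection, so it has no inverse: the collapse is irreversible.
assert np.linalg.matrix_rank(M[0]) == 1
```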
You can compute by measuring (see cluster-state, i.e. measurement-based, quantum computation), but the whole thing that makes measurement different in quantum mechanics is that it destroys the superposition. It can't be undone. Once you measure, the qudit isn't in a state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle + \cdots +\gamma|n\rangle$ but in a state $|\psi\rangle = |0\rangle$ or $|\psi\rangle = |1\rangle$ or what have you, according to the probabilities. When you measure the qudit again soon after, it stays as either $|0\rangle$ or $|1\rangle$. The superposition is gone. We can't get it back (except by redoing the same operations that led our qudit to that point, in which case it'll be very similar), because we can't clone a qubit, so we can't figure out what $\alpha$ and $\beta$ are. Tl;dr: no.
A unitary operation is reversible, but measurement is a projection operation, which is not reversible. Think of matrix inverses: a projection matrix has lower rank and does not have an inverse.
Assume, an extension of the lambda calculus with terms $t$ and values $v$ is defined in big-step operational semantics with evaluation relation $t \Downarrow v$.
It is intuitive to assume that $\beta$-equivalence holds, e.g.
(λx. t) unit $\equiv_\beta$ t, when x is not free in t
It is however unclear to me how $\equiv_\beta$ can be precisely defined in such a setting: obviously, there is no small-step relation $\rightarrow_\beta$ that can be used to express a single reduction step, since the evaluation relation relates terms to values.
On the other hand, beta-equivalent terms do not evaluate to syntactically equal terms (e.g. when comparing abstractions which already are values).
So how does one define this equivalence in such a case? |
I have a state $\left|\Psi\right>=\frac{\left|1\right>+\left|0\right>}{\sqrt{2}}$ in the $z$-spin basis, and I want to calculate the probabilities of this state for the eigenvectors of the operator $-\frac{1}{\sqrt2} S_x + S_z$, which (in the $z$-basis) are $\begin{pmatrix} 1-\sqrt2\\ 1 \end{pmatrix}$ and $\begin{pmatrix} 1+\sqrt2\\ 1 \end{pmatrix}$. So I take the norm squared of $\left\langle\begin{pmatrix} 1\pm\sqrt2\\ 1 \end{pmatrix}\Big|\Psi\right\rangle$, which gives me $1$ in both cases; that is no good for a probability. Where am I wrong?
Your operator is $\frac{1}{\sqrt{2}}S_x +S_z$. The eigenvectors of this operator are not what you have written down. Properly normalized, they are

$v_1 = \frac{1}{\sqrt{6-2\sqrt{6}}}\begin{pmatrix} 1\\ \sqrt{3} - \sqrt{2} \end{pmatrix}$

$v_2 = \frac{1}{\sqrt{6+2\sqrt{6}}}\begin{pmatrix} 1\\ -\sqrt{3} - \sqrt{2} \end{pmatrix}$
Thus, for the state $\left|\Psi\right>=\frac{\left|1\right>+\left|0\right>}{\sqrt{2}} = \begin{pmatrix} \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{pmatrix}$, the probabilities of the two outcomes are

$|\langle v_1\vert\Psi\rangle|^2 = \frac{1}{6}(3+\sqrt{3})$

$|\langle v_2\vert\Psi\rangle|^2 = \frac{1}{6}(3-\sqrt{3})$,

which correctly sum to $1$.
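These numbers can be checked numerically. A sketch with NumPy, using the operator as restated in the answer and dropping the common $\hbar/2$ factor (which changes neither the eigenvectors nor the probabilities):

```python
import numpy as np

# Operator (1/sqrt(2)) S_x + S_z in the z-basis, up to the overall hbar/2.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
A = sx / np.sqrt(2) + sz

psi = np.array([1.0, 1.0]) / np.sqrt(2)     # (|0> + |1>)/sqrt(2)

# eigh returns orthonormal eigenvectors as columns, so the Born-rule
# probabilities are just squared overlaps with each column.
evals, evecs = np.linalg.eigh(A)
probs = np.abs(evecs.conj().T @ psi) ** 2

print(np.sort(probs))    # ~ [0.2113, 0.7887], i.e. (3 -/+ sqrt(3))/6
assert np.isclose(probs.sum(), 1.0)         # a proper probability distribution
```

The key point is that `eigh` normalizes the eigenvectors; using the unnormalized vectors, as in the question, spoils the probabilities.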
Global boundedness in a quasilinear fully parabolic chemotaxis system with indirect signal production
1. School of Mathematical Sciences, Peking University, Beijing, 100871, China
2. School of Mathematical Sciences, Dalian University of Technology, Dalian, 116024, China
In this paper we develop a new and convenient technique, with fractional Gagliardo-Nirenberg type inequalities inter alia involved, to treat the quasilinear fully parabolic chemotaxis system with indirect signal production: $ u_t = \nabla\cdot(D(u)\nabla u-S(u)\nabla v) $, $ \tau_1v_t = \Delta v-a_1v+b_1w $, $ \tau_2w_t = \Delta w-a_2w+b_2u $, under homogeneous Neumann boundary conditions in a bounded domain $ \Omega\subset\Bbb{R}^{n} $ ($ n\geq 1 $), where $ \tau_i,a_i,b_i>0 $ ($ i = 1,2 $) are constants, and the diffusivity $ D $ and the density-dependent sensitivity $ S $ satisfy $ D(s)\geq a_0(s+1)^{-\alpha} $ and $ 0\leq S(s)\leq b_0(s+1)^{\beta} $ for all $ s\geq 0 $ with $ a_0,b_0>0 $ and $ \alpha,\beta\in\Bbb R $. It is proved that if $ \alpha+\beta<3 $ and $ n = 1 $, or $ \alpha+\beta<4/n $ with $ n\geq 2 $, for any properly regular initial data, this problem has a globally bounded and classical solution. Furthermore, consider the quasilinear attraction-repulsion chemotaxis model: $ u_t = \nabla\cdot(D(u)\nabla u)-\chi\nabla\cdot(u\nabla z)+\xi\nabla\cdot(u\nabla w) $, $ z_t = \Delta z-\rho z+\mu u $, $ w_t = \Delta w-\delta w+\gamma u $, where $ \chi,\mu,\xi,\gamma,\rho,\delta>0 $, and the diffusivity $ D $ fulfills $ D(s)\geq c_0(s+1)^{M-1} $ for any $ s\geq 0 $ with $ c_0>0 $ and $ M\in\Bbb R $. As a corollary of the aforementioned assertion, it is shown that when the repulsion cancels the attraction (i.e. $ \chi\mu = \xi\gamma $), the solution is globally bounded if $ M>-1 $ and $ n = 1 $, or $ M>2-4/n $ with $ n\geq 2 $. This seems to be the first result for this quasilinear fully parabolic problem that genuinely concerns the contribution of repulsion.

Mathematics Subject Classification: Primary: 35B35, 35B40, 35K55; Secondary: 92C17.

Citation: Mengyao Ding, Wei Wang. Global boundedness in a quasilinear fully parabolic chemotaxis system with indirect signal production. Discrete & Continuous Dynamical Systems - B, 2019, 24 (9) : 4665-4684. doi: 10.3934/dcdsb.2018328