|---|
Phase and Amplitude Calculation for a Pure Complex Tone in a DFT using Multiple Bins
Introduction
This is an article to hopefully give a better understanding of the Discrete Fourier Transform (DFT) by deriving exact formulas to calculate the phase and amplitude of a pure complex tone from several DFT bin values and knowing the frequency. This article is functionally an extension of my prior article "Phase and Amplitude Calculation for a Pure Complex Tone in a DFT"[1] which used only one bin for a complex tone, but it is actually much more similar to my approach for real valued tones in my article "Phase and Amplitude Calculation for a Pure Real Tone in a DFT: Method 1"[2].
The Base Definitions
The same variables and equation for describing the tone from my previous articles will be used. The variables are amplitude ($ M $), frequency ($ f $), phase ($ \phi $), and the sample count in the frame ($ N $). The signal definition is: $$ S_n = M \cdot { e^{ i(\alpha n + \phi) } } \tag {1} $$ Where: $$ \alpha = f \cdot \frac{ 2\pi }{ N } \tag {2} $$ The same $1/N$ normalized DFT definition is also used. $N$ is also the number of bins in the DFT so $k$ ranges from $0$ to $N-1$. $$ Z_k = \frac{ 1 }{ N } \sum_{n=0}^{N-1} { S_n e^{ -i\beta_k n } } \tag {3} $$ Where: $$ \beta_k = k \cdot \frac{ 2\pi }{ N } \tag {4} $$
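As a quick sanity check, definitions (1) through (4) can be exercised directly in a few lines of NumPy. This is only a sketch with arbitrary example values for $M$, $f$, $\phi$, and $N$ (not values from the article); it relies on the fact that `np.fft.fft` matches (3) up to the missing $1/N$ factor.

```python
import numpy as np

# Example parameters (assumptions for illustration, not from the article)
N = 32                          # samples per frame = number of bins
M, f, phi = 1.0, 3.4, 0.1234    # amplitude, cycles per frame, phase

alpha = f * 2 * np.pi / N       # eq (2)
n = np.arange(N)
S = M * np.exp(1j * (alpha * n + phi))   # eq (1)

# eq (3): np.fft.fft uses e^{-2 pi i k n / N} but no 1/N, so divide by N
Z = np.fft.fft(S) / N

# With an integer frequency the tone lands entirely in one bin:
S3 = M * np.exp(1j * (3 * 2 * np.pi / N * n + phi))
Z3 = np.fft.fft(S3) / N
print(abs(Z3[3]))               # equals M for a noiseless integer-frequency tone
```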
Finding the Frequency
If the frequency of the tone is not known, the two or three DFT bin values nearest the peak magnitude bin can be used to find it. If the magnitude of a bin adjacent to the peak is fairly large, the two bin frequency calculation formula should be used[3]. If the bins adjacent to the peak are small in comparison, the three bin formula [4] centered at the peak should be used. In either case, the frequency can be calculated. In the noiseless single tone case, the calculation will be exact. There are other estimation methods as well.
Integer Frequency Case
If the frequency is an integer value in terms of cycles per frame, the next few sections do not apply. Simply set $q$ to the bin value of the frequency bin and skip ahead to the "Final Results" section.
Bin Value Formula for a Pure Complex Tone
The alternative bin value formula found in my previous blog article titled "DFT Bin Value Formulas for Pure Complex Tones" [5] will be used instead of the main one. The reason is that it replaces a complex valued divide with a real valued one. This saves a lot of calculations.
First a change of variable is needed: $$ \delta_k = \alpha - \beta_k \tag {5} $$ The formula is then: $$ Z_k = \frac{ M }{ N } e^{ i \left[ -\delta_k (N-1) / 2 + \phi \right] } \cdot \frac{ \sin( \delta_k N / 2 ) }{ \sin( \delta_k / 2 ) } \tag {6} $$
Vector Formulation
The formula (6) can be rearranged slightly to make it more stackable. $$ Z_k = q \cdot e^{ i \left[ -\delta_k \frac{(N-1)}{2} \right] } \cdot \frac{ \sin( N \delta_k / 2 ) }{ N \sin( \delta_k / 2 ) } \tag {7} $$ Where: $$ q = M e^{ i \phi } \tag {8} $$ The complex value $q$ contains the amplitude and phase being sought. Therefore this is the value to be solved for. This can be done by stacking the bin value equations around the peak bin $p$ into a vector equation. The size of the vectors is flexible. $$ \begin{bmatrix} . \\ . \\ Z_{p-1}\\ Z_{p}\\ Z_{p+1}\\ . \\ . \\ \end{bmatrix} = q \cdot \begin{bmatrix} . \\ . \\ e^{ i \left[ -\delta_{p-1} \frac{(N-1)}{2} \right] } \cdot \frac{ \sin( N \delta_{p-1} / 2 ) }{ N \sin( \delta_{p-1} / 2 ) } \\ e^{ i \left[ -\delta_{p} \frac{(N-1)}{2} \right] } \cdot \frac{ \sin( N \delta_{p} / 2 ) }{ N \sin( \delta_{p} / 2 ) } \\ e^{ i \left[ -\delta_{p+1} \frac{(N-1)}{2} \right] } \cdot \frac{ \sin( N \delta_{p+1} / 2 ) }{ N \sin( \delta_{p+1} / 2 ) } \\ . \\ . \\ \end{bmatrix} \tag {9} $$ This equation is much simpler in vector notation. $$ \vec Z = q \vec Y \tag {10} $$ $\vec Z$ has the selected bin values from the DFT and $\vec Y$ is the calculated basis vector.
Solving for q
Finding $q$ is a standard Linear Algebra best fit with a single basis vector. Dotting both sides of (10) with the complex conjugate of $\vec Y$ turns the vector equation into a simple scalar one. $$ \vec Z \cdot \vec Y^* = q \vec Y \cdot \vec Y^* \tag {11} $$ The value of $q$ can now be calculated. $$ q = \frac{ \vec Z \cdot \vec Y^* }{ \vec Y \cdot \vec Y^* } \tag {12} $$
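The whole procedure of (5) through (12) fits in a few lines of NumPy. Treat this as a hedged sketch rather than the author's reference code: the tone parameters are arbitrary example values, and I write the basis exponent as $+\delta_k(N-1)/2$, which is the sign that makes the expression the exact closed form of $\frac{1}{N}\sum_{n=0}^{N-1} e^{i\delta_k n}$ with $\delta_k = \alpha - \beta_k$ as defined in (5).

```python
import numpy as np

# Example tone (assumed values) and its 1/N normalized DFT
N = 32
M, f, phi = 1.0, 3.4, 0.1234
alpha = f * 2 * np.pi / N
n = np.arange(N)
Z_all = np.fft.fft(M * np.exp(1j * (alpha * n + phi))) / N

p = int(round(f))                     # peak bin
ks = np.arange(p - 2, p + 3) % N      # five bins centered on the peak
delta = alpha - ks * 2 * np.pi / N    # eq (5)

# Basis vector: closed form of (1/N) * sum_n exp(i * delta * n)
Y = np.exp(1j * delta * (N - 1) / 2) * np.sin(N * delta / 2) / (N * np.sin(delta / 2))

# eq (12): q = (Z . Y*) / (Y . Y*); np.vdot conjugates its first argument
Zv = Z_all[ks]
q = np.vdot(Y, Zv) / np.vdot(Y, Y)

print(np.abs(q), np.angle(q))         # recovers M and phi (noiseless case)
```

In the noiseless case the recovery is exact to machine precision; with noise added to $S_n$, the same formula gives the least-squares estimate whose error behavior the Numerical Results section tabulates.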
The Final Results
Thanks to the polar form of complex numbers, once the value of $q$ is known, finding the values of $\phi$ and $M$ is very straightforward. $$ \phi = \arg( q ) = \operatorname{atan2}( Im[q], Re[q] ) \tag {13} $$ $$ M = \| q \| = \sqrt{ (Im[q])^2 + (Re[q])^2 } \tag {14} $$ That's all there is to it.
Numerical Results
Here are some numeric results for using different length vectors. A length one vector is identical to the one bin solution provided in my previous blog article[1]. The result values are the RMS values for the errors multiplied by 100. The magnitude of the signal is one.
***********************************************************
The sample count is 32 and the run size is 10000
The phase value is 0.123400
Errors are shown as RMS at 100x actual value
Target Noise Level = 0.100

Freq   One Vec       Two Vec       Three Vec     Four Vec      Five Vec
       Phase  Mag    Phase  Mag    Phase  Mag    Phase  Mag    Phase  Mag
----   ------------  ------------  ------------  ------------  ------------
 3.1   1.30   1.27   1.28   1.27   1.28   1.26   1.28   1.26   1.28   1.26
 3.2   1.35   1.36   1.31   1.33   1.29   1.31   1.29   1.30   1.28   1.30
 3.3   1.49   1.49   1.36   1.37   1.33   1.34   1.31   1.33   1.30   1.32
 3.4   1.68   1.68   1.39   1.41   1.35   1.37   1.32   1.34   1.31   1.33
 3.5   1.96   1.98   1.40   1.41   1.36   1.37   1.33   1.33   1.31   1.32
 3.6   1.69   1.68   1.40   1.40   1.37   1.37   1.34   1.33   1.33   1.32
 3.7   1.48   1.47   1.36   1.35   1.33   1.32   1.31   1.31   1.30   1.30
 3.8   1.38   1.36   1.33   1.32   1.32   1.30   1.31   1.29   1.31   1.29
 3.9   1.30   1.30   1.30   1.29   1.29   1.29   1.29   1.28   1.29   1.28
***********************************************************
The results clearly show that when the frequency is near an integer bin, increasing the number of bins in the vector does not significantly improve the results. However, when the frequency is closer to being between two bins, so there is more leakage, using the extra bin values reduces the errors. The values in the last two columns indicate diminishing returns on extra bins.
Conclusion
The phase and amplitude calculations of a pure complex tone from DFT bin values are significantly simpler than for a real valued tone. The derivation presented in this article is not very complicated and the results are easy to verify. It is important to keep in mind that these equations are exact, as no approximations were used in their derivation.
References
[1] Dawg, Cedron, Phase and Amplitude Calculation for a Pure Complex Tone in a DFT
[2] Dawg, Cedron, Phase and Amplitude Calculation for a Pure Real Tone in a DFT: Method 1
[3] Dawg, Cedron, A Two Bin Exact Frequency Formula for a Pure Complex Tone in a DFT
[4] Dawg, Cedron, Three Bin Exact Frequency Formulas for a Pure Complex Tone in a DFT
[5] Dawg, Cedron, DFT Bin Value Formulas for Pure Complex Tones
I am currently reading Coleman's lecture note on QFT.(https://arxiv.org/abs/1110.5013) I have several questions regarding the scattering theory. Let $\phi$ be a real scalar field, and consider the interaction Hamiltonian $H_I(x)=g\phi(x)\rho(\vec{x})f(t)$, where $f$ represents the adiabatic turning on/off function.
The following is pp.91-92 of the lecture note:
Questions:
What is the difference between the vacuum with respect to $H_0$ and the vacuum with respect to $H_0 + H_I$? In the lecture note these two vacua are denoted by different notations, $|0\rangle$ and $|0\rangle_P$.
A somewhat ad hoc counterterm $a=E_0$ was added to $H_I$ to fix the divergent phase in $\langle0|S|0\rangle$. Adding $a$ will fix the divergent phase, but I think it does not necessarily mean that $\langle0|S|0\rangle =1$. (As described in the lecture note, the correct expression should be $\langle0|S|0\rangle=e^{-i(\gamma_- +\gamma_+)}$.)
I'll try to give you an equivalent definition, which I think is a bit clearer, I will leave the equivalence to you in first instance. I might add more explanation later.
First let's consider the case that $n=2$ and $m=1$, (actually what I will say works equally well for arbitrary $n$, but $n=2$ is the simplest case that is new to us).
Let me try to be concrete: suppose we have a function $f: \mathbb{R}^{2} \rightarrow \mathbb{R}$, and we want to compute the derivative of $f$, say at $0 \in \mathbb{R}^{2}$. That is, we want to figure out how the output of $f$ changes if we change the input. The idea is to use the fact that we know how to differentiate functions from $\mathbb{R}$ to $\mathbb{R}$. Suppose we have any $v \in \mathbb{R}^{2}$, then we can get a function from $\mathbb{R}$ to $\mathbb{R}$ as follows\begin{align}f_{v}: \mathbb{R} &\rightarrow \mathbb{R}, \\t &\mapsto f(tv).\end{align}This function is (a reparametrization of) the restriction of $f$ to the line spanned by $v$. We know how to differentiate such a function with respect to $t$ (it might not be differentiable, in which case $f: \mathbb{R}^{2} \rightarrow \mathbb{R}$ is not differentiable). We thus get a number\begin{equation} D_{v} f := \frac{\text{d}}{\text{d}t}\bigg|_{t=0} f(tv),\end{equation}which tells us how quickly the output of the function $f$ changes if we vary the input along the line spanned by $v \in \mathbb{R}^{2}$.
What was described above makes sense for any vector $v \in \mathbb{R}^{2}$, so we have a map from $\mathbb{R}^{2}$ to $\mathbb{R}$,\begin{align}\mu: \mathbb{R}^{2} &\rightarrow \mathbb{R}, \\v &\mapsto D_{v}f = \frac{\text{d}}{\text{d}t}\bigg|_{t=0} f(tv).\end{align}Now, I claim that the map $\mu$ is linear, and furthermore satisfies the equation that you wrote down (with $a=0$):\begin{equation}\lim_{h \rightarrow 0} \frac{\|f(h) - f(0) - \mu(h)\|}{\|h\|} = 0.\end{equation}Note that $h \in \mathbb{R}^{2}$. It's up to you to show that $\mu$ satisfies this equation (and that any $\mu$ that satisfies this equation is the derivative). (I might add some steps to show this later.)
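If it helps, the defining limit can be checked numerically. Here is a small Python sketch (the function $f(x,y)=x^2+3xy+2y$ is my own example choice, not from the question) that approximates $D_v f$ by a central difference and shows both the linearity and the shrinking ratio:

```python
import numpy as np

def f(p):
    # example function f: R^2 -> R (an assumption for illustration)
    x, y = p
    return x**2 + 3.0 * x * y + 2.0 * y

def mu(v, eps=1e-6):
    # directional derivative D_v f at 0 via a central difference
    v = np.asarray(v, dtype=float)
    return (f(eps * v) - f(-eps * v)) / (2 * eps)

# linearity spot check: mu(v + w) == mu(v) + mu(w)
v, w = np.array([1.0, 2.0]), np.array([-0.5, 1.0])
print(mu(v + w), mu(v) + mu(w))

# the defining limit: ||f(h) - f(0) - mu(h)|| / ||h|| -> 0 as h -> 0
for t in (1e-1, 1e-2, 1e-3):
    h = t * np.array([1.0, 1.0])
    print(abs(f(h) - f(np.zeros(2)) - mu(h)) / np.linalg.norm(h))
```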
Now I have claimed that $\mu: \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$ is the derivative of $f$ at $0$ (or $a$), I should maybe tell you a bit about what it would look like in the case that $m=n=1$. In this case we see (exercise!) that for $s \in \mathbb{R}$\begin{equation} \mu(s) = s \frac{\text{d}}{\text{d}t}\bigg|_{t=0} f(t).\end{equation}
As remarked above, I have not really used the fact that $n=2$; the entire story holds equally well for arbitrary $n$. For higher $m$, we should view a function $f: \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$ as $m$ functions $f_{i}: \mathbb{R}^{n} \rightarrow \mathbb{R}$.
I am trying to make sense of different notations used in measuring lattices, in particular before and after a basis reduction. Specifically, I am trying to get bounds and estimates for the size of the shortest and of all the basis vectors after lattice reduction, not only in the usual L2 norm $||\vec{x}|| = (\Sigma x_i^2)^{1/2}$, but also in the supremum (L$\infty$) and the taxicab (L1) norm. I have some elementary questions and possible misunderstandings which might be best clarified by a good introductory text.
I think I have seen a statement of bounds for LLL in all of these norms in a slide from some talk, but I am not sure where, and would additionally be interested in some kind of motivation or reasoning (which doesn't have to be a full-fledged proof).
I am currently also a bit confused by having seen (or misunderstood?) the determinant of a lattice (basis) that occurs in such formulas, for both tight bounds and heuristic estimates, defined in two different ways: as either the matrix determinant of any basis $B = \{ \vec{b}_i \}$ of the lattice, or as the square root of the matrix determinant of $B^{T}B$. I wonder if there are common situations (e.g. basis vectors forming a triangular matrix...?) where these are identical, but if so, then I don't quite see it myself. Is there a common convention for the determinant of a lattice?
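On the determinant question, one observation that may resolve it: for a full-rank lattice whose basis matrix $B$ is square, the two definitions agree up to sign, because $\det(B^{T}B)=\det(B)^2$; the square-root form is the one that still makes sense when the basis vectors live in a higher-dimensional ambient space. A quick NumPy check (the basis matrix below is an arbitrary example of my choosing):

```python
import numpy as np

# An example full-rank integer basis (rows are basis vectors; an assumption).
# For a square B, det(B^T B) = det(B)^2, so the two conventions coincide.
B = np.array([[2., 1., 0., 0.],
              [0., 3., 1., 0.],
              [0., 0., 1., 4.],
              [1., 0., 0., 2.]])

d1 = abs(np.linalg.det(B))            # |matrix determinant|
d2 = np.sqrt(np.linalg.det(B.T @ B))  # sqrt of the Gram determinant
print(d1, d2)                         # identical up to rounding
```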
Another source of confusion to me is that some bounds are based on the (usually unknown) shortest vector (often called $\lambda_1$) of a lattice, whilst others state what seems to be very similar bounds based on the determinant of the lattice. I understand that there the Gaussian heuristic links these two, at least approximately. But which of these actually gives a tight bound rather than something based on a heuristic?
What would be a good, and ideally comprehensive, source for this topic? Unfortunately, I'm bringing the additional burden of a somewhat selective interest in maths. Basically, I can do simple quantum mechanical calculations, but I cannot talk about it with a mathematician without a translator.
Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE
(Elsevier, 2017-11)
Two-particle correlations in relative azimuthal angle ($\Delta\phi$) and pseudorapidity ($\Delta\eta$) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...
Electroweak boson production in p–Pb and Pb–Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02$ TeV with ALICE
(Elsevier, 2017-11)
W and Z bosons are massive weakly-interacting particles, insensitive to the strong interaction. They provide therefore a medium-blind probe of the initial state of the heavy-ion collisions. The final results for the W and ...
Investigating the Role of Coherence Effects on Jet Quenching in Pb-Pb Collisions at $\sqrt{s_{NN}} =2.76$ TeV using Jet Substructure
(Elsevier, 2017-11)
We report measurements of two jet shapes, the ratio of 2-Subjettiness to 1-Subjettiness ($\it{\tau_{2}}/\it{\tau_{1}}$) and the opening angle between the two axes of the 2-Subjettiness jet shape, which is obtained by ...
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Multiplicity dependence of identified particle production in proton-proton collisions with ALICE
(Elsevier, 2017-11)
The study of identified particle production as a function of transverse momentum ($p_{\text{T}}$) and event multiplicity in proton-proton (pp) collisions at different center-of-mass energies ($\sqrt{s}$) is a key tool for ...
Probing non-linearity of higher order anisotropic flow in Pb-Pb collisions
(Elsevier, 2017-11)
The second and the third order anisotropic flow, $V_{2}$ and $V_3$, are determined by the corresponding initial spatial anisotropy coefficients, $\varepsilon_{2}$ and $\varepsilon_{3}$, in the initial density distribution. ...
The new Inner Tracking System of the ALICE experiment
(Elsevier, 2017-11)
The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ...
Neutral meson production and correlation with charged hadrons in pp and Pb-Pb collisions with the ALICE experiment at the LHC
(Elsevier, 2017-11)
Among the probes used to investigate the properties of the Quark-Gluon Plasma, the measurement of the energy loss of high-energy partons can be used to put constraints on energy-loss models and to ultimately access medium ...
Direct photon measurements in pp and Pb-Pb collisions with the ALICE experiment
(Elsevier, 2017-11)
Direct photon production in heavy-ion collisions provides a valuable set of observables to study the hot QCD medium. The direct photons are produced at different stages of the collision and escape the medium unaffected. ...
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE
(Elsevier, 2017-11)
Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...
1. Series 27. Year
(4 points)3. bubble in a pipeline
A horizontal pipeline with a flowing liquid contains a small bubble of gas. How do the dimensions of this bubble change when it reaches a narrower point of the pipeline? Can you find some applications of this phenomenon? What problems could it cause? Assume that the flow is laminar.
Karel was thinking about air fresheners.
(4 points)4. cube in a pool
A large ice cube placed at the bottom of an empty pool starts to melt. Assume that the melting is isotropic in the sense that the cube stays geometrically similar at all times. What fraction of the cube needs to melt before it starts to float in the water? The surface area of the pool floor is $S$, and the length of an edge of the cube before it started melting was $a$.
Lukáš was staring at a frozen town.
(5 points)5. a bead
A small bead of mass $m$ and charge $q$ is free to move in a horizontal tube. The tube is placed between two spheres with charges $Q=-q$. The spheres are separated by a distance $2a$. What is the frequency of small oscillations around the equilibrium point of the bead? You can neglect any friction in the tube.
Hint: When the bead is only slightly displaced, the force acting on it changes negligibly.
Radomír was rolling in a pipe.
(5 points)P. speed of light
What would be the world like if the speed of light was only $c=1000\;\mathrm{km}\cdot h^{-1}$ while all the other fundamental constants stayed unchanged? What would be the impact on life on Earth? Would it even be possible for people to exist in such a world?
Karel came up with an unsolvable problem.
(8 points)E. bend it but don't bend it!
Your task is to measure the spacing of a diffraction grating using the light from three different LED diodes. In case you're interested, send us an email at experiment@fykos.cz and we will send you the LED diodes, a resistor, wires, and, of course, the diffraction grating. The only thing you will need to buy is a 9 V battery.
Karel spent all of our budget.
(6 points)S. relativity
Any theory of quantum gravity is useful only when we deal with very small distances where the effects of gravitation are comparable to quantum effects. Gravitation is characterized by the gravitational constant, quantum mechanics by the Planck constant, and special relativity by the speed of light. Look up numerical values of these constants, and, using standard algebraic operations, combine them to obtain a quantity with the dimensions of length. This is the length scale where both quantum mechanics and gravitation are important. Prove that the special Lorentz transform (i.e. a change of the reference frame to one that is moving with speed $v$ in the $x^1$ direction)
$$x^0_\mathrm{nov}=\frac{x^0-\frac{v}{c}x^1}{\sqrt{1-\left(\frac{v}{c}\right)^2}}\,,\quad x^1_\mathrm{nov}=\frac{-\frac{v}{c}x^0+x^1}{\sqrt{1-\left(\frac{v}{c}\right)^2}}\,,\quad x^2_\mathrm{nov}= x^2\,,\quad x^3_\mathrm{nov}= x^3$$ leaves the spacetime interval invariant. Set $\Delta x^2=\Delta x^3=0$ in the definition of a spacetime interval. You should get
$$(\Delta s)^2 = -\left(\Delta x^0\right)^2 + \left(\Delta x^1\right)^2$$
What is the region of the plane $(\Delta x^{0},\Delta x^{1})$ where the spacetime interval $(\Delta s)^2$ is positive? Where negative? What is the curve $(\Delta s)^2=0$?
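Both parts of the problem can be checked numerically. The sketch below uses rounded CODATA values for the constants and an arbitrary test speed $v$ (both are my own example inputs); it computes the length scale $\sqrt{\hbar G/c^{3}}$ and verifies that the boost preserves $-(\Delta x^0)^2 + (\Delta x^1)^2$:

```python
import math

# Planck length: the unique combination of G, hbar, c with dimension of length
G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8   # rounded CODATA values
l_planck = math.sqrt(hbar * G / c**3)
print(l_planck)                              # about 1.6e-35 m

# Lorentz boost leaves (Delta s)^2 = -(x0)^2 + (x1)^2 invariant
v = 0.6                        # test speed in units of c (arbitrary choice)
gamma = 1 / math.sqrt(1 - v**2)
x0, x1 = 1.7, -0.4             # arbitrary interval components
x0n = gamma * (x0 - v * x1)
x1n = gamma * (x1 - v * x0)
print(-x0**2 + x1**2, -x0n**2 + x1n**2)      # the two values agree
```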
"On the Achievable Throughput of a Multiantenna Gaussian Broadcast Channel" by Giuseppe Caire and Shlomo Shamai talks, in part, about the following type of link (paraphrasing):
A transmitter with $t$ antennas broadcasts to $r$ independent receivers through a flat-fading channel (modeled by a matrix $H\in \mathbb{C}^{r\times t}$) with iid additive white gaussian noise. i.e. each reception looks like: $$\vec{y}=H\vec{x}+\vec{z}; \quad \vec{z}\sim \mathcal{N}(\vec{0}, I_{r\times r})$$ Each receiver seeks to receive a different message.
The sum capacity of such a channel is $\displaystyle \underset{\vec{r}\in R}{\sup} \textstyle\sum_{i}r_i$, where $R$ is the achievable-rate region. It's just the maximum overall rate.
Given $H$, what is the sum capacity of a channel where a single transmit antenna is talking to $r$ independent (non-communicating) receive antennas?
Note that finding the actual region $R$ is difficult, but the sum capacity is a known result.
The paper says that the capacity region of the system in question is well-known. It cites Cover/Thomas' information theory textbook. However, Cover/Thomas only characterizes the $r=2$ case (relevant details start at pg515), and doesn't focus on the sum capacity.
"Sum Capacity of the Vector Gaussian Broadcast Channel and Uplink–Downlink Duality" by Viswanath and Tse finds the sum capacity for the more complicated case where $t>1$, and says that the $t=1$ case is well-known (but also alludes to $t=1$ systems being 'degenerate').
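For what it's worth, here is my understanding of the $t=1$ 'degenerate' case, sketched in NumPy. It rests on the standard result that the scalar Gaussian broadcast channel is degraded, so the sum rate is maximized by giving all power to the strongest user; treat it as a hedged sketch with example values for $P$ and the channel gains, not something taken from either paper:

```python
import numpy as np

# With one transmit antenna and unit-variance noise, the scalar Gaussian
# broadcast channel is degraded; the sum rate is maximized by serving only
# the strongest receiver:  C_sum = log2(1 + P * max_i |h_i|^2)  [bits/use]
def sum_capacity_t1(h, P):
    return np.log2(1 + P * np.max(np.abs(h) ** 2))

# example channel gains to r = 3 receivers and example power (assumptions)
h = np.array([0.9 + 0.2j, -0.3 + 1.1j, 0.5])
print(sum_capacity_t1(h, P=10.0))
```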
In order to reach Carnot efficiency, the material that makes up the TEG must have a $ZT$ value of infinity. That is so because the efficiency of a TEG can be expressed as the Carnot efficiency multiplied by a term that equals 1 when $ZT$ is infinite (see https://en.wikipedia.org/wiki/Thermoelectric_materials#Device_efficiency for example).
However $ZT = \frac{\sigma S^2T}{\kappa}$ where $\kappa$ is the thermal conductivity, $\sigma$ is the electrical conductivity, $S$ is the Seebeck coefficient and $T$ is the absolute temperature.
This value cannot reach infinity.
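To make the "$ZT$ term" concrete, here is a small sketch of the device-efficiency expression referenced above, $\eta = \eta_\mathrm{Carnot}\cdot\frac{\sqrt{1+ZT}-1}{\sqrt{1+ZT}+T_c/T_h}$, with $ZT$ evaluated at the mean temperature; the temperatures below are example values of my choosing:

```python
import math

def teg_efficiency(zt, t_hot, t_cold):
    # eta = eta_Carnot * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th)
    eta_carnot = 1 - t_cold / t_hot
    s = math.sqrt(1 + zt)
    return eta_carnot * (s - 1) / (s + t_cold / t_hot)

t_hot, t_cold = 400.0, 300.0            # example operating temperatures (K)
for zt in (0.7, 2.0, 1e6):              # commercial, lab, 'near-infinite'
    print(zt, teg_efficiency(zt, t_hot, t_cold))
# as ZT -> infinity the bracket -> 1 and eta -> eta_Carnot = 0.25 here
```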
A good candidate material must therefore possess a good electrical conductivity, a low thermal conductivity, and a high Seebeck coefficient. In practice metals are discarded because they obey the Wiedemann-Franz law, which states that a good electrical conductor is also a good thermal conductor; this holds true for most metals. Insulators have high values of $S$ but extremely low values of $\sigma$, so they aren't good candidates either. The best candidates are heavily doped semiconductors, which fall in between metals and insulators (note that there is a theoretical optimum doping concentration). For semiconductors it is possible to split the thermal conductivity into a lattice contribution and an electronic contribution: $\kappa = \kappa_l + \kappa_e$. While $\kappa_e$ satisfies the Wiedemann-Franz law reasonably well, the part $\kappa_l$ is independent of it. This means that if one reduces the lattice contribution to the thermal conductivity, one increases the $ZT$ coefficient. In practice such a reduction is accomplished in a variety of ways, such as creating very complex materials (with very large chemical formulae) with complex crystal structure geometry, which hinders phonon propagation. Many researchers are nowadays working on engineering materials with very low $\kappa_l$.
If we turn to superconductors, they unfortunately have a vanishing Seebeck coefficient, so they are totally unable to generate power out of a temperature difference through the Seebeck effect. I.e. they cannot be used as materials to generate power in a TEG or to cool down things in a Peltier module.
Note that there is a theoretical minimum value of $\kappa$ which can be attained, roughly that of a glass. Since $S$ is also limited (it cannot be arbitrarily big, and is often related to the electrical conductivity through the Mott formula), and since we have already seen that superconductors are ruled out (one could be led to think that $\sigma$ is infinite there, though it is better to say that $\rho$ vanishes), there is no hope left for $ZT$ to reach infinity.
This only explains why thermoelectric materials lead to heat engines that do not match Carnot efficiency.
Now the saddest part: the $ZT$ value of commercial TEGs is around 0.7 near room temperature (the material is Bi2Te3, the same material used since the 1970s, despite many decades of improvements!). In the lab a material with a $ZT$ above 2 has been found, but it is so chemically unstable and fragile that it cannot be used in the making of a TEG or Peltier module. That's the real reason why thermoelectric engines have such a low efficiency. There is, thus far, no material having a high $ZT$ despite insane efforts over decades. Even DFT (density functional theory) applied to all the elements of the periodic table has been used to predict good candidates, and the best candidates had miserable $ZT$ values compared to the efficiency one gets from solar panels and many heat engines.
2. Series 27. Year
(2 points)2. Flying wood
We have a wooden sphere at a height of $h=1\;\mathrm{m}$ above the surface of the Earth, which has a radius of $R_{Z}=6378\;\mathrm{km}$ and a mass of $M_{Z}=5.97\cdot 10^{24}\;\mathrm{kg}$. The sphere has a radius of $r=1\;\mathrm{cm}$ and is made of wood with a density of $ρ=550\;\mathrm{kg}\cdot \mathrm{m}^{-3}$. Assume that the Earth has an electric charge of $Q=5\;\mathrm{C}$. What is the charge $q$ that the sphere must have to float above the surface of the Earth? How does this result depend on the height $h$?
Karel was thinking about what simple problem to set.
(4 points)3. torturing the piston
We have a container of a constant cross section, which contains an ideal gas and a piston at a height of $h$. First we compress the gas quickly (practically adiabatically) by moving the piston to a height of $h/2$, we hold it there until thermal equilibrium with the surroundings is reached, and then we let it go. To what height will the piston rise immediately? What height will it reach after a very long time? Draw a $pV$ diagram.
Karel was thinking about a piston.
(4 points)4. The stellar size of the Moon
It is known that the full Moon has an apparent magnitude of approximately -12 mag and the Sun during the day has an apparent magnitude of -27 mag. Try to figure out the apparent magnitude of the Moon directly before a solar eclipse, if you know that the albedo of the Earth is approximately 0.36 and the albedo of the Moon is 0.12. Presume that light after reflection disperses the same way on the surfaces of both the Moon and the Earth.
Janči was blinded.
(5 points)5. Plastic cup on water
A truncated cone that is upside down (the opening facing downwards) may be held in the air by a stream of water which originates from the ground with a constant mass flow rate and an initial velocity $v_{0}$. At what height above the surface of the Earth will the cone levitate?
Bonus: Explore the stability of the cone.
Radomír drank to the very bottom.
(4 points)P. Temelínská
Estimate how much nuclear fuel is used by a nuclear power plant to generate 1 MWh of electrical energy that people use at home. Compare it with the usage of fuel in a thermal power plant. Don't forget to think about all possible ways that energy gets lost.
Bonus: Include the energy that is required to transport the fuel into your solution.
Karel was thinking about ČEZ.
(8 points)E. that's the way the ball bounces... I mean rolls
Let us have an inclined plane on which we place a ball and we give it kinetic energy so that it begins rolling upwards without slipping. Measure the relationship between the velocity of the ball and time, and determine the loss of energy as a function of time. The inclined plane should make an angle of at least $\alpha = 10°$ with the horizontal. Do not forget to describe the parameters of your ball.
Karel pondered the saying "it rolled and rolled".
(6 points)S. actional
What are the physical dimensions of action? (What are its units?) Does it have the same unit as one of the fundamental constants from the first question in the previous part of the series? Which one? From Niels Bohr: assume the motion of a point mass on a circle with the centripetal force
$$F_\mathrm{d} = m a_\mathrm{d} = \frac{\alpha}{r^2}\,,$$
where $r$ is the radius of the circle and $\alpha$ is some constant. Then
Calculate the reduced action $S_{0}$ for one revolution as a function of its radius $r$. Determine the values $r_{n}$ for which the value of $S_{0}$ equals the constant from sub-task a) multiplied by a natural number. The total energy of the point mass is $E=T+V$; for this force it is true that $V=-\alpha/r$. Express the energy $E_{n}$ of the particle as a function of the radii $r_{n}$ using said constants. Tip: You should have encountered circular motion in your high-school education, along with the relationships between displacement, velocity and acceleration. Use them, and the integration of the action along the circumference of a circle with constant $r$ becomes easier (constant quantities can easily be factored out of the integral). Don't forget that the path integral of "nothing" is merely the length of the integrated path. The last sub-problem may seem complicated, but it is merely an exercise in differentiating and integrating simple functions; you should be able to do it with only standard table integrals and derivatives. Show that the full action $S$ for a free particle moving from the point $[0;0]$ to the point $[2;1]$ is minimal for the case of linear motion (the first case below), in other words that it is bigger in the two other cases
$$\begin{aligned}\mathbf{y}(t)&=\left(2t,\,t\right)\,,\\\mathbf{y}(t)&=\left(1-\cos(\pi t)+\frac{1}{\pi}\sin(2\pi t),\; t\right)\,,\\\mathbf{y}(t)&=\left(2t,\; \frac{\mathrm{e}^t-1+t^2(t-1)}{\mathrm{e}-1}\right)\,,\end{aligned}$$
where e is the Euler number.
Tip: First find the derivative of $\mathbf{y}(t)$, put it into the equation for the action and integrate.
3. Series 27. Year
(2 points)1. eclipse
A planet is orbiting around a star on a circular orbit and a moon is orbiting around the planet on a circular orbit in the plane of the planet's orbit. We know that, during the eclipse of the sun the angular size of the moon is the same as the angular size of the sun if observed from the surface of the planet (the moon perfectly covers the sun). Furthermore we know that the planet perfectly covers the moon during the moon's eclipse. Determine the ratio of the radius of the planet $R$ and the moon $r$, if the distance of the planet from the star is very large compared to the distance of the moon from the planet $L$ and this is in turn larger by several orders of magnitude than the dimensions $R$, $r$.
Mirek was looking through the archive.
(2 points)2. The Mediterranean sea
How quickly, on average, does water flow through the Strait of Gibraltar if it accounts for the change between high and low tide in the Mediterranean Sea? Find the required data on the internet and don't forget to cite it properly!
Lukáš was surprised by the height of the tide
(4 points)3. cup tubby
Take an empty cylindrical cup. Turn it upside down and push it beneath a calm water surface. How high will the column of air in the cup be depending on the submersion of the cup?
Karel got inspired by the times when he used to play in his bathtub
(4 points)4. I have already forgotten more than you ever knew
A hot air balloon with its basket weighs $M$. The basket of the balloon is submerged into a water reservoir and water flows into it. Now we raise the temperature a bit, and by that we raise the buoyant force acting on the balloon to $Mg+F$. The basket has the shape of a rectangular cuboid with a square base of side $a$ and is submerged to a depth $H$. The openings in the basket make up a fraction $p≪1$ of its whole area; assume the basket is empty (with the exception of the water). Let us neglect the viscosity of water and the volume of the basket itself. How quickly shall it rise depending on the depth of submersion?
Bonus: When shall it emerge? Tip The expected speed of water flowing from the basket above the water surface is 2/3 of the maximum speed of water flowing out.
Was thought up by Lukáš during watching the movie Vratné lahve.
(5 points)5. mig-mig!
A poor hungry coyote wants to catch the devilish Road Runner, and he has prepared the following trap to snare him: he fastened a 500-ton anvil to a sturdy rope, which he will throw over a branch so that the anvil hangs over the road, and then he will wait. How many times does he need to wind the rope around the branch so that he can hold it in place using just his own weight? Assume that the weight of the rope is negligible compared to the weight of the coyote.
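A back-of-envelope sketch using the capstan (Euler–Eytelwein) equation $T_{\text{load}} \le T_{\text{hold}}\,e^{\mu\theta}$; the friction coefficient and the coyote's mass are assumed values, since the problem does not give them:

```python
import math

# Capstan-equation sketch. Assumed values (not given in the problem):
# rope-on-branch friction coefficient mu = 0.3, coyote mass 45 kg.
#   T_load <= T_hold * exp(mu * theta)  =>  theta >= ln(T_load/T_hold)/mu

def wraps_needed(m_load, m_hold, mu):
    theta = math.log(m_load / m_hold) / mu   # required wrap angle, radians
    return math.ceil(theta / (2 * math.pi))  # full turns around the branch

n = wraps_needed(m_load=500e3, m_hold=45.0, mu=0.3)
print(f"full turns needed: {n}")
```

With these assumptions about five full turns suffice; a rougher branch (larger $\mu$) reduces the count.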
Mirek was always a fan of Wile E. Coyote.
(8 points)E. viscous
Each liquid has its specific viscosity. Try to make an Ostwald viscometer (capillary viscometer) and measure the relative viscosity of a few (at least three) liquids compared to water. Compare your results with what you find online.
Kiki got frustrated by the fact that everything flows differently while weighing things in an apothecary.
(6 points)S. Applicational
In the text of the series we used the approximate relation $\sqrt{1+h} \approx 1 + h/2$, where $h$ is a small number. Determine the precision of this approximation: how much can $h$ differ from zero so that the approximate and the exact value differ by no more than 10%? A similar approximation can be made for any "normal" (read: occurring in nature) function using its Taylor series expansion. Look up the Taylor series of $\cos h$ and $\sin h$ on the internet, neglect terms of order higher than $h$, and find the approximate boundary value of $h$ where the approximation is off by approximately 0.1.
Consider the wave equation for a classical string from the series, and let the string be fastened at one end at the point $[x;y]=[0;0]$ and at the other end at the point $[x;y]=[l;0]$. For what values of $\omega$, $\alpha$, $a$ and $b$ is the expression
$$y(x,t)=\sin ({\alpha} x)\left[a\sin {({\omega} t)} + b\cos {({\omega} t)}\right]$$
a solution of the wave equation?
Tip: Substitute into the equation of motion and use the boundary conditions.
In the previous part of the series we compared the values of the action for different trajectories of different particles. Now calculate the value of the Nambu-Goto action for a closed string which, from time 0 to time $t$, stands still in the plane $(x^1, x^2)$ and has the shape of a circle with radius $R$. Thus we have
$$X({\tau}, {\sigma})=(c{\tau},\ R\cos {{\sigma}},\ R\sin {{\sigma}},\ 0)$$
for $\sigma\in\langle0,2\pi\rangle$. Furthermore, sketch the worldsheet of this string (forget about the last, zero component) and how the lines of constant $\tau$ and constant $\sigma$ look on it.
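The precision question in the first part can be probed numerically. The sketch below assumes the relation meant is the first-order expansion $\sqrt{1+h}\approx 1+h/2$ and bisects for the positive $h$ at which the relative error reaches 10%:

```python
import math

# Numerical probe: how far can h > 0 go before sqrt(1+h) ≈ 1 + h/2
# is off by 10%? The relative error is monotone increasing for h > 0,
# so plain bisection works.

def rel_err(h):
    return abs((1 + h / 2) / math.sqrt(1 + h) - 1)

lo, hi = 0.0, 20.0          # rel_err(0) = 0, rel_err(20) ≈ 1.4
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if rel_err(mid) < 0.10 else (lo, mid)
print(f"10% error reached near h ≈ {lo:.3f}")
```

Solving $(1+h/2)^2 = 1.21(1+h)$ exactly gives the same root, $h \approx 1.43$, so the approximation is surprisingly forgiving.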
Abbreviation: DRL
A distributive residuated lattice is a residuated lattice $\mathbf{L}=\langle L, \vee, \wedge, \cdot, e, \backslash, /\rangle$ such that $\vee, \wedge$ are distributive: $x\wedge(y\vee z) =(x\wedge y) \vee (x\wedge z)$
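The distributive law can be spot-checked on a concrete example. The sketch below is an illustration only: it covers just the lattice reduct $\vee, \wedge$ (not the monoid and residual operations), using the powerset of a three-element set with join = union and meet = intersection:

```python
from itertools import combinations, product

# Spot-check of  x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)  on the powerset lattice
# of {0, 1, 2}, where ∨ is set union and ∧ is set intersection.

def powerset(base):
    return [frozenset(c) for r in range(len(base) + 1)
            for c in combinations(sorted(base), r)]

L = powerset({0, 1, 2})
ok = all(x & (y | z) == (x & y) | (x & z) for x, y, z in product(L, repeat=3))
print("distributive:", ok)
```

Every powerset lattice is distributive, so the check passes over all $8^3$ triples.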
Remark:
Let $\mathbf{L}$ and $\mathbf{M}$ be distributive residuated lattices. A morphism from $\mathbf{L}$ to $\mathbf{M}$ is a function $h:L\rightarrow M$ that is a homomorphism:
$h(x\vee y)=h(x)\vee h(y)$, $h(x\wedge y)=h(x)\wedge h(y)$, $h(x\cdot y)=h(x)\cdot h(y)$, $h(x\backslash y)=h(x)\backslash h(y)$, $h(x/y)=h(x)/h(y)$, $h(e)=e$
Example 1:
Classtype: variety
Equational theory:
Quasiequational theory: undecidable
First-order theory: undecidable
Locally finite: no
Residual size: unbounded
Congruence distributive: yes
Congruence modular: yes
Congruence n-permutable: yes, n=2
Congruence regular: no
Congruence e-regular: yes
Congruence uniform: no
Congruence extension property: no
Definable principal congruences: no
Equationally def. pr. cong.: no
Amalgamation property:
Strong amalgamation property:
Epimorphisms are surjective:
$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &3\\ f(4)= &20\\ f(5)= &115\\ f(6)= &899\\ f(7)= &7782\\ f(8)= &80468\\ \end{array}$
4. Series, 27. Year
(2 points)1. Another unsharpened one
A freshly sharpened 6B pencil has a tip in the shape of a cone; the radius of the cone's base is $r=1\;\mathrm{mm}$ and its height is $h=5\;\mathrm{mm}$. How long a line are we able to draw with it if the distance between two graphite layers is $d=3.4\;\mathrm{Å}$ and the track of the pencil on average consists of $n=100$ such layers?
Mirek was calculating how long his pencils will last.
(2 points)2. test tubes
Test tubes of volumes 3 ml and 5 ml are connected by a short thin tube containing a porous, thermally non-conductive barrier that allows the pressures in the system to equilibrate. In the beginning both test tubes are filled with oxygen at a pressure of 101.25 kPa and a temperature of 20 °C. We submerge the first test tube (3 ml) into a container holding a system of water and ice in equilibrium, and the other one (5 ml) into a container with steam. What will the pressure in the system of the two test tubes be after mechanical equilibrium is achieved? What would the pressure be if the test tubes contained nitrogen instead of oxygen (all other conditions being the same)?
Kiki dug up something from the archives of physical chemistry.
(4 points)3. Seagull
Two ships are sailing towards each other, the first with a velocity $u_{1}=4\;\mathrm{m}\cdot \mathrm{s}^{-1}$ and the second with a velocity $u_{2}=6\;\mathrm{m}\cdot \mathrm{s}^{-1}$. When they are separated by $s_{0}=50\;\mathrm{km}$, a seagull launches from the first ship and flies towards the second. It flies against the wind, with a speed $v_{1}=20\;\mathrm{m}\cdot \mathrm{s}^{-1}$. When it arrives at the second ship it turns around and flies back, now with the wind behind its back, with a velocity $v_{2}=30\;\mathrm{m}\cdot \mathrm{s}^{-1}$. It keeps flying back and forth until the two ships meet. How long is the path it has flown?
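The path can be obtained leg by leg, or more quickly from the fact that the gull is airborne for the whole time the ships need to meet. A simulation sketch (1D, SI units):

```python
# Leg-by-leg simulation of the seagull's flight.
u1, u2 = 4.0, 6.0            # ship speeds, moving toward each other
v_out, v_back = 20.0, 30.0   # gull speed toward ship 2 / back toward ship 1
x1, x2 = 0.0, 50e3           # ship positions; the gull starts on ship 1
xg, path, toward2 = x1, 0.0, True

while x2 - x1 > 1e-3:        # legs shrink geometrically; stop at 1 mm gap
    if toward2:
        t = (x2 - xg) / (v_out + u2)   # closing speed on ship 2
        xg += v_out * t
        path += v_out * t
    else:
        t = (xg - x1) / (v_back + u1)  # closing speed on ship 1
        xg -= v_back * t
        path += v_back * t
    x1 += u1 * t
    x2 -= u2 * t
    toward2 = not toward2

T = 50e3 / (u1 + u2)         # total time until the ships meet: 5000 s
print(f"path ≈ {path/1000:.2f} km over {T:.0f} s")
```

The legs form a geometric sequence (each cycle shrinks the gap by a fixed factor), and the simulated path comes out at about 116 km, i.e. an average speed of 23.2 m/s over the 5000 s.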
Mirek was improving tasks from elementary school.
(4 points)4. discharged pudding
There are many models of the hydrogen atom, and although most have been superseded, we like pudding, so we shall return to the pudding model of hydrogen. The atom is a sphere of radius $R$ with a uniformly distributed positive charge (the „pudding“), inside which there is an electron (the „raisin“). Obviously the electron prefers the place with the lowest possible energy, so it sits in the middle of the pudding. Overall the system is electrically neutral. What energy must we give the electron to remove it to infinity? What would the radius $R$ have to be for this energy to equal the Rydberg energy (the excitation energy of the electron in an atom of hydrogen)? Express the radius in multiples of the Bohr radius.
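The closing questions reduce to the electrostatic potential at the centre of a uniformly charged sphere, $V(0)=\tfrac{3}{2}\,k e/R$. A numerical sketch (treating the removal work as $W=\tfrac{3}{2}k e^2/R$ is the assumption of this sketch, not a quoted official solution):

```python
# Back-of-envelope check: the radius R for which the removal work
# W = (3/2) k e^2 / R equals the Rydberg energy.
k  = 8.9875e9        # Coulomb constant, N m^2 C^-2
e  = 1.602e-19       # elementary charge, C
Ry = 13.6 * e        # Rydberg energy in joules
a0 = 5.292e-11       # Bohr radius, m

R = 1.5 * k * e**2 / Ry   # radius at which W equals the Rydberg energy
print(f"R = {R:.3e} m = {R / a0:.2f} Bohr radii")
```

Since $\mathrm{Ry} = k e^2/(2 a_0)$, the result is exactly $R = 3 a_0$, which the numbers reproduce.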
Jakub was making pudding.
(4 points)5. Another unsharpened one
By how much will the temperature of two identical steel balls rise after their collision? They move in the same direction with speeds $v_{1}=0.7c$ and $v_{2}=0.9c$, where $c$ is the speed of light. Assume that the heat capacity is constant and that the balls remain solid.
Lucas was making a task for the Online Physics Brawl and then he put it into the series.
(5 points)P. the true gravitational acceleration
Faleš wanted to determine the gravitational acceleration from an experiment in Prague (V Holešovickách 2, ground floor). In the experiment he dropped a round ball from a height of a couple of meters above the Earth. Think about the corrections he had to apply when analysing the data. Then think up your own experiment to determine $g$ and discuss its accuracy.
Karel was thinking about the difference between gravitational acceleration and gravitational force
(8 points)E. some like it lukewarm
Measure the relation between temperature and time for a freshly made cup of tea. Conduct the measurement for an undisturbed cup of tea and for one stirred with a teaspoon. Finally, determine whether the time the tea takes to reach a drinkable temperature depends on the stirring.
Michal altered xkcd.
(6 points)S. quantum
Look into the text to see how the operators of position $\hat{X}$ and momentum $\hat{P}$ act on the components of the state vector in the $x$-representation (the wave function) and calculate their commutator, in other words
$$(\hat{X})_x\left((\hat{P})_x\,{\psi}(x)\right) - (\hat{P})_x\left((\hat{X})_x\,{\psi}(x)\right)\,.$$
Tip: Find out what happens when you take the derivative of two functions multiplied together.
The problem of the energy levels of a free quantum particle, in other words for $V(x)=0$, has the following form:
$$-\frac{\hbar^2}{2m}\,\frac{\partial^2 {\psi}(x)}{\partial x^2} = E\,{\psi}(x)\,.$$
Try inputting $\psi(x)=e^{\alpha x}$ as the solution and find out for which $\alpha$ (a general complex number) $E$ is positive (only use such $\alpha$ from now on). Is this solution periodic? If yes, with what spatial period (wavelength)? Is the wave function obtained an eigenvector of the momentum operator (in the $x$-representation)? If yes, find the relation between the wavelength and the momentum (in other words, the respective eigenvalue) of the state. Try to formally calculate the probability density of the presence of the particle in space for our wave function according to the formula given in the text. The probability that the particle is found somewhere in the whole space should, for a physical probability density, be 1, i.e.
$$\int_\mathbb{R} \rho(x)\,\mathrm{d}x=1\,.$$
Show that our wave function cannot be normalized (in other words, multiplied by some constant) so that its formal probability density according to the formula from the text would be a real physical probability density.
Bonus: What do you think the limit of the uncertainty of the position of the particle is if its wave function is close to ours (in other words, it approaches it in all properties, but it always has a normalized probability density and thus is a physical state)? Can we (using Heisenberg's uncertainty relation) determine the lowest possible imprecision in finding the momentum?
Tip: Take care when dealing with complex numbers. For example, the square of a complex number is different from the square of its magnitude.
In the second part of the series we derived the energy levels of an electron in hydrogen using the reduced action. By a happy coincidence, solving for the spectrum of the Hamiltonian in the Coulomb potential of a proton leads to completely the same energies, in other words
$$E_n = -\mathrm{Ry}\,\frac{1}{n^2}\,,$$
where $\mathrm{Ry} = 13.6\;\mathrm{eV}$ is an energy constant known as the Rydberg constant. An electron which falls from some energy level to $n=2$ emits energy in the form of a photon, whose energy equals the difference of the energies of the two states. From which states can an electron fall so that the emitted light is in the visible spectrum? What will the colors of the spectral lines be?
Tip: Remember the photoelectric effect and the relation between the frequency of light and its wavelength.
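The free-particle part can be sanity-checked numerically. The sketch below works in units with $\hbar=m=1$ (an assumption of the check, not of the problem): $\psi(x)=e^{\mathrm{i}kx}$ gives a positive energy $E=k^2/2$ and a constant $|\psi|^2$, which is exactly why it cannot be normalized over the whole real line.

```python
import cmath

# psi(x) = e^{alpha x} with alpha = i k should satisfy
#   -(1/2) psi''(x) = (k^2/2) psi(x),  i.e.  E = k^2/2 > 0,
# and |psi(x)|^2 = 1 everywhere (not integrable over R).
k = 1.7
alpha = 1j * k

def psi(x):
    return cmath.exp(alpha * x)

# second derivative by central differences at a sample point
x0, h = 0.3, 1e-4
d2 = (psi(x0 + h) - 2 * psi(x0) + psi(x0 - h)) / h**2
E = (-0.5 * d2 / psi(x0)).real
print(f"E ≈ {E:.6f}  (expected k^2/2 = {k*k/2:.6f})")
print("|psi|^2 samples:", [round(abs(psi(x)) ** 2, 12) for x in (0, 1, 5)])
```

The constant $|\psi|^2$ is the numerical face of the non-normalizability argument in the text.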
A numerical study of a mean curvature denoising model using a novel augmented Lagrangian method
Department of Mathematics, University of Alabama, Box 870350, Tuscaloosa, AL 35487, USA
In this paper, we propose a new augmented Lagrangian method for the mean curvature based image denoising model [
Keywords: Image denoising, mean curvature, augmented Lagrangian method, high-order, variational model.
Mathematics Subject Classification: Primary: 68U10, 65K10; Secondary: 49M05.
Citation: Wei Zhu. A numerical study of a mean curvature denoising model using a novel augmented Lagrangian method. Inverse Problems & Imaging, 2017, 11 (6): 975-996. doi: 10.3934/ipi.2017045
References:
[1] [2] L. Ambrosio and S. Masnou, On a variational problem arising in image reconstruction,
[3] [4] [5] [6] [7] T. Chan, G. H. Golub and P. Mulet, A nonlinear primal-dual method for total variation-based image restoration,
[8] [9] T. Chan, S. Esedoḡlu, F. Park and M. H. Yip,
[10] [11] M. P. do Carmo, Differential Geometry of Curves and Surfaces, Prentice-Hall, Inc., 1976.
[12] [13] J. Eckstein and W. Yao, Understanding the Convergence of the Alternating Direction Method of Multipliers: Theoretical and Computational Perspectives,
[14] [15] [16] M. Hinterm
[17] [18] [19] S. Masnou and J. M. Morel, Level lines based disocclusion,
[20] Y. Meyer,
[21] M. Myllykoski, R. Glowinski, T. Kärkkäinen and T. Rossi, A new augmented Lagrangian approach for L1-mean curvature image denoising,
[22] [23] S. Osher, M. Burger, D. Goldfarb, J. J. Xu and W. T. Yin, An iterative regularization method for total variation-based image restoration,
[24] R. T. Rockafellar, Augmented Lagrangians and applications of the proximal point algorithm in convex programming,
[25] [26] T. Schoenemann, F. Kahl and D. Cremers, Curvature regularity for region-based image segmentation and inpainting: A linear programming relaxation,
[27] T. Schoenemann, F. Kahl, S. Masnou and D. Cremers, A linear framework for region-based image segmentation and inpainting involving curvature penalization,
[28] [29] [30] C. Wu and X. C. Tai, Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, Vectorial TV, and high order models,
[31] [32] [33] [34] [35] [36]
1. Initialization.
2. Compute an approximate minimizer
$ (u^{k}, q^{k}, \mathbf{n}^{k}) \approx \mbox{argmin } \mathcal{L}(u, q, \mathbf{n}; \lambda_{1}^{k-1}, {\boldsymbol{\lambda}}_{2}^{k-1}). $
3. Update the Lagrange multipliers
$ \lambda_{1}^{k} = \lambda_{1}^{k-1}+r_{1}(q^{k}-\nabla\cdot \mathbf{n}^{k}), $
$ {\boldsymbol{\lambda}}_{2}^{k} = {\boldsymbol{\lambda}}_{2}^{k-1}+r_{2}\left(\frac{\nabla u^{k}}{\sqrt{1+|\nabla u^{k}|^{2}}}-\mathbf{n}^{k}\right). $
4. Measure the relative residuals and stop the iteration if they are smaller than a threshold.
1. Initialization.
2. For fixed Lagrange multipliers, alternately minimize:
$ \widetilde{u}^{1} = \mbox{argmin } \mathcal{L}(u, \widetilde{q}^{0}, \widetilde{\mathbf{n}}^{0};\lambda_{1}, {\boldsymbol{\lambda}}_{2}), $
$ \widetilde{q}^{1} = \mbox{argmin } \mathcal{L}(\widetilde{u}^{1}, q, \widetilde{\mathbf{n}}^{0};\lambda_{1}, {\boldsymbol{\lambda}}_{2}), $
$ \widetilde{\mathbf{n}}^{1} = \mbox{argmin } \mathcal{L}(\widetilde{u}^{1}, \widetilde{q}^{1}, \mathbf{n}; \lambda_{1}, {\boldsymbol{\lambda}}_{2}). $
3.
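For intuition, the multiplier update in step 3 can be sketched in one dimension with finite differences. This is a schematic illustration only, not the paper's solver: the inner minimizations of step 2 are skipped, and the discrete gradient/divergence pair is the simplest forward/backward-difference choice:

```python
import math

# Schematic 1D version of the Lagrange-multiplier update (step 3).
# Forward differences play the role of the gradient; the backward
# difference is (up to sign) its adjoint, playing the role of div.

def grad(u):                      # forward difference, zero at the boundary
    return [u[i + 1] - u[i] for i in range(len(u) - 1)] + [0.0]

def div(n):                       # backward difference
    return [n[0]] + [n[i] - n[i - 1] for i in range(1, len(n))]

def update_multipliers(u, q, n, lam1, lam2, r1=1.0, r2=1.0):
    gu = grad(u)
    unit = [g / math.sqrt(1 + g * g) for g in gu]   # grad u / sqrt(1+|grad u|^2)
    dn = div(n)
    lam1 = [l + r1 * (qi - di) for l, qi, di in zip(lam1, q, dn)]
    lam2 = [l + r2 * (ui - ni) for l, ui, ni in zip(lam2, unit, n)]
    return lam1, lam2

u = [0.0, 0.1, 0.5, 0.9, 1.0]     # a toy 1D "image"
q = [0.0] * 5
n = [0.0] * 5
lam1, lam2 = update_multipliers(u, q, n, [0.0] * 5, [0.0] * 5)
print("lambda1:", lam1)
print("lambda2:", lam2)
```

Starting from zero multipliers, lam1 stays zero (both constraint residual terms vanish) while lam2 picks up the normalized-gradient residual, mirroring how the multipliers accumulate constraint violations across outer iterations.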
32  2e-3  1e-1
64  2e-3  1e-1
128  1e-4  1e-1
In this article we are going to learn about the terms two-dimensional random variable, cumulative distribution function, marginal probability and joint density function.
Two Dimensional Random Variable
Let E be an experiment and S a sample space associated with E. Let X = X(s) and Y = Y(s) be two functions, each assigning a real number to each outcome s ∈ S. We call (X, Y) a two-dimensional random variable.
If $X_1 = X_1(s)$, $X_2 = X_2(s)$, ……, $X_n = X_n(s)$ are n functions, each assigning a real number to every outcome s ∈ S, we call $(X_1, X_2, \ldots, X_n)$ an n-dimensional random variable.
Discrete and Continuous Random Variable
Let (X, Y) be a two-dimensional discrete random variable whose possible values are finite or countably infinite. That is, the possible values of (X, Y) may be represented as $(x_i, y_j)$, i = 1, 2, …, n, j = 1, 2, …, m. With each possible outcome $(x_i, y_j)$ we associate a number $p(x_i, y_j)$ representing $P[X = x_i, Y = y_j]$ and satisfying the following conditions: \[(i)~p({{x}_{i}},{{y}_{j}})\ge 0~~\forall i,j\] \[(ii)~\sum\limits_{j=1}^{\infty }{\sum\limits_{i=1}^{\infty }{p({{x}_{i}},{{y}_{j}})=1}}\] The function p defined for all $(x_i, y_j)$ in the range space of (X, Y) is called the probability function of (X, Y). The set of triplets $(x_i, y_j; p(x_i, y_j))$, i, j = 1, 2, … is called the probability distribution of (X, Y). Joint Density Function
Let (X, Y) be a continuous random variable assuming all values in some region R of the Euclidian plane. The joint probability density function f(x, y) is a function, satisfying the following conditions:
\[(i)f(x,y)\ge 0~~\forall x,y\] \[(ii)\iint\limits_{R}{f(x,y)dxdy=1}\] Cumulative Distribution Function
Let (X, Y) be a two-dimensional random variable. The cumulative distribution function (cdf) F of the two-dimensional random variable (X, Y) is defined by F(x, y) = P[X ≤ x, Y ≤ y]
Marginal and Conditional Probability Distributions
With each two-dimensional random variable (X, Y) we associate two one-dimensional random variables, namely X and Y, individually. That is, we may be interested in the probability distribution of X or the probability distribution of Y.
Let (X, Y) be a discrete random variable with probability distribution $p(x_i, y_j)$, i, j = 1, 2, … The marginal probability distribution of X is defined as \[p({{x}_{i}})=P\left[ X={{x}_{i}} \right]=\sum\limits_{j=1}^{\infty }{p({{x}_{i}},{{y}_{j}})}\] Similarly, the marginal probability distribution of Y is defined as \[p({{y}_{j}})=P\left[ Y={{y}_{j}} \right]=\sum\limits_{i=1}^{\infty }{p({{x}_{i}},{{y}_{j}})}\] Let (X, Y) be a two-dimensional continuous random variable with joint pdf f(x, y). The marginal probability density function of X can be defined as \[g(x)=\int\limits_{-\infty }^{\infty }{f(x,y)dy}\] The marginal probability density function of Y can be defined as \[h(y)=\int\limits_{-\infty }^{\infty }{f(x,y)dx}\] Note: \[P(c\le X\le d)=P(c\le X\le d,-\infty \le Y\le \infty )\] \[\Rightarrow P(c\le X\le d)=\int\limits_{c}^{d}{\int\limits_{-\infty }^{\infty }{f(x,y)dydx}}\] \[\therefore P(c\le X\le d)=\int\limits_{c}^{d}{g(x)dx}\] \[\text{Similarly},~~P(a\le Y\le b)=\int\limits_{a}^{b}{h(y)dy}\] Definition
Let (X, Y) be a discrete two-dimensional random variable with probability distribution $p(x_i, y_j)$. Let $p(x_i)$ and $q(y_j)$ be the marginal pdfs of X and Y, respectively.
The conditional pdf of X for given $Y = y_j$ is defined by \[p\left( {{x}_{i}}|{{y}_{j}} \right)=P\left[ X={{x}_{i}}|Y={{y}_{j}} \right]=\frac{p({{x}_{i}},{{y}_{j}})}{q({{y}_{j}})}~\text{if}~q({{y}_{j}})>0\] Similarly, the conditional pdf of Y for given $X = x_i$ is defined as \[q\left( {{y}_{j}}|{{x}_{i}} \right)=P\left[ Y={{y}_{j}}|X={{x}_{i}} \right]=\frac{p({{x}_{i}},{{y}_{j}})}{p({{x}_{i}})}~\text{if}~p({{x}_{i}})>0\] Definition
Let (X, Y) be a continuous two dimensional random variable with joint pdf ‘f’. Let g and h be the marginal pdfs of X and Y respectively.
The conditional pdf of X for given Y = y is defined by
\[g(x|y)=\frac{f(x,y)}{h(y)},h(y)>0\] The conditional pdf of Y for given X = x is defined by \[h(y|x)=\frac{f(x,y)}{g(x)},g(x)>0\] Independent Random Variable
Let (X, Y) be a two-dimensional discrete random variable. We say that X and Y are independent random variables if and only if $p(x_i, y_j) = p(x_i)q(y_j)$ for all i and j; that is, $P(X = x_i, Y = y_j) = P(X = x_i)P(Y = y_j)$ for all i, j.
Let (X, Y) be a two dimensional continuous random variable. We say that X and Y are independent random variables if and only if f(x, y) = g(x)h(y) for all (x, y), where f is the joint pdf and g and h are the marginal pdfs of X and Y, respectively.
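The discrete definitions above can be illustrated with a small joint probability table (the numbers are made up for the illustration and happen to factorize, so X and Y come out independent):

```python
# A 2x3 joint probability table p[i][j] = P[X = x_i, Y = y_j].
p = [[0.10, 0.20, 0.10],
     [0.15, 0.30, 0.15]]

total = sum(sum(row) for row in p)                       # condition (ii)
px = [sum(row) for row in p]                             # marginal of X
py = [sum(p[i][j] for i in range(2)) for j in range(3)]  # marginal of Y

# conditional pmf of X given Y = y_0
px_given_y0 = [p[i][0] / py[0] for i in range(2)]

# independence: p(x_i, y_j) == p(x_i) q(y_j) for all i, j
independent = all(abs(p[i][j] - px[i] * py[j]) < 1e-12
                  for i in range(2) for j in range(3))

print("sum =", total, "| P(X|Y=y0) =", px_given_y0, "| independent:", independent)
```

Here the marginals are (0.4, 0.6) and (0.25, 0.5, 0.25); since every cell equals the product of its marginals, conditioning on $Y=y_0$ just returns the marginal of X.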
Example 01: Find the constant k so that \[f(x,y)=\begin{cases} k\left( x+1 \right){{e}^{-y}}, & 0<x<1,~y>0 \\ 0, & \text{otherwise} \\ \end{cases}\] is a joint probability density function. Are X and Y independent? Solution:
We observe that f(x, y) ≥ 0 for all x, y if k ≥ 0
Further,
\[\int\limits_{-\infty }^{\infty }{\int\limits_{-\infty }^{\infty }{f(x,y)dxdy}}=\int\limits_{y=0}^{\infty }{\int\limits_{x=0}^{1}{f(x,y)dxdy}}\] \[=k\left\{ \int\limits_{0}^{1}{\left( x+1 \right)dx} \right\}\left\{ \int\limits_{0}^{\infty }{{{e}^{-y}}dy} \right\}\] \[=k\left[ \frac{{{x}^{2}}}{2}+x \right]_{0}^{1}\left[ -{{e}^{-y}} \right]_{0}^{\infty }=\frac{3}{2}k\] Accordingly, f(x, y) is a joint probability density function if k = 2/3. With k = 2/3, we find that the marginal density functions are \[g(x)=\int\limits_{-\infty }^{\infty }{f(x,y)dy}=\frac{2}{3}\left( x+1 \right)\int\limits_{0}^{\infty }{{{e}^{-y}}dy}\] \[\therefore g(x)=\frac{2}{3}\left( x+1 \right),0<x<1\] \[\text{and}~h(y)=\int\limits_{-\infty }^{\infty }{f(x,y)dx}=\frac{2}{3}{{e}^{-y}}\int\limits_{0}^{1}{\left( x+1 \right)dx}\] \[\therefore h(y)=\frac{2}{3}{{e}^{-y}}\cdot \frac{3}{2}={{e}^{-y}},y>0\] We observe that g(x)h(y) = f(x, y). Therefore, X and Y are independent random variables.
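A quick numerical cross-check of this example (midpoint-rule integration, with the infinite y-range truncated at 40, where $e^{-y}$ is negligible):

```python
import math

# With k = 2/3, f(x, y) = (2/3)(x+1) e^{-y} on 0 < x < 1, y > 0
# should integrate to 1 and factorize as g(x) h(y).
def f(x, y):
    return (2.0 / 3.0) * (x + 1.0) * math.exp(-y)

nx, ny, ymax = 200, 2000, 40.0
hx, hy = 1.0 / nx, ymax / ny
total = sum(f((i + 0.5) * hx, (j + 0.5) * hy)
            for i in range(nx) for j in range(ny)) * hx * hy

g = lambda x: (2.0 / 3.0) * (x + 1.0)   # marginal of X
h = lambda y: math.exp(-y)              # marginal of Y
factorizes = abs(f(0.3, 1.2) - g(0.3) * h(1.2)) < 1e-15

print(f"integral ≈ {total:.6f}, f = g*h: {factorizes}")
```

The midpoint rule is exact in x (the integrand is linear there), so the residual error comes only from the y-discretization and truncation, both far below the tolerance.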
5. Series, 27. Year
(2 points)1. a pressured giraffe
Compare the blood pressure in the head of an adult giraffe and of an adult human. The (heart-level) blood pressure of a human is $p_{h1}=120\;\mathrm{mmHg}$ and that of a giraffe is $p_{g1}=280\;\mathrm{mmHg}$; the density of the blood of both animals is $\rho=1050\;\mathrm{kg}\cdot \mathrm{m}^{-3}$. Consider only the case where both the human and the giraffe are standing. The speed of the blood flow is to be assumed constant.
Mirek was wondering why a giraffe doesn't faint.
(2 points)2. uranium star
Imagine that no thermonuclear fusion occurs in stars and that instead they run on nuclear fission. Estimate how long such a star would be able to shine if at the beginning of its life cycle it is composed of uranium-235 and its mass and luminosity are both approximately constant and equal to the current values for the Sun.
Mirek was reading through his new textbooks.
(3 points)3. the fine container
Consider a cylindrical container of volume $V=1\;\mathrm{l}$. The container is closed with an airtight movable piston of non-negligible mass $M$. Furthermore, the container is divided by horizontal partitions into $n$ sections, and in the $i$-th section (numbered from the top downwards) there are $2^i a$ particles, where $a$ is an unspecified constant. The partitions are not fixed with respect to the container, but they prevent the sections, each containing an ideal gas, from exchanging heat or particles. The whole system is in equilibrium. Then we make the mass of the piston twice as large and wait for equilibrium to be established again. How will the volume of the gas in the container change? Do not consider atmospheric pressure.
Nahry was under pressure and created a problem about pressure.
(4 points)4. Triangular resistor
Determine the resistance between terminals A and B of the triangle created out of resistive wires shown in the picture. One side of the small triangle (of which the bigger triangle is composed) has resistance $R_{0}$. Neglect the resistance of the leads.
Karel was drawing triangles
(5 points)5. Babysitting
Consider a swing held up by two ropes of length $l=1.5\;\mathrm{m}$ that hang from a pole of radius $r=4\;\mathrm{cm}$. The child sitting on the swing is given, at the bottom dead center, such a speed $v_{0}$ that the child completes a whole turn around the horizontal pole, with the ropes under enough tension to remain rigid throughout the turn. At the same time we wish to minimize the initial velocity. Determine the difference between the angular velocity $\omega_{1}$ of the swing with the child after its return to the bottom dead point and the initial angular velocity $\omega_{0}$.
Hint: To calculate the centripetal acceleration you may assume that locally the child is moving on a circular path.
Mirek always liked playing with his younger siblings.
(8 points)E. rubbery
An object of mass $m$ on a piece of rubber of length $l_{0}$ is hung from a rigid point, the coordinates of which are $x=0$ and $y=0$. From the $x$ axis, which is horizontal, we slowly release the mass. What is the relation between the lowest point reached and its position on the $x$ axis?
Dominika was testing which method is optimal for gouging someone's eyeball out.
(6 points)S. string
We consider only open strings, and we shall limit ourselves to three dimensions. Draw how the following look: a string moving freely through spacetime, a string fixed with both ends to a D2-brane, and a string stretched between a D2-brane and a D1-brane.
Where can the strings end in the case of three parallel D2-branes?
Choose one of the functions $\mathcal{P}_{\mu}^{\tau}$ or $\mathcal{P}_{\mu}^{\sigma}$ that were defined in the first part of the series and find its explicit form (in other words, a direct dependence on $\dot{X}^{\mu}$ and $X'^{\mu}$). Show how it simplifies under the conditions $\vec{X}'\cdot \dot{\vec{X}}=0$ and $|\dot{\vec{X}}|^2=-|\vec{X}'|^2$.
Find the spectrum of energies of a harmonic oscillator. The energy of the oscillator is given by the Hamiltonian
$$\hat{H}=\frac{\hat{p}^2}{2m} + \frac{1}{2}m\omega^2\hat{x}^2\,.$$
The second term is clearly the potential energy, while the first, after substituting $\hat{p}=m\hat{v}$, gives the kinetic energy. We define the linear combination
$$\hat{\alpha}=a\hat{x} + \mathrm{i}\,b\hat{p}\,.$$
Find the real constants $a$ and $b$ such that the Hamiltonian will have the form
$$\hat{H}=\hbar \omega \left(\hat{\alpha}^{\dagger}\hat{\alpha}+\frac{1}{2}\right)\,,$$
where $\hat{\alpha}^{\dagger}$ is the Hermitian conjugate of $\hat{\alpha}$.
Show, from your knowledge of the canonical commutation relations for $\hat{x}$ and $\hat{p}$, that the following is true:
$$\left[\hat{\alpha},\hat{\alpha}\right]=0\,,\quad\left[\hat{\alpha}^{\dagger},\hat{\alpha}^{\dagger}\right]=0\,,\quad\left[\hat{\alpha},\hat{\alpha}^{\dagger}\right]=1\,.$$
In the spectrum of the oscillator there will surely be a state with the lowest possible energy, corresponding to the least possible amount of oscillating. Let us call it $|0\rangle$. This state must fulfill $\hat{\alpha}|0\rangle = 0$. Show that its energy is equal to $\hbar\omega/2$, i.e. $\hat{H}|0\rangle=(\hbar\omega/2)\,|0\rangle$. Furthermore, prove that if $\hat{\alpha}|0\rangle \neq 0$, we obtain a contradiction with the fact that $|0\rangle$ has the lowest possible energy, because there would then be a state with $E<\hbar\omega/2$. All the eigenstates of the Hamiltonian can be written as
$$\left(\hat{\alpha}^{\dagger}\right)^n|0\rangle \quad \text{for} \quad n=0,1,2,\dots$$
Find the energies of these states, in other words find the numbers $E_n$ such that $\hat{H}\left(\hat{\alpha}^{\dagger}\right)^n|0\rangle = E_n\left(\hat{\alpha}^{\dagger}\right)^n|0\rangle$.
Tip: Use the commutation relation for $\hat{\alpha}^{\dagger}$ and $\hat{\alpha}$.
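The ladder-operator algebra can be checked numerically with truncated matrices in the number basis (a sketch in units with $\hbar=\omega=1$; the matrix elements $(\hat{\alpha})_{n,n+1}=\sqrt{n+1}$ are the standard representation, and the defect in the last diagonal entry of the commutator is purely an artifact of truncating to $N$ levels):

```python
import math

# Truncated-matrix check of the ladder-operator algebra (hbar = omega = 1).
N = 8
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
ad = [[a[j][i] for j in range(N)] for i in range(N)]       # transpose of a

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

comm = [[x - y for x, y in zip(r1, r2)]                    # a a† - a† a
        for r1, r2 in zip(matmul(a, ad), matmul(ad, a))]
H = matmul(ad, a)                                          # number operator
energies = [H[n][n] + 0.5 for n in range(N)]               # n + 1/2

print("diagonal of [a, a†]:", [round(comm[n][n], 12) for n in range(N)])
print("energies:", [round(e, 12) for e in energies])
```

The commutator diagonal is 1 everywhere except the last entry, and the energies reproduce the equally spaced spectrum $E_n = n + 1/2$ that the algebraic argument above derives.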
An example of methylation analysis with simulated datasets
Part 2: Potential DMPs from the methylation signal
Methylation analysis with Methyl-IT is illustrated on simulated datasets of methylated and unmethylated read counts with relatively high averages of methylation levels: 0.15 and 0.286 for the control and treatment groups, respectively. In this part, potential differentially methylated positions (DMPs) are estimated following different approaches.
1. Background
Only a signal-detection approach can detect real DMPs with high probability. Any statistical test not based on signal detection (like e.g. Fisher's exact test) requires further analysis to distinguish DMPs that naturally occur in the control group from those induced by a treatment. The analysis here is a continuation of Part 1.
2. Potential DMPs from the methylation signal using empirical distribution
As suggested by the empirical density graphics (above), the critical values $H_{\alpha=0.05}$ and $TV_{d_{\alpha=0.05}}$ can be used as cutpoints to select potential DMPs. After setting dist.name = "ECDF" and tv.cut = 0.926 in the Methyl-IT function
getPotentialDIMP, potential DMPs are estimated using the empirical cumulative distribution function (ECDF) and the critical value $TV_{d_{\alpha=0.05}}=0.926$.
DMP.ecdf <- getPotentialDIMP(LR = divs, div.col = 9L, tv.cut = 0.926, tv.col = 7, alpha = 0.05, dist.name = "ECDF")
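getPotentialDIMP is an R function; as a language-agnostic sketch of the selection it performs with dist.name = "ECDF", one can take the empirical $(1-\alpha)$ quantile of the divergence values as the critical value and keep the sites passing both cutoffs. The sample data below are made up for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-site values standing in for a real sample (assumed shapes):
hdiv = rng.gamma(shape=0.4, scale=75.0, size=50_000)  # Hellinger divergences
tv = rng.uniform(0.0, 1.0, size=50_000)               # total variation distances

alpha, tv_cut = 0.05, 0.926
h_crit = np.quantile(hdiv, 1 - alpha)        # empirical (ECDF) critical value
potential_dmps = (hdiv >= h_crit) & (tv > tv_cut)
print(potential_dmps.sum(), "sites pass both cutoffs")
```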
3. Potential DMPs detected with Fisher’s exact test
In Methyl-IT Fisher’s exact test (FT) is implemented in function
FisherTest. In the current case, a pairwise group application of FT to each cytosine site is performed. The differences between the group means of read counts of methylated and unmethylated cytosines at each site are used for testing (pooling.stat = "mean"). Notice that only cytosine sites with critical values $TV_d > 0.926$ are tested (tv.cut = 0.926).
ft = FisherTest(LR = divs, tv.cut = 0.926, pAdjustMethod = "BH", pooling.stat = "mean", pvalCutOff = 0.05, num.cores = 4L, verbose = FALSE, saveAll = FALSE)
ft.hd <- getPotentialDIMP(LR = ft, div.col = 9L, dist.name = "None", tv.cut = 0.926, tv.col = 7, alpha = 0.05)
There is not a one-to-one mapping between $TV$ and $HD$. However, at each cytosine site $i$, these information divergences satisfy the inequality:
$TV(p^{tt}_i,p^{ct}_i)\leq \sqrt{2}H_d(p^{tt}_i,p^{ct}_i)$ [1].
where $H_d(p^{tt}_i,p^{ct}_i) = \sqrt{\frac{H(p^{tt}_i,p^{ct}_i)}w}$ is the Hellinger distance and $H(p^{tt}_i,p^{ct}_i)$ is given by Eq. 1 in part 1.
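A quick numerical check of this inequality (a sketch assuming the normalization $H_d^2(p,q)=\tfrac12\sum_i(\sqrt{p_i}-\sqrt{q_i})^2$ for the Hellinger distance):

```python
import numpy as np

rng = np.random.default_rng(1)

def tv(p, q):        # total variation distance
    return 0.5 * np.abs(p - q).sum()

def hellinger(p, q): # Hellinger distance, H^2 = (1/2) sum (sqrt p - sqrt q)^2
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

# TV <= sqrt(2) * H for random pairs of discrete distributions:
for _ in range(1000):
    p = rng.dirichlet(np.ones(4))
    q = rng.dirichlet(np.ones(4))
    assert tv(p, q) <= np.sqrt(2) * hellinger(p, q) + 1e-12
print("inequality held in all trials")
```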
So, potential DMPs detected with FT can be further constrained with the critical value $H^{TT}_{\alpha=0.05}\geq114.5$.
4. Potential DMPs detected with Weibull 2-parameters model
Potential DMPs can be estimated using the critical values derived from the fitted Weibull 2-parameters models, which are obtained after the non-linear fit of the theoretical model on the genome-wide $HD$ values for each individual sample using Methyl-IT function
nonlinearFitDist [2]. As before, only cytosine sites with critical values $TV>0.926$ are considered DMPs. Notice that it is always possible to use other values of $HD$ and $TV$ as critical values, but whatever the values are, they will affect the final accuracy of the classification of DMPs into two groups, DMPs from control and DMPs from treatment (see below). So, it is important to make a good choice of the critical values.
nlms.wb <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L)
# Potential DMPs from 'Weibull2P' model
DMPs.wb <- getPotentialDIMP(LR = divs, nlms = nlms.wb, div.col = 9L, tv.cut = 0.926, tv.col = 7, alpha = 0.05, dist.name = "Weibull2P")
nlms.wb$T1
##       Estimate   Std. Error   t value  Pr(>|t|)  Adj.R.Square
## shape 0.5413711  0.0003964435 1365.570  0        0.991666592250838
## scale 19.4097502 0.0155797315 1245.833  0
##       rho               R.Cross.val       DEV
## shape 0.991666258901194 0.996595712743823 34.7217494754823
## scale
##       AIC               BIC               COV.shape    COV.scale
## shape -221720.747067975 -221694.287733122 1.571674e-07 -1.165129e-06
## scale -1.165129e-06     2.427280e-04
##       COV.mu n
## shape NA     50000
## scale NA     50000
5. Potential DMPs detected with Gamma 2-parameters model
As in the case of Weibull 2-parameters model, potential DMPs can be estimated using the critical values derived from the fitted Gamma 2-parameters models and only cytosine sites with critical values $TV_d > 0.926$ are considered DMPs.
nlms.g2p <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L, dist.name = "Gamma2P")
# Potential DMPs from 'Gamma2P' model
DMPs.g2p <- getPotentialDIMP(LR = divs, nlms = nlms.g2p, div.col = 9L, tv.cut = 0.926, tv.col = 7, alpha = 0.05, dist.name = "Gamma2P")
nlms.g2p$T1
##       Estimate   Std. Error   t value  Pr(>|t|)  Adj.R.Square
## shape 0.3866249  0.0001480347 2611.717  0        0.999998194156282
## scale 76.1580083 0.0642929555 1184.547  0
##       rho               R.Cross.val       DEV
## shape 0.999998194084045 0.998331895911125 0.00752417919133131
## scale
##       AIC              BIC               COV.alpha    COV.scale
## shape -265404.29138371 -265369.012270572 2.191429e-08 -8.581717e-06
## scale -8.581717e-06    4.133584e-03
##       COV.mu df
## shape NA     49998
## scale NA     49998
Summary table:
data.frame(ft = unlist(lapply(ft, length)),
           ft.hd = unlist(lapply(ft.hd, length)),
           ecdf = unlist(lapply(DMP.ecdf, length)),
           Weibull = unlist(lapply(DMPs.wb, length)),
           Gamma = unlist(lapply(DMPs.g2p, length)))
##    ft   ft.hd ecdf Weibull Gamma
## C1 1253 773   63   756     935
## C2 1221 776   62   755     925
## C3 1280 786   64   768     947
## T1 2504 1554  126  924     1346
## T2 2464 1532  124  942     1379
## T3 2408 1477  121  979     1354
6. Density graphic with a new critical value
The graphics for the empirical (in black) and Gamma (in blue) density distributions of the Hellinger divergence of methylation levels for sample T1 are shown below. The 2-parameter gamma model is built using the parameters estimated in the non-linear fit of the $H$ values from sample T1. The critical value estimated from the 2-parameter gamma distribution, $H^{\Gamma}_{\alpha=0.05}=124$, is more 'conservative' than the critical value based on the empirical distribution, $H^{Emp}_{\alpha=0.05}=114.5$. That is, according to the empirical distribution, for a methylation change to be considered a signal its $H$ value must satisfy $H\geq114.5$, while according to the 2-parameter gamma model any cytosine carrying a signal must satisfy $H\geq124$.
suppressMessages(library(ggplot2))
# Some information for graphic
dt <- data[data$sample == "T1", ]
coef <- nlms.g2p$T1$Estimate # Coefficients from the non-linear fit
dgamma2p <- function(x) dgamma(x, shape = coef[1], scale = coef[2])
qgamma2p <- function(x) qgamma(x, shape = coef[1], scale = coef[2])
# 95% quantiles
q95 <- qgamma2p(0.95) # Gamma model based quantile
emp.q95 = quantile(divs$T1$hdiv, 0.95) # Empirical quantile
# Density plot with ggplot
ggplot(dt, aes(x = HD)) +
  geom_density(alpha = 0.05, bw = 0.2, position = "identity", na.rm = TRUE, size = 0.4) +
  xlim(c(0, 150)) +
  stat_function(fun = dgamma2p, colour = "blue") +
  xlab(expression(bolditalic("Hellinger divergence (HD)"))) +
  ylab(expression(bolditalic("Density"))) +
  ggtitle("Empirical and Gamma densities distributions of Hellinger divergence (T1)") +
  geom_vline(xintercept = emp.q95, color = "black", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = emp.q95 - 20, y = 0.16, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Emp==114.5)',
           family = "serif", color = "black", parse = TRUE) +
  geom_vline(xintercept = q95, color = "blue", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = q95 + 9, y = 0.14, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Gamma==124)',
           family = "serif", color = "blue", parse = TRUE) +
  theme(axis.text.x = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(1, 0, 1, 0, unit = "pt")),
        axis.text.y = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(0, 0.1, 0, 0, unit = "mm")),
        axis.title.x = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        axis.title.y = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        legend.title = element_blank(),
        legend.margin = margin(c(0.3, 0.3, 0.3, 0.3), unit = 'mm'),
        legend.box.spacing = unit(0.5, "lines"),
        legend.text = element_text(face = "bold", size = 12, family = "serif"))
References
Steerneman, Ton, K. Behnen, G. Neuhaus, Julius R. Blum, Pramod K. Pathak, Wassily Hoeffding, J. Wolfowitz, et al. 1983. "On the total variation and Hellinger distance between signed measures; an application to product measures." Proceedings of the American Mathematical Society 88 (4). Springer-Verlag, Berlin-New York: 684–84. doi:10.1090/S0002-9939-1983-0702299-0.
Sanchez, Robersy, and Sally A. Mackenzie. 2016. "Information Thermodynamics of Cytosine DNA Methylation." Edited by Barbara Bardoni. PLOS ONE 11 (3). Public Library of Science: e0150427. doi:10.1371/journal.pone.0150427. |
To get something resembling the binomial theorem you have to eliminate that $x$ factor in the terms. Well, when we express the binomial coefficient as factorials, we have a factor of $x!$ in the denominator, so we can cancel except when $x=0$, but fortuitously ...
$$\begin{align}\mu^z\sum\limits_{x=0}^z\dfrac{z!x(\frac{\lambda}{\mu})^x}{x!(z-x)!}~&=~\mu^z \sum_{x=1}^z\dfrac{z!x(\frac{\lambda}{\mu})^{x}}{x!(z-x)!} &&\text{term for }x=0\text{ is }0\\[1ex] &=~ \mu^z \sum_{x=1}^z\dfrac{z!(\frac{\lambda}{\mu})^{x}}{(x-1)!(z-x)!}&&\text{cancelling common factor}\end{align}$$
And you can proceed from there until you have something familiar.
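As a numeric sanity check: the substitution $m=x-1$ turns the last sum into $z\lambda(\lambda+\mu)^{z-1}$ (this closed form is my own working of where the hint leads, so treat it as a spoiler):

```python
from math import comb

def lhs(z, lam, mu):
    # mu^z * sum_x  x * (lam/mu)^x * z! / (x! (z-x)!), with comb(z, x) = z!/(x!(z-x)!)
    return mu**z * sum(x * (lam / mu)**x * comb(z, x) for x in range(z + 1))

def rhs(z, lam, mu):
    # closed form after the substitution m = x - 1 (my own derivation)
    return z * lam * (lam + mu) ** (z - 1)

print(abs(lhs(5, 2.0, 3.0) - rhs(5, 2.0, 3.0)) < 1e-9)  # True
```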
(Hint: use a change of variables.) |
This may be resolved by identifying $B_4$ with the mapping class group of the 4-punctured disk (fixing the boundary). I.e. $B_4\cong Mod(D^2-\{p_1,p_2,p_3,p_4\})$. I will consider isotopy classes of arcs in $D^2-\{p_1,p_2,p_3,p_4\}$ with endpoints on $\partial D^2$ and how they intersect each other, and the action of $B_4$ on the isotopy classes of arcs. In your pictures defining the generators of $B_4$, I'm assuming the points $p_i$ are numbered in order from top to bottom. Choices of isotopy classes of arcs may be made by choosing a complete hyperbolic metric on the punctured disk so that the boundary is totally geodesic, and making the arcs geodesics. Then the intersection number between pairs of arcs will be minimized by these representatives. Given an arc $w$ and $\sigma \in B_4$, I'll abuse notation $\sigma(w)$ to mean the arc made geodesic after applying $\sigma$ to the arc $w$.
Consider the arcs $x, y, z$:
Then we have that $stab(y)=\langle a,c\rangle$, $stab(x)=\langle b,c\rangle$, $stab(x\cup y)=\langle c\rangle$, which may be seen again by regarding the braid group as a mapping class group.
What you would like to know is the intersection of the double cosets $$U=( \langle a,c\rangle\cdot\langle b,c\rangle ) \cap ( \langle b,c\rangle\cdot\langle a,c\rangle ).$$
Suppose that we have $\gamma \in U$. Then one may see that $\gamma(x)\cap y =\emptyset$ and $\gamma(y)\cap x = \emptyset$. Moreover $\gamma(x)\cap \gamma(y)=\emptyset$ since $x\cap y=\emptyset$. Write $\gamma=\alpha \beta, \alpha\in \langle a,c\rangle, \beta \in \langle b,c\rangle$. Then $\beta(x)=x$ and $\alpha(y)=y$, so we see that $\gamma(x)\cap y =\alpha\beta(x)\cap \alpha(y) =\alpha(x)\cap \alpha(y)=\emptyset$, and similar for the other intersection.
Consider the subsurface $P\subset D^2$ obtained by cutting $D^2$ along $x \cup y$, and keeping the middle piece containing $p_2, p_3$. Then $P'=P-\{p_2,p_3\}$ is a twice-punctured disk, and there is only one isotopy class of essential arc with endpoints on $\partial P'$. Then each component of $\gamma(x)\cap P', \gamma(y)\cap P'$ must be isotopic to this arc (the case when one of the arcs is boundary parallel may be dealt with in a similar fashion). However, all but one component of $\gamma(x)\cap P'$ must have endpoints on $x\subset \partial P'$, and all but one component of $\gamma(y) \cap P'$ must have endpoints on $y\subset \partial P'$. Up to composing $\gamma$ with an element of $\langle c\rangle = Mod(P')$, we must see a picture like this:
Thus replace $\gamma$ by $c^{-i}\gamma$ if necessary to get this configuration. We see now that $\gamma(x)\cap z=\emptyset$, $\gamma(y)\cap z=\emptyset$. Now focus on the subsurface $L\subset D^2$ obtained by cutting along $z$ and taking the left piece containing $p_1, p_2$. There is only one isotopy class of arc $x$ and $\gamma(x)$ in $L'=L-\{p_1,p_2\}$. But the endpoints of these arcs lie in $\partial L' - z$, and hence we see a picture like this:
Thus we see that up to composing with a power of $a$ (which generates $Mod(L')$), we may assume that $\gamma(x)=x$. Similarly, up to composing with a power of $b$, we may assume that $\gamma(y)=y$.
But now $stab(x \cup y) =\langle c\rangle$ in $B_4$. So we've shown that $\gamma = c^i a^j b^k c^l$.
Now we have $$\gamma = c^ia^jb^kc^l = \alpha \beta, \alpha\in\langle a,c\rangle, \beta\in \langle b,c\rangle.$$
Then $$\alpha^{-1} c^i a^j = \beta c^{-l} b^{-k} \in \langle a,c\rangle\cap \langle b,c\rangle = stab(x\cup y)=\langle c\rangle.$$
Thus $\alpha = c^i a^j c^{-m}, \beta =c^m b^k c^l$, as desired. |
The Algebraic Hartogs Lemma states that in a Noetherian normal scheme, a rational function that is regular outside a closed subset of codimension at least two is in fact regular everywhere.
In a research problem I was working on recently, I was (following suggestions by my advisor) using this to prove that a particular section of a line bundle existed on a space $\mathbb{A}^n_H$. The argument worked well when $H$ was a point. For more general $H$, I could show that the section was defined except on a codimension-two subset of every fiber; but it was not immediately obvious to me how to go from there, to showing that the section was defined "on all fibers simultaneously," i.e., over $\mathbb{A}^n_H$. This was especially problematic in that the base scheme $H$ in question was a component of a Hilbert scheme, and thus (as shown by Ravi Vakil) capable of exhibiting arbitrarily bad behavior. In particular, the fact that a composition of normal morphisms is normal (see EGA IV.2, section 6.8) would not come close to covering my situation.
In order to deal with this, I have obtained what seems to be a proof of the following statement:
Lemma(Relative Algebraic Hartogs' Lemma) Let $X \to S$ be a flat, finite-type morphism of Noetherian schemes such that every associated fiber is normal. Let $\mathscr{L}$ be a line bundle on $X$. Suppose that $U \subset X$ is an open subscheme such that (i) $U$ contains all the associated points of $X$, (ii) for every $s \in S$, $U \cap X_s$ contains all the associated points of $X_s$, and (iii) for every associated point $\eta$ of $S$, $U$ contains all but a codimension-two closed subset of $X_{\eta}$. Then the restriction map $$\Gamma(X,\mathscr{L}) \to \Gamma(U,\mathscr{L})$$ is an isomorphism.
(I'm using "associated fiber" to mean "fiber over an associated point," by analogy with the standard term "generic fiber.")
I should note that suggestions of Will Sawin in this answer were invaluable in coming up with the proof.
This lemma is surprisingly strong, in that two key hypotheses--normality, and codimension-two-ness--only need to hold "generically" (i.e., in the fibers over the associated points). Another interesting feature is that, if you look at the proof, the fibers do not actually have to be normal, as long as they "satisfy the Hartogs property"--which I understand is equivalent to satisfying the S2 condition. I'm also not convinced that the finite-type hypothesis is necessary, but I have not yet verified the argument that I suspect would remove it.
The proof is not incredibly long (4 pages from me; probably less from a more experienced mathematician who knows what details to leave out), uses only techniques that are extremely well known, and so far as I can tell, does not use them in an especially clever way. Thus, it seems likely that someone, at some point, has already written up a similar result. However, neither I, nor my advisor, nor anyone else I have spoken to was familiar with it (which suggests to me that, if nothing else, the result is less well-known than it should be). Hence, my question:
Question: Does anyone know of a similar statement in the literature? (To be safe, I'll also ask if anyone knows of any counterexamples, although I think the proof is solid.)
One final note: From what I understand, the popularity of the name "algebraic hartogs lemma" is quite recent, possibly a result of its use in Ravi Vakil's notes, so a similar result in the older literature would probably not be called by the word "Hartogs."
Update: I have posted a proof of the Lemma above. |
6. Series 27. Year
(2 points)1. anticore
There are two homogeneous non-rotating planets shaped like perfect spheres with outer radius $R_{Z}$. The first is a full sphere with density $ρ$, and on its surface the gravitational acceleration is $a_{g}$. The second is hollow up to half of its radius and solid from there out.
If both planets were made of the same homogeneous material, on the surface of which planet would the gravitational acceleration be greater, and what would be the ratio of the gravitational accelerations on the two planets? If the gravitational acceleration on the surface of the second planet is to be $a_{g}$, what does the density of the second planet have to be?
Karel created something astrological again with a hollow earth.
(2 points)2. go west
More than a hundred years ago, the measurements of surveyors confirmed that when we sail west, gravimeters show higher values of gravitational acceleration than when travelling east. Determine the difference on the equator between the value measured at rest (relative to the Earth) and the value measured when travelling westwards at 20 knots.
Mirek was wondering why people don't migrate eastwards.
(4 points)3. Sphere and shell
Consider a solid copper sphere and a hollow copper shell (so thin that its thickness can be neglected). Both have the same radius at room temperature. How will their radii change if we begin warming them up? (Find the relation between the radius and the temperature and comment on it.) For the copper shell, assume it has small openings which ensure that the inside and outside pressures are the same.
Karel was inspired by the book Physics for Scientists and Engineers by Serway & Jewett.
(4 points)4. insatiable spider
In a dark corner there lurks a spider that has just caught a fly and is slowly devouring it. Assume that the consumption follows the reaction scheme:
$$\;\mathrm{A} + \mathrm B \mathop{\rightleftharpoons}_{k_{-1}}^{k_1} \mathrm{AB} \stackrel{k_2}{\longrightarrow} \mathrm C + \mathrm B\,,$$
where A is the fly substrate, B are the digestive compounds (there is always enough of them) and C is the product of digestion. AB denotes the unstable intermediate product. Each reaction is of the first order; in other words, its rate is directly proportional to the concentration of the reactant. Determine how long it will take the spider to digest the fly and begin hunting again, if its receptors tell it that it is hungry once the substrate drops to 10 % of its original value. Tip Use the steady-state approximation for the intermediate product.
Mirek reminiscing about Bestvina.
(4 points)5. toilet roll
We put a roll of paper into a bearing (without friction) and we let the paper unroll itself (we neglect the sticking of layers to each other and the weight of the bearing). What is the angular velocity of the roll after all the paper has unrolled? We know the radius and mass of the roll, the linear density of the paper, its overall mass and its length. Assume that the paper can unroll into an infinitely deep pit.
Bonus: Now consider that the paper will fall to the ground before it all unrolls.
Lukáš came up with this problem when reading Michal's toilet problem.
(5 points)P. light according to the norms
Design a placement of lights over a table so that you fulfill the norms for lighting. You have enough compact fluorescent lamps with a luminous flux of $P=1400\;\mathrm{lm}$. The norms say that for usual work the illuminance of the workplace should be $E=300\;\mathrm{lx}$. The lamps can be placed in any position on the ceiling at a height of $H=2\;\mathrm{m}$ above the work desk. For simplicity's sake, consider a square work area with a side of $a=1\;\mathrm{m}$ and consider each lamp to be an isotropic source of light. Neglect reflection and dispersion of light.
Karel was thinking about the norms of the EU.
(8 points)E. gelatinous speed of light
Determine the speed of light in a translucent gelatinous cake that you make yourself. Don't forget to describe what it's composed of.
Hint: Get yourself a microwave or a laser
Karel was going through different physical websites on the internet and found http://www.sciencebuddies.org/science-fair-projects/project_ideas/Phys_p009.shtml
(6 points)S. series
How will the spectrum of an open string look at the mass level $M^2=2/\alpha'$? How many possible states does the string have on this level? If we consider the interaction of tachyons with other strings, we would find out that we can describe it approximately as a particle moving in some potential. We consider a model of a string that is fastened to an unstable D-brane. The relevant potential of the tachyon is defined by
$$V(\phi)=\frac{1}{3\alpha'}\frac{1}{2\phi _0}(\phi-\phi _0)^2\left (\phi +\frac{1}{2}\phi _0\right )\,,$$
where $$\alpha'$$
The theory of superstrings enables the description of fermions. For their description one needs anticommuting variables. For those one uses an anticommutator instead of a commutator, given by the relation
$$\{A,B\}=AB+BA$$
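A standard pair of anticommuting $2\times2$ matrices (my addition, which the truncated statement below presumably asks for) is given by the Pauli matrices, and the relation can be checked numerically:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli sigma_x
sy = np.array([[0, -1j], [1j, 0]])               # Pauli sigma_y

def anti(A, B):
    return A @ B + B @ A   # {A, B} = AB + BA

print(np.allclose(anti(sx, sy), 0))              # True: they anticommute
print(np.allclose(anti(sx, sx), 2 * np.eye(2)))  # True: sx^2 = identity
```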
Find two such $$2\times 2$$
… |
When reading the prefaces of many books devoted to the theory of inequalities, I found one thing repeatedly stated: Inequalities are used in all branches of mathematics. But seriously, how important are they? Having finished a standard freshman course in calculus, I have hardly ever used even the most renowned inequalities like the Cauchy-Schwarz inequality. I know that this is due to the fact that I have not yet delved into the field of more advanced mathematics, so I would like to know just how important they are. While these inequalities are usually concerned with a finite set of numbers, I guess they must be generalised to fit into subjects like analysis. Can you provide some examples to illustrate how inequalities are used in more advanced mathematics?
I have a feeling that what you actually seek are examples of "famous" inequalities being put into good use and not just the notion of inequality as a general attribute.
For, if we stick to the notion of inequality in general, a prime example of why that is essential is in defining the real line.
Dedekind cuts that define the reals are partitions of the ordered field $\Bbb{Q}$. If you do not have an ordering (inequality relationships) amongst its members, you cannot define $\Bbb{R}$ in this way.
For an example of an inequality as a "formula", consider the $ML$-inequality (estimation lemma), used in complex analysis, which gives an upper bound for a contour integral and thus has a variety of applications.
If $f$ is a complex-valued, continuous function on the contour $\Gamma$, the arc length of $\Gamma$ is $l(\Gamma)=L$, and the absolute value of $f$ is bounded by a constant $M$, i.e. $|f(z)|\leq M$ for all $z$ on $\Gamma$, then it holds that $$\left|\int_\Gamma f(z)\,dz\right|\leq ML$$
Inequalities are extremely useful in mathematics, especially when we deal with quantities that we do not know exactly what they equate to. For example, let $p_n$ be the $n$-th prime number. We have no nice formula for $p_n$. However, we do know that $p_n \leq 2^n$. Often, one can solve a mathematical problem by estimating an answer, rather than writing down exactly what it is. This is one way inequalities are very useful.
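A quick empirical check of the bound $p_n \leq 2^n$ (a sketch using a simple sieve of Eratosthenes):

```python
def primes_upto(limit):
    # Sieve of Eratosthenes returning all primes <= limit
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_upto(1000)
# p_n <= 2^n for the first 16 primes (the list is 1-indexed in the math, so ps[n-1]):
print(all(ps[n - 1] <= 2**n for n in range(1, 17)))  # True
```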
There are a lot of inequalities in mathematics that are more or less important, for a list you can see here.
It is not simple to establish a rank of importance for them, but I think that the most important is the triangle inequality. In the simplest form this inequality states that, for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. This captures a fundamental character of the notion of distance that agrees with our intuition in Euclidean space. But it can be generalized to more abstract spaces (such as the spaces of functions in functional analysis) so that also in these spaces we can define a notion of distance.
To prove the triangle inequality in these spaces we need some other inequalities, and the most relevant are the Hölder and Minkowski inequalities, which are used to prove when a norm can be defined on a vector space, and, from this norm, a distance.
I think they're most important because of limits. I'm sure you've done limits in your calculus class. Limits are extremely important in maths - they're not just used to define derivatives and integrals.
There's a whole branch of mathematics called analysis which deals with limits. Sometime in the 19th century, mathematicians tried to understand calculus more rigorously, and they came up with a formal definition of a limit.
$\lim \limits_{n \rightarrow \infty} a_n = a \iff \forall \varepsilon > 0, \exists N \in \mathbb{N} \text{ s.t. } \forall n \geq N, |a_n - a| < \varepsilon $
(In simple words, if you have a sequence of numbers, say $ \dfrac{1}{2}, \dfrac{3}{4}, \dfrac{7}{8}, \dfrac{15}{16},\dots$ which tends to a number ($1$ in this example), then after a certain point ($N$) all terms lie within any given range ($\varepsilon$) of the limit.)
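A small worked instance of the definition for this very sequence $a_n = 1 - 2^{-n}$ (my own illustrative choice of $N(\varepsilon)$):

```python
from math import ceil, log2

def a(n):
    return 1 - 2.0**-n   # the sequence 1/2, 3/4, 7/8, ... with limit 1

# Since |a_n - 1| = 2^-n, the witness N = ceil(log2(1/eps)) + 1 works:
for eps in (0.1, 0.01, 1e-6):
    N = ceil(log2(1 / eps)) + 1
    assert all(abs(a(n) - 1) < eps for n in range(N, N + 1000))
print("definition witnessed for each epsilon")
```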
Because of this, analysis is all about inequalities - including the triangle inequality and the Cauchy-Schwarz. They're very useful.
There are other places, such as Computer Science, where you define the order of growth of an algorithm (Big-O notation), and Operations Research, where inequalities put constraints on maximisation/minimisation problems, e.g. find the best portfolio of investments given that you may invest at most $1000. (The last one requires a good knowledge of theoretical probability - which is basically analysis and inequalities.) |
If following is true, give a proof. If it is false, give a counterexample.
(a) If $f$ and $g$ are injective, then $g\circ f$ is injective
(b) If $g\circ f$ is surjective, then $g$ is surjective
Question: I think both are true for the specific case that $f:A\rightarrow{B},g:B\rightarrow{C}$, but this only includes functions s.t. co-domain(f) = domain(g); I'm not sure about the more general case $f:A\rightarrow{R},g:B\rightarrow{R}$
what I get so far:
a)Proof.(for general cases)
let $f:A\rightarrow{R},g:B\rightarrow{R},g \circ f:C\rightarrow{R}$
(s.t. A is the domain that f is defined on
and B is the domain that g is defined on)
Assume f and g are injective
Show $g\circ f$ is injective
By assumption
Have $\forall x_1,x_2\in A,x_1\neq x_2\rightarrow{f(x_1)\neq f(x_2)}$
and $\forall x_3,x_4\in B,x_3\neq x_4\rightarrow{g(x_3)\neq g(x_4)}$
We want to show that:
$\forall x_5,x_6\in C,x_5\neq x_6\rightarrow{g(f(x_5))\neq g(f(x_6))}$
...(but I don't know how to prove)
b)Proof.
let $f:A\rightarrow{R},g:B\rightarrow{R},g \circ f:C\rightarrow{R}$
Assume $\forall y \in R, \exists x_1\in C, g(f(x_1))=y$
Show $\forall y \in R, \exists x_2 \in B, g(x_2)=y$
...
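A brute-force check of both claims over small finite sets (my own sketch, not part of the original attempt) can build confidence before writing the proof:

```python
from itertools import product

def injective(f, dom):
    return len({f[x] for x in dom}) == len(dom)

def surjective(f, dom, cod):
    return {f[x] for x in dom} == set(cod)

def check(A, B, C):
    # enumerate every f: A -> B and g: B -> C as dicts
    for fv in product(B, repeat=len(A)):
        for gv in product(C, repeat=len(B)):
            f = dict(zip(A, fv)); g = dict(zip(B, gv))
            gof = {x: g[f[x]] for x in A}
            if injective(f, A) and injective(g, B):
                assert injective(gof, A)        # claim (a)
            if surjective(gof, A, C):
                assert surjective(g, B, C)      # claim (b)

check(range(2), range(2), range(2))
check(range(3), range(3), range(2))
print("no counterexamples on these small sets")
```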
Definitions I'm using: $f:A\rightarrow{B}$:
$f$:domain $\rightarrow$ co-domain
domain:
Subset of R that f is defined on
(for example, domain of $\frac{1}{x}$ is R without $0$)
co-domain:
R as default
range:
Outputs of f as a subset in co-domain
injective:
Let $f:A\rightarrow{B}$
f is injective iff $\forall x_1,x_2\in A,x_1\neq x_2\rightarrow{f(x_1)\neq f(x_2)}$
surjective:
Let $f:A\rightarrow{B}$
f is surjective iff $\forall y \in B,\exists x\in A,f(x)=y$
(In other words: its range is the same as its co-domain) |
Let $f(x)=\frac{2x}{1+x}$ be a function and $x_0 > 0$. With the help of this, form the sequence $x_{n+1}=f(x_n)$. Is $(x_n)$ convergent, and if yes, what is the limit?
Thank you very much in advance!
This is a different method that you will be able to use in cases where it is not possible to obtain a closed formula for $x_n$.
The function $f$ is increasing on $[0,\infty)$. It cuts the line $y=x$ (the diagonal of the first quadrant) at two points: $(0,0)$ and $(1,1)$. If $0<x<1$, then $x<f(x)<1$, while if $x>1$, then $1<f(x)<x$.
Assume $0<x_0<1$. Then show that the sequence $\{x_n\}$ is increasing and bounded above by $1$. Use one of the theorems you must have learned to show that $\{x_n\}$ is convergent to some number $\ell\in(0,1]$. Now take limits on both sides of the recurrence relation $x_{n+1}=f(x_n)$ to conclude that $f(\ell)=\ell$, and from this that $\ell=1$.
If $x_0=1$ then $x_n=1$ for all $n$. I leave to you the case $x_0>1$.
It's not difficult to prove by induction that $ x_{n+k} = \frac{2^{k}x_n}{1 + (2^k -1)x_n} $ for any $ k \geq 0 $ and any $ n $.
So we have that $ x_k = \frac{2^k x_0}{1 + (2^k - 1)x_0} $ for $ k \geq 0 $
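The closed formula can be checked against direct iteration, and it also exhibits the limit (a quick numeric sketch):

```python
def f(x):
    return 2 * x / (1 + x)

def closed(k, x0):
    # the induction formula: x_k = 2^k x_0 / (1 + (2^k - 1) x_0)
    return 2**k * x0 / (1 + (2**k - 1) * x0)

x = x0 = 0.3
for k in range(1, 30):
    x = f(x)
    assert abs(x - closed(k, x0)) < 1e-9   # iteration matches the closed form
print(round(x, 6))  # 1.0 -- the sequence tends to the fixed point 1
```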
Does this help? |
Can quantum entanglement occur between two unlike particles, like one photon and one electron? Or one proton and one electron?
Yes, entanglement does occur between two unlike particles. For example, in the lowest-energy state of a hydrogen atom, the spins of the electron and proton are entangled with each other. To be specific, they are in the superposition $$ |\psi\rangle\sim \big|\uparrow\,\downarrow\big\rangle - \big|\downarrow\,\uparrow\big\rangle\tag{1}$$ where the first arrow indicates the spin direction of the electron and the second arrow indicates the spin direction of the proton. (Reference: Griffiths, Introduction to Quantum Mechanics, section 6.5, "Hyperfine splitting.") For simplicity, I'm only showing the spin degrees of freedom here.
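One way to see that state (1) is entangled (my own numerical sketch, not from Griffiths): reshape the amplitudes into a $2\times2$ matrix and count its nonzero singular values (the Schmidt rank); a product state would have rank 1:

```python
import numpy as np

# Singlet |psi> = (|ud> - |du>)/sqrt(2) on the basis (uu, ud, du, dd):
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

# Schmidt rank = number of nonzero singular values of the coefficient matrix:
svals = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
print(np.count_nonzero(svals > 1e-12))  # 2  -> entangled (rank 1 would be a product state)
```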
Another example is positronium, a short-lived bound state of an electron and a positron (anti-electron). In positronium, the electron and positron form a two-particle "orbital" around their center of mass, so their locations are entangled with each other. |
The question may not be entirely well-defined, in the sense that to ask for a way to compute $C(U)$ from a decomposition of $U$ you need to specify the set of gates that you are willing to use. Indeed, it is a known result that any $n$-qubit gate can be exactly decomposed using $\text{CNOT}$ and single-qubit operations, so that a naive answer to the question ...
Getting an optimal decomposition is definitely an open problem. (And, of course, the decomposition is intractable, $\exp(n)$ gates for large $n$.) A "simpler" question you might ask first is what is the shortest sequence of cnots and single qubit rotations by any angle, (what IBM, Rigetti, and soon Google currently offer, this universal basis of gates can ...
The answer you mention references Michael Nielsen and Isaac Chuang's book, Quantum Computation and Quantum Information (Cambridge University Press), which does contain a proof of the universality of these gates. (In my 2000 edition, this can be found on p. 194.) The key insight is that the $T$ gate (or $\pi/8$ gate), together with the $H$ gate, generates ...
Universality can be a very subtle thing which is quite tricky to prove. There are usually two options for proving it: show directly, using your chosen gates, how to construct any arbitrary unitary of arbitrary size (there's no constraint on the size of the construction, just that it can be done) to arbitrary accuracy (on some non-trivial sub space of the ...
Throughout this answer, the norm of a matrix $A$, $\left\lVert A\right\rVert$, will be taken to be the spectral norm of $A$ (that is, the largest singular value of $A$). The Solovay-Kitaev theorem states that approximating a gate to within an error $\epsilon$ requires $$\mathcal O\left(\log^c\frac 1\epsilon\right)$$ gates, for $c<4$ in any fixed number of ...
Disclosure: while I am not an experimental physicist, I am part of the NQIT project, which is aiming to develop quantum hardware which is suitable to realise scalable quantum computers. The architecture that we're investing most heavily in is optically linked ion traps. Ions represent some of the physically best understood systems to experimental and ...
Any classical one-bit function $f:x\mapsto y$, where $x\in\{0,1\}^n$ is an $n$-bit input and $y\in\{0,1\}$ is a one-bit output, can be written as a reversible computation, $$f_r:(x,y)\mapsto (x,y\oplus f(x))$$ (Note that any function of $m$ outputs can be written as just $m$ separate 1-bit functions.) A quantum gate implementing this is basically just the ...
Taking an $n$-mode simple harmonic oscillator (SHO) in a (Fock) space $\mathcal F = \bigotimes_k\mathcal H_k$, where $\mathcal H_k$ is the Hilbert space of a SHO on mode $k$.This gives the usual annihilation operator $a_k$, which act on a number state as $a_k\left|n\right> = \sqrt n\left|n-1\right>$ for $n\geq 1$ and $a_k\left|0\right> = 0$ and ...
The Solovay-Kitaev algorithm is not practical. It is very useful theoretically because it proves that once you have a "dense" set of quantum gates (i.e. a set with which you can approximate any other quantum gate), you can quickly approximate any quantum gate to arbitrary precision. In practice, the Solovay-Kitaev algorithm works as follows: Fill the space ...
The function that handles this is transpile(), which can be found in qiskit.compiler. When you call transpile(circuit, backend), it goes through the compilation process for the input circuit based on the backend you provide. It returns a new circuit that will be valid to run on the provided backend. You can then view this new circuit just like you would ...
XX couplers are necessary to make quantum annealing universal. https://arxiv.org/abs/0704.1287 As for fabricating them, I’m not too familiar with the hardware issues. Perhaps someone else can comment on that.
Although this might not answer your question completely, I think it might provide some direction of thinking. Here are two important facts: Any unitary $2^{n}\times 2^{n}$ matrix $M$ can be realized on a quantum computer with $n$ quantum bits by a finite sequence of controlled-NOT and single-qubit gates [1]. Suppose $U$ is a unitary $2\times 2$ matrix ...
Caveat. I can't be absolutely certain that no-one has contemplated a quantum XOR list before — but I can be pretty confident. On the theory side, the idea of data structures as granular as linked lists (of any description) is pretty low-level, and to my knowledge is not really the subject of research; and people working on architectures only dream of ...
What does a truth table for a QRCA look like?You don't want to know. It will be a gigantic complicated table that provides no insight whatsoever. At the very least you need to use boolean algebra instead of a table, but even that will be cumbersome and will require many intermediate values that ultimately are just a less-visual way of describing an ...
Suppose that an exact synthesis was possible for your provided unitary (the number-theoretic restrictions on the entries hold), and so the algorithms described in the question gave you a sequence of Clifford+T gates that implemented that unitary. As stated in the Giles-Selinger paper, you get a sequence that is very far from optimal. So at this point you have ...
In your question, you don't define $P(\theta)$ or $R_z(\theta)$. I'm going to assume:$$P(\theta)=\left(\begin{array}{cc} 1 & 0 \\ 0 & e^{i\theta} \end{array}\right)\qquad R_z(\theta)=\left(\begin{array}{cc} e^{-i\theta} & 0 \\ 0 & e^{i\theta} \end{array}\right).$$In this case, you simply have that$$R_z(\theta)=P(2\theta)e^{-i\theta}\equiv P(...
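With those definitions, the claimed identity can be checked numerically (a quick sketch; the matrix names follow the answer's notation, and the angle value is arbitrary):

```python
import numpy as np

def P(theta):
    # phase gate: diag(1, e^{i theta})
    return np.diag([1.0, np.exp(1j * theta)])

def Rz(theta):
    # z-rotation: diag(e^{-i theta}, e^{i theta})
    return np.diag([np.exp(-1j * theta), np.exp(1j * theta)])

theta = 0.7
# R_z(theta) equals P(2*theta) up to the global phase e^{-i theta}
assert np.allclose(Rz(theta), np.exp(-1j * theta) * P(2 * theta))
```

Since a global phase is unobservable, the two gates are physically equivalent.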
You can prove that one gate set is universal by showing how to construct another universal gate set out of it. For example, we know that {H, T, cNOT} is universal, so can you find a way of making cNOT out of {H, T, CPHASE}? (Hint: Yes) On the other hand, the best way to prove that a gate set is not universal is to show that you can simulate the evolution of ...
Talking about efficiency here isn't exactly a fair question: as you change n, the number of qubits in the Fourier transform, you're changing the gate that you're talking about using (because the smallest phase will be something like controlled-$Z(\pi/2^n)$). After all, if I can do controlled-$Z$ when I have two qubits, why would I suddenly lose the ability to ...
This paper gives a fairly complete answer to the question "given oracle access to U, implement the inverse of U": https://arxiv.org/abs/1810.06944 They give a protocol which implements U inverse with a number of queries that's linear in the dimension of U and show that this is essentially optimal. This seems to be fairly closely related to your question....
Nielsen and Chuang, pg 191 of the 10th anniversary edition:We have just shown that an arbitrary unitary matrix on a $d$-dimensional Hilbert space may be written as a product of two-level unitary matrices. Now we show that single qubit and CNOT gates together can be used to implement an arbitrary two-level unitary operation on the state space of $n$ ...
I found out the minimum number of ancillas is $n-2$. I found this line in the Qiskit source code of mct:
if len(ancillary_qubits) < len(control_qubits) - 2:
    raise AquaError('Insufficient number of ancillary qubits.')
Here's a silly method that works if you know $y$, you know the probability of measuring $y$, and you can efficiently generate arbitrary-size superpositions of the form $$\frac{1}{\sqrt{N}}\sum_{b < N}\vert b\rangle.$$ To do this, use a Grover-like search: you need two circuits $U_y$ and $U_0$, with the following action: $$U_y\vert x\rangle \vert \psi\...
You might like to look up about quantum cellular automata. These are systems where you can repeatedly apply the same global unitary operation to generate the circuit that you want. The circuit is specified by the initial (product) state that is operated on. In that sense, you achieve the inverse using the same sequence of unitaries, just by changing the ...
What would be the simplest thing that could be done to make it universal?See US Patent US9162881B2 "Physical realizations of a universal adiabatic quantum computer" or US Application US20150111754A1 "Universal adiabatic quantum computing with superconducting qubits" which is quoted here:Definition: Basis Throughout this specification and the appended ... |
I have to calculate the KL divergence between a distribution $q$ and a prior distribution $p$, both of which are univariate Gaussians, i.e. $KL(q|p), q \sim \mathcal{N}(\mu, \sigma^2), p \sim \mathcal{N}(\mu', \sigma'^2)$. This term is part of a larger formula, which is justified in some way not relevant to this question.
Now, to be honest, I don't want to put
any assumptions on $p$ except that it is Gaussian. My intuition is to just say that if I do that, I can just say $p=q$ and thus $KL(q|p) = 0$.
I wonder if there is a way to phrase this intuition into proper math. I read about non-informative priors, but found nothing about calculating KL divergences in this scenario.
Update:
I try to make the question more clear.
I have two distributions. One is the prior, the other driven by data. My prior is not tied to any parameter ranges (e.g. in the form of a conjugate prior), but only specified in its functional form (i.e. distribution family). In typical Bayesian frameworks, such a thing is called an informative prior. Does the same concept exist for KL-based objective functions?
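For reference, the KL divergence between two univariate Gaussians has a closed form, so the computational part of the question is mechanical; a minimal sketch (the function name is mine):

```python
import math

def kl_gauss(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL(q || p) for q = N(mu_q, sigma_q^2), p = N(mu_p, sigma_p^2)."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
            - 0.5)

# Setting p = q indeed gives 0, matching the intuition above.
assert kl_gauss(0.0, 1.0, 0.0, 1.0) == 0.0
```

The modelling question of which prior to pick remains separate: the formula only becomes a number once $\mu'$ and $\sigma'$ are fixed.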
It asks for the percentage of the entire budget....sooo
the entire budget is 3.8 trill = 3,800,000,000,000 (rounded to 4 trill); the foreign aid is rounded to 30,000,000,000
30,000,000,000/4,000,000,000,000 x 100% = .75 %
Can you do the other part without the rounding?
Sorry...I see you already did it ...BRAVO !
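The same arithmetic, with and without the rounding, can be written out directly (a quick check using the figures given above):

```python
budget = 3_800_000_000_000   # $3.8 trillion, as stated
aid = 30_000_000_000         # $30 billion, as stated

pct_rounded = aid / 4_000_000_000_000 * 100   # with the budget rounded to $4 trillion
pct_exact = aid / budget * 100                # without rounding

assert round(pct_rounded, 2) == 0.75
assert round(pct_exact, 2) == 0.79
```

So rounding the budget up to $4 trillion understates the percentage slightly (0.75% vs about 0.79%).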
So that is 7 200 000 000 people in 27,878,400 square feet
You want to know how many square feet / person. (area divided by number of people)
Just use the units to help you.
27,878,400 / 7,200,000,000 (square feet per person)
Now just do the division
27,878,400 / 7,200,000,000 = 0.003872 square feet/person
I assume you are learning about Dimensional analysis.....one more...
(we aren't really here to do your homework.....we are here to help YOU do YOUR homework....i.e. "learn")
18,690,000 bbl/day x 42 gal/bbl x 1/306,000,000 people = 2.565 gal/person each day
(Guest was correct...I had a typo and used 360 million people instead of the correct 306 million)
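The last conversion can be checked by letting the units guide the arithmetic (a quick sketch of the calculation above):

```python
barrels_per_day = 18_690_000   # bbl/day
gal_per_barrel = 42            # gal/bbl
people = 306_000_000           # population

# (bbl/day) * (gal/bbl) / people  ->  gal per person per day
gal_per_person_per_day = barrels_per_day * gal_per_barrel / people
assert round(gal_per_person_per_day, 3) == 2.565
```

Writing the units next to each factor, as above, is exactly the dimensional-analysis habit being taught: the "bbl" cancels, leaving gal/person/day.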
4√[ 3xy^3 ] / [ x^3y^4 ] =
[ 3^(1/4) * x^(1/4) * y^(3/4) ] / [ x^(12/4) *y^(16/4) ] =
3^(1/4) / [ x^(11/4) * y^(13/4) ] =
Multiply top/bottom by 3^(3/4) =
[ 3^(1/4) * 3^(3/4)] / [ x^(11/4) * y^(13/4) * 3^(3/4) ] =
3 / [ x^(11/4) * y^(13/4) * 27^(1/4) ]
Write back in radical form
3 / 4√[ 27x^11y^13 ]
4√[ 27xy^3] / [ 3xy^2] writing this in an exponential fashion, we have
[ (3^3)^(1/4) * x^(1/4) * y^(3/4) ] / [ 3xy^2] =
[ (3^3)^(1/4) * x^(1/4) * y^(3/4) ] / [ 3^(4/4) * x^(4/4) * y^(8/4) ]
[ 3^(3/4) * x^(1/4) * y^(3/4) ] / [ 3^(4/4) * x^(4/4) * y^(8/4) ] =
{ Using a^m / a^n = a^(m - n) }
1 / [ 3^(1/4) * x^(3/4) * y^(5/4) ] write back in radical form
1 / 4√[ 3x^3y^5 ]
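Both simplifications can be spot-checked numerically (a quick sketch; the sample values are arbitrary positive numbers):

```python
x, y = 2.0, 3.0

# First problem: 4th-root(3xy^3) / (x^3 y^4)  =  3 / 4th-root(27 x^11 y^13)
lhs1 = (3 * x * y**3) ** 0.25 / (x**3 * y**4)
rhs1 = 3 / (27 * x**11 * y**13) ** 0.25
assert abs(lhs1 - rhs1) < 1e-12

# Second problem: 4th-root(27xy^3) / (3xy^2)  =  1 / 4th-root(3 x^3 y^5)
lhs2 = (27 * x * y**3) ** 0.25 / (3 * x * y**2)
rhs2 = 1 / (3 * x**3 * y**5) ** 0.25
assert abs(lhs2 - rhs2) < 1e-12
```

Plugging in a couple of values like this is a cheap way to catch exponent slips when rationalizing radicals by hand.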
This question was inspired by this post https://math.stackexchange.com/questions/20717/how-to-find-solutions-of-linear-diophantine-ax-by-c/20738#20738
I didn't find the explanation that helpful so I was hoping someone could clarify or even put it into a more systematic setup where it can be solved by a matrix inverse.
Hi SpaceModo,
I'll do one of them.
\(\frac{2\sqrt[6]{2xy^3}}{xy^2}\\ =\frac{{2\sqrt[6]{2xy^3}}}{\sqrt[6]{x^6y^{12}}}\\ =\frac{{2\sqrt[6]{2}}}{\sqrt[6]{x^5y^{9}}}\times\frac{2^{5/6}}{2^{5/6}}\\ =\frac{{2*2}}{\sqrt[6]{x^5y^{9}}}\times\frac{1}{(2^5)^{1/6}}\\ =\frac{{4}}{\sqrt[6]{32x^5y^{9}}}\\ \)
That is one done, now you can copy the technique and try matching up the others.
If you do not understand this one just ask and see if you can specify what line is giving you trouble :)
What is your problem
wertyusop?
Someone is kind enough to give you an answer, you give no written response but give them a thumbs down.
That is plain rude!
If you do not like an answer then you politely state why. Then maybe you will get more help in a form that is useful to you.
----
Thank you for your answer, guest, it looked good to me.
I was wondering whether wertyusop wanted a calculus answer or some other answer... He/she did not specify.
The formula 180(n-2) gives the number of degrees in the angles of a convex polygon because n-2 triangles can be drawn (with no lines crossing) in a polygon with n sides, each triangle containing 180 degrees. In how many ways can a convex heptagon be divided into five triangles if each different orientation is counted separately?
I am not sure I understand the question ......
Wouldn't it just be 7 ways, one from each vertex? Perhaps you mean something different?
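If the intended count is instead the number of ways to split the heptagon into triangles by non-crossing diagonals (one possible reading of "divided into five triangles"; this interpretation is my assumption, not confirmed by the question), that count is a Catalan number:

```python
from math import comb

def catalan(n):
    # n-th Catalan number: C(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

# A convex polygon with k sides has C_{k-2} triangulations,
# so a heptagon (k = 7) has C_5 of them.
assert catalan(7 - 2) == 42
```

For a sanity check on the formula, a square (k = 4) has $C_2 = 2$ triangulations, one per diagonal.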
I have a random events generator. I know in advance the set of event that can be generated (in my case I have only three possible events). The probabilities of the events are not known. I need to estimate these probabilities. For that I run an experiment. For example I generate 20 events. So, I have a sequence of events. For example:
a,a,b,a,c,c,c,a,b,....c. Having the sequence I can count for the number of every event (so, in my case I get three integer numbers $n_1$, $n_2$ and $n_3$).
I can calculate the probabilities of every event in the following way:
$\nu_1 = \frac{n_1}{n_1+n_2+n_3}$ $\nu_2 = \frac{n_2}{n_1+n_2+n_3}$ $\nu_3 = \frac{n_3}{n_1+n_2+n_3}$
But these values are approximate. For example, I can have probability of 0.25 for the event a, so if I generate 20 events, I should get approximately 5 a-events. But just by chance I can get 10 or 0 a-events.
So, I want to have a density distribution of probabilities. Since $\nu_1$, $\nu_2$ and $\nu_3$ are dependent (their sum is equal to one), I am going to use $\nu_1$ and $\nu_2$. So, I want to have an explicit form for the density distribution of $\nu_1$ and $\nu_2$.
$\rho (\nu_1,\nu_2) = F (n_1,n_2,n_3)$.
Does anybody know where I can get it? |
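One standard route (a hedged sketch, assuming a uniform prior over the probability simplex; the counts below are hypothetical): with a multinomial likelihood and a flat prior, Bayes' rule gives a Dirichlet density over $(\nu_1,\nu_2)$ with parameters $n_i+1$.

```python
import numpy as np
from scipy.stats import dirichlet

n1, n2, n3 = 7, 5, 8   # hypothetical counts from a 20-event run
alpha = np.array([n1 + 1, n2 + 1, n3 + 1])  # posterior Dirichlet parameters, flat prior

def rho(nu1, nu2):
    """Posterior density over (nu1, nu2), with nu3 = 1 - nu1 - nu2 implied."""
    return dirichlet.pdf([nu1, nu2, 1.0 - nu1 - nu2], alpha)

# The density peaks at the empirical frequencies n_i / N.
assert rho(7 / 20, 5 / 20) > rho(0.2, 0.2)
```

A different choice of prior would shift the parameters (e.g. Jeffreys would use $n_i + 1/2$), so the explicit form $F(n_1,n_2,n_3)$ depends on that choice.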
More is the same.
Why is this an approximation
Why is this an approximation? Because the actual magnetic field, for example, is often not the average of all the magnetic dipoles.
Mean field theory often fails near critical points; that is, mean field theory is not precise enough to describe phase transitions in some low-dimensional systems. This gives us a very interesting thought.
Why does mean field theory work
Why does mean field theory work? From the viewpoint of mathematics, a potential can always be expanded around some value, which here is exactly the mean value of the field. For a Hamiltonian,
Mean field treatment is
where \(\sigma = \sum_i \sigma_i/N\) is the average spin configuration.
This looks just like taking the zeroth order of the spin-configuration expansion. We can also include the second order, which means adding the interaction of pairs of spins.
Note
Susceptibility is a parameter that shows how much an extensive parameter changes when an intensive parameter increases. Magnetic susceptibility is
Important
What makes the phase transition in such a system? A finite system has no phase transitions, because finitely many continuous functions can only make up a continuous function by addition. Phase transition happens when the correlation length becomes infinite. So this is all about correlations.
Important
Why is mean field theory an approximation? Because the actual spin is more or less different from the average of spin configuration. Fluctuations in the spin makes the difference.
The ideal gas is the simplest model. The Van der Waals model considers corrections to pressure and volume,
for n mole gas.
\(nb\) appears because molecules are not point particles; they take up space. In the Lennard-Jones potential model, b is 4 times the total volume of the molecules.
\(a n^2/V^2\) appears because in this model molecules attract each other when they get close. This attractive force makes it possible to have a phase transition: condensation of gas molecules.
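The displayed equation of state did not survive extraction; consistent with the \(nb\) and \(a n^2/V^2\) corrections described above, the standard van der Waals form for n moles is:

```latex
\left( P + \frac{a n^2}{V^2} \right) \left( V - n b \right) = n R T
```

Setting $a = b = 0$ recovers the ideal gas law $PV = nRT$, which is why the two terms are naturally read as corrections.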
This model is basically a kind of mean field theory which treats the corrections as a mean field. More specifically, we write the original Hamiltonian
as
in which \(\phi(r)\) is the average of potential and all particles interaction have the same value.
Onnes used a series expansion to write the equation of state,
This can be derived using Mayer function and cluster expansion.
A standard procedure for solving mechanics problems, as stated by Prof. Kenkre (which I don't really accept), is
Initial condition / Description of states -> Time evolution -> Extraction of observables
Density of states in phase space
Continuity equation
This conservation law becomes simpler if we drop the \(\rho \nabla\cdot \vec u\) term, using \(\nabla\cdot \vec u = 0\) for incompressibility.
Or more generally,
and here \(\vec j\) can take other definitions like \(\vec j = - D \partial_x \rho\).
This second continuity equation can represent any conservation law provided the proper \(\vec j\).
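The displayed conservation laws did not survive extraction; the standard forms consistent with the surrounding text are:

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \, \vec u) = 0 ,
\qquad \text{or more generally} \qquad
\frac{\partial \rho}{\partial t} + \nabla \cdot \vec j = 0 .
```

The first is the fluid continuity equation with $\vec j = \rho\,\vec u$; the second represents any conservation law once the appropriate current $\vec j$ (e.g. Fick's $\vec j = -D\,\partial_x \rho$) is supplied.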
From continuity equation to Liouville theorem
From continuity equation to Liouville theorem:
We start from
Divergence means
Then we will have the initial expression written as
Expand the derivatives,
Recall that Hamiltonian equations
Then
Finally, the convective time derivative becomes zero because \(\rho\) is not changing with time in a comoving frame, as for a perfect fluid.
Apply Hamiltonian dynamics to this continuity equation, we can get
which is very similar to quantum density matrix operator
That is to say, the time evolution is solved if we can find out the Poisson bracket of Hamiltonian and probability density.
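The evolution equations meant here are, in standard form (reconstructed, since the displayed equations did not survive extraction; the sign follows the convention $\{A,B\}=\sum_i (\partial_{q_i} A\,\partial_{p_i} B - \partial_{p_i} A\,\partial_{q_i} B)$):

```latex
\frac{\partial \rho}{\partial t} = \{ H, \rho \} ,
\qquad
i\hbar \, \frac{\partial \hat\rho}{\partial t} = [ \hat H, \hat\rho ] .
```

The formal parallel between the Poisson bracket and the commutator divided by $i\hbar$ is what makes the classical Liouville equation "very similar" to the quantum von Neumann equation.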
Liouville theorem;
Normalizable;
Hint
What about a system with constant probability for each state all over phase space? This is not normalizable. Such a system cannot really pick out a value. It seems that the probability to be on states with a constant energy is zero. So no such system really exists, I guess?
Like this?
Someone have 50% probability each to stop on one of the two Sandia Peaks for a picnic. Can we do an average for such a system?
Example by Professor Kenkre.
And one more for equilibrium systems, \(\partial_t \rho =0\).
It’s simply done by using the ensemble average
where \(i=1,2,..., 3N\). |
Abbreviation:
PIDom
A principal ideal domain is an integral domain $\mathbf{R}=\langle R,+,-,0,\cdot,1\rangle$ in which
every ideal is principal: $\forall I \in Idl(R)\ \exists a \in R\ (I=aR)$
Ideals are defined for commutative rings
Example 1: $\{a+b\theta \mid a,b\in \mathbb{Z}\}$, where $\theta=(1+\sqrt{-19})/2$, is a principal ideal domain that is not a Euclidean domain
See Oscar Campoli's “A Principal Ideal Domain That Is Not a Euclidean Domain” in The American Mathematical Monthly 95 (1988): 868-871
$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &1\\ f(4)= &1\\ f(5)= &1\\ f(6)= &0\\ \end{array}$ |
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"?Alternatively, where could I go in order to have such a question answered?
@tpg2114 For reducing the number of data points for calculating the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points.
@DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
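On the indexing questions (a sketch using assumed toy data, not the actual simulation output; requires scipy >= 1.6 for `correlation_lags`): `scipy.signal.correlate` in full mode returns an array of length `len(x) + len(y) - 1` (which is why 200-sample inputs give ~400 values), and `scipy.signal.correlation_lags` gives the matching lag axis, so the argmax picks out the lead/lag.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = np.concatenate([np.zeros(5), x])[:200]   # y is x delayed by 5 samples

corr = signal.correlate(y, x, mode="full")   # length 200 + 200 - 1 = 399
lags = signal.correlation_lags(len(y), len(x), mode="full")

lag = lags[np.argmax(corr)]                  # positive lag: y lags x
assert len(corr) == 399
assert lag == 5
```

On the sign puzzle: the raw correlation values here are unnormalized sliding dot products, so subtracting each signal's mean (and optionally dividing by the standard deviations) is what makes positive/negative values read as positive/negative correlation.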
Related:Why don't we just ban homework altogether?Banning homework: vote and documentationWe're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is
Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper) |
I have a set of points in an equatorial coordinate system specified by a longitude, $0^{\circ}\leq\alpha < 360^{\circ}$ and a latitude, $-90^{\circ}\leq \delta \leq +90^{\circ}$
I want to take a point with coordinates $(\alpha_1,\delta_1)$ and create a second coordinate system such that my point now has coordinates $(\alpha_2,\delta_2)$. I then want to apply that transformation to every point in the set.
Shifting in longitude with constant latitude is straightforward, the transformation is just $(\alpha,\delta)\rightarrow(\alpha+\alpha_2-\alpha_1,\delta)$, but shifting in latitude is more complicated due to behaviour around the poles.
There's a similar question here that rotates the equator of the coordinate system to intersect with the point while preserving the points longitude (i.e. shifting latitude with constant longitude). There is definitely a way to generalise the top answer to that question so that it solves my problem, I just can't seem to figure it out.
I can imagine a series of transformations such that I shift $(\alpha_1,\delta_1)$ to $(0,\delta_1)$ using the longtitude transformation I mention above, then use the solution to the other question to shift that to $(0,0)$, then use an inverse transformation to shift it to $(0,\delta_2)$ and then finally a simple longitude transformation to shift it to $(\alpha_2,\delta_2)$.
However, that sounds like an overly complicated solution.
There should (hopefully!) be a way to do it with a single coordinate transformation, which would be preferable since I have to write this in Python and more transformations mean more compute time. I'll be calculating and applying this transformation ~10,000 times to approximately 10 million points, so needlessly increasing the number of calculations has big flow-on effects.
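One way to get a single transformation (a sketch under an assumption: the minimal-angle rotation is acceptable, since the rotation carrying one point to another is not unique; any extra roll about the target direction also works): convert to Cartesian unit vectors, build one rotation matrix with Rodrigues' formula, and apply that matrix to all points in a single vectorized call.

```python
import numpy as np

def radec_to_cart(alpha_deg, delta_deg):
    """Unit vector for longitude alpha and latitude delta, in degrees."""
    a, d = np.radians(alpha_deg), np.radians(delta_deg)
    return np.array([np.cos(d) * np.cos(a), np.cos(d) * np.sin(a), np.sin(d)])

def rotation_taking(v1, v2):
    """Minimal-angle rotation matrix R with R @ v1 == v2 (Rodrigues' formula)."""
    axis = np.cross(v1, v2)
    s, c = np.linalg.norm(axis), np.dot(v1, v2)
    if s < 1e-12:                  # v1, v2 (anti-)parallel: needs separate handling
        raise ValueError("degenerate case: coincident or antipodal points")
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]]) / s
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

v1 = radec_to_cart(40.0, 25.0)    # (alpha_1, delta_1), hypothetical values
v2 = radec_to_cart(250.0, -40.0)  # (alpha_2, delta_2), hypothetical values
R = rotation_taking(v1, v2)
assert np.allclose(R @ v1, v2)
# For an (N, 3) array of points: rotated = points @ R.T, then
# delta = degrees(arcsin(z)), alpha = degrees(arctan2(y, x)) % 360.
```

Since the 10 million points become one matrix product per transformation, this avoids chaining the four separate shifts described above.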
Infinite Set has Countably Infinite Subset
Theorem
Let $S$ be an infinite set, and let $a_0 \in S$.
$S$ is infinite, so $\exists a_1 \in S, a_1 \ne a_0$, and $\exists a_2 \in S, a_2 \ne a_0, a_2 \ne a_1$, and so on.
That is, we can continue to pick elements out of $S$, and assign them the labels $a_0, a_1, a_2, \ldots$ and this procedure will never terminate as $S$ is infinite.
$\blacksquare$
Warning
The intuitive nature of this proof obscures the fact that it is not a trivial truth that one may choose elements of $S$ in this manner when $S$ is infinite.
In Zermelo-Fraenkel set theory, a rigorous application of the principle of mathematical induction would show that one can repeat the procedure any
finite number of times to construct a finite set $\left\{{a_0, a_1, \ldots, a_n}\right\}$.
However, in general, one needs the axiom of dependent choice to justify repeating such a procedure
indefinitely.
It should be noted that the weaker axiom of countable choice is sufficient to prove the stated theorem.
Let $S$ be an infinite set.
Suppose that there exists an injection $\psi: \N \to S$.
Let $T$ be the image of $\psi$.
Now, suppose that there exists a surjection $\phi: \N \to S$.
$\blacksquare$
This proof follows the same steps as the intuitive one, but with more formality.
Let $S$ be an infinite set.
First an injection $f: \N \to S$ is constructed.
Let $g$ be a choice function on $\powerset S \setminus \set \O$.
Then define $f: \N \to S$ as follows:
$\forall n \in \N: f \left({n}\right) = \begin{cases} \map g S & : n = 0 \\ \map g {S \setminus \set {\map f 0, \ldots, \map f {n - 1} } } & : n > 0 \end{cases}$
Therefore $f \sqbrk \N$ is infinite.
To show that $f$ is injective, let $m, n \in \N$, say $m < n$.
Then:
$\map f m \in \set {\map f 0, \ldots, \map f {n - 1} }$
but:
$\map f n \in S \setminus \set {\map f 0, \ldots, \map f {n - 1} }$
Hence $\map f m \ne \map f n$.
Thus $f \sqbrk \N$ is a countable subset of $S$.
$\blacksquare$
Let $S$ be an infinite set.
First an injection $f: \N \to S$ is constructed.
Let $f$ be a choice function on $\mathcal P \left({S}\right) \setminus \left\{ {\varnothing}\right\}$.
That is:
$\forall A \in \mathcal P \left({S}\right) \setminus \left\{{\varnothing}\right\}: f \left({A}\right) \in A$
This is justified only if the Axiom of Choice is accepted.
Let $A \in \mathcal C$.
Since $S$ is infinite it follows that $S \setminus A \ne \varnothing$.
So $S \setminus A \in \operatorname{Dom} \left({f}\right)$.
Let $g: \mathcal C \to \mathcal C$ be the mapping defined as: $g \left({A}\right) = A \cup \left\{{f \left({S \setminus A}\right)}\right\}$
That is, $g \left({A}\right)$ is constructed by joining $A$ with the element that $f$ chooses from $S \setminus A$.
Consider the Recursion Theorem applied to $g$, starting with the set $\varnothing$.
We obtain a mapping $U: \N \to \mathcal C$ such that:
$U \left({x}\right) = \begin{cases} \varnothing & : x = 0 \\ U \left({n}\right) \cup \left\{{f \left({S \setminus U \left({n}\right)}\right)}\right\} & : x = n^+ \end{cases}$
where here $\N$ is considered as elements of the minimal infinite successor set $\omega$.
Consider the mapping $v: \N \to S$, defined as: $\forall n \in \N: v \left({n}\right) = f \left({S \setminus U \left({n}\right)}\right)$
We have that, by definition of $v$:
$(1): \quad \forall n \in \N: v \left({n}\right) \notin U \left({n}\right)$
$(2): \quad \forall n \in \N: v \left({n}\right) \in U \left({n^+}\right)$
$(3): \quad \forall m, n \in \N: n \le m \implies U \left({n}\right) \subseteq U \left({m}\right)$
Then because $v \left({n}\right) \in U \left({m}\right)$ but $v \left({m}\right) \notin U \left({m}\right)$:
$(4): \quad \forall m, n \in \N: n < m \implies v \left({n}\right) \ne v \left({m}\right)$
Thus $v: \N \to S$ is an injection.
Thus $v \left({\N}\right)$ is the countable subset of $S$ that was required.
$\blacksquare$
Let $S$ be an infinite set.
For all $n \in \N$, let: $\mathcal F_n = \left\{{T \subseteq S : \left\vert{T}\right\vert = n}\right\}$
where $\left\vert{T}\right\vert$ denotes the cardinality of $T$.
For each $n$, $\mathcal F_n$ is non-empty; by the axiom of countable choice, choose $S_n \in \mathcal F_n$ for each $n$. Define: $\displaystyle T = \bigcup_{n \mathop \in \N} S_n \subseteq S$. Then $T$ is infinite.
$\blacksquare$
Comment Axiom of Countable Choice
This theorem depends on the Axiom of Countable Choice.
As such, mathematicians are generally convinced of its truth and believe that it should be generally accepted. |
Abbreviation:
MultLat
A multiplicative lattice (or $m$-lattice) is a structure $\mathbf{A}=\langle A,\vee,\wedge,\cdot\rangle$ of type $\langle 2,2,2\rangle$ such that
$\langle A,\vee,\wedge\rangle$ is a lattice
$\cdot$ distributes over $\vee$: $x(y\vee z)=xy\vee xz$, $(x\vee y)z=xz\vee yz$
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be … . A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x ... y)=h(x) ... h(y)$
An
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[Lattice-ordered semigroups]]
[[Lattices]] (reduced type)
[[Multiplicative semilattices]] (reduced type)
Caution: Heavy citing
"In his doctoral dissertation
Revankar (1967) expounded his generalized production functions that permit variability of returns-to-scale as well as elasticity of substitution. In contrast with the production functions that (rather unrealistically) assume the same returns to scale at all levels of output, Zellner and Revankar (1969) found a procedure to generalize any given (neoclassical) production function with specified constant or variable elasticities of substitution such that the resulting production function retains its specification as to the elasticities of substitution all along, but permits returns-to-scale to vary with the scale of output. Their Generalized Production Function (GPF) is given as $$P e^{\theta P} = c^h f^h$$ where $f$ is the basic function (e.g. Cobb-Douglas, CES, etc.) as the object of generalization, $c$ is the constant of integration and $\theta$, $h$ relate to parameters associated with the returns-to-scale function. In particular, if the Cobb-Douglas production function is generalized, we have $$P e^{\theta P} = A K^{\rho\alpha} L^{\rho(1-\alpha)}.$$ This function is interesting from the viewpoint of estimation also. It has to be estimated so as to maximize the likelihood function, since the Least Squares and Maximum Likelihood estimators of the parameters do not coincide. The returns-to-scale function is given by $$\rho(P) = \rho/(1 + \theta P).$$ Depending on the sign of $\theta$, the returns-to-scale function monotonically increases or decreases with increase in $P$. However, as we know, the returns to scale first increases with output, remains more or less constant in a domain and then begins falling. This fact is not captured by the Zellner-Revankar function, since it gives us a linear returns-to-scale function."
from
A Brief History of Production Functions by SK Mishra 2007
So, I would guess, the $\alpha$ parameter in your log-linearized function would have something to do with returns-to-scale.
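To make the monotonicity point above concrete (my own sketch, with made-up parameter values, not from Mishra's paper): the Zellner-Revankar returns-to-scale function $\rho(P)=\rho/(1+\theta P)$ moves in one direction only, and the direction is fixed by the sign of $\theta$:

```python
# Zellner-Revankar returns-to-scale function: rho(P) = rho / (1 + theta*P).
# Parameter values below are purely illustrative.
def rts(P, rho=1.2, theta=0.05):
    return rho / (1.0 + theta * P)

# With theta > 0, returns to scale fall monotonically as output P grows...
falling = [rts(P) for P in (0.0, 10.0, 100.0)]
assert falling[0] > falling[1] > falling[2]

# ...and with theta < 0 they rise instead (on the domain 1 + theta*P > 0),
# so the function can never first rise and then fall, as real data suggest.
rising = [rts(P, theta=-0.005) for P in (0.0, 10.0, 100.0)]
assert rising[0] < rising[1] < rising[2]
```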
Regarding estimation, I'm way too much of a newb. But this might be something at least similar to what you are looking for: https://link.springer.com/chapter/10.1007/3-540-28556-3_2
Variable Elasticity of Substitution and Economic Growth: Theory and Evidence by Giannis Karagiannis, Theodore Palivos, and Chris Papageorgiou, 2005 |
An example of methylation analysis with simulated datasets
Part 1: Methylation signal
Methylation analysis with Methyl-IT is illustrated on simulated datasets of methylated and unmethylated read counts with relatively high averages of methylation levels: 0.15 and 0.286 for the control and treatment groups, respectively. The main Methyl-IT downstream analysis is presented alongside the application of Fisher’s exact test. The importance of a signal detection step is shown.
1. Background
The Methyl-IT R package offers a methylome analysis approach based on information thermodynamics (IT) and signal detection. Methyl-IT treats the detection of differentially methylated cytosines as a signal detection problem. This approach was designed to discriminate a methylation regulatory signal from the background noise induced by molecular stochastic fluctuations. The package is not limited to the IT approach, but also includes Fisher’s exact test (FT), the root-mean-square statistic (RMST) and Hellinger divergence (HDT) tests. Herein, we will show that a signal detection step is required for FT, RMST, and HDT as well.
2. Data generation
For the current example on methylation analysis with Methyl-IT we will use simulated data. Read count matrices of methylated and unmethylated cytosines are generated with the Methyl-IT function simulateCounts, which randomly generates prior methylation levels using the Beta distribution. The expected mean of the methylation levels can be estimated using the auxiliary function:
bmean <- function(alpha, beta) alpha/(alpha + beta)
alpha.ct <- 0.09
alpha.tt <- 0.2
c(control.group = bmean(alpha.ct, 0.5),
  treatment.group = bmean(alpha.tt, 0.5),
  mean.diff = bmean(alpha.tt, 0.5) - bmean(alpha.ct, 0.5))
## control.group treatment.group mean.diff ## 0.1525424 0.2857143 0.1331719
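The same expectation can be cross-checked outside R. Here is a small Python equivalent of bmean (my own sketch; the parameter values are the ones used above), reproducing the expected difference of about 0.133:

```python
# Expected value of a Beta(alpha, beta) distribution: alpha / (alpha + beta).
def bmean(alpha, beta):
    return alpha / (alpha + beta)

alpha_ct, alpha_tt, beta_par = 0.09, 0.2, 0.5
control = bmean(alpha_ct, beta_par)    # ~0.1525424
treatment = bmean(alpha_tt, beta_par)  # ~0.2857143
print(round(treatment - control, 7))   # -> 0.1331719
```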
This simple function uses the $\alpha$ (shape1) and $\beta$ (shape2) parameters of the Beta distribution to compute the expected value of the methylation levels. In the current case, we expect a difference of methylation levels of about 0.133 between the control and the treatment.
2.1. Simulation
The Methyl-IT function simulateCounts will be used to generate the datasets, which will include three groups of samples: reference, control, and treatment.
suppressMessages(library(MethylIT))
# The number of cytosine sites to generate
sites = 50000
# Set a seed for pseudo-random number generation
set.seed(124)
control.nam <- c("C1", "C2", "C3")
treatment.nam <- c("T1", "T2", "T3")
# Reference group
ref0 = simulateCounts(num.samples = 4, sites = sites, alpha = alpha.ct, beta = 0.5,
                      size = 50, theta = 4.5, sample.ids = c("R1", "R2", "R3"))
# Control group
ctrl = simulateCounts(num.samples = 3, sites = sites, alpha = alpha.ct, beta = 0.5,
                      size = 50, theta = 4.5, sample.ids = control.nam)
# Treatment group
treat = simulateCounts(num.samples = 3, sites = sites, alpha = alpha.tt, beta = 0.5,
                       size = 50, theta = 4.5, sample.ids = treatment.nam)
Notice that reference and control groups of samples are not identical but belong to the same population.
2.2. Divergences of methylation levels
The estimation of the divergences of methylation levels is required to proceed with the application of the signal detection approach. The information divergence is estimated here using the function estimateDivergence. For each cytosine site, methylation levels are estimated according to the formula $p_i={n_i}^{mC_j}/({n_i}^{mC_j}+{n_i}^{uC_j})$, where ${n_i}^{mC_j}$ and ${n_i}^{uC_j}$ are the numbers of methylated and unmethylated cytosines at site $i$.
If a Bayesian correction of counts is selected in the function estimateDivergence, then methylated read counts are modeled by a beta-binomial distribution in a Bayesian framework, which accounts for the biological and sampling variations [1,2,3]. In our case we adopted the Bayesian approach suggested in reference [4] (Chapter 3).
Two types of information divergence are estimated: total variation ($TV$, the absolute difference of methylation levels) and Hellinger divergence ($H$). $TV$ is computed according to the formula $TV=|p_{tt}-p_{ct}|$, and $H$ as:
$H(\hat p_{ij},\hat p_{ir}) = w_i[(\sqrt{\hat p_{ij}} - \sqrt{\hat p_{ir}})^2+(\sqrt{1-\hat p_{ij}} - \sqrt{1-\hat p_{ir}})^2]$ (1)
where $w_i = 2 \frac{m_{ij} m_{ir}}{m_{ij} + m_{ir}}$, $m_{ij} = {n_i}^{mC_j}+{n_i}^{uC_j}+1$, $m_{ir} = {n_i}^{mC_r}+{n_i}^{uC_r}+1$ and $j \in \{c, t\}$.
The equation for the Hellinger divergence is given in reference [5], but any other information-theoretical divergence could be used as well. Divergences are estimated for the control and treatment groups with respect to a virtual sample, which is created by applying the function poolFromGRlist to the reference group.
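Equation (1) is straightforward to evaluate directly for a single cytosine site. The following Python sketch (my own addition, mirroring the formulas above rather than the internals of estimateDivergence) computes $TV$ and the weighted $H$ from raw methylated/unmethylated read counts:

```python
from math import sqrt

# Per-site TV and weighted Hellinger divergence between a sample j and
# the reference r, following Eq. (1) and the weights defined above.
def site_divergence(meth_j, unmeth_j, meth_r, unmeth_r):
    p_j = meth_j / (meth_j + unmeth_j)   # methylation level in sample j
    p_r = meth_r / (meth_r + unmeth_r)   # methylation level in reference
    tv = abs(p_j - p_r)                  # total variation of levels
    m_j = meth_j + unmeth_j + 1          # pseudo-counted coverages
    m_r = meth_r + unmeth_r + 1
    w = 2 * m_j * m_r / (m_j + m_r)      # coverage-dependent weight w_i
    h = w * ((sqrt(p_j) - sqrt(p_r)) ** 2
             + (sqrt(1 - p_j) - sqrt(1 - p_r)) ** 2)
    return tv, h

tv, h = site_divergence(meth_j=20, unmeth_j=20, meth_r=10, unmeth_r=30)
print(round(tv, 2))  # -> 0.25
```

Note that the weight $w_i$ grows with coverage, so the same difference in methylation levels yields a larger $H$ at deeply covered sites.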
# Reference sample
ref = poolFromGRlist(ref0, stat = "mean", num.cores = 4L, verbose = FALSE)
# Methylation level divergences
DIVs <- estimateDivergence(ref = ref, indiv = c(ctrl, treat), Bayesian = TRUE,
                           num.cores = 6L, percentile = 1, verbose = FALSE)
The means of the methylation level differences are:
unlist(lapply(DIVs, function(x) mean(mcols(x[, 7])[,1])))
##            C1            C2            C3            T1            T2
## -0.0009820776 -0.0014922009 -0.0022257725  0.1358867135  0.1359160219
##            T3
##  0.1309217360
3. Methylation signal
As for any other signal in nature, the analysis of the methylation signal requires knowledge of its probability distribution. In the current case, the signal is represented in terms of the Hellinger divergence of methylation levels ($H$).
divs = DIVs[order(names(DIVs))]
# Remove hd == 0. The methylation signal is only given for divergences > 0
divs = lapply(divs, function(div) div[abs(div$hdiv) > 0])
names(divs) <- names(DIVs)
# Data frame with the Hellinger divergences from both groups of samples
l = c(); for (k in 1:length(divs)) l = c(l, length(divs[[k]]))
data <- data.frame(H = c(abs(divs$C1$hdiv), abs(divs$C2$hdiv), abs(divs$C3$hdiv),
                         abs(divs$T1$hdiv), abs(divs$T2$hdiv), abs(divs$T3$hdiv)),
                   sample = c(rep("C1", l[1]), rep("C2", l[2]), rep("C3", l[3]),
                              rep("T1", l[4]), rep("T2", l[5]), rep("T3", l[6])))
Empirical critical values for the probability distributions of $H$ and $TV$ can be obtained using the quantile function from the R package stats.
critical.val <- do.call(rbind, lapply(divs, function(x) {
    hd.95 = quantile(x$hdiv, 0.95)
    tv.95 = quantile(x$TV, 0.95)
    return(c(tv = tv.95, hd = hd.95))
}))
critical.val
##       tv.95%    hd.95%
## C1 0.7893927  81.47256
## C2 0.7870469  80.95873
## C3 0.7950869  81.27145
## T1 0.9261629 113.73798
## T2 0.9240506 114.45228
## T3 0.9212163 111.54258
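The empirical critical value is just the 95th percentile of the observed divergences. R's quantile uses type-7 linear interpolation between order statistics by default; a small standalone Python version of that rule (my own sketch, for illustration) looks like this:

```python
# Type-7 empirical quantile (R's default): linear interpolation between
# order statistics at fractional index h = (n - 1) * p.
def quantile7(values, p):
    xs = sorted(values)
    h = (len(xs) - 1) * p
    lo = int(h)                    # lower order statistic index
    hi = min(lo + 1, len(xs) - 1)  # upper order statistic index
    return xs[lo] + (h - lo) * (xs[hi] - xs[lo])

print(round(quantile7(range(1, 101), 0.95), 2))  # -> 95.05
```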
3.1. Density estimation
The kernel density estimation yields the empirical density shown in the graphics:
suppressMessages(library(ggplot2))
# Some information for the graphic
crit.val.ct <- max(critical.val[c("C1", "C2", "C3"), 2])  # 81.5
crit.val.tt <- min(critical.val[c("T1", "T2", "T3"), 2])  # 111.5426
# Density plot with ggplot
ggplot(data, aes(x = H, colour = sample, fill = sample)) +
    geom_density(alpha = 0.05, bw = 0.2, position = "identity",
                 na.rm = TRUE, size = 0.4) +
    xlim(c(0, 125)) +
    xlab(expression(bolditalic("Hellinger divergence (H)"))) +
    ylab(expression(bolditalic("Density"))) +
    ggtitle("Density distribution for control and treatment") +
    geom_vline(xintercept = crit.val.ct, color = "red",
               linetype = "dashed", size = 0.4) +
    annotate(geom = "text", x = crit.val.ct - 2, y = 0.3, size = 5,
             label = 'bolditalic(H[alpha == 0.05]^CT==81.5)',
             family = "serif", color = "red", parse = TRUE) +
    geom_vline(xintercept = crit.val.tt, color = "blue",
               linetype = "dashed", size = 0.4) +
    annotate(geom = "text", x = crit.val.tt - 2, y = 0.2, size = 5,
             label = 'bolditalic(H[alpha == 0.05]^TT==114.5)',
             family = "serif", color = "blue", parse = TRUE) +
    theme(axis.text.x = element_text(face = "bold", size = 12, color = "black",
                                     margin = margin(1, 0, 1, 0, unit = "pt")),
          axis.text.y = element_text(face = "bold", size = 12, color = "black",
                                     margin = margin(0, 0.1, 0, 0, unit = "mm")),
          axis.title.x = element_text(face = "bold", size = 13, color = "black",
                                      vjust = 0),
          axis.title.y = element_text(face = "bold", size = 13, color = "black",
                                      vjust = 0),
          legend.title = element_blank(),
          legend.margin = margin(c(0.3, 0.3, 0.3, 0.3), unit = 'mm'),
          legend.box.spacing = unit(0.5, "lines"),
          legend.text = element_text(face = "bold", size = 12, family = "serif"))
According to the critical values estimated for the Hellinger divergence, the methylation signal holds for $H$ values $H^{TT}_{\alpha=0.05}\geq114.5$. According to the critical value estimated for the differences of methylation levels, the methylation signal holds for $TV^{TT}_{\alpha=0.05}\geq0.926$. Notice that most of the methylation changes are not signal but noise (found to the left of the critical values). This situation is typical for all natural and technologically generated signals. Assuming that the background methylation variation is consistent with a Poisson process and that methylation changes conform to the second law of thermodynamics, the Hellinger divergence of methylation levels follows a Weibull probability distribution or some member of the generalized gamma distribution family [6].
References
1. Hebestreit, Katja, Martin Dugas, and Hans-Ulrich Klein. 2013. “Detection of significantly differentially methylated regions in targeted bisulfite sequencing data.” Bioinformatics (Oxford, England) 29 (13): 1647–53. doi:10.1093/bioinformatics/btt263.
2. Hebestreit, Katja, Martin Dugas, and Hans-Ulrich Klein. 2013. “Detection of significantly differentially methylated regions in targeted bisulfite sequencing data.” Bioinformatics (Oxford, England) 29 (13): 1647–53. doi:10.1093/bioinformatics/btt263.
3. Dolzhenko, Egor, and Andrew D Smith. 2014. “Using beta-binomial regression for high-precision differential methylation analysis in multifactor whole-genome bisulfite sequencing experiments.” BMC Bioinformatics 15 (1). BioMed Central: 215. doi:10.1186/1471-2105-15-215.
4. Baldi, Pierre, and Soren Brunak. 2001. Bioinformatics: the machine learning approach. Second. Cambridge: MIT Press.
5. Basu, A., A. Mandal, and L. Pardo. 2010. “Hypothesis testing for two discrete populations based on the Hellinger distance.” Statistics & Probability Letters 80 (3-4). Elsevier B.V.: 206–14. doi:10.1016/j.spl.2009.10.008.
6. Sanchez R, Mackenzie SA. Information Thermodynamics of Cytosine DNA Methylation. PLoS One, 2016, 11:e0150427. |
ISSN:
1078-0947
eISSN:
1553-5231
Discrete & Continuous Dynamical Systems - A
November 2011 , Volume 30 , Issue 4
Abstract:
We develop the techniques of [25] and [11] in order to derive dispersive estimates for a matrix Hamiltonian equation defined by linearizing about a minimal mass soliton solution of a saturated, focussing nonlinear Schrödinger equation
$i u_t + \Delta u + \beta (|u|^2) u = 0$
$u(0,x) = u_0 (x).$
Abstract:
Motivated by the physical theory of Critical Dynamics, the Cahn-Hilliard equation on a bounded space domain is considered and forcing terms of general type are introduced. For such a rescaled equation the limiting interface problem is studied and the following are derived: (i) asymptotic results indicating that the forcing terms may slow down the equilibrium locally or globally, (ii) the sharp interface limit problem in the multidimensional case demonstrating a local influence in phase transitions of terms that stem from the chemical potential, while free energy independent terms act on the rest of the domain, (iii) a limiting non-homogeneous linear diffusion equation for the one-dimensional problem in the case of a deterministic forcing term that follows the white noise scaling.
Abstract:
Let $\Omega $ be a smooth bounded domain. We are concerned about the following nonlinear elliptic problem:
$\Delta u + |x|^{\alpha}u^{p} = 0, \ u > 0 \quad$ in
$\Omega$,
$u = 0 \quad$ on $\partial \Omega$; $H(w)$ as $\alpha \to \infty$, where $H$ is the mean curvature on $\partial \Omega$, $\partial^{*}\Omega \equiv \{ x \in \partial \Omega : |x| \ge |y| \text{ for any } y \in \Omega \}$, and $w \in \partial^{*}\Omega$.
Abstract:
We consider the fully nonlinear integral systems involving Wolff potentials:
$u(x) = W_{\beta, \gamma}(v^q)(x)$, $x \in R^n$;
$v(x) = W_{\beta, \gamma} (u^p)(x)$, $x \in R^n$;    (1)
where
$W_{\beta,\gamma} (f)(x) = \int_0^{\infty} \left[ \frac{\int_{B_t(x)} f(y)\, dy}{t^{n-\beta\gamma}} \right]^{\frac{1}{\gamma-1}} \frac{d t}{t}.$
After modifying and refining our techniques on the method of moving planes in integral forms, we obtain radial symmetry and monotonicity for the positive solutions to systems (1).
This system includes many known systems as special cases, in particular, when $\beta = \frac{\alpha}{2}$ and $\gamma = 2$, system (1) reduces to
$u(x) = \int_{R^{n}} \frac{1}{|x-y|^{n-\alpha}} v(y)^q\, dy$, $x \in R^n$,
$v(x) = \int_{R^{n}} \frac{1}{|x-y|^{n-\alpha}} u(y)^p\, dy$, $x \in R^n$.    (2)
The solutions $(u,v)$ of (2) are critical points of the functional associated with the well-known Hardy-Littlewood-Sobolev inequality. We can show that (2) is equivalent to a system of semi-linear elliptic PDEs
$(-\Delta)^{\alpha/2} u = v^q$, in $R^n$,
$(-\Delta)^{\alpha/2} v = u^p$, in $R^n$
(3)
which comprises the well-known Lane-Emden system and the Yamabe equation.
Abstract:
We give examples of rank one compact surfaces on which there exist recurrent geodesics that cannot be shadowed by periodic geodesics. We build rank one compact surfaces such that ergodic measures on the unit tangent bundle of the surface are not dense in the set of probability measures invariant by the geodesic flow. Finally, we give examples of complete rank one surfaces for which the non wandering set of the geodesic flow is connected, the periodic orbits are dense in that set, yet the geodesic flow is not transitive in restriction to its non wandering set.
Abstract:
Our aim is to study the pointwise time-asymptotic behavior of solutions for the scalar conservation laws with relaxation in multi-dimensions. We construct the Green's function for the Cauchy problem of the relaxation system which satisfies the dissipative condition. Based on the estimate for the Green's function, we get the pointwise estimate for the solution. It is shown that the solution exhibits some weak Huygens principle where the characteristic 'cone' is the envelope of planes.
Abstract:
N/A
Abstract:
Under the Coulomb gauge condition, the Chern-Simons-Higgs equations are formulated as a hyperbolic system coupled with elliptic equations. We consider a solution of the Chern-Simons-Higgs equations with finite energy and show how to obtain an $H^1$ solution with one exceptional term $\phi\partial_t A_0$, from which the model equations (63) are proposed.
Abstract:
This paper is concerned with a modified two-component periodic Camassa-Holm system. The local well-posedness and a low regularity result for the solution are established by using the techniques of pseudoparabolic regularization and some a priori estimates derived from the equation itself. A wave-breaking criterion for strong solutions and several results on blow-up solutions with certain initial profiles are described. In addition, the initial boundary value problem for a modified two-component periodic Camassa-Holm system is also considered.
Abstract:
For mixing $\mathbb Z^d$-actions generated by commuting automorphisms of a compact abelian group, we investigate the directional uniformity of the rate of periodic point distribution and mixing. When each of these automorphisms has finite entropy, it is shown that directional mixing and directional convergence of the uniform measure supported on periodic points to Haar measure occurs at a uniform rate independent of the direction.
Abstract:
We propose new Kruzhkov type entropy conditions for one dimensional scalar conservation law with a discontinuous flux. We prove existence and uniqueness of the entropy admissible weak solution to the corresponding Cauchy problem merely under assumptions on the flux which provide the maximum principle. In particular, we allow multiple flux crossings and we do not need any kind of genuine nonlinearity conditions.
Abstract:
Retroreflectors are optical devices that reverse the direction of incident beams of light. Here we present a collection of billiard type retroreflectors consisting of four objects; three of them are asymptotically perfect retroreflectors, and the fourth one is a retroreflector which is very close to perfect. Three objects of the collection have recently been discovered and published or submitted for publication. The fourth object --- the notched angle --- is a new one; a proof of its retroreflectivity is given.
Abstract:
In 2004, Manning showed that the topological entropy of the geodesic flow for a surface of negative curvature decreases as the metric evolves under the normalised Ricci flow. It is an interesting open problem, also due to Manning, to determine to what extent such behaviour persists for higher dimensional manifolds. In this short note, we describe the problem and give a curvature criterion under which monotonicity of the topological entropy can be established for a short time. In particular, the criterion applies to metrics of negative sectional curvature which are in the same conformal class as a metric of constant negative sectional curvature.
Abstract:
This paper is concerned with the following periodic Hamiltonian elliptic system
$-\Delta \varphi+V(x)\varphi=G_\psi(x,\varphi,\psi)$ in $\mathbb{R}^N,$
$-\Delta \psi+V(x)\psi=G_\varphi(x,\varphi,\psi)$ in $\mathbb{R}^N,$
$\varphi(x)\to 0$ and $\psi(x)\to0$ as $|x|\to\infty.$
Abstract:
In this paper, we study the asymptotic behavior of solutions to one-dimensional compressible Navier-Stokes equations with gravity and vacuum for isentropic flows with density-dependent viscosity $\mu(\rho)=c\rho^{\theta}$. Under some suitable assumptions on the initial data and $\gamma>1$, if $\theta\in(0,\frac{\gamma}{2}]$, we prove that the weak solution $(\rho(x,t),u(x,t))$ behaves asymptotically like the stationary one by adapting and modifying the technique of weighted estimates. This result improves the one in [5], where Duan showed that the weak solution converges to the stationary one in the sense of integral for the shallow water model. In addition, if $\theta\in(0,\frac{\gamma}{2}]\cap(0,\gamma-1]$, following the same idea as in [9], we estimate the stabilization rate of the solution as time tends to infinity in the sense of the $L^\infty$ norm, weighted $L^2$ norm and weighted $H^1$ norm.
Discrete & Continuous Dynamical Systems - A
October 2012 , Volume 32 , Issue 10
Abstract:
The paper intends to lay out the first steps towards constructing a unified framework to understand the symplectic and spectral theory of finite dimensional integrable Hamiltonian systems. While it is difficult to know what the best approach to such a large classification task would be, it is possible to single out some promising directions and preliminary problems. This paper discusses them and hints at a possible path, still loosely defined, to arrive at a classification. It mainly relies on recent progress concerning integrable systems with only non-hyperbolic and non-degenerate singularities.
This work originated in an attempt to develop a theory aimed at answering some questions in quantum spectroscopy. Even though quantum integrable systems date back to the early days of quantum mechanics, such as the work of Bohr, Sommerfeld and Einstein, the theory did not blossom at the time. The development of semiclassical analysis with microlocal techniques in the last forty years now permits a constant interplay between spectral theory and symplectic geometry. A main goal of this paper is to emphasize the symplectic issues that are relevant to quantum mechanical integrable systems, and to propose a strategy to solve them.
Abstract:
This paper introduces a notion of regularity (or irregularity) of the point at infinity ($\infty$) for the unbounded open set $\Omega\subset {\mathbb R}^{N}$ concerning second order uniformly elliptic equations with bounded and measurable coefficients, according as whether the ${\mathcal A}$-harmonic measure of $\infty$ is zero (or positive). A necessary and sufficient condition for the existence of a unique bounded solution to the Dirichlet problem in an arbitrary open set of ${\mathbb R}^{N}, N\ge 3$, is established in terms of the Wiener test for the regularity of $\infty$. It coincides with the Wiener test for the regularity of $\infty$ in the case of the Laplace equation. From the topological point of view, the Wiener test at $\infty$ presents thinness criteria of sets near $\infty$ in the fine topology. Precisely, the open set is a deleted neighborhood of $\infty$ in the fine topology if and only if $\infty$ is irregular.
Abstract:
We give an explicit formula to compute rotation numbers of piecewise linear (PL) circle homeomorphisms $f$ for which the product of the $f$-jumps at break points contained in the same orbit is trivial. In particular, simple formulas are then given for particular PL-homeomorphisms such as the PL Herman examples. We also deduce that if the slopes of $f$ are integral powers of an integer $m\geq 2$, and the break points and their images under $f$ are $m$-adic rational numbers, then the rotation number of $f$ is rational.
Abstract:
Given a sequence of sets $A_n \subseteq \{0,\ldots,n-1\}$, the Furstenberg correspondence principle provides a shift-invariant measure on $2^N$ that encodes combinatorial information about infinitely many of the $A_n$'s. Here it is shown that this process can be inverted, so that for any such measure, ergodic or not, there are finite sets whose combinatorial properties approximate it arbitrarily well. The finite approximations are obtained from the measure by an explicit construction, with an explicit upper bound on how large $n$ has to be to yield a sufficiently good approximation.
We draw conclusions for computable measure theory, and show, in particular, that given any computable shift-invariant measure on $2^N$, there is a computable element of $2^N$ that is generic for the measure. We also consider a generalization of the correspondence principle to countable discrete amenable groups, and once again provide an effective inverse.
Abstract:
We study relations between Rauzy classes coming from an interval exchange map and the corresponding connected components of strata of the moduli space of Abelian differentials. This gives a criterion to decide whether two permutations are in the same Rauzy class or not, without actually computing them. We prove a similar result for Rauzy classes corresponding to quadratic differentials.
Abstract:
Considered herein is the generalized two-component periodic Camassa-Holm system. The precise blow-up scenarios of strong solutions and several results of blow-up solutions with certain initial profiles are described in detail. The exact blow-up rates are also determined. Finally, a sufficient condition for global solutions is established.
Abstract:
We extend Sharkovsky's Theorem to several new classes of spaces, which include some well-known examples of non-locally connected continua, such as the topologist's sine curve and the Warsaw circle. In some of these examples the theorem applies directly (with the same ordering), and in other examples the theorem requires an altered partial ordering on the integers. In the latter case, we describe all possible sets of periods for functions on such spaces, which are based on multiples of Sharkovsky's order.
Abstract:
In this paper, we investigate the formation of singularities of the classical solution to the Cauchy problem of quasi-linear hyperbolic system and give a sharp limit formula for the lifespan of the classical solution. It is important that we only require that the initial data are sufficiently small in the $L^1$ sense and the BV sense.
Abstract:
We consider a generalisation of the baker's transformation, consisting of a skew-product of contractions and a $\beta$-transformation. The Hausdorff dimension and Lebesgue measure of the attractor are calculated for a set of parameters with positive measure. The proofs use a new transversality lemma similar to Solomyak's [12]. This transversality, which is applicable to the considered class of maps, holds for a larger set of parameters than Solomyak's transversality.
Abstract:
Lyapunov functions are an important tool to determine the basin of attraction of exponentially stable equilibria in dynamical systems. In Marinósson (2002), a method to construct Lyapunov functions was presented, using finite differences on finite elements and thus transforming the construction problem into a linear programming problem. In Hafstein (2004), it was shown that this method always succeeds in constructing a Lyapunov function, except for a small, given neighbourhood of the equilibrium.
For two-dimensional systems, this local problem was overcome by choosing a fan-like triangulation around the equilibrium. In Giesl/Hafstein (2010) the existence of a piecewise linear Lyapunov function was shown, and in Giesl/Hafstein (2012) it was shown that the above method with a fan-like triangulation always succeeds in constructing a Lyapunov function, without any local exception. However, the previous papers only considered two-dimensional systems. This paper generalises the existence of piecewise linear Lyapunov functions to arbitrary dimensions.
Abstract:
In this paper, arbitrarily many solutions, in particular arbitrarily many nodal solutions, are proved to exist for perturbed elliptic equations of the form \begin{equation*}\label{} \left\{ \begin{array}{ll} \displaystyle -\Delta_p u+|u|^{p-2}u = Q(x)(f(u)+\varepsilon g(u)),\ \ \ x\in \mathbb R^N, \\ u\in W^{1,p}(\mathbb R^N), \end{array} \right. (P_\varepsilon) \end{equation*} where $\Delta_p$ is the $p$-Laplacian operator defined by $\Delta_p u=\text{div}(|\nabla u|^{p-2}\nabla u)$, $p>1$, $Q\in \mathcal{C}(\mathbb R^N,\mathbb R)$ is a positive function, $f\in\mathcal{C}(\mathbb R, \mathbb R)$ oscillates either near the origin or near the infinity, and $\epsilon$ is a real number. For $g$ it is only required that $g\in\mathcal{C}(\mathbb R, \mathbb R)$. Under appropriate assumptions on $Q$ and $f$ the following results which are special cases of more general ones are proved: the unperturbed problem $(P_0)$ has infinitely many nodal solutions, and for any $n\in\mathbb N$ the perturbed problem $(P_\varepsilon)$ has at least $n$ nodal solutions provided that $|\epsilon|$ is sufficiently small.
Abstract:
We point out an interesting relation between transport in Hamiltonian dynamics and Floer homology. We generalize homoclinic Floer homology from $\mathbb{R}^2$ and closed surfaces to two-dimensional cylinders. The relative symplectic action of two homoclinic points is identified with the flux through a turnstile (as defined in MacKay & Meiss & Percival [19]) and Mather's [20] difference in action $\Delta W$. The Floer boundary operator is shown to annihilate turnstiles and we prove that the rank of certain filtered homology groups and the flux grow linearly with the number of iterations of the underlying symplectomorphism.
Abstract:
In this paper, we study a class of nonlocal dispersion equations with monostable nonlinearity in $n$-dimensional space \[ \begin{cases} u_t - J\ast u +u+d(u(t,x))= \displaystyle \int_{\mathbb{R}^n} f_\beta (y) b(u(t-\tau,x-y)) dy, \\ u(s,x)=u_0(s,x), \ \ s\in[-\tau,0], \ x\in \mathbb{R}^n, \end{cases} \] where the nonlinear functions $d(u)$ and $b(u)$ possess monostable characters of Fisher-KPP type, $f_\beta(x)$ is the heat kernel, and the kernel $J(x)$ satisfies ${\hat J}(\xi)=1-\mathcal{K}|\xi|^\alpha+o(|\xi|^\alpha)$ for $0<\alpha\le 2$ and $\mathcal{K}>0$. After establishing the existence of both the planar traveling waves $\phi(x\cdot{\bf e}+ct)$ for $c\ge c_*$ ($c_*$ is the critical wave speed) and the solution $u(t,x)$ of the Cauchy problem, as well as the comparison principles, we prove that all noncritical planar wavefronts $\phi(x\cdot{\bf e}+ct)$ are globally stable with the exponential convergence rate $t^{-n/\alpha}e^{-\mu_\tau t}$ for $\mu_\tau>0$, and the critical wavefronts $\phi(x\cdot{\bf e}+c_*t)$ are globally stable in the algebraic form $t^{-n/\alpha}$, and these rates are optimal. As an application, we also automatically obtain the stability of traveling wavefronts for the classical Fisher-KPP dispersion equations. The adopted approach is the Fourier transform and the weighted energy method with a suitably selected weight function.
Abstract:
As is well-known, the existence of a cone-field with constant orbit core dimension is, roughly speaking, equivalent to hyperbolicity, and consequently guarantees expansivity and shadowing. In this paper we study the case when the given cone-field does not have constant orbit core dimension. It occurs that we still obtain expansivity, even in general metric spaces.
Main Result. Let $X$ be a metric space and let $f:X \rightharpoonup X$ be a given partial map. If there exists a uniform cone-field on $X$ such that $f$ is cone-hyperbolic, then $f$ is uniformly expansive, i.e. there exist $N \in \mathbb{N}$, $\lambda \in [0,1)$ and $\epsilon > 0$ such that for all orbits $\mathrm{x},\mathrm{v}:\{-N,\ldots,N\} \to X$ \[ d_{\sup}(\mathrm{x},\mathrm{v}) \leq \epsilon \Longrightarrow d(\mathrm{x}_0,\mathrm{v}_0) \leq \lambda d_{\sup}(\mathrm{x},\mathrm{v}). \] We also show a simple example of a cone-hyperbolic orbit in $\mathbb{R}^3$ which does not have the shadowing property.
Abstract:
In this article, we consider a non-autonomous diffuse interface model for an isothermal incompressible two-phase flow in a two-dimensional bounded domain. We assume that the external force is singularly oscillating and depends on a small parameter $\epsilon$. We prove the existence of the uniform global attractor $A^{\epsilon}$. Furthermore, using the method of [13] in the case of the two-dimensional Navier-Stokes system, we study the convergence of $A^{\epsilon}$ as $\epsilon$ goes to zero. Let us mention that the nonlinearity involved in the model considered in this article is slightly stronger than the one in the two-dimensional Navier-Stokes system studied in [13].
Abstract:
We consider a parabolic-elliptic system of equations that arises in modelling chemotaxis in bacteria and the evolution of self-attracting clusters. In the case of space dimension $3 \leq N \leq 9$, we derive criteria for the blow-up rate of solutions, and identify an explicit class of initial data for which the blow-up is of self-similar rate. Our argument is based on the study of the asymptotic properties of backward self-similar solutions to the system together with the intersection comparison principle.
Abstract:
In this work we look for central configurations of the planar $1+n$ body problem such that, after the addition of one or two satellites, we have a new planar central configuration. We determine all such configurations in two cases: first, the addition of two satellites, assuming that all satellites have equal infinitesimal masses; and second, the addition of one satellite, where the infinitesimal masses are not necessarily equal.
Abstract:
In this paper, we consider Hartree-type equations on the two-dimensional torus and on the plane. We prove polynomial bounds on the growth of high Sobolev norms of solutions to these equations. The proofs of our results are based on the adaptation to two dimensions of the techniques we had previously used in [49, 50] to study the analogous problem in one dimension. Since we are working in two dimensions, a more detailed analysis of the resonant frequencies is needed, as was previously used in the work of Colliander-Keel-Staffilani-Takaoka-Tao [19].
Abstract:
We study the exponential rate of decay of Lebesgue numbers of open covers in topological dynamical systems. We show that topological entropy is bounded by this rate multiplied by dimension. Some corollaries and examples are discussed.
Abstract:
The longtime dynamics of the three dimensional (3D) Brinkman-Forchheimer equations with time-dependent forcing term is investigated. It is proved that there exists a uniform attractor for these nonautonomous 3D Brinkman-Forchheimer equations in the space $\mathbb{H}^1(\Omega)$. When the Darcy coefficient $\alpha$ is sufficiently large and the $L^2_b$-norm of the forcing term is sufficiently small, it is shown that there exists a unique bounded and asymptotically stable solution, together with some interesting corollaries.
Catenoid
The surface formed by the revolution of a catenary $y=c\cosh x/c$ about the $x$-axis, where $c$ is a positive constant. The parametric equations for the catenoid are then\[x = v \quad y = c \cosh \frac{v}{c} \sin u \quad z = c \cosh \frac{v}{c} \cos u\, .\]The catenoid is a minimal surface and it is the form realized by a soap film "stretched" over two wire discs the planes of which are perpendicular to the line joining their centres (see Fig. 1). The catenoid is a member of the one-parameter family of surfaces of revolution of the curves $y = a \cosh x/b$, which are sometimes also called catenoids. However, only for the special choice of parameters $a = b$ is the corresponding surface minimal, and usually the word catenoid is used only for this particular case.
The catenoid is locally isometric to the helicoid, and in fact this local isometry can be realized as the endpoint of a continuous one-parameter family of isometric deformations, all of which are minimal surfaces.
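As a numerical sanity check of this local isometry, here is a short Python sketch (taking $c = 1$ and standard parametrizations of the catenoid and helicoid, which are assumptions of this illustration) that compares their first fundamental forms via central differences:

```python
import math

# Sketch, assuming c = 1 and these standard parametrizations.
def catenoid(u, v):
    return (math.cosh(v) * math.cos(u), math.cosh(v) * math.sin(u), v)

def helicoid(u, v):
    return (math.sinh(v) * math.cos(u), math.sinh(v) * math.sin(u), u)

def first_fundamental_form(surf, u, v, h=1e-5):
    # Central-difference partials r_u, r_v, then E = r_u.r_u, F = r_u.r_v, G = r_v.r_v
    ru = [(a - b) / (2 * h) for a, b in zip(surf(u + h, v), surf(u - h, v))]
    rv = [(a - b) / (2 * h) for a, b in zip(surf(u, v + h), surf(u, v - h))]
    dot = lambda p, q: sum(x * y for x, y in zip(p, q))
    return dot(ru, ru), dot(ru, rv), dot(rv, rv)

# The two surfaces share E, F, G at corresponding parameters,
# which is exactly what a local isometry requires.
for (u, v) in [(0.3, 0.7), (1.1, -0.4), (2.0, 0.9)]:
    Ec, Fc, Gc = first_fundamental_form(catenoid, u, v)
    Eh, Fh, Gh = first_fundamental_form(helicoid, u, v)
    assert abs(Ec - Eh) < 1e-6 and abs(Fc - Fh) < 1e-6 and abs(Gc - Gh) < 1e-6
```

At every sample point both forms come out as $E = G = \cosh^2 v$, $F = 0$, consistent with the classical computation.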
Figure 1
Catenoid.
Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Catenoid&oldid=38911
If an entanglement experiment, whereby an entangled pair of particles is measured at both ends, is independent of the next entanglement experiment with another pair of entangled particles, how can there be a correlation? It seems that each independent run does not influence the next run, but...
all the references I can find on the net to justifying a correlation treat it as a matter of judgment, and, quite correctly, that it depends on the application. But it seems to me that one could compare the fit to the data of a horizontal line (i.e. average y) with that of the linear regression...
Suppose we have two variables A and B. A has a truly random distribution over {0,1} with P(0)=P(1)=0.5. B has the same distribution. Now suppose that A and B always show both a 1 or both a 0. This would be a strong correlation between A and B. Now could this be called 'a coincidence'? And if...
1. Homework Statement
I have the following questions as homework and I would like to get help. Here is some information given to help us answer:
Photometry: U=11.60, B=11.16, V=10.20
Redshift: z = 0.00780
Central velocity dispersion: ##\sigma_{v}## = 210 km/s
Introduction: The...
1. Homework Statement
I have the following questions as homework and I would like to get help.
Question 1) Give the formal expression of the total number density of galaxies. Why is this expression problematic in practice?
Question 2) In practice one uses a numerical value for the number...
Suppose A is an ensemble of measurement events in the past. Suppose B is an ensemble of measurement events in the present. Suppose there is a correlation between A and B that stays the same over time. Suppose we can manipulate the outcomes of B (for example by choosing the orientation of the...
OK, so, I've forgotten more statistics than my students will ever know, and I'm not too proud to ask for help, cuz I'm just blanking out on this. I would appreciate it if someone could patiently follow along and let me know what I've got right or wrong please. My understanding of the chi-sqr is...
Hello everyone. I stumbled across an article in the social sciences that had a correlation coefficient of r=0.93. Being from a maths background and knowing nothing about things like social sciences, psychology, etc., is this r-value in these types of fields considered fairly strong, strong...
I have 2 PERFECT data of the transmitter and receiver. From the 2 data, I can calculate the delay estimation:
Fs = 8e6; % sample rate
...
for i = 1:2
[cc_correlation,lag] = xcorr(signal2(i), signal1);
[cc_maximum, cc_time] = max(abs(cc_correlation));
cc_estimation...
A couple of questions today. First, I am running a panel data regression test. First I check the correlations between the independent variables and the dependent variable. These are the results. The D/(D+Em) is the dependent variable, and the independent are the 4 variables most adjacent...
Suppose we have two truly random sources A and B that generate bits ('0' or '1') synchronously. If we measure the correlation between the respective bits generated, we find a random, i.e. no, correlation. Now suppose A and B are two detectors that register polarization-entangled photons passing...
I have 2 data files, the links to which are attached below:
Transmitted data: https://www.dropbox.com/s/0nmhw6mpgh7upmv/TX.dat?dl=0
Received data: https://www.dropbox.com/s/xgyo6le3bcmd25r/RX.dat?dl=0
Those binary data are read by this MATLAB code:
%% initial values:
nsamps = inf;
nstart = 0;
%% input...
I wasn't sure where to post this, I hope this was the right section. I've been struggling quite a bit with implementing an autocorrelation code into my current project. The autocorrelation as it is now is increasing exponentially from 1 at the start of my MC run, and hitting 2 halfway through...
I thought I understood the concept of a correlation function, but I am having some doubts. What exactly does a correlation function quantify, and furthermore, what is a correlation length? As far as I understand, a correlation between two variables ##X## and ##Y## quantifies how much the two...
First let me ask this: Consider a pair of entangled photons fired at a respective detector after passing respective polarisation filters. If a photon passes a polarisation filter, is it in a superposition of having passed and not having passed? Is the measuring device (that detects the...
Consider the 2-point correlator of a real scalar field ##\hat{\phi}(t,\mathbf{x})##, $$\langle\hat{\phi}(t,\mathbf{x})\hat{\phi}(t,\mathbf{y})\rangle$$ How does one interpret this quantity physically? Is it quantifying the probability amplitude for a particle to be created at space-time point...
Suppose we have a source of polarization-entangled photons, that fires pairs of photons in opposite directions at two detectors with orientation-adjustable polarization filters in front of them. Obviously, there is a correlation between the orientation of the respective filters and the joint...
If we assume there is no counterfactual definiteness, would that mean that measurements on entangled particles needn't be correlated? For if you don't compare the results, you just don't know whether they are.
I have a question that seems to reflect my main concern with QM. Here it is: Consider a series of polarisation-entangled photon pairs that are sent in opposite directions to two measuring devices (e.g. at opposite ends of the universe). The measurement consists of detection of a photon after...
If I'm correct, a series of particles entangled with respect to some property exhibits a correlation between several measurements of that property by means of two measuring devices. My question is: is it possible that between measurements the physical constitutions of the measuring devices...
Just taking an advance on what I want to learn someday: I understand that decoherence and entanglement are more or less equivalent. So, I take it decoherence is in principle a process of entanglement. Consider two particles A and B who are entangled. If A decoheres by interacting with particle...
I was wondering. In this example I use polarized photons, but maybe it is applicable to electrons and spin also. We can prepare two completely unentangled polarized photons, and send them in opposite directions to two detectors preceded by a filter at particular angles. Both of them will show a...
Hello, I have two variables X and Y, 50,000 in each row (a time series of 2 years). I am curious how to calculate a linear correlation between the two, for example whether Y increases or decreases if X decreases. Looking for a formula or a MATLAB function :)
As I understand it, the 2-point function is for 1 particle incoming, 1 particle outgoing. The 4-point function is for 2 particles incoming, 2 particles outgoing. Is this correct? So an N-point function describes N/2 incoming particles and N/2 outgoing particles? Thanks!
1. Homework Statement
I'm working on path integrals for fermions and I came across an exercise that asks to compute the three-point functions, one of which is:
$$<0|J^{\mu}(x_1)J^{\nu}(x_2)J^{\rho}(x_3)|0> $$
where $$J^{\mu}$$ is the current $$J^{\mu}=\bar{\psi}\gamma^{\mu}\psi$$.
***Can...
How do I check for correlation or test for association between continuous data and nominal data in SPSS? My nominal data is the "type of GPS receiver" and the continuous data is "latency" in seconds.
Say I'm in the cinema and there are 4 seats. Say the closest seat has the best view, which is 4 meters away. Say the further the distance away, the worse the viewing quality, e.g. [5,6,4,9] are the distances of seats away from the screen. Now let's say people sitting in front of you also affect viewing...
Abbreviation:
DLOS
A distributive lattice ordered semigroup is a structure $\mathbf{A}=\langle A,\vee,\wedge,\cdot\rangle$ of type $\langle 2,2,2\rangle$ such that
$\langle A,\vee,\wedge\rangle$ is a distributive lattice
$\langle A,\cdot\rangle$ is a semigroup
$\cdot$ distributes over $\vee$: $x\cdot(y\vee z)=(x\cdot y)\vee (x\cdot z)$ and $(x\vee y)\cdot z=(x\cdot z)\vee (y\cdot z)$
Let $\mathbf{A}$ and $\mathbf{B}$ be distributive lattice-ordered semigroups. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x\vee y)=h(x) \vee h(y)$, $h(x\wedge y)=h(x) \wedge h(y)$, $h(x\cdot y)=h(x) \cdot h(y)$
Example 1: Any collection $\mathbf A$ of binary relations on a set $X$ such that $\mathbf A$ is closed under union, intersection and composition.
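Example 1 can be machine-checked on small carrier sets. The following Python sketch (the base set and relation sizes are my own choices) verifies that composition of binary relations distributes over union, as the axiom above requires:

```python
import itertools
import random

def compose(R, S):
    """Relation composition: (a, c) in R;S iff (a, b) in R and (b, c) in S for some b."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

random.seed(0)
X = range(4)
pairs = list(itertools.product(X, X))
for _ in range(100):
    R = set(random.sample(pairs, 5))
    S = set(random.sample(pairs, 5))
    T = set(random.sample(pairs, 5))
    # left distributivity: R;(S ∪ T) = (R;S) ∪ (R;T)
    assert compose(R, S | T) == compose(R, S) | compose(R, T)
    # right distributivity: (R ∪ S);T = (R;T) ∪ (S;T)
    assert compose(R | S, T) == compose(R, T) | compose(S, T)
```

Of course, random sampling only illustrates the law; the identity holds for all binary relations by unfolding the definitions.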
H. Andreka 1) proves that these examples generate the variety DLOS.
$\begin{array}{lr} f(1)= &1\\ f(2)= &6\\ f(3)= &44\\ f(4)= &479\\ f(5)= &\\ \end{array}$
1) Hajnal Andreka, Representations of distributive lattice-ordered semigroups with binary relations, Algebra Universalis 28 (1991), 12–25
Definition: Differential of a Vector-Valued Function
Let $U \subset \R^n$ be an open set.
Let $f: U \to \R^m$ be a vector-valued function.
Let $f$ be differentiable at a point $x \in U$.
The differential of $f$ at $x$ is the linear transformation $\d f \left({x}\right): \R^n \to \R^m$ defined as: $\d f \left({x}\right) \left({h}\right) = J_f \left({x}\right) \cdot h$
where:
$J_f \left({x}\right)$ is the Jacobian matrix of $f$ at $x$.
On an Open Set
Let $O \subseteq \R^n$ be an open set.
$\map {\d f} {x; h} = \map {J_f} x \cdot h$
where $\map {J_f} x$ be the Jacobian matrix of $f$ at $x$.
That is, if $h = \tuple {h_1, \ldots, h_n}$: $\map {\d f} {x; h} = \begin {pmatrix} \map {\dfrac {\partial f_1} {\partial x_1} } x & \cdots & \map {\dfrac {\partial f_1} {\partial x_n} } x \\ \vdots & \ddots & \vdots \\ \map {\dfrac {\partial f_m} {\partial x_1} } x & \cdots & \map {\dfrac {\partial f_m} {\partial x_n} } x \end {pmatrix} \begin {pmatrix} h_1 \\ \vdots \\ h_n \end {pmatrix}$
The differential is also denoted $\d f \left({x}\right)$, $\d f_x$, $\d_x f$, $D f \left({x}\right)$ or $D_x f$.
Substituting $\d y$ for $\d f \left({x; h}\right)$ and $\d x$ for $h$, the following notation emerges: $\d y = f' \left({x}\right) \rd x$
hence:
$\d y = \dfrac {\d y} {\d x} \rd x$
Notes
1. When the dimension of $W$ is $1$, the differential of a function is generalised by the notion of differential forms on manifolds. Indeed the differential of $f : V \to W$ is an exact form of degree $1$.
2. The above definition also furnishes differentials of differentiable functions between affine spaces. This is due to Affine Space with Origin has Vector Space Structure.
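To make the definition concrete, here is a small Python sketch (the map $f$ is a hypothetical example of mine) computing $\map {\d f} {x; h} = \map {J_f} x \cdot h$ and checking it against a first-order finite-difference approximation of $f$:

```python
import math

# Hypothetical example map f : R^2 -> R^2, f(x, y) = (x*y, sin(x)).
def f(x, y):
    return (x * y, math.sin(x))

def jacobian(x, y):
    # Analytic Jacobian matrix J_f(x, y); row i holds the partials of f_i.
    return [[y, x],
            [math.cos(x), 0.0]]

def differential(x, y, h):
    # df(x)(h) = J_f(x) . h
    J = jacobian(x, y)
    return tuple(J[i][0] * h[0] + J[i][1] * h[1] for i in range(2))

x, y = 1.2, -0.7
h = (0.3, 0.5)
df = differential(x, y, h)
# First-order check: f(x + t h) ~ f(x) + t df(x)(h) for small t
t = 1e-6
fx = f(x, y)
fxt = f(x + t * h[0], y + t * h[1])
approx = tuple((fxt[i] - fx[i]) / t for i in range(2))
for a, b in zip(df, approx):
    assert abs(a - b) < 1e-4
```

The agreement up to $O(t)$ is precisely the defining property of the differential as the best linear approximation.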
On $C^{1,\beta}$ density of metrics without invariant graphs
1. Departamento de Geometria e Representação Gráfica, IME-UERJ, R. São Francisco Xavier, 524, Rio de Janeiro, 20550-900, Brazil
2. Departamento de Matemática PUC-Rio, Rua Marquês de São Vicente 225, Rio de Janeiro 22543-900, Brazil, and Université d'Aix Marseille, France
We show that given any $C^{\infty}$ Riemannian structure $(T^{2},g)$ on the two-torus, $\epsilon >0$ and $\beta \in (0,\frac{1}{3})$, there exists a $C^{\infty}$ Riemannian metric $\bar{g}$ with no continuous Lagrangian invariant graphs that is $\epsilon$-$C^{1,\beta}$ close to $g$. The main idea of the proof is inspired by the work of V. Bangert, who introduced caps from smoothed cone-type $C^{1}$ small perturbations of metrics with non-positive curvature to get conjugate points. Our new contribution to the subject is to show that positive curvature cone-type small perturbations are ``less singular'' than non-positive curvature cone-type perturbations. Positive curvature geometry allows us to get better estimates for the variation of the $C^{1}$ norm of the singular cone in a neighborhood of its vertex.
Keywords: Lagrangian graphs, conjugate points, variational calculus, geodesic flows, local perturbations. Mathematics Subject Classification: Primary: 37J30, 53B99; Secondary: 37J40. Citation: Rodrigo P. Pacheco, Rafael O. Ruggiero. On $C^{1,\beta}$ density of metrics without invariant graphs. Discrete & Continuous Dynamical Systems - A, 2018, 38 (1) : 247-261. doi: 10.3934/dcds.2018012
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D
OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a...
@NeuroFuzzy awesome what have you done with it? how long have you been using it?
it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game
As far I recall, being a long term powder gamer myself, powder game does not really have a diffusion like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and subjected to gravity
@Secret I mean more along the lines of the fluid dynamics in that kind of game
@Secret Like how in the dan-ball one air pressure looks continuous (I assume)
@Secret You really just need a timer for particle extinction, and something that affects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A.
I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
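A minimal sketch of that two-rule system (my own encoding: a 1-D grid, cells either empty or a (type, age) pair, with grid size and seeding chosen arbitrarily) might look like:

```python
import random

# Sketch of the rule described above: type A ages into B after 10 steps;
# a B next to an A turns back into A. Grid size and seeding are my own choices.
random.seed(1)
W = 40
grid = [random.choice([None, None, None, ("A", 0)]) for _ in range(W)]

def step(grid):
    new = list(grid)
    for i, cell in enumerate(grid):
        if cell is None:
            continue
        kind, age = cell
        if kind == "A":
            # Rule 1: A ages; after 10 steps it becomes B
            new[i] = ("B", 0) if age >= 10 else ("A", age + 1)
        else:
            # Rule 2: B adjacent to an A turns back into A
            neighbours = [grid[j] for j in (i - 1, i + 1) if 0 <= j < W]
            if any(n and n[0] == "A" for n in neighbours):
                new[i] = ("A", 0)
    return new

for _ in range(50):
    grid = step(grid)
print("".join(c[0] if c else "." for c in grid))
```

Running it for a while shows clusters flipping between the two types, the kind of reaction-diffusion-like behaviour suggested above.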
(Those that don't understand cricket, please ignore this context, I will get to the physics...)England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ...
Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like?As some/many/most people are aware, we are in the midst of a...
Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex.I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...
@ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)
What is the maximum distance for eavesdropping pure sound waves?And what kind of device i need to use for eavesdropping?Actually a microphone with a parabolic reflector or laser reflected listening devices available on the market but is there any other devices on the planet which should allow ...
and endless whiteboards get doodled with boxes, grids circled in red marker, and some scribbles
The documentary then showed a bird's-eye view of the farmlands
(which pardon my sketchy drawing skills...)
Most of the farmland is tiled into grids
Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid. They are the index arrays and they notate the range of indices of the tensor array
In some tiles, there's a swirl-shaped dirt mound; they represent components with nonzero curl
and in others grass grew
Two blue steel bars were visible lying across the grid, holding up a triangular pool of water
Next in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e.
Occasionally, mishaps can happen, such as too much force applied and the sign snapped in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it
At the end of the documentary, near a university lodge area
I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends
Reality check: I have been to London, but not Belgium
Idea extraction: The tensor array mentioned in the dream is a multiindex object where each component can be tensors of different order
Presumably one can formulate it (using an example of a 4th order tensor) as follows:
$$A^{\alpha}{}_{\beta}{}_{\gamma,\delta,\epsilon}$$
and then allow the indices $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array
while for the indices $\gamma,\delta,\epsilon$ they can be taken from a subset of the range the $\alpha,\beta$ indices run over. For example, to encode a patch of nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$
However even if taking indices to have certain values only, it is unsure if it is of any use since most tensor expressions have indices taken from a set of consecutive numbers rather than random integers
@DavidZ in the recent meta post about the homework policy there is the following statement:
> We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems.
This is an interesting statement.
I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking".
I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea.
I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments).
@DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic.
@peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive.
@DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10 year long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds.
@EmilioPisanty Yes, but I had liked to talk to him here.
@DanielSank I am only curious, what he did. Maybe he attacked the whole network? Or he took a site-level conflict to the IRL world? As far as I know, network-wide bans happen for such things.
@peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck.
Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful.
@EmilioPisanty Although it is already not about Ron Maimon, but I can't see here the meaning of "campaign" enough well-defined. And yes, it is a little bit of source of fear for me, that maybe my behavior can be also measured as if "I would campaign for my caging". |
Let's look at some examples of feasibility relations!
Feasibility relations work between preorders, but for simplicity suppose we have two posets \(X\) and \(Y\). We can draw them using Hasse diagrams:
Here an arrow means that one element is less than or equal to another: for example, the arrow \(S \to W\) means that \(S \le W\). But we don't bother to draw all possible inequalities as arrows, just the bare minimum. For example, obviously \(S \le S\) by reflexivity, but we don't bother to draw arrows from each element to itself. Also \(S \le N\) follows from \(S \le E\) and \(E \le N\) by transitivity, but we don't bother to draw arrows that follow from others using transitivity. This reduces clutter.
(Usually in a Hasse diagram we draw bigger elements near the top, but notice that \(e \in Y\) is not bigger than the other elements of \(Y\). In fact it's neither \(\ge\) or \(\le\) any other elements of \(Y\) - it's just floating in space all by itself. That's perfectly allowed in a poset.)
Now, we saw that a
feasibility relation from \(X\) to \(Y\) is a special sort of relation from \(X\) to \(Y\). We can think of a relation from \(X\) to \(Y\) as a function \(\Phi\) for which \(\Phi(x,y)\) is either \(\text{true}\) or \(\text{false}\) for each pair of elements \( x \in X, y \in Y\). Then a feasibility relation is a relation such that:
If \(\Phi(x,y) = \text{true}\) and \(x' \le x\) then \(\Phi(x',y) = \text{true}\).
If \(\Phi(x,y) = \text{true}\) and \(y \le y'\) then \(\Phi(x,y') = \text{true}\).
Fong and Spivak have a cute trick for drawing feasibility relations: when they draw a blue dashed arrow from \(x \in X\) to \(y \in Y\) it means \(\Phi(x,y) = \text{true}\). But again, they leave out blue dashed arrows that would follow from rules 1 and 2, to reduce clutter!
Let's do an example:
So, we see \(\Phi(E,b) = \text{true}\). But we can use the two rules to draw further conclusions from this:
Since \(\Phi(E,b) = \text{true}\) and \(S \le E\) then \(\Phi(S,b) = \text{true}\), by rule 1.
Since \(\Phi(S,b) = \text{true}\) and \(b \le d\) then \(\Phi(S,d) = \text{true}\), by rule 2.
and so on.
Puzzle 171. Is \(\Phi(E,c) = \text{true}\) ? Puzzle 172. Is \(\Phi(E,e) = \text{true}\)?
I hope you get the idea! We can think of the arrows in our Hasse diagrams as
one-way streets going between cities in two countries, \(X\) and \(Y\). And we can think of the blue dashed arrows as one-way plane flights from cities in \(X\) to cities in \(Y\). Then \(\Phi(x,y) = \text{true}\) if we can get from \(x \in X\) to \(y \in Y\) using any combination of streets and plane flights!
That's one reason \(\Phi\) is called a feasibility relation.
What's cool is that rules 1 and 2 can also be expressed by saying
$$ \Phi : X^{\text{op}} \times Y \to \mathbf{Bool} $$is a monotone function. And it's especially cool that we need the '\(\text{op}\)' over the \(X\). Make sure you understand that: the \(\text{op}\) over the \(X\) but not the \(Y\) is why we can drive
to an airport in \(X\), then take a plane, then drive from an airport in \(Y\).
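This streets-and-flights reading can be computed directly. In the Python sketch below the Hasse arrows and the set of flights are hypothetical stand-ins (the actual diagrams are in the omitted figures); \(\Phi(x,y)\) holds iff one can drive in \(X\), take a flight, then drive in \(Y\):

```python
from itertools import product

# Hypothetical small posets, given by their Hasse ("street") arrows,
# plus some "flights" (pairs where Phi is declared true). These stand in
# for the figures, which are not reproduced here.
X_arrows = {("S", "E"), ("S", "W"), ("E", "N"), ("W", "N")}
Y_arrows = {("a", "b"), ("b", "d"), ("c", "d")}   # e is isolated
flights = {("E", "b")}

def le(arrows, elems):
    """Reflexive-transitive closure of the Hasse arrows: the full <= relation."""
    rel = {(x, x) for x in elems} | set(arrows)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(rel), list(rel)):
            if b == c and (a, d) not in rel:
                rel.add((a, d))
                changed = True
    return rel

leX = le(X_arrows, {"S", "E", "N", "W"})
leY = le(Y_arrows, {"a", "b", "c", "d", "e"})

def phi(x, y):
    # Drive in X (x <= x'), fly (x', y') in flights, drive in Y (y' <= y).
    return any((x, x2) in leX and (y2, y) in leY for (x2, y2) in flights)

assert phi("E", "b")      # the declared flight itself
assert phi("S", "b")      # rule 1: S <= E
assert phi("S", "d")      # rule 2: b <= d
assert not phi("E", "e")  # e is unreachable
```

Note the direction of the check in `phi`: we go *down* in \(X\) and *up* in \(Y\), which is exactly the \(X^{\text{op}} \times Y\) monotonicity.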
Here are some ways to get lots of feasibility relations. Suppose \(X\) and \(Y\) are preorders.
Puzzle 173. Suppose \(f : X \to Y \) is a monotone function from \(X\) to \(Y\). Prove that there is a feasibility relation \(\Phi\) from \(X\) to \(Y\) given by
$$ \Phi(x,y) \text{ if and only if } f(x) \le y .$$
Puzzle 174. Suppose \(g: Y \to X \) is a monotone function from \(Y\) to \(X\). Prove that there is a feasibility relation \(\Psi\) from \(X\) to \(Y\) given by
$$ \Psi(x,y) \text{ if and only if } x \le g(y) .$$
Puzzle 175. Suppose \(f : X \to Y\) and \(g : Y \to X\) are monotone functions, and use them to build feasibility relations \(\Phi\) and \(\Psi\) as in the previous two puzzles. When is
$$ \Phi = \Psi ? $$
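As a sanity check of the Puzzle 173 construction (not a proof, which the puzzle asks for), here is a tiny Python sketch with a totally ordered set and a monotone \(f\) of my own choosing, verifying rules 1 and 2 exhaustively:

```python
# Tiny example: X = Y = {0,1,2,3} with the usual order, f monotone.
X = Y = range(4)
f = lambda x: min(x + 1, 3)      # monotone: x <= x' implies f(x) <= f(x')

# Puzzle 173 construction: Phi(x, y) iff f(x) <= y
phi = lambda x, y: f(x) <= y

for x in X:
    for y in Y:
        if phi(x, y):
            # Rule 1: Phi(x, y) and x' <= x imply Phi(x', y)
            assert all(phi(x2, y) for x2 in X if x2 <= x)
            # Rule 2: Phi(x, y) and y <= y' imply Phi(x, y')
            assert all(phi(x, y2) for y2 in Y if y <= y2)
```

Both rules reduce to transitivity of \(\le\) plus monotonicity of \(f\), which is the content of the actual proof.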
To read other lectures go here. |
My question is regarding the following Gauged Linear Sigma Model (GLSM) in two dimensions.
$$\tag{1} S=\int d^2x\Big(-D_{\mu}\overline{\phi} D^{\mu}\phi +\frac{D^2}{2e'^2} +D(|\phi|^2-r)\Big).$$ Here $D_{\mu}\phi=\partial_{\mu}\phi+iQA_{\mu}\phi$, $D$ is an auxiliary field, and $e'$ is a coupling constant. (I have provided this action as a minimal working example in order to understand $\mathcal{N}=(2,2)$ supersymmetric GLSMs.)
Integrating out $D$ via its equation of motion, the action becomes
\begin{equation} \begin{aligned} S&=\int d^2x\Big(-D_{\mu}\overline{\phi} D^{\mu}\phi -\frac{e'^2(|\phi|^2-r)^2}{2}\Big)\\ &=\int d^2x\Big(-D_{\mu}\overline{\phi} D^{\mu}\phi -\frac{e'^2(|\phi|^4-2|\phi|^2r+r^2)}{2}\Big), \end{aligned}\tag{2} \end{equation} There is now a mass term in the action given by $e'^2r|\phi|^2$, and for $r>0$, a Higgs mechanism occurs, spontaneously breaking the gauge symmetry.
I am now interested in knowing how the Higgs mechanism can occur in the GLSM given in equation (1) at the QUANTUM level, where we CANNOT integrate out $D$ using its equation of motion. The following is my attempt: we can find the minimum of the potential energy in (1) to show that the vacuum expectation value is $$ \tag {3} \langle D\rangle = -e'^2(\langle|\phi|^2\rangle-r).$$ According to Witten in http://arxiv.org/abs/hep-th/9301042 (page 21), this is only true at tree level, and there are further corrections at least at one loop. Nevertheless, the leading term is that given in (3), and we should be able to plug this into the quantum EFFECTIVE action, whereby we can show that the Higgs mechanism occurs for $r>0$, as in (2). Is this correct?
I have read that the notion of limit became rigorous two centuries after the discovery of calculus.
What did Newton have in mind regarding the notion of limit?
Section 1 of book 1 of Principia opens with a lemma that can strike us as sounding almost modern:
"
Quantities, and the ratios of quantities, which in any finite time converge continually to equality, and before the end of that time approach nearer the one to the other than by any given difference, become ultimately equal".
But a second thought raises some doubts. First, this is a
lemma, not a definition of limit. The meaning of "converge continually" and "approach" are assumed to be already understood, the lemma is meant to derive a property from that. Second, it talks of "quantities". We also talk of quantities and their limits. But Newton surely can not refer to our notion of functions which assign values to arguments that first appears in Dirichlet's work from 19th century. Or even to "analytic expressions" featured in Euler's 18th century textbooks. Finally, Newton does not even have our idea of a real line assembled from points serving as arguments and values, the arithmetical continuum of Weierstrass, Dedekind and Cantor. The 17th century line is still Euclidean/Aristotelian, with points as merely external marks on it.
It becomes clearer that Newton's limits, whatever they are, cannot be modern: his primitives are different, he works in a different system of mathematical concepts. There might be a sense in which Pourciau's opinion that Newton "
was the first to present an epsilon argument" is justified, but it would be similar to the sense in which Eudoxus was first to work with Dedekind cuts. It only means that some of their manipulations can be closely mimicked by modern ones, and we agree to ignore the meaning of what is being manipulated. And that there is an evolutionary chain connecting one to the other.
What are Newton's "quantities" then? We find an explicit description in his Quadrature of Curves (1692):
"
I don’t here consider Mathematical Quantities as composed of Parts extreamly small, but as generated by a continual motion. Lines are described, and by describing are generated, not by any apposition of Parts, but by a continual motion of Points. Surfaces are generated by the motion of Lines, Solids by the motion of Surfaces, Angles by the Rotation of their Legs, Time by a continual flux, and so in the rest. These Geneses are founded upon Nature, and are every Day seen in the motion of Bodies".
Now it becomes clear where the pre-assumed understanding of "converge continually" and "approach" comes from. Newton takes the idea of motion as
intuitively given, limits with their properties are then founded on it. In the Scholium to Lemma XI of the same section Newton explicitly appeals to the intuitive idea of instantaneous velocity to justify existence of limits, for example. And in Lemma 2 of Book 2 he talks of "genita", " quantities I here consider as variable and indetermined, and increasing or decreasing, as it were, by a perpetual motion or flux".
This conception of limits, and calculus generally, relying on the given intuition of motion and its observed properties, came to be called
kinematic. It has roots in some works of Archimedes, such as On Spirals, where he seems to rely on something like the parallelogram of velocities to draw tangents. Newton's teacher Barrow lectured on Archimedes, and the kinematic conception of curves is explicit in his Geometric Lectures, which Newton helped prepare for publication, see Boyer's History of Calculus, p.189. But in the early years Newton also used manipulations with infinitesimals, inherited through Barrow from Fermat, which he later found objectionable. So I would have to agree with Ferraro's assessment in Some Mathematical Aspects of Newton’s Principia:
"
In effect, Newton does not define the terms “limit” and “ultimate ratio”: these terms have a clear intuitive meaning to him... Indeed, I think that Newton’s concept of first and ultimate ratio can be reduced to the modern concept of limits: it is true that Newton has a clear idea of what meaning “approaching a limit” [is], but this is only an intuitive and non-mathematical idea that is entirely different from the modern, mathematical concept of limit."
Indeed, the "mechanical" aspect of Newton's calculus was explicitly criticized in the 18th century, as "foreign" to pure mathematics, by D'Alambert and l'Huillier, among others. A comprehensive study of 17th century mathematical conceptions is Whiteside's Patterns of Mathematical Thought in the later Seventeenth Century (p.374ff on Newton specifically), on Principia see also his Mathematical Principles Underlying Newton's Principia Mathematica. Arthur in Leery Bedfellows: Newton and Leibniz on the Status of Infinitesimals contrasts Newton's kinematic conception of calculus to the Leibniz's one, including a detailed discussion of "quantities" and Lemma 1, and the changes from early to late works. On the later fates of the kinematic conception, developed by McLaurin and still more than visible in Cauchy (despite his common assimilation to Weierstrass, his "variables" are not unlike Newton's "quantities") see Grabiner's book Origins of Cauchy's Rigorous Calculus.
Newton actually did have a pretty explicit concept of limit, he set it out in section 1 of Book 1 of the
Principia immediately following the definitions and axioms or laws of motion. He did not use the actual word 'limit' but the concept is clearly there in his 'first and last ratios', which by his explanations turn out to be limits of ratios of finite differences, which are approached as the relevant variable controlling the size of both numerator and denominator either declines to zero ('evanescent') or, when considered in reverse, grows from zero ('nascent'). This matter has not gone without notice in the literature. A study by Bruce Pourciau (2001), in Historia Mathematica 28, 18-30, investigates and discusses Newton’s understanding of the limit concept through a study of certain proofs appearing in the Principia, with a focus on parts of Book 1, section 1.
(When I return to my sources, I'm away from base right now, I will put in online references to the
Principia in its English translation of 1729 which is a good source and is online free of copyright, and other sources cited here. For now, one may note that Book 1 in the 1729 translation is online in The Mathematical Principles of Natural Philosophy, vol.1 of 2, and Newton's discussion and explanation of limit-methods extends from page 41 to page 56.)
Newton explained among other things that he relied on limits to justify his methods because the methods of the ancients by reductio ad absurdum (or exhaustion) were too long, and the method of 'indivisibles' was too rough, although he added that 'hereby the same thing is perform'd as by the method of indivisibles'. When Newton wrote, the precursor of 'infinitesimal' methods that was perhaps best known was the much-criticised 1640s work on 'indivisibles' of Bonaventura Cavalieri. Newton clearly considered such methods as not well justified, hence his reliance on limits.
There is further material that contributes to an answer to the current question in Why is calculus missing from Newton's Principia? , (answer in a nutshell, it is not missing, and the answer also provides sources in some detail about Newton's methods and explanations), and in the descriptions of attacks on the calculus in Did Michel Rolle say that the calculus is "a collection of ingenious fallacies"? . The attacks of calculus methods in France from about 1700 onwards by Michel Rolle were defended by Pierre Varignon and then by Joseph Saurin, and the defence by Varignon is specially relevant here because he relied on Book 1 section 1 of Newton's
Principia to provide the justification that did not appear to be available elsewhere. Leibniz, for his part, has been said to have been generally respectful of Newton's justification in terms of limits.
Newton did not have the rigorous concept of limit as in $\epsilon/N$ and $\epsilon/\delta$ formulas. Instead, he had a vague idea of limit in terms of motion and used the notion of infinitesimal to calculate derivatives and integrals. For example, calculating the derivative of $y=x^2$ goes like $$ \dot{y}=\frac{\Delta y}{\Delta x}=\frac{(x+\Delta x)^2-x^2}{\Delta x}=2x+\Delta x=2x $$ (Newton wrote $\dot{y}$ for the derivative; Leibniz later introduced the notation $\frac{dy}{dx}$). In the last step, $\Delta x=0$, but in $\frac{\Delta y}{\Delta x}$, $\Delta x$ cannot be $0$, for $\frac0{0}$ makes no sense. This means that $\Delta x$ (an infinitesimal) is sometimes zero and sometimes not, a fact that Newton could not explain. Nor did Leibniz know the solution. However, this defect of the infinitesimal was largely ignored, because the powerful method of calculus solved so many important problems that mankind had never even dreamed of before.
The rigorous explanation of the infinitesimal through the notion of limit (in the form of $\epsilon/N$ and $\epsilon/\delta$ definitions), however, was not completed until two hundred years after Newton, through the work of Cauchy and Weierstrass in the 19th century. So it is an overstatement to say that Newton knew the exact notion of limit and the rigorous treatment of the infinitesimal. However, Newton must be credited for his invention of calculus through infinitesimals. Likewise, it is again an overstatement to say that someone like Cavalieri or even Archimedes invented calculus before Newton.
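The step Newton could not justify, dividing by $\Delta x$ and then discarding it, is exactly what the modern limit formalizes: the quotient is only ever formed for a nonzero increment, and one observes what value it approaches. A minimal numeric sketch in Python (the function names are mine, chosen for illustration):

```python
def difference_quotient(f, x, h):
    """Newton's ratio of increments, (f(x+h) - f(x)) / h, for h != 0."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2

# For y = x^2 the quotient equals 2x + h exactly.  As h shrinks it
# approaches 2x, but h itself is never set equal to zero.
for h in (0.1, 0.01, 0.001):
    print(difference_quotient(f, 3.0, h))  # 6.1, 6.01, 6.001 (up to float error)
```

The $\epsilon/\delta$ definition captures precisely this observation: for every $\epsilon$ there is a $\delta$ such that the quotient stays within $\epsilon$ of $2x$ whenever $0 < |h| < \delta$.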
How to compute this integral? I am stuck at a point where I get $\displaystyle\int\frac{1}{t^5-1}+ \cdots $ $$\int\sqrt[5]{\frac{x+5}{x-5}}\,\mathrm dx$$ using $\displaystyle t=\sqrt[5]{\frac{x+5}{x-5}}$
Using $t = \sqrt[5]{\frac{x+5}{x-5}}$, you get $$ t^5 (x-5) = x+5 $$ $$ x(t^5 - 1) = 5 + 5t^5 $$ $$ x = \frac{5(t^5 + 1)}{t^5-1} $$ and thus $$ dx = 5\frac{(t^5-1)(5t^4)-(t^5+1)(5t^4)}{(t^5-1)^2}dt $$ $$ dx = \frac{-50t^4}{(t-1)^2(t^4 + t^3 + t^2 + t + 1)^2}dt $$ Thus the integral becomes $$ \int \frac{-50t^5}{(t-1)^2(t^4 + t^3 + t^2 + t + 1)^2}dt $$
The integrand is a rational function, and so we may integrate using Partial Fractions.
Before we can do that, however, we have to factor the denominator fully. That is, we need to factor $(t^4 + t^3 + t^2 + t + 1)$.
If we plot the graph of $g(t) = t^4 + t^3 + t^2 + t + 1$, we see that $g(t)$ appears not to have any roots (zeros). We therefore conclude that $g(t)$ has no linear factors, in which case it factors over the reals as a product of two quadratic factors.
In fact, since the leading coefficient and constant coefficient of $g$ are both $1$, we might guess that $g(t) = (t^2 + Vt + 1)(t^2 + Wt + 1)$ for some values of $V$ and $W$. We set $g(t)$ equal to this factored expression and solve for $V$ and $W$ by equating coefficients, and we find that this guess is correct: $$ t^4 + t^3 + t^2 + t + 1 = t^4 + t^3(V + W) + t^2(1 + VW + 1) + t(V + W) + 1 $$ yields $$ V+W = 1 $$ $$ VW + 2 = 1 $$ Substituting the first equation $W = 1-V$ into the second yields $$ V - V^2 + 1 = 0 $$ This equation $V^2 - V - 1= 0$ has two solutions: $$ V = \frac{1\pm \sqrt{5}}{2} $$ Substituting either value in for $V$, we see that $W$ equals the other one: $$ W = 1 - V = 1 - \frac{1 \pm \sqrt{5}}{2} = \frac{2 - (1 \pm \sqrt{5})}{2} = \frac{1 \mp \sqrt{5}}{2} $$ Thus we get $$ g(t) = \big(t^2 + \frac{1+\sqrt{5}}{2}t + 1\big)\big(t^2+\frac{1-\sqrt{5}}{2}t + 1\big), $$ and we may easily check this result by multiplying out the two factors on the right hand side.
It turns out that the discriminant $(b^2 - 4ac)$ of both quadratic factors is negative, so as expected, neither one factors further into linear factors (over the reals). Notice that $\frac{1+\sqrt{5}}{2} = 1.618\ldots$ is the golden ratio, and is commonly referred to as $\phi$. Let's also write $\bar{\phi} = 1 - \phi = \frac{1-\sqrt{5}}{2}$.
Now we may use partial fractions:
$$ \frac{-50t^5}{(t-1)^2(t^2 +\phi t + 1)^2 (t^2 + \bar{\phi}t +1)^2} $$ $$ = \frac{A}{t-1} + \frac{B}{(t-1)^2} + \frac{Ct+D}{t^2 + \phi t + 1} + \frac{Et + F}{(t^2 + \phi t + 1)^2} + \frac{Gt + H}{t^2 + \bar{\phi}t + 1} + \frac{It + J}{(t^2 + \bar{\phi}t + 1)^2} $$
Clearing denominators, we get an equation of polynomials in $t$, and we may solve for the ten variables $A, B, \ldots, J$, by once again setting the corresponding coefficients equal to each other. We get a system of ten linear equations, corresponding to the coefficients of $t^0, t^1, \ldots t^9$, in ten variables. The left hand side of all of the equations will be zero, except for the equation corresponding to $t^5$, where the left hand side is -50.
Needless to say, it is a pain to solve this system by hand. One relation that might help you with this is that $\phi \bar{\phi} = -1$.
I used a computer to solve this system of equations. It says that $A=B=-2, C = 1+\sqrt{5}, D = 5-\sqrt{5}, E = -2\sqrt{5}, F = -5+\sqrt{5}, G = 1-\sqrt{5}$, $H = 5+\sqrt{5}$, $I = 2\sqrt{5}$, and $J = -5-\sqrt{5}$.
We can integrate $\int \frac{A}{t-1} dt = A\ln\lvert t-1 \rvert$ and $\int \frac{B}{(t-1)^2} dt = \frac{-B}{t-1}$ directly, but before integrating the other terms, we would complete the square in the denominator: $$ t^2 + \phi t + 1 = \big(t+\frac{\phi}{2}\big)^2 -\frac{\phi^2}{4} + 1 = \big(t + \frac{\phi}{2}\big)^2 + \big(\frac{4-\phi^2}{4}\big) $$ So we'll make two more substitutions: $$ u = t+\frac{\phi}{2}, $$ $$ v = t+ \frac{\bar{\phi}}{2}, $$ $$ du = dv = dt. $$ We need to integrate $$ \int \Big(\frac{C(u-\frac{\phi}{2}) + D}{u^2 + \frac{4-\phi^2}{4}} + \frac{E(u-\frac{\phi}{2}) + F}{(u^2 + \frac{4-\phi^2}{4})^2}\Big)du, $$ along with a very similar integral in $v$ (not shown). To solve this integral in $u$, let's set $p = \sqrt{\frac{4-\phi^2}{4}}$, which is real and positive since $4 > \phi^2$. Rearranging, this integral is now: $$ \int \Big(\frac{Cu}{u^2 + p^2} + \frac{D-\frac{C\phi}{2}}{u^2 + p^2} + \frac{Eu}{(u^2 + p^2)^2} + \frac{F-\frac{E\phi}{2}}{(u^2 + p^2)^2} \Big)du $$ These four fractions may now be integrated one at a time. The first and third are done by changing variables one more time, with $w = u^2+\frac{4-\phi^2}{4}$, so $\frac{dw}{2} = u\phantom{.} du$. The second and fourth may be done by the trig substitution $u = p \tan(\theta)$, with $du = p \sec^2(\theta)d\theta$.
After integrating with respect to $\theta$ and $w$, substitute back using $\theta = \tan^{-1}\left(\frac{u}{ p}\right)$ and $w = u^2 + \frac{4-\phi^2}{4}$, then $u = t + \frac{\phi}{2}$, and finally $t = \sqrt[5]{\frac{x+5}{x-5}}$ to get the answer in terms of $x$.
Throw in a constant of integration, and you're done.
Substituting $t=\sqrt[\Large5]{\frac{x+5}{x-5}}$, $$ \begin{align} \int\sqrt[\Large5]{\frac{x+5}{x-5}}\,\mathrm{d}x &=\int t\,\mathrm{d}\frac{10}{t^5-1}\\ &=\frac{10t}{t^5-1}-10\int\frac{\mathrm{d}t}{t^5-1}\\ \end{align} $$ Substituting $\sin(2\pi/5)u=t-\cos(2\pi/5)$ and $\sin(4\pi/5)v=t-\cos(4\pi/5)$, $$ \begin{align} &\frac1{t^5-1}\\ &=\frac15\sum_{k=0}^4\frac{e^{2\pi ik/5}}{t-e^{2\pi ik/5}}\\ &=\frac{2t\cos(2\pi/5)-2}{t^2-2t\cos(2\pi/5)+1}+\frac{2t\cos(4\pi/5)-2}{t^2-2t\cos(4\pi/5)+1}+\frac1{t-1}\\ &=2\frac{\cos(2\pi/5)(t-\cos(2\pi/5))-\sin^2(2\pi/5)}{(t-\cos(2\pi/5))^2+\sin^2(2\pi/5)}\\ &+2\frac{\cos(4\pi/5)(t-\cos(4\pi/5))-\sin^2(4\pi/5)}{(t-\cos(4\pi/5))^2+\sin^2(4\pi/5)}\\ &+\frac1{t-1}\\[6pt] &=2\frac{\cot(2\pi/5)u-1}{u^2+1}+2\frac{\cot(4\pi/5)v-1}{v^2+1}+\frac1{t-1} \end{align} $$ so that $$ \begin{align} \int\frac1{t^5-1}\,\mathrm{d}t &=\cos(2\pi/5)\log(u^2+1)-2\sin(2\pi/5)\arctan(u)\\ &+\cos(4\pi/5)\log(v^2+1)-2\sin(4\pi/5)\arctan(v)\\[6pt] &+\log(t-1)\\[6pt] &+C \end{align} $$ Therefore, using the substitutions above, which are unwieldy to write, but simple to compute, $$ \begin{align} \int\sqrt[\Large5]{\frac{x+5}{x-5}}\,\mathrm{d}x &=\frac{10t}{t^5-1}\\[3pt] &-10\cos(2\pi/5)\log(u^2+1)+20\sin(2\pi/5)\arctan(u)\\[6pt] &-10\cos(4\pi/5)\log(v^2+1)+20\sin(4\pi/5)\arctan(v)\\[6pt] &-10\log(t-1)\\[6pt] &-10\,C \end{align} $$
setting $$t=\sqrt[5]\frac{x+5}{x-5}$$ we get $$x=\frac{5(t^5+1)}{t^5-1}$$ and $$dx=-\frac{50t^4}{(t-1)^2(t^4+t^3+t^2+t+1)^2}dt$$ thus our integral is $$-50\int\frac{t^5}{(t-1)^2(t^4+t^3+t^2+t+1)^2}dt$$ This is not so easy to solve, but it is rational and can be integrated explicitly.
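As a sanity check on the substitution algebra in the answers above, one can verify numerically (plain Python; the helper names are mine) that $x(t)$ inverts the substitution and that the stated rational expression for $dx/dt$ matches a finite-difference derivative:

```python
def x_of_t(t):
    # x = 5(t^5 + 1)/(t^5 - 1), obtained by solving t^5 = (x+5)/(x-5) for x
    return 5 * (t ** 5 + 1) / (t ** 5 - 1)

def dx_dt(t):
    # -50 t^4 / (t^5 - 1)^2; note (t^5 - 1)^2 = (t-1)^2 (t^4 + t^3 + t^2 + t + 1)^2
    return -50 * t ** 4 / (t ** 5 - 1) ** 2

x = 7.0                                   # any x > 5 keeps t real
t = ((x + 5) / (x - 5)) ** 0.2
print(x_of_t(t))                          # recovers 7.0 (up to float error)

h = 1e-6                                  # central finite difference
approx = (x_of_t(t + h) - x_of_t(t - h)) / (2 * h)
print(approx, dx_dt(t))                   # the two values should agree closely
```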
Compare e.g., $$\int \sqrt[5]{\frac{x+5}{x−5}}\,dx = \int{(x+5)^{\frac{1}{5}}(x-5)^{-\frac{1}{5}}}dx$$ with http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/07/01/01/.
I would not expect to find a "simpler" answer than one expressed in terms of ${}_2F_1$.
There is the following sum: $1-\sum\limits_{k=1}^{n}\frac{2^{k-1}}{3^k}\alpha=1-\alpha(1-(\frac{2}{3})^n)$ where $\alpha\in(0,1]$
I do not understand the following equality. I thought it could be derived using geometric series sum:
$1-\sum\limits_{k=1}^{n}\frac{2^{k-1}}{3^k}\alpha=1-2^{-1}\alpha\sum\limits_{k=1}^{n}\frac{2^{k}}{3^k}=1-2^{-1}\alpha(\frac{1-(\frac{2}{3})^n}{\frac{1}{3}})=1-\frac{3}{2}\alpha(1-(\frac{2}{3})^n)$
Question:
Why is the equality ,I derived, different from the initial one presented?
Thanks in advance!
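One plausible source of the discrepancy is which index the geometric sum starts at. A short numeric sketch (plain Python, values chosen arbitrarily) comparing the two closed forms:

```python
r, n = 2 / 3, 8

sum_from_1 = sum(r ** k for k in range(1, n + 1))

# Sum starting at k = 1: r (1 - r^n) / (1 - r)
print(sum_from_1, r * (1 - r ** n) / (1 - r))

# (1 - r^n)/(1 - r) is instead the sum over k = 0 .. n-1,
# a strictly larger quantity for 0 < r < 1.
sum_from_0 = sum(r ** k for k in range(0, n))
print(sum_from_0, (1 - r ** n) / (1 - r))
```

With $r=2/3$ the two closed forms give $\sum_{k=1}^{n} r^k = 2(1-(2/3)^n)$ and $(1-r^n)/(1-r) = 3(1-(2/3)^n)$, which is exactly the kind of constant-factor mismatch seen in the question.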
Bayes’ Theorem. Let $B_1, B_2, B_3, \dots, B_n$ be $n$ pairwise mutually exclusive and exhaustive events connected to a random experiment $E$, where at least one of $B_1, B_2, \dots, B_n$ is sure to happen. Let $A$ be an arbitrary event connected to $E$, where $P(A)$ is not zero. Then,
\[P({{B}_{i}}|A)=\frac{P({{B}_{i}}).P(A|{{B}_{i}})}{\sum\limits_{i=1}^{n}{[P({{B}_{i}}).P(A|{{B}_{i}})]}}\]
Proof:
Here, \[P(A),P({{B}_{i}}),P(A|{{B}_{i}})>0\]
Then we can represent $P(A\cap B_i)$ in two ways,
\[P(A\cap {{B}_{i}})=P({{B}_{i}}).P(A|{{B}_{i}})…………….(i)\]
\[P(A\cap {{B}_{i}})=P(A).P({{B}_{i}}|A)…………….(ii)\]
Now from (i) and (ii) we get,
\[P({{B}_{i}}).P(A|{{B}_{i}})=P(A).P({{B}_{i}}|A)\]
\[\therefore P({{B}_{i}}|A)=\frac{P({{B}_{i}}).P(A|{{B}_{i}})}{P(A)}…………..(iii)\]
Let $S$ be the sample space of $E$. Now,
\[A=SA=({{B}_{1}}+{{B}_{2}}+{{B}_{3}}+…+{{B}_{n}})A\]
\[=({{B}_{1}}A+{{B}_{2}}A+{{B}_{3}}A+…+{{B}_{n}}A)\]
Again,
\[({{B}_{i}}A)({{B}_{j}}A)={{B}_{i}}{{B}_{j}}A=\phi A=\phi (i\ne j)\]
So, $B_1A$, $B_2A$, $B_3A$, …, $B_nA$ are mutually exclusive.
\[\therefore P(A)=\sum\limits_{i=1}^{n}{P(A\cap {{B}_{i}})}\]
\[=\sum\limits_{i=1}^{n}{P({{B}_{i}})}.P(A|{{B}_{i}})\]
[Using law of Total Probability]
Now from (iii) we get,
\[P({{B}_{i}}|A)=\frac{P({{B}_{i}}).P(A|{{B}_{i}})}{\sum\limits_{i=1}^{n}{[P({{B}_{i}}).P(A|{{B}_{i}})]}}\]
The bag A has 3 white and 2 red balls and another bag B has 4 white and 5 red balls. A bag is chosen randomly and a ball also picked from that bag and seen that the ball picked is red. Find the probability that the bag chosen was B. Solution:
Let the events be $E_1$ = ‘chosen bag is A’, $E_2$ = ‘chosen bag is B’, and $E$ = ‘red ball picked’.
Since the picked ball is red, we have to find $P(E_2|E)$.
\[P({{E}_{1}})=\frac{1}{2}=P({{E}_{2}})\]
Again, the probability of picking a red ball given that bag A is chosen is
\[P(E|{{E}_{1}})=\frac{2}{5}\]
Similarly,
\[P(E|{{E}_{2}})=\frac{5}{9}\]
Now, from Bayes’ Theorem we have,
\[P({{E}_{2}}|E)=\frac{P({{E}_{2}}).P(E|{{E}_{2}})}{P({{E}_{1}}).P(E|{{E}_{1}})+P({{E}_{2}}).P(E|{{E}_{2}})}\]
\[=\frac{\frac{1}{2}.\frac{5}{9}}{\frac{1}{2}.\frac{2}{5}+\frac{1}{2}.\frac{5}{9}}\]
\[=\frac{\frac{5}{18}}{\frac{1}{5}+\frac{5}{18}}\]
\[=\frac{\frac{5}{18}}{\frac{18+25}{90}}\]
\[=\frac{5}{18}.\frac{90}{43}\]
\[=\frac{25}{43}\]
A company has two plants to manufacture scooters. Plant X manufactures 70% of scooters and plant Y manufactures 30%. At plant X, 80% of scooters are rated standard quality and at plant Y, 90% of scooters are rated standard quality. A scooter is picked up at random and is found to be of standard quality. What is the chance that it has come from plant X, plant Y?
Let us define the following events:
$H_1$: scooter is manufactured by plant X
$H_2$: scooter is manufactured by plant Y
A: scooter is rated as standard quality
Then we are given,
\[P({{H}_{1}})=\frac{70}{100}=0.70\]
\[P({{H}_{2}})=\frac{30}{100}=0.30\]
\[P(A/{{H}_{1}})=\frac{80}{100}=0.80\]
\[P(A/{{H}_{2}})=\frac{90}{100}=0.90\]
The probability that scooter comes from plant X is of standard quality is,
\[P({{H}_{1}}/A)=\frac{P({{H}_{1}}).P(A/{{H}_{1}})}{P({{H}_{1}}).P(A/{{H}_{1}})+P({{H}_{2}}).P(A/{{H}_{2}})}\]
\[=\frac{(0.70).(0.80)}{(0.70).(0.80)+(0.30).(0.90)}\]
\[=\frac{56}{83}\]
Similarly,
\[P({{H}_{2}}/A)=\frac{P({{H}_{2}}).P(A/{{H}_{2}})}{P({{H}_{1}}).P(A/{{H}_{1}})+P({{H}_{2}}).P(A/{{H}_{2}})}\]
\[=\frac{(0.30).(0.90)}{(0.70).(0.80)+(0.30).(0.90)}\]
\[=\frac{27}{83}\]
Consider the clinical test described at the start of this section. Suppose that 1 in 1000 of the population is a carrier of the disease. Suppose also that the probability that a carrier tests negative is 1%, while the probability that a non-carrier tests positive is 5%. (A test achieving these values would be regarded as very successful.) i) A patient has just had a positive test result. What is the probability that the patient is a carrier? ii) A patient has just had a negative test result. What is the probability that the patient is a carrier?
Let A be the event ‘the patient is a carrier’, and B the event ‘the test result is positive’. We are given that P(A) = 0.001 (so that $P(A^c)=0.999$), and that
\[P(B|A)=0.99\]
\[P(B|{{A}^{c}})=0.05\]
i)
\[P(A/B)=\frac{P(A).P(B/A)}{P(A).P(B/A)+P({{A}^{c}}).P(B/{{A}^{c}})}\]
\[=\frac{0.001\times 0.99}{0.001\times 0.99+0.999\times 0.05}\]
\[\approx 0.0194\]
ii)
\[P(A/{{B}^{c}})=\frac{P(A).P({{B}^{c}}/A)}{P(A).P({{B}^{c}}/A)+P({{A}^{c}}).P({{B}^{c}}/{{A}^{c}})}\]
\[=\frac{0.001\times 0.01}{0.001\times 0.01+0.999\times 0.95}\]
\[\approx 0.00001\]
2% of the population has a certain blood disease in a serious form; 10% have it in a mild form; and 88% don’t have it at all. A new blood test is developed; the probability of testing positive is 9/10 if the subject has the serious form, 6/10 if the subject has the mild form, and 1/10 if the subject doesn’t have the disease. I have just tested positive. What is the probability that I have the serious form of the disease?
Let X be ‘has disease in serious form’, Y be ‘has disease in mild form’, and Z be ‘doesn’t have disease’. Let B be ‘test positive’. Then we are given that X, Y, Z form a partition.
P(X) = 0.02, P(Y) = 0.1, P(Z) = 0.88; P(B | X) = 0.9, P(B | Y) = 0.6, P(B | Z) = 0.1.
Thus by the theorem of Total Probability,
\[P(B) = 0.9\times 0.02+0.6\times 0.1+0.1\times 0.88 = 0.166,\]
and then by Bayes’ Theorem,
\[P(X|B)=\frac{P(X).P(B|X)}{P(B)}\]
\[=\frac{0.02\times 0.9}{0.166}\approx 0.108\]
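The worked examples above all instantiate the same computation, so they can be checked with a small helper (a sketch; the function name and argument layout are mine):

```python
def posterior(priors, likelihoods, i):
    """P(B_i | A) via Bayes' theorem over a partition B_1..B_n.

    priors[k] = P(B_k), likelihoods[k] = P(A | B_k).
    """
    total = sum(p * l for p, l in zip(priors, likelihoods))  # total probability P(A)
    return priors[i] * likelihoods[i] / total

# Two bags: P(red | A) = 2/5, P(red | B) = 5/9
print(posterior([0.5, 0.5], [2 / 5, 5 / 9], 1))            # 25/43

# Two plants: standard-quality rates 0.80 and 0.90
print(posterior([0.70, 0.30], [0.80, 0.90], 0))            # 56/83

# Blood disease: serious / mild / none, given a positive test
print(posterior([0.02, 0.10, 0.88], [0.9, 0.6, 0.1], 0))   # 0.018/0.166
```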
If $n$ is an integer and $5^n > 4,000,000.$ What is the least possible value of $n$? (answer: $10$)
How could I find the value of $n$ without using a calculator?
\begin{eqnarray} & 5^n &>& 4{,}000{,}000\\ \Leftrightarrow & 5^n &>& 5^6 \cdot 2^8 \\ \Leftrightarrow & 5^{n-6} &>& 256.\\ \end{eqnarray} Since $5^3=125<256<625=5^4$, the least value is $n=10$.
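The factorization step can be confirmed with a short search (plain Python):

```python
# 4,000,000 = 2^2 * 10^6 = 2^8 * 5^6, so 5^n > 4,000,000 iff 5^(n-6) > 256
assert 4_000_000 == 2 ** 8 * 5 ** 6

# least integer n with 5^n > 4,000,000, by direct search
n = next(n for n in range(1, 20) if 5 ** n > 4_000_000)
print(n)  # 10
```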
Divide 4000000 by 5,
without a calculator, getting 800000. Divide again; 160000. Again; 32000. Then 6400, then 1280, then 256, then 51 (rounding), then 10, then 2. So $2\times5^9$ is about 4000000, so $5^{10}$ exceeds 4000000.
$4,000,000 = 2^2 \times 10^6 = 2^8 \times 5^6$, so you want $5^{n-6} > 2^8 = 256$. Well, $5^3 = 125\ldots$.
By logarithm rules: $$5^{n}>4\cdot10^{6}\iff n>\log_{5}2^{2}2^{6}5^{6}=\log_{5}2^{8}+\log_{5}5^{6}=\log_{5}2^{8}+6=\log_{5}256+6$$
Since these are relatively small numbers I assume it is ok to write : $5^{3}=125$ thus clearly $3<\log_{5}256<4$ hence the minimal $n$ that satisfies this inequality is $4+6=10$
I dunno, this is a tough one, especially without a calculator.
Here is the Python program I used to figure this one out:
for n in range(1, 11):
    print("5^%s-4,000,000 = %s" % (n, 5 ** n - 4000000))
Here is the output:
5^1-4,000,000 = -3999995
5^2-4,000,000 = -3999975
5^3-4,000,000 = -3999875
5^4-4,000,000 = -3999375
5^5-4,000,000 = -3996875
5^6-4,000,000 = -3984375
5^7-4,000,000 = -3921875
5^8-4,000,000 = -3609375
5^9-4,000,000 = -2046875
5^10-4,000,000 = 5765625
It looks like $n=10$ is the answer.
It helps if you remember that $\ln(2) \approx 0.7$ and $\ln(10) \approx 2.3$. (These are common bases to work in, so they're generally useful numbers.)
$$\begin{align} 5^n &> 4\ 000\ 000\\ \ln(5^n) &> \ln(4\ 000\ 000)\\ n (\ln 10 - \ln 2) &> 2 \ln 2 + 6 \ln 10\\ n (2.3 - 0.7) &> 2 \times 0.7 + 6 \times 2.3\\ 1.6 n &> 1.4 + 13.8\\ 1.6 n &> 15.2\\ 1.6 n &> 16 - 0.8\\ n &> 10 - 0.5\\ n &> 9.5\\ n &= 10 \end{align}$$
A bit much for mental arithmetic, but quite doable just typing into this here box.
The easiest way to multiply by $5$ without a calculator is to multiply by $10$ and then divide by $2$, i.e.: $$1: 5\times 5 = 50/2 = 25$$ $$2: 250/2 = 125$$ $$3: 1250/2 = 625$$ $$4: 6250/2 = 3125\ldots$$ Won't take you very long to get to $10$.
True. And for large power, use approximations :
$$5^{n} = \frac{10^n}{2^n} $$
$$5^9 = \frac{10^9}{2^9} = \frac{1,000,000,000}{512} \cong \frac{1,000,000,000}{500} \cong 2*10^6 < 4*10^6 $$
$$5^{10} = \frac{10^{10}}{2^{10}} = \frac{10,000,000,000}{1024} \cong \frac{10,000,000,000}{1000} \cong 10^7 > 4*10^6$$
It's not a mathematical way to prove, but it's a way to find the result using approximation.
$$\log_{10}(5^n)=n\cdot \log_{10}(5)\approx n\cdot 0.7$$
$$\log_{10}(4000000)=\log_{10}(4)+6\approx 6.6$$
$$0.7\cdot 9=6.3<6.6<7.0=0.7\cdot 10\ \text{ so that }\ \boxed{n=10}\ $$
Only $\log_{10}(2)\approx 0.3$ was used, via $\log_{10}(5)=\log_{10}(10)-\log_{10}(2)$ and $\log_{10}(4)=2\cdot\log_{10}(2)$.
(if non integer values are allowed $n\approx \frac {6.60206}{0.69897}$)
Taking square roots of both sides we solve $5^r>2000=25\cdot80$. The right side is approximated from below by $5^2\cdot5^2\cdot3$ so we want $5^{r-4}>3$ or $r=5$, so $n=2r=10$. Check that $n=9$ is too small: $5^9=5^6\cdot 5^3=15{,}625\cdot 125=1{,}953{,}125<4{,}000{,}000$.
I am given a functional $l$ on $C_c^\infty(\mathbb{R}^n)$. Now let's assume that for every $p \in \mathbb{R}^n$ we have a neighborhood $V_p$ and a $2\pi$-periodic $C^\infty$-function $u_p$ on $\mathbb{R}^n$, such that
$ \forall \varphi \in C_c^\infty(V_p) $ (compact support in $V_p$)$ \colon \, l(\varphi) = \langle u_p , \varphi \rangle := 1/(2\pi)^n \int u_p \varphi$
So locally the functional is given by $u_p$. If I have overlapping neighborhoods $V_p$ and $V_q$ one can easily conclude that $l = \langle u_p , \cdot \rangle = \langle u_q , \cdot \rangle$ on $C_c^\infty(V_p \cap V_q)$. But since $u_p,u_q$ are not compactly supported on $V_p \cap V_q$ I can not conclude directly $u_p = u_q$ on $V_p \cap V_q$.
Am I right so far? How can I show that $u_p = u_q$ on the overlapping area?
Preprints (red series) of the Department of Mathematics, year of publication 1995
292
Symmetry properties of average densities and tangent measure distributions of measures on the line (1995)
Answering a question by Bedford and Fisher we show that for every Radon measure on the line with positive and finite lower and upper densities the one-sided average densities always agree with one half of the circular average densities at almost every point. We infer this result from a more general formula, which involves the notion of a tangent measure distribution introduced by Bandt and Graf. This formula shows that the tangent measure distributions are Palm distributions and define self-similar random measures in the sense of U. Zähle.
265
In multiple criteria optimization an important research topic is the topological structure of the set \( X_e \) of efficient solutions. Of major interest is the connectedness of \( X_e \), since it would allow the determination of \( X_e \) without considering non-efficient solutions in the process. We review general results on the subject,including the connectedness result for efficient solutions in multiple criteria linear programming. This result can be used to derive a definition of connectedness for discrete optimization problems. We present a counterexample to a previously stated result in this area, namely that the set of efficient solutions of the shortest path problem is connected. We will also show that connectedness does not hold for another important problem in discrete multiple criteria optimization: the spanning tree problem.
268
In this paper we will introduce the concept of lexicographic max-ordering solutions for multicriteria combinatorial optimization problems. Section 1 provides the basic notions of multicriteria combinatorial optimization and the definition of lexicographic max-ordering solutions. In Section 2 we will show that lexicographic max-ordering solutions are pareto optimal as well as max-ordering optimal solutions. Furthermore lexicographic max-ordering solutions can be used to characterize the set of pareto solutions. Further properties of lexicographic max-ordering solutions are given. Section 3 will be devoted to algorithms. We give a polynomial time algorithm for the two criteria case where one criterion is a sum and one is a bottleneck objective function, provided that the one criterion sum problem is solvable in polynomial time. For bottleneck functions an algorithm for the general case of Q criteria is presented.
267
In this paper we investigate two optimization problems for matroids with multiple objective functions, namely finding the pareto set and the max-ordering problem which consists in finding a basis such that the largest objective value is minimal. We prove that the decision versions of both problems are NP-complete. A solution procedure for the max-ordering problem is presented and a result on the relation of the solution sets of the two problems is given. The main results are a characterization of pareto bases by a basis exchange property and finally a connectivity result for proper pareto solutions.
266
262
An improved asymptotic analysis of the expected number of pivot steps required by the simplex algorithm (1995)
Let \(a_1,\dots,a_m\) be i.i.d. vectors uniform on the unit sphere in \(\mathbb{R}^n\), \(m\ge n\ge3\), and let \(X:=\{x \in \mathbb{R}^n \mid a_i^T x\leq 1,\ i=1,\dots,m\}\) be the random polyhedron they generate. Furthermore, for linearly independent vectors \(u\), \(\bar u\) in \(\mathbb{R}^n\), let \(S_{u, \bar u}(X)\) be the number of shadow vertices of \(X\) in \(\mathrm{span}(u, \bar u)\). The paper provides an asymptotic expansion of the expectation value \(E (S_{u, \bar u})\) for fixed \(n\) and \(m\to\infty\). The first terms of the expansion are given explicitly. Our investigation of \(E (S_{u, \bar u})\) is closely connected to Borgwardt's probabilistic analysis of the shadow vertex algorithm - a parametric variant of the simplex algorithm. We obtain an improved asymptotic upper bound for the number of pivot steps required by the shadow vertex algorithm for data distributed uniformly on the sphere.
Haunted Graveyard
Tonight is Halloween and Scared John and his friends have decided to do something fun to celebrate the occasion: crossing the graveyard. Although Scared John does not find this fun at all, he finally agreed to join them in their adventure. Once at the entrance, the friends have begun to cross the graveyard one by one, and now it is the time for Scared John. He still remembers the tales his grandmother told him when he was a child. She told him that, on Halloween night, “haunted holes” appear in the graveyard. These are not usual holes, but they transport people who fall inside to some point in the graveyard, possibly far away. But the scariest feature of these holes is that they allow one to travel in time as well as in space; i.e., if you fall inside a “haunted hole”, you appear somewhere in the graveyard a certain time before (or after) you entered the hole, in a parallel universe otherwise identical to ours.
The graveyard is organized as a grid of $W \times H$ cells, with the entrance in the cell at position $(0, 0)$ and the exit at $(W - 1, H - 1)$. Despite the darkness, Scared John can always recognize the exit, and he will leave as soon as he reaches it, determined never to set foot anywhere in the graveyard again. On his way to the exit, he can walk from one cell to an adjacent one, and he can only head to the North, East, South or West. In each cell there can be either one gravestone, one “haunted hole”, or grass:
If the cell contains a gravestone, you cannot walk over it, because gravestones are too high to climb.
If the cell contains a “haunted hole” and you walk over it, you will appear somewhere in the graveyard at a possibly different moment in time. The time difference depends on the particular “haunted hole” you fell into, and can be positive, negative or zero.
Otherwise, the cell has only grass, and you can walk freely over it.
He is terrified, so he wants to cross the graveyard as quickly as possible. And that is the reason why he has phoned you, a renowned programmer. He wants you to write a program that, given the description of the graveyard, computes the minimum time needed to go from the entrance to the exit. Scared John accepts using “haunted holes” if they permit him to cross the graveyard quicker, but he is frightened to death of the possibility of getting lost and being able to travel back in time indefinitely using the holes, so your program must report these situations.
Figure 3 illustrates a possible graveyard (the second test case from the sample input). In this case there are two gravestones in cells $(2, 1)$ and $(3, 1)$, and a “haunted hole” from cell $(3, 0)$ to cell $(2, 2)$ with a difference in time of $0$ seconds. The minimum time to cross the graveyard is $4$ seconds, corresponding to the path:\begin{equation*} (0,0) \xrightarrow [\text {1 sec}]{\text {East}} (1,0) \xrightarrow [\text {1 sec}]{\text {East}} (2,0) \xrightarrow [\text {1 sec}]{\text {East}} (3,0) \xrightarrow [\text {0 sec}]{\text {hole}} (2,2) \xrightarrow [\text {1 sec}]{\text {East}} (3,2) \end{equation*}
If you do not use the “haunted hole”, you need at least $5$ seconds.
Note that the destination of a “haunted hole” may have the entrance to another “haunted hole”. In this situation, Scared John will enter the second hole as soon as he exits the first one.
Scared John will leave the graveyard as soon as he reaches the exit, even if he would be able to travel back in time by continuing walking through the graveyard.
Input
The input consists of several test cases (at most $25$). Each test case begins with a line containing two integers $W$ and $H$ ($1 \leq W, H \leq 30$). These integers represent the width $W$ and height $H$ of the graveyard. The next line contains an integer $G$ ($G \geq 0$), the number of gravestones in the graveyard, and is followed by $G$ lines containing the positions of the gravestones. Each position is given by two integers $X$ and $Y$ ($0 \leq X < W$ and $0 \leq Y < H$).
The next line contains an integer $E$ ($E \geq 0$), the number of “haunted holes”, and is followed by $E$ lines. Each of these contains five integers $X_1, Y_1, X_2, Y_2, T$. $(X_1, Y_1)$ is the position of the “haunted hole” ($0 \leq X_1 < W$ and $0 \leq Y_1 < H$). $(X_2, Y_2)$ is the destination of the “haunted hole” ($0 \leq X_2 < W$ and $0 \leq Y_2 < H$). $T$ ($-10\, 000 \leq T \leq 10\, 000$) is the difference in seconds between the moment somebody enters the “haunted hole” and the moment he appears in the destination position; a positive number indicates that he reaches the destination after entering the hole.
You can safely assume that there are no two “haunted holes” with the same origin, and that neither the entrance nor the destination of a “haunted hole” contains a gravestone. Furthermore, there are neither gravestones nor “haunted holes” at positions $(0,0)$ and $(W-1,H-1)$.
The input will finish with a line containing 0 0, which should not be processed.
Output
For each test case, if it is possible for Scared John to travel back in time indefinitely without passing by the exit, output Never. Otherwise, print the minimum time in seconds that it takes him to cross the graveyard from the entrance to the exit if it is reachable, and Impossible if not.
Sample Input 1:
3 3
2
2 1
1 2
0
4 3
2
2 1
3 1
1
3 0 2 2 0
4 2
0
1
2 0 1 0 -3
0 0

Sample Output 1:
Impossible
4
Never
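Although the statement gives no algorithm, the task maps naturally onto single-source shortest paths with negative edge weights: each step to a non-stone cell costs 1 second, a hole cell's only outgoing edge is its forced teleport with cost $T$, the exit has no outgoing edges, and "Never" corresponds to a negative cycle reachable from the entrance. A hedged Bellman-Ford sketch (function and variable names are our own, not part of the problem):

```python
from itertools import product

def solve(W, H, stones, holes):
    """Bellman-Ford sketch (an assumed solution approach).

    stones: set of (x, y) cells; holes: dict (x1, y1) -> (x2, y2, t).
    Returns "Never", "Impossible", or the minimum time as a string."""
    start, goal = (0, 0), (W - 1, H - 1)
    edges = []
    for x, y in product(range(W), range(H)):
        u = (x, y)
        if u in stones or u == goal:
            continue                         # no edges out of stones or the exit
        if u in holes:
            x2, y2, t = holes[u]
            edges.append((u, (x2, y2), t))   # forced teleport, cost t (may be negative)
        else:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                v = (x + dx, y + dy)
                if 0 <= v[0] < W and 0 <= v[1] < H and v not in stones:
                    edges.append((u, v, 1))  # one second per step
    INF = float("inf")
    dist = {c: INF for c in product(range(W), range(H))}
    dist[start] = 0
    for i in range(W * H):                   # one extra pass to detect cycles
        changed = False
        for u, v, w in edges:
            if dist[u] < INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
        if i == W * H - 1:
            return "Never"                   # negative cycle reachable from entrance
    return "Impossible" if dist[goal] == INF else str(dist[goal])
```

Since the exit node has no outgoing edges, no cycle can pass through it, so any reachable negative cycle really does let Scared John go back in time indefinitely without leaving. On the three sample cases above this sketch reproduces Impossible, 4, and Never.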
Homology, Homotopy and Applications, Volume 5, Number 1 (2003), 53-70. Group extensions and automorphism group rings. Abstract
We use extensions to study the semi-simple quotient of the group ring $\mathbf{F}_pAut(P)$ of a finite $p$-group $P$. For an extension $E: N \to P \to Q$, our results involve relations between $Aut(N)$, $Aut(P)$, $Aut(Q)$ and the extension class $[E]\in H^2(Q, ZN)$. One novel feature is the use of the
intersection orbit group $\Omega([E])$, defined as the intersection of the orbits $Aut(N)\cdot[E]$ and $Aut(Q)\cdot [E]$ in $H^2(Q,ZN)$. This group is useful in computing $|Aut(P)|$. In case $N$, $Q$ are elementary Abelian $2$-groups our results involve the theory of quadratic forms and the Arf invariant.
Mathematical Reviews: MR1989613. Zentralblatt MATH: 1033.20047.
Martino, John; Priddy, Stewart. Group extensions and automorphism group rings. Homology Homotopy Appl. 5 (2003), no. 1, 53--70. https://projecteuclid.org/euclid.hha/1139839926
This 18 page article seems pretty good as a historical account of who was responsible for what.
In general, the push for rigor is usually in response to a failure to be able to demonstrate the kinds of results one wishes to. It's usually relatively easy to demonstrate that there exist objects with certain properties, but you need precise definitions to prove that no such object exists. The classic example of this is non-computable problems and Turing Machines. Until you sit down and say "this precisely and nothing else is what it means to be solved by computation" it's impossible to prove that something isn't a computation, so when people start asking "is there an algorithm that does $\ldots$?" for questions where the answer "should be" no, you suddenly need a precise definition. Similar things happened with real analysis.
In real analysis, as mentioned in an excellent comment, there was a shift in people's conception of the notion of a function. This broadened conception of a function suddenly allows a number of famous "counterexample" functions to be constructed. These often require a reasonably rigorous understanding of the topic to construct or to analyze. The most famous is the everywhere continuous, nowhere differentiable Weierstrass function. If you don't have a very precise definition of continuity and differentiability, demonstrating that that function is one and not the other is extremely hard. The quest for weird functions with unexpected properties and combinations of properties was one of the driving forces in developing precise conceptions of those properties.
Another topic that people were very interested in was infinite series. There are lots of weird results that can crop up if you're not careful with infinite series, as shown by the now famously cautionary theorem:
Theorem (Summation Rearrangement Theorem): Let $a_n$ be a sequence such that $\sum a_n$ converges conditionally. Then for every $x$ there is some $b_n$ that is a reordering of $a_n$ such that $\sum b_n=x$.
This theorem means you have to be very careful dealing with infinite sums, and for a long time people weren't and so started deriving results that made no sense. Suddenly the usual free-wheeling algebraic manipulation approach to solving infinite sums was no longer okay, because sometimes doing so changed the value of the sum. Instead, a more rigorous theory of summation manipulation, as well as concepts such as uniform and absolute convergence had to be developed.
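To make the theorem concrete, here is a small numerical illustration (a sketch; the greedy reordering and the function name are ours): the terms of the alternating harmonic series, which converges conditionally to $\ln 2$, can be reordered so that the partial sums approach any chosen target.

```python
import math

def rearranged_sum(target, n_terms=100_000):
    """Greedily reorder 1 - 1/2 + 1/3 - 1/4 + ... so the partial sums
    approach `target`: take unused positive terms while at or below the
    target, unused negative terms while above it."""
    pos, neg = 1, 2      # next odd / even denominator not yet used
    s = 0.0
    for _ in range(n_terms):
        if s <= target:
            s += 1.0 / pos
            pos += 2
        else:
            s -= 1.0 / neg
            neg += 2
    return s

# The same terms, reordered, approach pi instead of ln(2):
print(rearranged_sum(math.pi))
```

The final error is bounded by the size of the last terms crossing the target, so it shrinks as `n_terms` grows.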
Here's an example of a problem surrounding an infinite product, created by Euler:
Consider the following formula:
$$x\prod_{n=1}^\infty \left(1-\frac{x^2}{n^2\pi^2}\right)$$
Does this expression even make sense? Assuming it does, does it equal $\sin(x)$ or $\sin(x)e^x$? How can you tell, given that both functions have the same zeros as this product and the same relationship to their derivative? If it doesn't equal $\sin(x)e^x$ (which it doesn't; it really does equal $\sin(x)$), how could we modify it so that it did?
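While the real answer needs analytic machinery, one can at least probe Euler's product numerically (a sketch; the truncation level and the function name are ours). At a sample point, the truncated product matches $\sin(x)$ and clearly not $\sin(x)e^x$:

```python
import math

def sine_product(x, n=100_000):
    """Truncated Euler product: x * prod_{k=1..n} (1 - x^2 / (k^2 pi^2))."""
    p = x
    for k in range(1, n + 1):
        p *= 1.0 - x * x / (k * k * math.pi * math.pi)
    return p

x = 1.0
# Compare the truncated product against the two candidates:
print(sine_product(x), math.sin(x), math.sin(x) * math.exp(x))
```

The relative truncation error behaves roughly like $x^2/(\pi^2 n)$, so with $n = 10^5$ the product agrees with $\sin(1)$ to several digits, while $\sin(1)e^1$ is off by more than a factor of two.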
Questions like this were very popular in the 1800s, as mathematicians were notably obsessed with infinite products and summations. However, most questions of this form require a very sophisticated understanding of analysis to handle (and weren't handled particularly well by the tools of the previous century).
Definition:Set Union/Set of Sets
Let $\mathbb S$ be a set of sets.
The union of $\mathbb S$ is: $\displaystyle \bigcup \mathbb S := \set {x: \exists X \in \mathbb S: x \in X}$
That is, the set of all elements of all elements of $\mathbb S$.
Thus the general union of two sets can be defined as: $\displaystyle \bigcup \set {S, T} = S \cup T$
Also denoted as
Some sources denote $\displaystyle \bigcup \mathbb S$ as $\displaystyle \bigcup_{S \mathop \in \mathbb S} S$.
Let:
$A = \set {1, 2, 3, 4}$, $B = \set {a, 3, 4}$, $C = \set {2, a}$. Let $\mathscr S = \set {A, B, C}$.
Then:
$\displaystyle \bigcup \mathscr S = \set {1, 2, 3, 4, a}$
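This example can be checked directly with Python's built-in sets (a small sketch; `S` is a list because Python sets cannot themselves contain sets):

```python
A = {1, 2, 3, 4}
B = {"a", 3, 4}
C = {2, "a"}
S = [A, B, C]                 # the "set of sets" (as a list of sets)
union = set().union(*S)       # the set of all elements of all elements of S
print(union)
```

Note also that `set().union(X, Y)` reduces to the ordinary binary union `X | Y`, matching the doubleton case above.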
Let $\Z$ denote the set of integers.
Let $\map \Z n$ denote the initial segment of $\Z_{> 0}$:
$\map \Z n = \set {1, 2, \ldots, n}$
Let $\mathscr S := \set {\map \Z n: n \in \Z_{> 0} }$. Then:
$\displaystyle \bigcup \mathscr S = \Z_{> 0}$
Also see Union of Doubleton for a proof that $\displaystyle \bigcup \set {S, T} = S \cup T$.
Abbreviation: MOLat
A modular ortholattice is an ortholattice $\mathbf{A}=\langle A,\vee,0,\wedge,1,'\rangle$ such that the modular law holds: $x\le z\Longrightarrow (x\vee y) \wedge z\le x\vee (y\wedge z)$
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be … . A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x ... y)=h(x) ... h(y)$
An
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[...]] subvariety [[...]] expansion [[...]] supervariety [[...]] subreduct
Nonstandard Constraints and the Power of Weak Contributions
Have you ever wanted to add a certain boundary or domain condition to a physics problem but couldn’t find a built-in feature? Today, we will show you how to implement nonstandard constraints using the so-called weak contributions. Weak contributions are, in fact, what the software internally uses to apply the built-in domain and boundary conditions. They provide a flexible and physics-independent way to extend the applicability of the COMSOL Multiphysics® software.
Introduction to Weak Contributions
Many of the problems solved in COMSOL Multiphysics can be thought of as finding functions that minimize some quantity. In equilibrium problems of elasticity, for example, we look for displacements that minimize the total strain energy. In our blog series on variational problems and constraints, we showed how to use the Weak Form PDE interface to solve both constrained and unconstrained variational problems. We used a generalized constraint framework to deal with all kinds of restrictions on the solution. There, we showed you how to implement both the unconstrained and constrained problem.
Often, the quantity that has to be minimized is well understood and what we have to prescribe in our specific situations are the constraints. Ideally, we should not reinvent the wheel on the unconstrained problem. Frequently, constraints are boundary conditions, but sometimes they can be requirements to be satisfied at every point or by an integral of the solution. Several options for boundary conditions and other constraints are built into COMSOL Multiphysics, but from time to time, you may want to add a novel constraint or two. Today, we will see how to do so using weak contributions.
In this blog post, we:
Give a quick recap of adding constraints to variational problems
Use weak contributions to add a nonstandard constraint to a rather well-known equation
Compare this strategy with a more physically motivated implementation

Adding Constraints to an Extremization Problem
In our blog series on variational problems and constraints, we discussed in detail the analytical and numerical aspects of the subject as well as the COMSOL® software implementation. Readers unfamiliar with the subject will benefit from going over that series. In this section, we summarize the main ideas needed to work through today’s examples.
The method of Lagrange multipliers is used to recast constrained variational problems to equivalent unconstrained problems. Consider the constrained variational problem
(1)
(2)
The feasible critical points of this constrained problem are the stationary points of the augmented functional
(3)
Let’s take the variational derivative of this functional.
(4)
Say the unconstrained part of the problem is already taken care of and we just want to add what is necessary to enforce a constraint. Our responsibility then is only the second term in the above equation. In COMSOL Multiphysics, when we add a physics interface, it can be thought of as adding an unconstrained variational problem. Afterward, constraints on boundaries and domains can be added through one or several built-in standard boundary conditions. What if we have a nonstandard constraint that is not built in? Using weak contributions gives great flexibility to add such conditions.
Going back to the functional above, let us focus on the contributions coming from the constraint.
For the distributed constraint above, the Lagrange multiplier \lambda (x) is a function defined over the geometric entity subject to the constraint. For a global constraint such as an integral or average constraint, on the other hand, the Lagrange multiplier is one number. Say we want to impose the global integral constraint
The augmented functional is
and its variation is
(5)
Thus, if the boundary condition or other constraint you want to enforce on your solution is not built in, but there is a built-in physics interface for the physics, all you need to do is add the last two terms in the above equation using the Weak Contribution node. Let's demonstrate this with an example.

Constraining the Average Vertical Displacement of a Spring
In this example, a spring is rigidly fixed at the bottom end and we want the top end (boundary 4 in the model below) to have an average vertical displacement of 2 cm. This is a linear elasticity problem and this physics is built in. Also, rigidly fixing a face is a standard boundary condition. On the other hand, specifying an average displacement on a face is not.
Note that we are not asking for the vertical displacement of each point on the face to be 2 cm. That could have been specified with the built-in Prescribed Displacement node. What we have is the global constraint
(6)
where A is the face in question and dh is the desired average vertical displacement (2 cm in our case). Here, dh is not a differential of any quantity. It is just the name of a parameter used for the average vertical displacement on the face. We could directly have written 2 cm in its place.
With all but this constraint implemented using standard features, our variational problem becomes finding the stationary point of the augmented functional
The corresponding stationary condition is
(7)
Let us add these two contributions using a boundary weak contribution and a global weak contribution. In the Model Builder, we can distinguish between boundary and global weak contributions by their icons. Boundary contributions have the same icons as boundary conditions, whereas global contributions have icons with an integral sign (\int). Additionally, the Settings window for boundary contributions contains a boundary selection section, whereas there is no geometric entity selection for a global contribution.
Boundary and global weak contributions to enforce constraint on an average displacement over a surface.
Finally, the variable \lambda is an auxiliary global variable we defined in our Lagrange multiplier method. Any new variable related to the constraint has to be defined either in the Auxiliary Variable subnode of a Weak Contribution node or in the Global Equations node, based on the nature of the constraint.
In our example, we have a global constraint and, as such, we have to define it using a Global Equations node. Often, an equation will be entered in the Global Equations Settings window as well. This is not necessary here, as we have included in the weak contribution a term containing the variation of the Lagrange multiplier. An alternative, using the Global Equations node to define both the global degree of freedom and its equation, will be discussed later. A global equation adds one degree of freedom to our problem.

Defining an auxiliary global unknown.
If we solve this problem, we get the solution shown below. We can see the value of the Lagrange multiplier and the average displacements in Results > Derived Values. If we look at the vertical displacement on the constrained surface, it is not uniform; it just averages to 2 cm as per the constraint.
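To see the multiplier-force relationship in a setting small enough to compute by hand, here is a sketch outside COMSOL (the discretization and all names are our own hypothetical stand-ins for the 3D spring model): a 1D chain of identical springs fixed at the bottom, with the constraint imposed on the top node through the KKT system of the augmented functional. In 1D the constrained "face" is a single node, so the average constraint reduces to prescribing that node's displacement.

```python
import numpy as np

# Hypothetical discretization: a chain of n identical springs (stiffness k),
# fixed at the bottom; constrain the top node's displacement to dh = 2 cm.
n, k, dh = 5, 200.0, 0.02

# Stiffness matrix of the fixed-free chain (free DOFs are nodes 1..n)
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = 2.0 * k if i < n - 1 else k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

a = np.zeros(n)
a[-1] = 1.0                    # constraint a @ u = dh on the top node

# KKT system for: minimize 0.5 u^T K u + lam * (a @ u - dh)
M = np.block([[K, a[:, None]], [a[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(M, np.append(np.zeros(n), dh))
u, lam = sol[:n], sol[n]

# Stationarity K u + lam a = 0 forces a uniform stretch dh/n per spring,
# and -lam is the end force k*dh/n needed to hold the constraint.
print(u, lam)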
We would like to clarify two items about the above implementation.
1. Both the second and third terms in (Eq. 7) contain integrals. In the boundary weak contribution, Weak Contribution 1, we add just the integrand. The integral in the global weak contribution, on the other hand, needs the integration operator to be explicitly called. Alternatively, we could have added the integrand test(lam)*w/Area to Weak Contribution 1 and kept only -test(lam)*dh in Weak Contribution 2.
2. The constraint in this example is global. For a distributed constraint, the Lagrange multiplier is a function of location and it has to be defined as an auxiliary variable under the boundary weak contribution. See our blog post on variational constraints for more on this distinction.

Alternative Implementation
The term multiplying the variation of the Lagrange multiplier, \delta \lambda, can be specified in the Global Equations node, where the Lagrange multiplier itself is defined. The screenshots below show how to do so. This only replaces the third term in (Eq. 7). The term not containing \delta \lambda still has to be specified as a boundary weak contribution. Note that intop1() is an integration operator defined over boundary 4 and Area is the area of that boundary, given by intop1(1).

Alternative specification of the weak term containing a variation of a global Lagrange multiplier.

Physical Interpretation of the Lagrange Multiplier
The above solution gives us the displacements and stresses induced by moving boundary 4 by an average vertical displacement of 2 cm. The question is: How do we physically force the structure to conform to our wish? You guessed it: We apply a force. The Lagrange multiplier is related to the force (flux) needed to enforce a constraint. The operative word here is related. Let us see what we mean here in detail. First, let's try an alternative formulation of the constraint.
The constraint in (Eq. 6) is mathematically equivalent to
(8)
The augmented functional corresponding to this form of the constraint is
and the corresponding stationary condition is
(9)
If we enter the last two terms in this equation as weak contributions and solve, we get a Lagrange multiplier much different from what was obtained in our first implementation. The displacements and stresses remain the same nevertheless. So, we can suspect that the Lagrange multiplier in and of itself is not a physical quantity and, as such, cannot tell us what to physically do to enforce a desired constraint. One reliable way to find out what we should do physically is to postprocess the results to see reaction forces (fluxes).
To rigorously establish what the Lagrange multiplier is physically, we have to look at the unconstrained part of the equation that we have been hiding so far. In today’s example, that means looking at the weak form of the solid mechanics equation. For a deformable solid in equilibrium, the weak form, also known as the virtual work equation, is given by
(10)
where \mathbf{u} = (u,v,w) is the displacement vector and \sigma, \varepsilon, \mathbf{b}, and \mathbf{\Gamma} are respectively the stress, strain, body load per unit volume, and boundary load. The weak form for any COMSOL Multiphysics physics interface can be viewed by enabling the
Equation View.
If we compare (Eq. 10) with (Eq. 9) and (Eq. 7), we see that the Lagrange multipliers in today's example appear in the same place as the boundary load on the constrained surface. One difference is that the Lagrange multiplier appears outside the surface integral, whereas the boundary load is inside the integral. This stems from the global nature of our constraint. A second difference is that the Lagrange multiplier in our example goes with the vertical displacement w, whereas the surface load in the solid mechanics equation is dot multiplied by the variation of the displacement vector. Let us reconcile these items one at a time.
Now we see that our Lagrange multiplier is related to the vertical component of a boundary load. Finally, if the boundary load is constant, we have
Comparing this with (Eq. 7), we see that in our first implementation, the Lagrange multiplier corresponds to the total vertical boundary load. Now that we know that, for this specific physics and this specific form of the constraint equation, the Lagrange multiplier is the total vertical load on a face, we can use the built-in Boundary Load node to enforce the constraint instead of the weak contribution. This process is shown in the loaded spring example in the Application Gallery.

Alternative implementation when we know what the Lagrange multiplier physically corresponds to.
This last implementation can be thought of as asking the software to apply whatever vertical total force is required to enforce an average vertical displacement. We could do that because, by looking at the weak form of the solid mechanics equation, we identify the correspondence between the Lagrange multiplier and the total force. It is not always possible to make such connections with a standard force (flux) term. Note that we could have used the default distributed boundary load, which would only have changed the number of the extra degrees of freedom used internally.
Generically speaking, for dimensional consistency, in (Eq. 3), the product of the Lagrange multiplier and the constraint g should give the density of the “energy” E per unit volume, area, or length depending on the geometric entity the constraint is applied on. In many engineering problems, E literally represents energy and, as such, we say the Lagrange multiplier is energetically conjugate with the constraint. This means, for example, that if we scale, square, or do any operation that mathematically changes the unit of the constraint equation, then the unit and thus the physical meaning of the Lagrange multiplier changes. What doesn’t change, however, is that of \lambda \frac{\partial g}{\partial u}, as it is always energetically conjugate with u (Eq. 5). It is this product of the Lagrange multiplier and the constraint Jacobian that is a force (flux) density in a generalized sense. If \frac{\partial g}{\partial u}=1, which is often the case with linear constraints, the Lagrange multiplier is indeed the generalized force (flux).
The beauty of the weak contribution is that you can enforce the constraint without having to go through the weak form of the built-in physics. Then, you can postprocess the result to find out the physical course of action. The implementation is physics independent.
Concluding Thoughts on Weak Contributions
Today, we have discussed how the COMSOL Multiphysics software facilitates the implementation of nonstandard boundary conditions. Using weak contributions, we have a flexible and physics-independent strategy to add constraints that are not used frequently enough to be standard features in the software. The mathematical roots of this method are in problems where the solution minimizes some quantity, but the strategy can be used for problems given by partial differential equations that do not have a corresponding variational solution. For more background information on this topic, we recommend our blog series on the weak form and on variational problems.
Next Steps
If you have any questions on weak contributions or another topic, or want to learn more about how the features and functionality in COMSOL Multiphysics suit your modeling needs, you’re welcome to contact us.
Check out the following Application Gallery examples and blog posts for more demonstrations of using weak contributions and extra equations in various physics areas:
mechanics of rigid bodies (8 points) 2. Series 33. Year - 5. wheel with a spring
We have a perfectly rigid homogeneous disc with a radius $R$ and mass $m$, to which a rubber band is connected. One end of the band is fixed at a distance $2R$ from the edge of the disc and the other end at the edge of the disc. The rubber band behaves as an ideal, thin spring with stiffness $k$, rest length $2R$ and negligible mass. The disc is secured at its centre, so it can rotate about an axis through this point, but cannot move or change the rotation axis. Find the relation between the magnitude of the moment of force by which the rubber band speeds up or slows down the rotation of the disc, as a function of $\phi $. Also, find the equation of motion.
Bonus: Determine the period of the system's small oscillations.
(9 points) 5. Series 32. Year - 5. bouncing ball
We spin a rigid ball in the air with a sufficiently high angular velocity $\omega $, the rotation axis parallel to the ground. After that we let the ball fall from height $h_0$ onto a horizontal surface. It bounces back from the surface to height $h_1$ and lands at a slightly different spot than the initial point of impact. Determine the distance between those two points of impact on the ground, given that the coefficient of friction $f$ between the ball and the ground is small enough.
Matej observed Fykos birds playing with a ball
(9 points) 4. Series 32. Year - 5. frisbee
A thin homogeneous disc revolves on a flat horizontal surface around a circle with the radius $R$. The velocity of disk's centre is $v$. Find the angle $\alpha $ between the disc plane and the vertical. The friction between the disc and the surface is sufficiently large. You may work under the approximation where the radius of the disc is much less than $R$.
Jáchym hopes that contestants will come up with a solution.
(8 points) 3. Series 32. Year - 4. destruction of a copper loop
A copper flexible circular loop of radius $r$ is placed in a uniform magnetic field $B$. The vector of magnetic induction is perpendicular to the plane determined by the loop. The maximal allowed tensile strength of the material is $\sigma _p$. The flux linkage of this circular loop is changing in time as $\Phi (t) = \Phi _0 + \alpha t,$ where $\alpha $ is a positive constant. How long does it take to reach $\sigma _p$?
Hint: Tension force can be calculated as $T = |BIr|$.
Vítek thinks back to AP Physics.
(6 points) 6. Series 31. Year - 3. non-analytic spring
Imagine a pole of length $b = 5 \mathrm{cm}$ and mass $m = 1 \mathrm{kg}$ and a spring of initial length $c = 10 \mathrm{cm}$, spring constant $k = 200 \mathrm{N\cdot m^{-1}}$ and negligible mass, connected at one of their ends. The other ends of the spring and the pole are affixed at points at the same height, a distance $a = 10 \mathrm{cm}$ from each other. The spring and the pole can both rotate freely about the fixed points and their joint. Label $\phi $ the angle of the pole to the horizontal. Find all angles $\phi $ for which the system is in equilibrium. Which of these are stable and which unstable?
Jachym was supposed to come up with an easy problem.
(7 points) 6. Series 31. Year - 4. dimensional analysis
Matej was making a gun and wanted to measure the speed of the projectiles leaving the barrel. Unfortunately, he doesn't have any measuring device other than a ruler. However, he found a block that is made half of steel and half of wood. He lays it down at the edge of the table (of height $100 \mathrm{cm}$ and length $200 \mathrm{cm}$) and shoots at it horizontally. With the steel part of the block facing the gun, the bullet bounces off perfectly elastically and lands $50 \mathrm{cm}$ from the edge of the table. The block slides $5 \mathrm{cm}$ on the table. Then Matej turns the block around and shoots into the wooden side. This time the bullet stays in the block and the block slides only $4 \mathrm{cm}$. Help Matej calculate the speed of the bullet. It might also be helpful to know that when Matej lifts one edge of the table by at least $20 \mathrm{cm}$, a moving block won't stop sliding.
Matej wanted all the variables to have the same unit.
(12 points) 4. Series 31. Year - E. heft of a string
Measure the linear (length) density of the catgut string that you received together with the tasks. You are forbidden to weigh the catgut.
Hint: You can try to vibrate the string.
Mišo wondered about catguts on ITF.
(7 points) 3. Series 31. Year - 4. dropped pen
We drop a pen (a rigid stick) on a table so that it makes an angle $\alpha $ with the horizontal plane during its fall. Calculate the velocity of the higher end at the moment of impact. When we dropped the pen, its center of mass was at height $h$. All collisions are inelastic and the friction between the table and the end of the pen is large enough.
Bonus: Calculate the angle $\alpha $ for which the velocity of the second end that touches the table is maximal. For which heights $h$ is it worth tilting the pen?
Matt was bored. |
I have the following system of equations:
$ \begin{cases} \frac{du}{dt} = v - v^3 \,, \\ \frac{dv}{dt} = -u - u^3 \,. \end{cases} $
I'm asked to find a Lyapunov function (Lyapunov's second method) to determine the stability around the origin. Using a linearization near the origin, I have found that the eigenvalues of the Jacobian are $\pm i$; hence the linearized system has a center at the origin (which by itself is inconclusive for the nonlinear system).
I figured this means I need to find a positive definite function (that is zero in the origin) and has negative semidefinite derivative (with respect to the system).
The questions in the book $\textit{Elementary Differential Equations and Boundary Value Problems}$ by Boyce and DiPrima are usually solved by trying the polynomials $V(u,v) = au^2 + bv^2$ or $V(u,v) = au^2 + buv + cv^2$. Sometimes a change to polar coordinates is made to determine a radius in which the derivative is negative. But I can't seem to ensure a derivative that is less or equal to zero in this case, for example:
Take $V(u,v) := au^2 + bv^2$, then
$ \begin{align*} \dot V &= 2auu' + 2bvv' \\ &= 2au(v-v^3) + 2bv(-u-u^3) & \mbox{let (for example) $a=b=1$}\\ &= -2uv^3 - 2vu^3 \end{align*} $
As these are cubic terms, they may very well be positive. |
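For what it's worth, one separable ansatz that happens to work for this system (my own sketch, not part of the question): choosing $V_u = u + u^3$ and $V_v = v - v^3$ makes $\dot V = (u+u^3)(v-v^3) + (v-v^3)(-u-u^3) = 0$ identically, so $V(u,v) = u^2/2 + u^4/4 + v^2/2 - v^4/4$ is a first integral that is positive definite near the origin. A quick stdlib-only numerical check that $V$ is conserved along a trajectory:

```python
# Sketch: verify numerically that V(u,v) = u^2/2 + u^4/4 + v^2/2 - v^4/4
# is (to integration accuracy) constant along solutions of the system.
def rhs(u, v):
    return v - v**3, -u - u**3

def V(u, v):
    return u*u/2 + u**4/4 + v*v/2 - v**4/4

def rk4_step(u, v, h):
    # one classical Runge-Kutta step for the planar system
    k1 = rhs(u, v)
    k2 = rhs(u + h/2*k1[0], v + h/2*k1[1])
    k3 = rhs(u + h/2*k2[0], v + h/2*k2[1])
    k4 = rhs(u + h*k3[0], v + h*k3[1])
    return (u + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

u, v = 0.3, 0.2          # a point near the origin
V0 = V(u, v)
for _ in range(2000):    # integrate to t = 20
    u, v = rk4_step(u, v, 0.01)
# V(u, v) stays numerically equal to V0: the origin is stable (a center).
```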
This is a collection of calculus problems, organized into short chapters by subject area. Each chapter starts with a set of definitions and theorems, then some “guided exercises” (worked problems), and a large collection of exercises, with brief answers in the back.
So far this sounds a lot like Schaum’s Outlines: Calculus, and in fact there are strong similarities between the books, although the present book only covers single-variable calculus, series, and ordinary differential equations while Schaum also does multi-variable. The present book is more talkative and has much more difficult functions in the examples and exercises (which are therefore probably more realistic). In both books nearly all the exercises concern properties of specific stated functions rather than general theorems, although the present book poses some more open-ended problems (usually near the end of the chapter) and has just a few proof problems.
Overall I rate the raw difficulty of the two books about the same, although the present book is not compartmentalized as much as Schaum (or most courses) and requires you to remember things from other areas of calculus. For example, the series problems in the present book often involve logs, exponentials, and trig functions, so you really have to have a good understanding of these to solve the problems. The last chapter is miscellaneous problems that are given without any hints to the solution methods to use, and so would be very challenging for most students.
A couple of samples of the more advanced topics: (1) on p. 62ff are several examples of finding the limit of a sequence defined by a recursion; for example, given \(a_1 = 2\) and \(a_{n+1} = (a_n^2 + 3)/(2a_n)\), show \(\lim a_n\) exists and find it. These usually require several stages of bootstrapping (although the book does not use this term), where we successively show the sequence is positive, then monotonic (and so goes to a limit), and then find the limit by taking the limit of the recursion. (2) One of the few proofs: On p. 232 the beautiful but little-known theorem that if a sum \(\sum a_n\) of monotone decreasing positive terms converges, then \(n a_n \to 0\).
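Incidentally, the recursion in sample (1) is Newton's method for $\sqrt 3$ in disguise: the fixed-point equation $L = (L^2+3)/(2L)$ gives $L^2 = 3$. A few iterations in Python (my own sketch, not from the book) illustrate the bootstrapping claim numerically:

```python
import math

a = 2.0                      # a_1 = 2
terms = [a]
for _ in range(8):
    a = (a*a + 3) / (2*a)    # a_{n+1} = (a_n^2 + 3) / (2 a_n)
    terms.append(a)

# The terms stay positive, decrease monotonically, and converge to sqrt(3).
```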
The book originates in Italy and is based on many years of written tests at the University of Genova. The book’s original goal was to prepare students to succeed in first-year calculus in Italian universities, but it is now being marketed as a world book. The authors apologize in a few spots for an Italy-centric view, but really these points are very minor and everything here would also work in US curricula.
One big weakness of the book is that it makes no mention of technology. The whole first chapter discusses how to draw graphs without mentioning that there are machines today that can do this for us. In many of these problems (especially on sequences and series) it would be very helpful to work out some samples to help guess a good approach.
Schaum does a much better job here, and often asks the student to use a graphing calculator to try things out.
Bottom line: a good alternative to Schaum’s Outline for more ambitious students. |
Allen Stenger is a math hobbyist and retired software developer. He is an editor of the Missouri Journal of Mathematical Sciences. His personal web page is allenstenger.com. His mathematical interests are number theory and classical analysis. |
Abbreviation:
PoMon
A partially ordered monoid is a structure $\mathbf{A}=\langle A,\cdot,1,\le\rangle$ such that
$\langle A,\cdot,1\rangle$ is a monoid
$\langle A,\le\rangle$ is a partially ordered set
$\cdot$ is orderpreserving: $x\le y\Longrightarrow wxz\le wyz$
Let $\mathbf{A}$ and $\mathbf{B}$ be partially ordered monoids. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is an orderpreserving homomorphism: $h(x \cdot y)=h(x) \cdot h(y)$, $h(1)=1$, $x\le y\Longrightarrow h(x)\le h(y)$
Example 1:
Every monoid with the discrete partial order is a po-monoid.
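Another familiar example (a sketch of my own, not from the page) is $\langle \mathbb{N},+,0,\le\rangle$ with the usual order; a brute-force check of the axioms on a finite sample:

```python
# Check the po-monoid axioms for (N, +, 0, <=) on a small finite sample.
rng = range(6)

monoid_ok = (all((x + y) + z == x + (y + z) for x in rng for y in rng for z in rng)
             and all(x + 0 == x == 0 + x for x in rng))

# order-preservation: x <= y implies w + x + z <= w + y + z
order_preserving = all(w + x + z <= w + y + z
                       for x in rng for y in rng if x <= y
                       for w in rng for z in rng)
```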
$\begin{array}{lr} f(1)= &1\\ f(2)= &4\\ f(3)= &37\\ f(4)= &549\\ f(5)= &\\ \end{array}$
Partially ordered semigroups reduced type |
I've found Lagrange's Sur la résolution des équations algébriques to be a very confusing and difficult read, and I think I'm starting to see why: it seems that Lagrange thinks of algebra in a much more formal/symbolic way than I'm used to. Whereas I think of a symbol $x$ as referring to a specific number (which may be unknown), and I think of $2x + 3x = 5x$ as being justified because it would be true no matter which number $x$ is, Lagrange seems to just view $x$ as a sort of symbol on which certain rules of operation are defined, and $2x + 3x = 5x$ as being justified because it's one of those rules.
Here are some examples:
When explaining Cardan's method for solving the cubic, Lagrange blithely divides by a variable $y$ with no consideration for whether it is or isn't zero. At first I thought this was sloppy reasoning but now I wonder if he basically thinks of this as a calculation in the field of rational functions $\mathbb C(y)$.
Where $a, b, c$ are the roots of a particular cubic, Lagrange states in passing that $\frac {a+\alpha b + \beta c} 3$ will have $3!$ different values when we permute $a, b, c$ in every possible way. This is correct if we think of "different values" as meaning different forms, but possibly incorrect if $a, b, c$ are standing in for specific numbers (if they're all zero, for instance).
With $\alpha$ a primitive cube root of unity, Lagrange concludes from $\alpha Aa+\alpha Bb + \alpha Cc=Aa+Bc+Cb$ that $\alpha A=A$. This may be incorrect if $A, B, C$ and $a, b, c$ are specific numbers, but correct if we think of them as symbols and just want the expression on the left to be the same as the expression on the right.
Am I correct that 18th century algebra, or at least Lagrange, was done in this more "symbolic" way? If so, how can this be made rigorous in modern terminology? Where can I learn more about the way Lagrange and his contemporaries thought about their subject? |
The Upper Incomplete Gamma function, for $t \in \mathbb{R}$, is defined as:
\begin{equation} \Gamma(α,β)=\int_{β}^{\infty}t^{α-1}e^{-t}dt \end{equation}
For the problem which I am studying it takes the form:
\begin{equation} \Gamma(1+d,A-c\ln x)=\int_{(A-c\ln x)}^{\infty}t^{d}e^{-t}dt \end{equation}
where
\begin{equation} A=\frac{cdr}{1-(1-c)r} \end{equation}
For the parameters it holds that $c\in (0,1]$, $d \in \mathbb{R}$ and $r$ is an arbitary real constant which plays no significant role at this particular stage.
My task is to perturb the incomplete Gamma function above by a parameter $\epsilon$, such that $0<\epsilon \ll 1$. To do so, I have to expand the $Γ(1+d+ε, A-c\ln x)$ function into a Taylor series around the point $0$. But to do a Taylor expansion I need, by definition:
\begin{equation} \Gamma(1+d+\epsilon, A-c\ln x)=\sum_{n=0}^{\infty}\frac{\partial^n}{\partial\epsilon^n}\Gamma(1+d+\epsilon, A-c \lim_{x\to 0}\ln x)\frac{\epsilon^n}{n!} \end{equation}
which can't be done since $\ln x \to -\infty$ as $x \to 0$.
I am really looking for the reasoning here. Am I doing something wrong? Should I not take $x \to 0$ but $ε \to 0$ instead?
Another thought which has crossed my mind: perhaps take $Γ(0,0)$? This is how I would have done it if I wanted to perturb an ODE. Therefore, perhaps it needs a Taylor expansion in two variables. I don't know; I have totally got something wrong here.
How am I going to perturb this function given all the above? If anyone could point out how to proceed with the first derivative and the right way of thinking, I am pretty sure that I can manage the rest.
Any help would be greatly appreciated. Thank you! |
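For what it's worth, the series being set up is an expansion in $\epsilon$ around $\epsilon = 0$ at a fixed second argument, so no limit in $x$ is involved; the first-order coefficient is $\partial_\alpha \Gamma(\alpha,\beta) = \int_\beta^\infty t^{\alpha-1}\ln(t)\,e^{-t}\,dt$. A crude stdlib-only numerical check (the integration scheme and parameter values are my own illustrative choices, not from the question):

```python
import math

def upper_gamma(alpha, beta, n=50000, tmax=60.0):
    # midpoint-rule approximation of the upper incomplete gamma integral
    h = (tmax - beta) / n
    return sum((beta + (i + 0.5) * h) ** (alpha - 1)
               * math.exp(-(beta + (i + 0.5) * h)) for i in range(n)) * h

def dgamma_dalpha(alpha, beta, n=50000, tmax=60.0):
    # differentiating in alpha brings down a factor ln(t) inside the integral
    h = (tmax - beta) / n
    return sum((beta + (i + 0.5) * h) ** (alpha - 1)
               * math.log(beta + (i + 0.5) * h)
               * math.exp(-(beta + (i + 0.5) * h)) for i in range(n)) * h

alpha, beta, eps = 1.7, 0.9, 1e-3   # illustrative values only
exact = upper_gamma(alpha + eps, beta)
first_order = upper_gamma(alpha, beta) + eps * dgamma_dalpha(alpha, beta)
# exact - first_order is O(eps^2): the expansion in eps is well defined.
```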
As far as I can tell, if you start with the integral form of the remainder of a Taylor polynomial then you can derive the Lagrange form by an application of the mean value theorem for integrals. From Spivak's Calculus:
If $f^{\left({n + 1}\right)}$ is continuous on $[a, x]$, then
$$\displaystyle R_{n,a} \left({x}\right) = \int_a^x \dfrac {f^{\left({n + 1}\right)}\left({t}\right)}{n!} \left({x - t}\right)^n \, \mathrm d t$$
Let m and M be the minimum and maximum of $\dfrac {f^{\left({n + 1}\right)}}{n!}$ on $[a, x]$, then $R_{n,a}(x)$ satisfies
$$m\int_a^x \left({x - t}\right)^n \, \mathrm d t \le R_{n,a} \left({x}\right) \le M\int_a^x \left({x - t}\right)^n \, \mathrm d t$$
so we can write
$$\displaystyle R_{n,a} \left({x}\right) = \alpha \cdot \dfrac {\left({x - a}\right)^{n+1}} {\left({n+1}\right)!}$$
Now here is my question. Shouldn't this imply that there is a number $x^* \in \left[{a \,.\,.\, x}\right]$ such that
$$\displaystyle R_{n,a} \left({x}\right) = \dfrac {f^{\left({n + 1}\right)} \left({x^*}\right)} {\left({n + 1}\right)!} \left({x - a}\right)^{n + 1}$$
in accordance with the mean value theorem for integrals?
Yet all sources I reviewed, including Spivak, state that $x^* \in \left({a \,.\,.\, x}\right)$. |
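As a concrete illustration (my own example, not from the sources discussed): for $f = \exp$ and $a = 0$, the number $x^*$ in the Lagrange form can be solved for explicitly, and it lands strictly inside the interval:

```python
import math

n, x = 3, 1.0
# Taylor polynomial of exp at a = 0 (all derivatives equal 1 there)
taylor = sum(x**k / math.factorial(k) for k in range(n + 1))
remainder = math.exp(x) - taylor

# Lagrange form: remainder = exp(xstar) * x**(n+1) / (n+1)!  ->  solve for xstar
xstar = math.log(remainder * math.factorial(n + 1) / x ** (n + 1))
# xstar lies strictly between 0 and x
```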
I need to model the following statement:
if $\sum\limits_{i=1}^N X_i=k$ then $Y=1$ else $Y=0$
$X_i$'s are binary variables
$k$ is an integer between $0$ and $N$
$Y$ is a binary variable
Thank you in advance.
In case $N$ is a constant, then add the following two inequalities to your MILP: $$NY \leq k $$ $$N-1+Y \geq k$$
As $Y$ is a binary variable, if $k\lt N$ then $Y$ must be $0$ to fulfill both inequalities. And when $k=N$ then $Y$ must be $1$ to fulfill both inequalities
To your question in the comments: $N$ positive constant, $k$ positive integer constant with $k\leq N$, $X$ decision variable with $0\leq X \leq N$.
Add 3 binary variables $r_1, r_2,r_3$ together with the following constraints: $$ \eqalign{ kr_1 & \leq X \\ X & \leq (1-r_1)(k-1)+r_1N \\ (k+1)(1-r_2) & \leq X\\ X & \leq kr_2+(1-r_2)N\\ 2r_3 & \leq r_1 +r_2 \\ r_1 +r_2 & \leq 1+ r_3 }$$
Then $r_3 = 1$ iff $X=k$, else $r_3 = 0$ |
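A brute-force check of the three-variable construction (a sketch; the small values of $N$ and $k$ are chosen arbitrarily) confirms that every feasible completion forces $r_3 = 1$ exactly when $X = k$:

```python
def feasible_r3(X, k, N):
    # enumerate all binary (r1, r2, r3) satisfying the six linear constraints
    sols = set()
    for r1 in (0, 1):
        for r2 in (0, 1):
            for r3 in (0, 1):
                if (k * r1 <= X <= (1 - r1) * (k - 1) + r1 * N
                        and (k + 1) * (1 - r2) <= X <= k * r2 + (1 - r2) * N
                        and 2 * r3 <= r1 + r2 <= 1 + r3):
                    sols.add(r3)
    return sols

N, k = 7, 3
indicator = {X: feasible_r3(X, k, N) for X in range(N + 1)}
# indicator[X] == {1} when X == k and {0} otherwise
```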
I am hung up on this question for real analysis (intro to analysis).
Find $\inf D$ and $\sup D$
$$\mathrm{D}=\left\{\frac{m+n\sqrt{2}}{m+n\sqrt{3}} :m,n\in\Bbb{N}\right\}$$
I have spent enough time staring at this thing that I know the $\sup D=1$ and $\inf D=\frac{\sqrt{2}}{\sqrt{3}}$.
for $\sup D$: $$m+n\sqrt{2}<m+n\sqrt{3}\implies\frac{m+n\sqrt{2}}{m+n\sqrt{3}}<1$$ so $1$ is an upper bound for $D$. To confirm that $1$ is the least upper bound, I can prove by contradiction that $\sup D$ cannot be less than $1$, because I could always find a $d \in D$ such that $$\sup D<d<1,$$ which is a contradiction since no $d \in D$ can be greater than $\sup D$. (Proof omitted.)
So my problem is with $\inf D$. I am having trouble establishing that $\frac{\sqrt{2}}{\sqrt{3}}$ is a lower bound. I am just not seeing it. The intuition is that if $m$ is small and $n$ is large than the fraction $\frac{\sqrt{2}}{\sqrt{3}}$ dominates the expression, however it will always be slightly greater than $\frac{\sqrt{2}}{\sqrt{3}}$. Analytically I am just not able to show it.
Any help would be greatly appreciated |
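If it helps intuition (my own remark, not a full answer): the comparison $\frac{m+n\sqrt{2}}{m+n\sqrt{3}}>\frac{\sqrt{2}}{\sqrt{3}}$ cross-multiplies (both denominators are positive) to $m\sqrt{3}+n\sqrt{6} > m\sqrt{2}+n\sqrt{6}$, i.e. $m\sqrt{3}>m\sqrt{2}$, which always holds; a numeric sketch shows the values creeping down toward $\sqrt{2}/\sqrt{3}$ without reaching it:

```python
import math

def d(m, n):
    return (m + n * math.sqrt(2)) / (m + n * math.sqrt(3))

bound = math.sqrt(2) / math.sqrt(3)
samples = [d(m, n) for m in range(1, 40) for n in range(1, 40)]
closest = d(1, 10**9)   # n >> m pushes d toward the infimum
# every element stays strictly above sqrt(2)/sqrt(3), but we can get arbitrarily close
```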
In Report on the Theory of Numbers, H.J.S. Smith writes:
"The impossibility of solving [Fermat's] equation has been demonstrated by M. Kummer, first, for all values of $\lambda$ not included among the exceptional primes; and secondly, for all exceptional primes which satisfy the three following conditions:
1. That the first factor of H, though divisible by $\lambda$, is not divisible by $\lambda^2$.
2. That a complex modulus can be assigned, for which a certain definite complex unit is not congruous to a perfect $\lambda$-th power.
3. That $B_{\kappa \lambda}$ is not divisible by $\lambda^3$, $B_{\kappa}$ representing that Bernoullian number $[\kappa \leq \mu-1]$ which is divisible by $\lambda$.
Three numbers below 100, viz. 37, 59, 67, are, as we have seen, exceptional primes. But it has been ascertained by M. Kummer that the three conditions just given are satisfied in the case of each of these three numbers; so that the impossibility of Fermat's equation has been demonstrated for all values of the exponent up to 100.
Indeed, it would probably be difficult to find an exceptional prime not satisfying the three conditions, and consequently excluded from M. Kummer's demonstration."
Can anyone cite an exceptional prime that does not satisfy the three conditions? |
It is hard for me to find a site/book/article explaining how, exactly, tau particles are created.
Tauons are leptons like the electron and the muon, only heavier. They weigh in at $1776.86\,\mathrm{MeV}$ according to the Particle Data Group (cf. http://pdg.lbl.gov).
They can be produced in a particle accelerator directly via particle + anti-particle annihilation, as long as the total energy in the center-of-mass system is $> 2\cdot 1776.86\,\mathrm{MeV}$. For example $e^+ + e^- \rightarrow \tau^+ + \tau^-$.
An alternate pathway is when the particle + anti-particle annihilation produces a pair of heavy quarks (b or t) and at least one member of the pair decays via the weak decay into a $\tau$, $\tau$-neutrino plus other particles.
The tau lepton is a very short-lived particle with a mean lifetime of only $0.29\,\mathrm{ps}$. So, in practice it is only detected by recording its decay products and reconstructing (i.e. computing) its mass from those.
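To see why only the decay products are observable, one can estimate the mean decay length from the numbers above (the beam energy here is an illustrative assumption of mine, roughly a tau from $Z$ decay):

```python
# Rough mean decay length of a tau, using the mass and lifetime quoted above.
m_tau = 1.77686          # GeV
tau0 = 0.29e-12          # s (mean lifetime)
c = 2.998e8              # m/s
E = 45.0                 # GeV -- illustrative assumption, not from the text

gamma = E / m_tau
beta = (1.0 - 1.0 / gamma**2) ** 0.5
decay_length = gamma * beta * c * tau0   # metres
# comes out to roughly a couple of millimetres
```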
For more details look at http://www.thefullwiki.org/Lepton |
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ... |
3 Ways to Optimize the Current in Electromagnetic Coils
If you design electromagnetic coils, the combination of the AC/DC and Optimization modules with the COMSOL Multiphysics® software gives you the power to quickly come up with improved design iterations. Today, we will look at designing a coil system to achieve a desired magnetic field distribution by changing the coil’s driving currents. We will also introduce three different optimization objectives and constraints. This topic is of interest to anyone who is modeling coils or curious about optimization.
The Magnetic Field Model and Optimization Problem Statement
The problem we will look at today is the optimization of a ten-turn axisymmetric coil structure, as shown in the image below. Each of the five turns on either side of the xy-plane is symmetrically but independently driven.
A ten-turn coil with five independently driven coil pairs. The objective is to alter the magnetic field at the centerline (green highlight).
The coil is both rotationally symmetric and symmetric about the z = 0 plane, so we can reduce the computational model to a 2D axisymmetric model, as shown in the schematic below. Our modeling domain is truncated with an infinite element domain. We use the Perfect Magnetic Conductor boundary condition to exploit symmetry about the z = 0 plane. Thus, our model reduces to a quarter-circle domain with five independent coils that are modeled using the Coil Domain feature.
A schematic of the computational model.
If all of the coils are driven with the same current of 10 A, we can plot the z-component of the magnetic flux density along the centerline, as shown in the image below. It is this field distribution along a part of the centerline that we want to change via optimization.
The magnetic field distribution along the coil centerline. We want to adjust the magnetic field within the optimization zone.
From the image above, we see the magnetic field along a portion of the centerline due to a current of 10 A through each coil. It is this field distribution that we want to change by adjusting the current flowing through the coils. Our design variables are the five unique coil currents: $I_1, I_2, \dotso, I_5$. These design variables have bounds: $-I_{max}\le I_{1,\dotso,5}\le I_{max}$. That is, the current cannot be too great in magnitude, otherwise the coils will overheat.
We will look at three different optimization problem statements:

1. To have the magnetic field at the centerline be as close to a desired target value as possible
2. To minimize the power needed to drive the coil, along with a constraint on the field minimum at several points
3. To minimize the gradient of the magnetic field along the centerline, along with a constraint on the field at one point

Optimizing for a Particular Field Value
Let’s state these optimization problems a bit more formally. The first optimization problem can be written as:

$$\begin{aligned}
& \underset{I_1, \ldots ,I_5}{\text{minimize:}}
& & \frac{1}{L_0} \int_0^{L_0} \left( \frac{B_z}{B_0} -1 \right) ^2 d l \\
& \text{subject to:}
& & -I_{max} \leq I_1, \ldots ,I_5 \leq I_{max}
\end{aligned}$$
The objective here is to minimize the difference between the computed $B_z$-field and the desired field, $B_0 = 250\ \mu\text{T}$, integrated over a line along the center of the coil, stretching from $z = 0$ to $z = L_0$. Note that this objective is normalized with respect to $L_0$ and $B_0$. This is done such that the magnitude of this objective function will be around one. Such scaling of the objective function should always be done for any optimization problem.
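It is worth noting the structure of this first problem: because the field is linear in the coil currents, the field at the target points is a superposition of per-coil contributions, so (bounds aside) the objective is an ordinary linear least-squares problem. A toy numpy sketch of that idea, using an idealized on-axis loop field (the coil radii, positions, and loop formula are my own illustrative assumptions, not the COMSOL model):

```python
import numpy as np

mu0 = 4e-7 * np.pi

def bz_loop(z, z_coil, R=0.1):
    # on-axis field of a circular loop of radius R at height z_coil, unit current
    return mu0 * R**2 / (2.0 * (R**2 + (z - z_coil) ** 2) ** 1.5)

z_targets = np.linspace(0.0, 0.05, 20)           # the "optimization zone"
z_coils = np.array([0.02, 0.04, 0.06, 0.08, 0.10])

# each symmetric coil pair contributes the field of two mirrored loops
A = np.array([[bz_loop(z, zk) + bz_loop(z, -zk) for zk in z_coils]
              for z in z_targets])
B0 = 250e-6
I_opt, *_ = np.linalg.lstsq(A, np.full(z_targets.shape, B0), rcond=None)

residual = np.linalg.norm(A @ I_opt - B0)
# the fitted currents match the target field far better than equal currents
```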
Now, let’s look at the implementation of this problem within COMSOL Multiphysics. We begin by adding an Optimization interface to our model, which contains two features. The first feature is the Global Control Variables, as shown in the screenshot below. We can see that five control variables are set up: I1, ..., I5. These variables are used to specify the current flowing through the five Coil features in the Magnetic Fields interface.

The Initial Value, Upper Bound, and Lower Bound of these variables are also specified by two Global Parameters, I_init and I_max. Also note that the Scale factor is set such that the optimization variables also have a magnitude close to one. We will use this same setup for the control variables in all three examples.
Setting up the Global Control Variables feature, which specifies the coil currents.
Next, the objective function is defined via the Integral Objective feature over a boundary, as shown in the screenshot below. Note that the Multiply by 2πr option is toggled off.
The implementation of the objective function to achieve a desired field along one boundary.
We include an Optimization step in the Study, as shown in the screenshot below. Since our objective function can be analytically differentiated with respect to the design variables, we can use the SNOPT solver. This solver takes advantage of the analytically computed gradient and solves the optimization problem in a few seconds. All of the other solver settings can be left at their defaults.
The Optimization study step.
After solving, we can plot the fields and results. The figure below shows that the $B_z$-field matches the target value very well.
Results of optimizing for a target value of magnetic flux along the centerline.

Minimizing the Power with a Constraint on the Field
Our second optimization problem is to minimize the total power needed to drive the coil and to include a constraint on the field minimum at several points along the centerline. This can be expressed as:
$$\begin{aligned}
& \underset{I_1, \ldots ,I_5}{\text{minimize:}}
& & \frac{1}{P_0}\sum_{k=1}^{5} P_{coil}^k \\
& \text{subject to:}
& & -I_{max} \leq I_1, \ldots ,I_5 \leq I_{max}\\
& & & 1 \le B_z^i/B_{0}, \quad i=1, \ldots, M
\end{aligned}$$
where $P_0$ is the initial total power dissipated in all coils and $P_{coil}^k$ is the power dissipated in the $k$th coil. We further want to constrain the fields at $M$ points on the centerline to be above a value of $B_0$.
The implementation of this problem uses the same Global Control Variables feature as before. The objective of minimizing the total dissipated coil power is implemented via the Global Objective feature, as shown in the screenshot below. The built-in variables for the dissipated power (mf.PCoil_1, ..., mf.PCoil_5) in each Coil feature can be used directly. The objective is normalized with respect to the initial total power so that it is close to unity.
Implementation of the objective to minimize total power.
The constraint on the field minimum has to be implemented at a set of discrete points within the model. In this case, we introduce five points evenly distributed over the optimization zone. Each of these constraints has to be introduced with a separate Point Sum Inequality Constraint feature, as shown below. We again apply a normalization such that this constraint has a magnitude of one. Note that the Multiply by 2πr option is toggled off, since these points lie on the centerline.
The implementation of the constraint on the field minimum at a point.
We can solve this problem using the same approach as before. The results are plotted below. It is interesting to note that the minimal dissipated power solution does not result in a very uniform field distribution over the target zone.
Results of optimizing for a minimum power dissipation with a constraint on the field minimum.

Minimizing the Gradient with a Constraint on the Field
Finally, let’s consider minimizing the gradient of the field along the optimization zone, with a constraint on the field at the centerpoint. This can be expressed as:
$$\begin{aligned}
& \underset{I_1, \ldots ,I_5}{\text{minimize:}}
& & \frac{1}{L_0 B_{0}} \int_0^{L_0} \left( \frac{\partial B_z}{\partial z } \right) ^2 d l \\
& \text{subject to:}
& & -I_{max} \leq I_1, \ldots ,I_5 \leq I_{max}\\
& & & B_z(r=0,z=0) = B_0
\end{aligned}$$
The constraint here fixes the field at the centerpoint of the coil. Although the Optimization interface does not have an explicit equality constraint, we can achieve the same result with an inequality constraint whose upper and lower bounds are equal. We again apply a normalization such that our constraint is actually $1 \le B_z/B_0 \le 1$, as shown in the image below.
The implementation of an equality constraint.
The objective of minimizing the gradient of the field within the target zone is implemented via the Integral Objective feature (shown below). The gradient of the $B_z$-field with respect to the z-direction is taken with the derivative operator: d(mf.Bz,z).
The objective of minimizing the gradient of the field.
We can use the same solver settings as before. The results for this case are shown below. The field within the optimization zone is quite uniform and matches the target at the centerpoint.
Results of optimizing for a minimum field gradient with a constraint on the field at a point.
Although the field here appears almost identical to the first case, the solution in terms of the coil currents is quite different, which raises an interesting point. There are multiple combinations of coil currents that will give nearly identical solutions in terms of minimizing the field difference or gradient. Another way of saying this is that the objective function has multiple local minimum points.
The SNOPT optimization solver uses a type of gradient-based approach and will approach different local minima for different initial conditions for the coil currents. Although a gradient-based solver will converge to a local minimum, there is no guarantee that this is in fact the global minimum. In general (unless we perform an exhaustive search of the design space), it is never possible to guarantee that the optimized solution is a global minimum.
Furthermore, if we were to increase the number of coils in this problem, we could get into a situation where multiple combinations of coil currents are nearly equivalently optimal. That is, there is no single optimal point, but rather an “optimal line” or “optimal surface” of nearly equivalent designs in the design space (the combination of coil currents). The optimization solver does not provide direct feedback about this, but will tend to converge more slowly in such cases.
Closing Remarks on Optimizing Current in Electromagnetic Coils
We have shown three different ways to optimize the currents flowing through the different turns of a coil. These three cases introduce different types of objective functions and constraints and can be adapted for a variety of other cases. Depending upon the overall goals and objectives of your coil design problem, you may want to use any one of these or even an entirely different objective function and constraint set. These examples show the power and flexibility of the Optimization Module in combination with the AC/DC Module.
Of course, there is even more that can be done with these problems. In an upcoming blog post, we will look at adjusting the locations of the coil turns — stay tuned.
Editor’s note: We have published the follow-up post in this blog series. Read it here: “How to Optimize the Spacing of Electromagnetic Coils“. |
The proofs I will present are based on techniques relevant to the fact that the CES production function has the form of a
generalized weighted mean. This was used in the original paper where the CES function was introduced, Arrow, K. J., Chenery, H. B., Minhas, B. S., & Solow, R. M. (1961). Capital-labor substitution and economic efficiency. The Review of Economics and Statistics, 225-250. The authors there referred their readers to the book Hardy, G. H., Littlewood, J. E., & Pólya, G. (1952). Inequalities , chapter $2 $.
We consider the general case $$Q_k=\gamma[a K^{-\rho} +(1-a) L^{-\rho} ]^{-\frac{k}{\rho}},\;\; k>0$$
$$\Rightarrow \gamma^{-1}Q_k = \frac 1{[a (1/K^{\rho}) +(1-a) (1/L^{\rho}) ]^{\frac{k}{\rho}}}$$
1) Limit when $\rho \rightarrow \infty$
Since we are interested in the limit when $\rho\rightarrow \infty$, we can ignore the interval for which $\rho \leq0$ and treat $\rho$ as strictly positive.
Without loss of generality, assume $K\geq L \Rightarrow (1/K^{\rho})\leq (1/L^{\rho})$. We also have $K, L >0$. Then we verify that the following inequality holds:
$$(1-a)^{k/\rho}(1/L^{k})\leq \gamma Q_k^{-1} \leq (1/L^{k}) $$
$$\implies (1-a)^{k/\rho}(1/L^{k})\leq [a (1/K^{\rho}) +(1-a) (1/L^{\rho}) ]^{\frac{k}{\rho}} \leq (1/L^{k}) \tag{1}$$
by raising throughout to the $\rho/k$ power to get
$$(1-a)(1/L^{\rho}) \leq a (1/K^{\rho}) +(1-a) (1/L^{\rho}) \leq (1/L^{\rho}) \tag {2}$$which indeed holds, obviously, given the assumptions. Then go back to the first element of $(1)$ and
$$\lim_{\rho\rightarrow \infty} (1-a)^{k/\rho}(1/L^{k}) =(1/L^{k})$$
which sandwiches the middle term in $(1)$ to $(1/L^{k})$ , so
$$\lim_{\rho\rightarrow \infty}Q_k = \frac {\gamma }{1/L^k} = \gamma L^k = {\gamma }\big[\min\{K,L\}\big]^{k} \tag{3}$$
So for $k=1$
we obtain the basic Leontief production function.
2) Limit when $\rho \rightarrow 0$
Write the function using the exponential as
$$\gamma^{-1}Q_k=\exp\left\{-\frac k{\rho}\cdot \ln\big[a (K^{\rho})^{-1} +(1-a) (L^{\rho})^{-1}\big]\right\} \tag {4}$$
Consider the first-order Maclaurin expansion (Taylor expansion centered at zero) of the term inside the logarithm, with respect to $\rho$:
$$a (K^{\rho})^{-1} +(1-a) (L^{\rho})^{-1} \\= a (K^{0})^{-1} +(1-a) (L^{0})^{-1} -a (K^{0})^{-2}K^{0}\rho\ln K- (1-a) (L^{0})^{-2}L^{0}\rho\ln L + O(\rho^2) \\$$
$$=1 - \rho a\ln K - \rho(1-a)\ln L+ O(\rho^2) = 1 +\rho \big[\ln K^{-a}L^{-(1-a)}\big]+ O(\rho^2)$$
Insert this back into $(4)$ and get rid of the outer exponential,
$$\gamma^{-1}Q_k = \left(1 +\rho \big[\ln K^{-a}L^{-(1-a)}\big]+ O(\rho^{2})\right)^{-k/\rho}$$
In case it is opaque, define $r\equiv 1/\rho$ and re-write
$$\gamma^{-1}Q_k = \left(1 +\frac{\big[\ln K^{-a}L^{-(1-a)}\big]}{r}+ O(r^{-2})\right)^{-kr}$$
Now it does look like an expression whose limit at infinity will give us something exponential:
$$\lim_{\rho\rightarrow 0}\gamma^{-1}Q_k = \lim_{r\rightarrow \infty}\gamma^{-1}Q_k = \left(\exp\left\{ \ln K^{-a}L^{-(1-a)}\right\} \right)^{-k}$$
$$\Rightarrow \lim_{\rho\rightarrow 0}Q_k =\gamma\left(K^{a}L^{1-a}\right)^k$$
The degree of homogeneity $k$ of the function is preserved, and if $k=1$
we obtain the Cobb-Douglas function.
It was this last result that led Arrow and his co-authors to call $a$ the "distribution" parameter of the CES function. |
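Both limits can also be checked numerically (a quick sketch with arbitrary parameter values of my own; $\rho$ is pushed toward each limit only as far as floating point comfortably allows):

```python
def Q(K, L, a, rho, gamma=1.0, k=1.0):
    # CES production function in the notation above
    return gamma * (a * K**(-rho) + (1 - a) * L**(-rho)) ** (-k / rho)

K, L, a = 3.0, 2.0, 0.4
leontief = min(K, L)              # rho -> infinity limit
cobb_douglas = K**a * L**(1 - a)  # rho -> 0 limit

q_large_rho = Q(K, L, a, 700.0)
q_small_rho = Q(K, L, a, 1e-7)
# q_large_rho is close to min(K, L); q_small_rho is close to K^a L^(1-a)
```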
Prove the following inequality
$$\ln \frac{\pi + 2}{2} \cdot \frac{2}{\pi} < \int \limits_0^{\pi/2} \frac{\sin x}{x^2 + x}\,dx < \ln \frac{\pi + 2}{2}$$
I can prove that $\frac{\sin x}{x^2 + x} < \frac{1}{x + 1} \ \forall x \in (0, +\infty) \Rightarrow \int \limits_0^{\pi/2} \frac{\sin x}{x^2 + x}\,dx < \int \limits_0^{\pi/2} \frac{1}{x + 1}\,dx = \ln \frac{\pi + 2}{2}$.
Unfortunately, I don't know what happens when $x = 0$.
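For what it's worth, at $x=0$ the integrand has a removable singularity: $\frac{\sin x}{x^2+x} = \frac{\sin x}{x}\cdot\frac{1}{x+1} \to 1$, so the integral is an ordinary proper integral. A quick numerical sanity check of the inequality (a sketch using a hand-rolled composite Simpson rule):

```python
import math

def f(x):
    # sin(x)/(x^2+x) = (sin(x)/x) * 1/(x+1) has a removable singularity
    # at 0; its limit there is 1.
    return 1.0 if x == 0 else math.sin(x) / (x * x + x)

def simpson(g, a, b, n=10000):          # composite Simpson's rule, n even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

integral = simpson(f, 0.0, math.pi / 2)
upper = math.log((math.pi + 2) / 2)
lower = upper * 2 / math.pi
print(lower < integral < upper)         # True
```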
I can prove $\leqslant$ on the LHS using the Mean Value Theorem, but I have no idea how to prove that the inequality is strict. |
A pseudo-Riemannian manifold $M$ of dimension $n$ is said to be
maximally symmetric if the space of its Killing vector fields has $n(n+1)/2$ dimensions.
If $M$ is maximally symmetric, then we have the following: for every $p\in M$ and every linear isometry $\Lambda:T_pM\to T_pM$ lying in the identity component, there exists an isometry $\sigma:M\to M$ such that $\sigma(p)=p$ and $d\sigma_p=\Lambda$.
On the other hand, all maximally symmetric spaces I know of (flat spaces, sphere, hyperbolic space, de Sitter and Anti de Sitter spacetimes) have a stronger property: for every $p\in M$ and every linear isometry $\Lambda:T_pM\to T_pM$ (not necessarily in the identity component), there exists an isometry $\sigma:M\to M$ such that $\sigma(p)=p$ and $d\sigma_p=\Lambda$.
My question is: is this stronger property a consequence of maximal symmetry? In case it is not, I would like to know of an example of a maximally symmetric space not having this stronger property. |
My friends and I are debating over the correct answer for this particular physics problem.
There's a pendulum with a $2\,\mathrm{kg}$ mass and a $5\,\mathrm{m}$ string attached to a ceiling at $53^{\circ}$ to the vertical. It's released. At point B, $37^{\circ}$ from the vertical, we are asked to find the tension in the string.
Both of us agree that $v = 2\sqrt{5}$.
My equation was $T\cos 37^{\circ} = 20 + F_c\cos{37^{\circ}}$, which gives me $T = 33\,\mathrm{N}$. I took the upward component of $T$ and set it equal to the weight of the block plus the downward component of the centripetal force.
My friend's equation was $T = 20\cos{37^{\circ}} + F_c$, which makes $T = 24\,\mathrm{N}$. She took the component of the weight along the string at $37^{\circ}$.
Conceptually, I think we're both correct, but we get different answers. Could somebody answer who's right and why?
Note: we assume $\cos 37^{\circ} = 0.8$, $\cos 53^{\circ} = 0.6$, $g = 10\,\mathrm{m/s^2}$ |
First let me mention a minor point concerning terminology. The type of channel you are suggesting is often called a Pauli channel; the term depolarizing channel usually refers to the case where $p_x = p_y = p_z$.
Anyway, it is not really correct to say that Pauli channels are the channel model considered for quantum error correction. Standard quantum error correcting codes can protect against arbitrary errors (represented by any quantum channel you might choose) so long as the errors do not affect too many qubits.
As an example, let us consider an arbitrary single-qubit error, represented by a channel $\Phi$ mapping one qubit to one qubit. Such a channel can be expressed in Kraus form as$$\Phi(\rho) = A_1 \rho A_1^{\dagger} + \cdots + A_m \rho A_m^{\dagger}$$for some choice of Kraus operators $A_1,\ldots,A_m$. (For a qubit channel we can always take $m = 4$ if we want.) You could, for instance, choose these operators so that $\Phi(\rho) = |0\rangle \langle 0|$ for every qubit state $\rho$, you could make the error unitary, or whatever else you choose. The choice can even be adversarial, selected after you know how the code works.
Each of the Kraus operators $A_k$ can be expressed as a linear combination of Pauli operators, because the Pauli operators form a basis for the space of 2 by 2 complex matrices:$$A_k = a_k I + b_k X + c_k Y + d_k Z.$$If you now expand out the Kraus representation of $\Phi$ above, you will obtain a messy expression where $\Phi(\rho)$ looks like a linear combination of operators of the form $P_i \rho P_j$ where $i,j\in\{1,2,3,4\}$ and $P_1 = I$, $P_2 = X$, $P_3 = Y$, and $P_4 = Z$.
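This expansion is easy to see numerically; a small sketch (the example Kraus operator, taken from a "reset to $|0\rangle$" channel, is an arbitrary choice):

```python
import numpy as np

# Any 2x2 matrix A decomposes in the Pauli basis as
# A = a*I + b*X + c*Y + d*Z, with coefficients Tr(P A)/2.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

def pauli_coeffs(A):
    return [np.trace(P @ A) / 2 for P in paulis]

# example: |0><0|, the first Kraus operator of the "reset to |0>" channel
A1 = np.array([[1, 0], [0, 0]], dtype=complex)
coeffs = pauli_coeffs(A1)                       # (I + Z)/2, i.e. [0.5, 0, 0, 0.5]
rebuilt = sum(c * P for c, P in zip(coeffs, paulis))
print(np.allclose(rebuilt, A1))                 # True
```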
Now imagine that you have a quantum error correcting code that protects against an $X$, $Y$, or $Z$ error on one qubit. The usual way this works is that some extra qubits in the 0 state are tacked on to the encoded data and a unitary operation is performed that reversibly computes into these extra qubits a syndrome describing which error occurred, if any, and which qubit was affected.
Supposing that the arbitrary error $\Phi$ happened on the first qubit for simplicity, after the syndrome computation you will end up with a state that looks like a linear combination of terms like this:$$P_i |\psi\rangle \langle \psi| P_j \otimes |P_i\: \text{syndrome}\rangle\langle P_j\:\text{syndrome}|.$$The assumption here is that $|\psi\rangle$ represents the encoded data without any noise, $P_i$ and $P_j$ act on the first qubit, and that "$P_i$ syndrome" and "$P_j$ syndrome" refer to the standard basis states that indicate that these errors have occurred on the first qubit. (The situation is similar for the error affecting any other qubit; I'm just trying to keep the notation simple by assuming the error happened to the first qubit.)
Now the key is that you measure the syndrome to see what error occurred, and all of the cross terms disappear because of the measurement. You are left with a probabilistic mixture of states that look like$$P_i |\psi\rangle \langle \psi| P_i \otimes |P_i\: \text{syndrome}\rangle\langle P_i\:\text{syndrome}|.$$The error is corrected and the original state is recovered. In effect, by measuring the syndrome, you "project" or "collapse" the error to something that looks like a Pauli channel.
This is all described (somewhat briefly) in Section 10.2 of Nielsen and Chuang. |
The terminology of 'surface code' is a little bit variable. It might refer to a whole class of things, variants of the Toric code on different lattices, or it might refer to the Planar code, the specific variant on a square lattice with open boundary conditions.
The Toric Code
I'll summarise some of the basic properties of the Toric code. Imagine a square lattice with periodic boundary conditions, i.e. the top edge is joined to the bottom edge, and the left edge is joined to the right edge. If you try this with a sheet of paper, you'll find you get a doughnut shape, or torus. On this lattice, we place a qubit on each edge of a square.
Stabilizers
Next, we define a whole bunch of operators. For every square on the lattice (comprising 4 qubits in the middle of each edge), we write $$B_p=XXXX,$$ applying a Pauli-$X$ to each of the 4 qubits. The label $p$ refers to 'plaquette' and is just an index so we can later sum over the whole set of plaquettes. On every vertex of the lattice (surrounded by 4 qubits), we define $$A_s=ZZZZ.$$ $s$ refers to the star shape and, again, will let us sum over all such terms.
We observe that all of these terms mutually commute. It's trivial for $[A_s,A_{s'}]=[B_p,B_{p'}]=0$ because Pauli operators commute with themselves and $\mathbb{I}$. More care is required with $[A_s,B_p]=0$, but note that these two terms share either 0 or 2 sites, and pairs of different Pauli operators commute, $[XX,ZZ]=0$.
Codespace
Since all these operators commute, we can define a simultaneous eigenstate of them all, a state $|\psi\rangle$ such that$$\forall s:A_s|\psi\rangle=|\psi\rangle\qquad\forall p:B_p|\psi\rangle=|\psi\rangle.$$This defines the codespace of the code. We should determine how large it is.
For an $N\times N$ lattice there are $2N^2$ edges, and hence $2N^2$ qubits, so the Hilbert space dimension is $2^{2N^2}$. There are $N^2$ terms $A_s$ and $N^2$ terms $B_p$, which we collectively refer to as stabilizers. Each has eigenvalues $\pm 1$ (to see this, just note that $A_s^2=B_p^2=\mathbb{I}$) in equal number, and when we combine them, each independent stabilizer halves the dimension of the Hilbert space, i.e. we would think that this uniquely defines a state.
Now, however, observe that $\prod_sA_s=\prod_pB_p=\mathbb{I}$: each qubit is included in two stars and two plaquettes. This means that one of the $A_s$ and one of the $B_p$ is linearly dependent on all the others, and does not further reduce the size of the Hilbert space. In other words, the stabilizer relations define a Hilbert space of dimension 4; the code can encode two qubits.
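These counting facts (mutual commutation, and each qubit lying in exactly two stars and two plaquettes) can be checked in the binary symplectic representation, where a Pauli operator on $n$ qubits is a pair of bit vectors $(x,z)$ and two operators commute iff $x_1\cdot z_2 + x_2\cdot z_1 = 0 \pmod 2$. A sketch (the edge-indexing convention is my own choice):

```python
import numpy as np

N = 3                       # N x N torus => 2*N*N qubits, one per edge
n = 2 * N * N

def edge_h(r, c):           # horizontal edge from vertex (r,c) to (r,c+1)
    return (r % N) * N + (c % N)

def edge_v(r, c):           # vertical edge from vertex (r,c) to (r+1,c)
    return N * N + (r % N) * N + (c % N)

def plaquette(r, c):        # B_p: X on the 4 edges bounding square (r,c)
    x = np.zeros(n, dtype=int)
    for e in (edge_h(r, c), edge_h(r + 1, c), edge_v(r, c), edge_v(r, c + 1)):
        x[e] ^= 1
    return x, np.zeros(n, dtype=int)

def star(r, c):             # A_s: Z on the 4 edges meeting vertex (r,c)
    z = np.zeros(n, dtype=int)
    for e in (edge_h(r, c), edge_h(r, c - 1), edge_v(r, c), edge_v(r - 1, c)):
        z[e] ^= 1
    return np.zeros(n, dtype=int), z

def commute(p, q):          # symplectic product vanishes iff p, q commute
    return ((p[0] @ q[1]) + (q[0] @ p[1])) % 2 == 0

stars = [star(r, c) for r in range(N) for c in range(N)]
plaqs = [plaquette(r, c) for r in range(N) for c in range(N)]
print(all(commute(a, b) for a in stars + plaqs for b in stars + plaqs))  # True
# each edge lies in exactly two plaquettes and two stars, so the products are I:
print(not (sum(p[0] for p in plaqs) % 2).any())  # True
print(not (sum(s[1] for s in stars) % 2).any())  # True
```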
Logical Operators
How do we encode a quantum state in the Toric code? We need to know the logical operators: $X_{1,L}$, $Z_{1,L}$, $X_{2,L}$ and $Z_{2,L}$. All four must commute with all the stabilizers, and be linearly independent from them, and must generate the algebra of two qubits. Commutation of operators on the two different logical qubits:$$[X_{1,L},X_{2,L}]=0\quad [X_{1,L},Z_{2,L}]=0 \quad [Z_{1,L},Z_{2,L}]=0\quad [Z_{1,L},X_{2,L}]=0$$and anti-commutation of the two on each qubit:$$\{X_{1,L},Z_{1,L}\}=0\qquad\{X_{2,L},Z_{2,L}\}=0$$
There's a couple of different conventions for how to label the different operators. I'll go with my favourite (which is probably the less popular):
Take a horizontal line on the lattice. On every qubit, apply $Z$. This is $Z_{1,L}$. In fact, any horizontal line is just as good.
Take a vertical line on the lattice. On every qubit, apply $Z$. This is $X_{2,L}$ (the other convention would label it as $Z_{2,L}$)
Take a horizontal strip of qubits, each of which is in the middle of a vertical edge. On every qubit, apply $X$. This is $Z_{2,L}$.
Take a vertical strip of qubits, each of which is in the middle of a horizontal edge. On every qubit, apply $X$. This is $X_{1,L}$.
You'll see that the operators that are supposed to anti-commute meet at exactly one site, with an $X$ and a $Z$.
Ultimately, we define the logical basis states of the code by$$|\psi_{x,y}\rangle: Z_{1,L}|\psi_{x,y}\rangle=(-1)^x|\psi_{x,y}\rangle,\qquad Z_{2,L}|\psi_{x,y}\rangle=(-1)^y|\psi_{x,y}\rangle$$
The distance of the code is $N$ because the shortest sequence of single-qubit operators that converts between two logical states consists of $N$ Pauli operators on a loop around the torus.
Error Detection and Correction
Once you have a code, with some qubits stored in the codespace, you want to keep it there. To achieve this, we need error correction. Each round of error correction comprises measuring the value of every stabilizer. Each $A_s$ and $B_p$ gives an answer $\pm 1$. This is your error syndrome. It is then up to you, depending on what error model you think applies to your system, to determine where you think the errors have occurred, and try to fix them. There's a lot of work going into fast decoders that can perform this classical computation as efficiently as possible.
One crucial feature of the Toric code is that you do not have to identify exactly where an error has occurred to perfectly correct it; the code is degenerate. The only relevant thing is that you get rid of the errors without implementing a logical gate. For example, the green line in the figure is one of the basic errors in the system, called an anyon pair. If the sequence of $X$ rotations depicted had been enacted, then the stabilizers on the two squares with the green blobs would have given a $-1$ answer, while all others give $+1$. To correct for this, we could apply $X$ along exactly the path where the errors happened, although our error syndrome certainly doesn't give us the path information. There are many other paths of $X$ errors that would give the same syndrome. We can implement any of these, and there are two options: either the overall sequence of $X$ rotations forms a trivial path, or one that loops around the torus in at least one direction. If it's a trivial path (i.e. one that forms a closed path that does not loop around the torus), then we have successfully corrected the error. This is at the heart of the topological nature of the code; many paths are equivalent, and it all comes down to whether or not these loops around the torus have been completed.
Error Correcting Threshold
While the distance of the code is $N$, it is not the case that every combination of $N$ errors causes a logical error. Indeed, the vast majority of combinations of $N$ errors can be corrected. It is only once the errors occur at much higher density that error correction fails. There are interesting proofs, making connections to phase transitions or the random-bond Ising model, that are very good at pinning down when that happens. For example, if you take an error model where $X$ and $Z$ errors occur independently at random on each qubit with probability $p$, the threshold is about $p=0.11$, i.e. $11\%$. The code also has a finite fault-tolerant threshold (where you allow for faulty measurements and corrections with some per-qubit error rate).
The Planar Code
Details are largely identical to the Toric code, except that the boundary conditions of the lattice are open instead of periodic. This means that, at the edges, the stabilizers get defined slightly differently. In this case, there is only one logical qubit in the code instead of two. |
Here's a proof which gives an algorithm for constructing an involution. If the order ideal $\mathcal{A}=\emptyset$ we're done; otherwise we can select a maximal element $X$ of $\mathcal{A}$ (with respect to inclusion.) If we're really lucky, there is another maximal element $Y$ of $\mathcal{A}$ disjoint from $X$. If so, we can set $X\leftrightarrow Y$; since $\mathcal{A}'=\mathcal{A}\setminus\{X,Y\}$ is an order ideal, we can find a suitable involution on $\mathcal{A}'$ by induction on $|\mathcal{A}|$, so we're done.
In general, we won't be so lucky, but the intuition is the same. Given a maximal element $X$ of $\mathcal{A}$, select a maximal element $Y$ of $\{ Y\in \mathcal{A}\mid Y\cap X=\emptyset \}$. We can pair $X$ with $Y$, but in general $Y$ is not maximal in $\mathcal{A}$, so we have to do a little more work before we can appeal to induction. By the choice of $Y$, every set in $\mathcal{A}$ containing $Y$ has the form $Y\cup B$ for some $B\subset X$. Let$$\mathcal{B} =\{B\subset X\mid Y\cup B\in\mathcal{A}\};$$note that $\mathcal{B}$ is closed under taking subsets since $\mathcal{A}$ is.
For each $B\in\mathcal{B}$, we pair $Y\cup B$ with $X\setminus B$ (which we know is in $\mathcal{A}$ since $\mathcal{A}$ is an order ideal). In short, this matches the elements of the form $Y\cup B$ with the elements of the form $X\setminus B$. Let $\mathcal{A}'$ be the result of removing all these paired elements from $\mathcal{A}$. Once we check $\mathcal{A}'$ is an order ideal, we're done by induction.
All we need to check is that the set of elements we removed is an order coideal of $\mathcal{A}$ (i.e. if we removed $Z$, and $W\supset Z$ is in $\mathcal{A}$, then we also removed $W$). But $\{Y\cup B\mid B\in\mathcal{B}\}$ is an order coideal by construction; $\{X\setminus B\mid B\in\mathcal{B}\}$ is one since $\mathcal{B}$ is an order ideal; and the union of order coideals is an order coideal. So $\mathcal{A}'$ is an order ideal, and we're done by induction. $\square$ |
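For small families the algorithm above can be run directly; a sketch with sets represented as `frozenset`s (breaking ties by cardinality is an implementation choice — any maximal element works):

```python
from itertools import combinations

def subsets(s):
    # all subsets of s, smallest first
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def involution(ideal):
    # `ideal` must be downward closed: every subset of a member is a member.
    remaining = set(ideal)
    sigma = {}
    while remaining:
        X = max(remaining, key=len)                        # a maximal element
        Y = max((S for S in remaining if not (S & X)), key=len)
        for B in subsets(X):                               # B ranges over the B-family
            if (Y | B) in remaining:
                sigma[Y | B], sigma[X - B] = X - B, Y | B  # pair Y∪B with X\B
                remaining -= {Y | B, X - B}
    return sigma

universe = frozenset({1, 2, 3})
sigma = involution(subsets(universe))          # the full power set
# an involution pairing each set with a disjoint set:
print(all(sigma[sigma[A]] == A and not (sigma[A] & A) for A in subsets(universe)))  # True
```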
Remark: if $\mathcal{A}$ is the power set of $X$, then $X$ is the unique maximal element of $\mathcal{A}$; the algorithm picks $Y=\emptyset$, and pairs each $B$ with the complement $X\setminus B$, so this generalizes the special case pointed out in the original post. It also generalizes the special case at the beginning of the post: if $Y$ is maximal in $\mathcal{A}$ then $\mathcal{B}=\emptyset.$ |
Suppose there are $n$ bidders and a seller. Bidder $i$ observes a private signal $v_i$ from $[a,b]$. Let $\mathcal{X} = \times_{i=1}^n[a,b]$ Each bidder is represented by a random variable, that has a joint distribution $F(\textbf{v})$, where $\textbf{v} = (v_1,v_2,...,v_n)$. Let $(\textbf{Q}(\textbf{v}),\textbf{M}(\textbf{v}))$ be the direct mechanism, where $\textbf{Q}(\textbf{v})$ is the allocation rule and $\textbf{M}(\textbf{v})$ is the payment rule, where $\textbf{Q}(\textbf{v}) = (Q_1(\textbf{v}),Q_2(\textbf{v}),...,Q_n(\textbf{v}))$ and $\textbf{M}(\textbf{v}) = (M_1(\textbf{v}),M_2(\textbf{v}),...,M_n(\textbf{v}))$
The ex-post utility for bidder $i$ is given as $U_i(v_i) = v_iQ_i(v_i,v_{-i}) - M_i(v_i,v_{-i})$. From this, we can find out the expected utility function as $$u_i(v_i) = \int_{\mathcal{X}_{-i}}(v_iQ_i(v_i,v_{-i}) - M_i(v_i,v_{-i}))\,f(v_{-i}|v_i)dv_{-i}$$ Writing $\int_{\mathcal{X}_{-i}}Q_i(v_i,v_{-i})f(v_{-i}|v_i)dv_{-i} = q_i(v_i)$ and $\int_{\mathcal{X}_{-i}}M_i(v_i,v_{-i})f(v_{-i}|v_i)dv_{-i} = m_i(v_i)$, the expected utility function can be re-written as $u_i(v_i) = v_iq(v_i)-m_i(v_i)$. $F(\textbf{v})$ can be any joint distribution,i.e, it is not necessary that the joint distribution can be written as the product of marginal distributions.
Incentive compatibility now dictates that $u_i(v_i) = v_iq(v_i)-m_i(v_i) \geq v_iq(v_i^{'})-m_i(v_i^{'})$, the expected payoff from reporting $v_i^{'}$ instead of $v_i$. From here, we get that
\begin{equation} \begin{split} u_i(v_i) & \geq v_iq(v_i^{'})-m_i(v_i^{'})\\ &=v_iq(v_i^{'})-m_i(v_i^{'}) + v_i^{'}q(v_i^{'}) - v_i^{'}q(v_i^{'}) \\ &= (v_i - v_i^{'})q(v_i^{'}) + (v_i^{'}q(v_i^{'})-m_i(v_i^{'}))\\ &= (v_i - v_i^{'})q(v_i^{'}) + u_i(v_i^{'}), \,\,\, or, \\ u_i(v_i)-u_i(v_i^{'}) & \geq (v_i - v_i^{'})q(v_i^{'})\,\,\,\,\,\,\,\, -(1) \end{split} \end{equation} Similarly, we can get $$u_i(v_i^{'})-u_i(v_i) \geq (v_i^{'} - v_i)q(v_i)\,\,\,\,\,\,\,\, -(2)$$ From $(1)$ and $(2)$, we get that $$(v_i - v_i^{'})q(v_i^{'}) \leq u_i(v_i)-u_i(v_i^{'}) \leq (v_i - v_i^{'})q(v_i)$$
Given the above expression, is it possible to write the expected utility function as $$u_i(v_i) = u_i(a) + \int_a^{v_i}q_i(t)\, dt$$ for any distribution? Specifically, I know that this holds true for the IPV case. So my question is whether it is possible to write the utility function as the integral of $q_i(\cdot)$ without the assumption of independence. |
I am studying the interaction between two spherical particles of radius $a$ in a low Reynolds number flow. Because of linearity, I know that their respective velocities will be linear in the forces applied to them. Similarly, the force $\boldsymbol{F}_j$ applied on one particle contributes to the velocity $\boldsymbol{v}_i$ of the other through a term which is linear in $\boldsymbol{F}_j$. I write this as follows
$$\boldsymbol{v}_1=(6\pi a)^{-1}\boldsymbol{F}_{1}+\boldsymbol{H}\left(r_{12}\right)\cdot\boldsymbol{F}_{2}$$ $$\boldsymbol{v}_2=(6\pi a)^{-1}\boldsymbol{F}_{2}+\boldsymbol{H}\left(r_{21}\right)\cdot\boldsymbol{F}_{1}$$
where $H$ is the hydrodynamic interaction tensor that depends on the relative positions $\boldsymbol{r}_{ij}$ of the two spheres ($i=1,2$).
Here is my question: if I wanted to look at the limit of far field, in principle I would assume that $a\ll r_{ij}$ and look at what happens to the equations. This can be done formally by nondimensionalising with respect to the typical distance $\ell$ such that $r_{ij}\sim \ell$, define $$\epsilon=\frac{a}{\ell}$$ and take the limit $\epsilon\rightarrow 0$. However, this seems to present problems, because the friction terms are proportional to $a^{-1}$, so would diverge in such an expansion. What am I missing? If the divergence is indeed physically relevant, what is its meaning? How can one deal with it in order to study the limit of far field? |
Preprints (rote Reihe) of the Department of Mathematics, year of publication: 1995
292
Symmetry properties of average densities and tangent measure distributions of measures on the line (1995)
Answering a question by Bedford and Fisher we show that for every Radon measure on the line with positive and finite lower and upper densities the one-sided average densities always agree with one half of the circular average densities at almost every point. We infer this result from a more general formula, which involves the notion of a tangent measure distribution introduced by Bandt and Graf. This formula shows that the tangent measure distributions are Palm distributions and define self-similar random measures in the sense of U. Zähle.
262
An improved asymptotic analysis of the expected number of pivot steps required by the simplex algorithm (1995)
Let \(a_1,\dots,a_m\) be i.i.d. vectors uniform on the unit sphere in \(\mathbb{R}^n\), \(m\ge n\ge3\), and let \(X:= \{x \in \mathbb{R}^n \mid a^T_i x\leq 1,\ i=1,\dots,m\}\) be the random polyhedron they generate. Furthermore, for linearly independent vectors \(u\), \(\bar u\) in \(\mathbb{R}^n\), let \(S_{u, \bar u}(X)\) be the number of shadow vertices of \(X\) in \(\operatorname{span}(u, \bar u)\). The paper provides an asymptotic expansion of the expectation \(E (S_{u, \bar u})\) for fixed \(n\) and \(m\to\infty\). The first terms of the expansion are given explicitly. Our investigation of \(E (S_{u, \bar u})\) is closely connected to Borgwardt's probabilistic analysis of the shadow vertex algorithm - a parametric variant of the simplex algorithm. We obtain an improved asymptotic upper bound for the number of pivot steps required by the shadow vertex algorithm for data distributed uniformly on the sphere.
265
In multiple criteria optimization an important research topic is the topological structure of the set \( X_e \) of efficient solutions. Of major interest is the connectedness of \( X_e \), since it would allow the determination of \( X_e \) without considering non-efficient solutions in the process. We review general results on the subject, including the connectedness result for efficient solutions in multiple criteria linear programming. This result can be used to derive a definition of connectedness for discrete optimization problems. We present a counterexample to a previously stated result in this area, namely that the set of efficient solutions of the shortest path problem is connected. We will also show that connectedness does not hold for another important problem in discrete multiple criteria optimization: the spanning tree problem.
266
268
In this paper we will introduce the concept of lexicographic max-ordering solutions for multicriteria combinatorial optimization problems. Section 1 provides the basic notions of multicriteria combinatorial optimization and the definition of lexicographic max-ordering solutions. In Section 2 we will show that lexicographic max-ordering solutions are Pareto optimal as well as max-ordering optimal solutions. Furthermore lexicographic max-ordering solutions can be used to characterize the set of Pareto solutions. Further properties of lexicographic max-ordering solutions are given. Section 3 will be devoted to algorithms. We give a polynomial time algorithm for the two criteria case where one criterion is a sum and one is a bottleneck objective function, provided that the one criterion sum problem is solvable in polynomial time. For bottleneck functions an algorithm for the general case of Q criteria is presented.
267
In this paper we investigate two optimization problems for matroids with multiple objective functions, namely finding the Pareto set and the max-ordering problem, which consists of finding a basis such that the largest objective value is minimal. We prove that the decision versions of both problems are NP-complete. A solution procedure for the max-ordering problem is presented and a result on the relation of the solution sets of the two problems is given. The main results are a characterization of Pareto bases by a basis exchange property and finally a connectivity result for proper Pareto solutions. |
Let $(x_n)$ be a decreasing sequence and $\sum x_n\to s$. Then $(n\cdot x_n)\to 0$
Please check my proof; I'm not completely sure about its correctness.
If $\sum_{k=h}^\infty x_k= s$, we can rewrite the sum to start at index $1$ via the substitution $j=k-h$. Then
$$\left(\sum_{j=1}^n x_j\right)-n\cdot x_n=\sum_{j=1}^n (x_j-x_n)$$
Then taking limits
$$\color{red}{\lim_{n\to\infty}\left[\left(\sum_{j=1}^n x_j\right)-n\cdot x_n\right]}=\lim_{n\to\infty}\sum_{j=1}^n (x_j-x_n)=\sum_{j=1}^\infty (x_j-0)=\color{red}{\lim_{n\to\infty}\sum_{j=1}^n x_j}=s$$
where I used the fact that $(x_n)\to 0$. Thus equating the colored expressions this implies that $\lim_{n\to\infty} nx_n=0$.
The proof, to my eyes, seems correct, but at no point did I need to use the fact that $(x_n)$ is a monotonic sequence, so it is possible that I made a mistake somewhere or that the proof is incorrect.
My second attempt
Because $\sum x_k$ converges and is positive (since $(x_n)\downarrow 0$), we can write
$$\sum_{k=n+1}^{2n+m}x_k=\left|\sum_{k=n+1}^{2n+m}x_k\right|<\epsilon/2,\quad \forall n,m\ge N$$
Then, because $(x_n)$ is decreasing,
$$(n+m)x_{2n+m}\le\sum_{k=n+1}^{2n+m}x_k<\epsilon/2\\\implies (2n+m)x_{2n+m}\le2(n+m)x_{2n+m}<\epsilon,\quad\forall n,m\ge N$$
Because $m$ is arbitrary, setting $M=2N>N$ we can finally write
$$nx_n<\epsilon,\quad\forall n\ge M$$
Is this proof correct? Thank you. |
This isn't an answer, just some remarks with my own ideas about even perfect numbers, which I think are similar to the statements of the previous authors. I am an amateur mathematician.
My belief is that the following conjecture holds (and thus that it is very difficult to find a counterexample). I don't know if this conjecture is in the literature.
Conjecture. An integer $n\geq 1$ is an even perfect number if an only if $$\operatorname{rad}(n)=\frac{1}{\frac{1}{2}-2\frac{\varphi(n)}{\sigma(n)}}.\tag{1}$$
Here $\sigma(m)$ is the sum of divisors function, $\varphi(m)$ the Euler's totient function and $\operatorname{rad}(m)$ the radical of the integer $m\geq 1$ (see the Wikipedia
Radical of an integer).
Thus with this answer you can make a comparison with the statement you cite from the authors, because it is easy to prove, by cases, that any even perfect number satisfies $(1)$. Thus it is obvious that if $n$ is an even perfect number then $$\frac{1}{\frac{1}{2}-2\frac{\varphi(n)}{\sigma(n)}}=\frac{1}{\frac{1}{2}-\frac{\varphi(n)}{n}}\tag{2}$$ is an integer, and it is the integer $\operatorname{rad}(n)$. We see that $(2)$ can be written as $\frac{n}{\frac{1}{2}n-\varphi(n)}$, thus we have the following easy fact.
Fact. If $n$ is an even perfect number then $$\frac{n}{\frac{1}{2}n-\varphi(n)}$$ is an integer.
That is the fact that I wanted to evoke, in case you want to make comparisons with your own problem. I hope that you get an answer to your nice question. Good luck.
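Conjecture $(1)$ is easy to test numerically for small $n$; a sketch using exact rational arithmetic (the search bound $10^4$ is an arbitrary choice):

```python
from fractions import Fraction

def factorize(n):
    # trial-division prime factorization, {prime: exponent}
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def phi(n):    # Euler's totient
    r = n
    for p in factorize(n):
        r = r // p * (p - 1)
    return r

def sigma(n):  # sum of divisors
    s = 1
    for p, e in factorize(n).items():
        s *= (p**(e + 1) - 1) // (p - 1)
    return s

def rad(n):    # radical: product of distinct prime factors
    r = 1
    for p in factorize(n):
        r *= p
    return r

hits = []
for n in range(2, 10000):
    denom = Fraction(1, 2) - 2 * Fraction(phi(n), sigma(n))
    if denom > 0 and 1 / denom == rad(n):
        hits.append(n)
print(hits)    # the even perfect numbers 6, 28, 496 and 8128 all appear
```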
I tried to relate even perfect numbers to Euler's totient; my guess is that even perfect numbers may be closely related to the totient function. In fact, here we have the following conjecture.
Conjecture. An integer $m\geq 1$ satisfies $$2\varphi(\varphi((m+1)(2m+1)))=(m+1)\varphi(m)\tag{3}$$
if and only if $(m+1)(2m+1)$ is an even perfect number.
Again I know how to prove $\Rightarrow$, but I cannot get the full proof or find a counterexample. |
I'm working on a programing project. For that project I have a triangle with points $A,B,C$, where $A(a_1,a_2,a_3);B(b_1,b_2,b_3);C(c_1,c_2,c_3)$. Given the coordinates of the points $A,B$ and $C$, I want to find the coordinates of the orthocenter, circumcenter,incenter and the points where the perpendicular bisectors ,altitudes and angle bisectors meet with the $AB,BC,CA$. I believe there are formulas for each of these things. I tried looking online, but I couldn't find anything, so I'm asking you - Are there formulas for these things, and if so what are they?
Of course there are formulas, but it is probably easier to derive them than to find them online. The derivations become much easier if you work with vectors and take point $A$ to be $(0,0,0)$ (that is, translating by subtracting $(a_1, a_2, a_3)$ from all of the points until the very end, when you add it back).
Let's take the meeting points of the altitudes with the sides first, and look for point $P$ where the altitude from $C$ meets $AB$ (the other two cases are easy switches of $A$, $B$ and $C$ once you know that case). Let $AB = \vec{b}$ and $AC = \vec{c}$ and $AP = \vec{p}$. Then, because $P$ is on (extended) line $AB$, $$\vec{p} = k\vec{b}$$ for some scalar $k$. And since $CP \perp AP$,
$$ \vec{c}-\vec{p} \perp \vec{b} \rightarrow (\vec{c} - k\vec{b})\cdot \vec{b} = 0 \rightarrow k = \frac{\vec{c}\cdot \vec{b}}{|b|^2} = \frac{b_1 c_1 + b_2 c_2 + b_3 c_3}{b_1^2 + b_2^2 + b_3^2} $$ $$\vec{p} = \frac{\vec{c}\cdot \vec{b}}{|b|^2}\vec{b} = \frac{b_1 c_1 + b_2 c_2 + b_3 c_3}{b_1^2 + b_2^2 + b_3^2}\left( b_1, b_2, b_3\right) $$ Translating back to the original coordinates this gives $$ \left( a_1 + k (b_1-a_1),\ a_2 + k (b_2-a_2),\ a_3 + k (b_3-a_3) \right) $$ with $$ k=\frac{(b_1-a_1) (c_1-a_1) + (b_2-a_2) (c_2-a_2) + (b_3-a_3) (c_3-a_3)}{(b_1-a_1)^2 + (b_2-a_2)^2 + (b_3-a_3)^2} $$ (You can see how to do the translation to original coordinates from this; from here forward I will only show the work in the coordinates with $A$ at the origin.)
As long as we are working with altitudes, let's do the orthocenter next: We start with $Q$, the foot of the altitude on $AC$, which by the same reasoning as above is at $$ \vec{q} = k_b\vec{c} $$ where $$ k_b = \frac{\vec{b}\cdot \vec{c}}{|c|^2} $$ and for notational symmetry we write the $k$ given above as $$ k_c = \frac{\vec{c}\cdot \vec{b}}{|b|^2} $$ Line $BQ$ is described by $\vec{b} + \alpha (\vec q - \vec{b}) $ and line $CP$ is described by $\vec{c} + \beta (\vec p - \vec{c}) $. Setting these equal, we have: $$\begin{array}{l} \alpha\vec{q} + (1-\alpha)\vec{b} = \beta\vec{p} + (1-\beta)\vec{c} \\ \alpha k_b \vec{c} + (1-\alpha)\vec{b} = \beta k_c \vec{b} + (1-\beta)\vec{c} \\ (1-\alpha - \beta k_c) \vec{b} = (1-\beta - \alpha k_b) \vec{c} \end{array} $$ and since $\vec{b}$ and $\vec{c}$ are not linearly dependent, this can only be true if $$\left\{ \begin{array}{l} 1-\alpha - \beta k_c = 0\\ 1-\beta - \alpha k_b =0 \end{array} \right. $$ then $$ \left\{ \begin{array}{l} \alpha = \frac{1-k_c}{1-k_b k_c}\\ \beta = \frac{1-k_b}{1-k_b k_c} \end{array} \right. $$ so the orthocenter is at $$ \vec{b} + \alpha (\vec q - \vec{b}) = \vec{b} + \frac{1-k_c}{1-k_b k_c}(k_b\vec{c} - \vec{b}) $$ On the computer you calculate $k_b$ and $k_c$ and then combine in this way.
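A numerical sanity check of the altitude-foot and orthocenter construction (a sketch; the triangle coordinates are arbitrary, and solving the two linear equations gives the intersection coefficient $\alpha=(1-k_c)/(1-k_b k_c)$):

```python
import numpy as np

# with A translated to the origin: b = B - A, c = C - A
A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 0.0, 1.0])
C = np.array([2.0, 5.0, -1.0])
b, c = B - A, C - A

k_c = (c @ b) / (b @ b)
P = A + k_c * b                    # foot of the altitude from C onto line AB
k_b = (b @ c) / (c @ c)
Q = A + k_b * c                    # foot of the altitude from B onto line AC

alpha = (1 - k_c) / (1 - k_b * k_c)
H = B + alpha * (Q - B)            # orthocenter: intersection of the altitudes

print(abs((C - P) @ b) < 1e-9)     # CP is perpendicular to AB: True
print(abs((H - C) @ b) < 1e-9)     # H lies on the altitude from C: True
print(abs((H - B) @ c) < 1e-9)     # H lies on the altitude from B: True
```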
The perpendicular bisector points are trivial in this scheme: On $AB$ the point is $\vec{b}/2$ for example.
The three medians are just as easy: the median from $C$ goes to $\vec{b}/2$ and the median from $B$ goes to $\vec{c}/2$, so at their intersection $$ \begin{array}{l} \alpha \vec{c} + (1-\alpha)\vec{b}/2 = \beta \vec{b} + (1-\beta)\vec{c}/2 \\ \left( \alpha - \frac{1-\beta}{2} \right) \vec{c}= \left( \beta - \frac{1-\alpha}{2} \right) \vec{b} \end{array} $$ and as before, each of those coefficients must be zero so $$ \begin{array}{l} \beta = \frac{1-\alpha}{2} \\ -\frac{1}{2} + \alpha + \frac{1-\alpha}{4} = 0 \\ \alpha = \frac{1}{3} \end{array} $$ and the centroid is at $$ \frac{\vec{b}+\vec{c}}{3} $$ (The circumcenter, where the three perpendicular bisectors meet, is found the same way, except that each line passes through a midpoint such as $\vec{b}/2$ perpendicular to the corresponding side rather than through the opposite vertex.) The same sort of techniques work to find the other points. Probably your professor wanted you to do these calculations as part of your project, so I won't finish it all for you. The hardest one will be the incenter, which is the intersection of the angle bisectors. |
A very simple version of the central limit theorem is $$ \sqrt{n}\bigg(\bigg(\frac{1}{n}\sum_{i=1}^n X_i\bigg) - \mu\bigg)\ \xrightarrow{d}\ \mathcal{N}(0,\;\sigma^2) $$ which is the Lindeberg–Lévy CLT. I do not understand why there is a $\sqrt{n}$ on the left-hand side. And the Lyapunov CLT says $$ \frac{1}{s_n} \sum_{i=1}^{n} (X_i - \mu_i) \ \xrightarrow{d}\ \mathcal{N}(0,\;1) $$ but why not $\sqrt{s_n}$? Would anyone tell me what these factors, such as $\sqrt{n}$ and $\frac{1}{s_n}$, are, and how we get them in the theorem?
Nice question (+1)!!
You will remember that for independent random variables $X$ and $Y$, $Var(X+Y) = Var(X) + Var(Y)$ and $Var(a\cdot X) = a^2 \cdot Var(X)$. So the variance of $\sum_{i=1}^n X_i$ is $\sum_{i=1}^n \sigma^2 = n\sigma^2$, and the variance of $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ is $n\sigma^2 / n^2 = \sigma^2/n$.
This is for the variance. To standardize a random variable, you divide it by its standard deviation. As you know, the expected value of $\bar{X}$ is $\mu$, so the variable
$$ \frac{\bar{X} - E\left( \bar{X} \right)}{\sqrt{ Var(\bar{X}) }} = \sqrt{n} \frac{\bar{X} - \mu}{\sigma}$$ has expected value 0 and variance 1. So if it tends to a Gaussian, it has to be the standard Gaussian $\mathcal{N}(0,\;1)$. Your formulation in the first equation is equivalent. By multiplying the left hand side by $\sigma$ you set the variance to $\sigma^2$.
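A quick simulation makes the role of the $\sqrt{n}$ visible: the spread of $\sqrt{n}(\bar X - \mu)$ stabilises at $\sigma$ as $n$ grows (a sketch; the uniform samples and the trial count are arbitrary choices):

```python
import random
import statistics

random.seed(0)

def scaled_dev(n, trials=2000):
    # X_i uniform on [0,1]: mu = 1/2, sigma^2 = 1/12, sigma ~ 0.289
    devs = []
    for _ in range(trials):
        xbar = sum(random.random() for _ in range(n)) / n
        devs.append(n**0.5 * (xbar - 0.5))
    return statistics.stdev(devs)

for n in (10, 100, 1000):
    # without the sqrt(n) factor these spreads would shrink like 1/sqrt(n);
    # with it, they should all sit near sigma = sqrt(1/12)
    print(n, round(scaled_dev(n), 3))
```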
Regarding your second point, I believe that the equation shown above illustrates that you have to divide by $\sigma$ and not $\sqrt{\sigma}$ to standardize the expression, which explains why you use $s_n$ (the estimator of $\sigma$) and not $\sqrt{s_n}$.
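As a quick numerical illustration of the standardization above (my addition, not part of the original answer), the following simulation checks that $\sqrt{n}(\bar X - \mu)$ has standard deviation close to $\sigma$, even for non-Gaussian data:

```python
import numpy as np

# Sketch: empirically check that sqrt(n) * (sample mean - mu) has
# standard deviation close to sigma, even for a non-Gaussian input.
rng = np.random.default_rng(0)
n, reps = 1000, 5000
mu = sigma = 2.0                      # exponential(scale=2) has mean 2, sd 2

samples = rng.exponential(scale=2.0, size=(reps, n))
scaled = np.sqrt(n) * (samples.mean(axis=1) - mu)
print(scaled.std())                   # close to sigma = 2
```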
Addition: @whuber suggests discussing the why of the scaling by $\sqrt{n}$. He does so there, but because the answer is very long I will try to capture the essence of his argument (which is a reconstruction of de Moivre's thoughts).
If you add a large number $n$ of +1's and -1's, you can approximate the probability that the sum will be $2j$ by elementary counting. The log of this probability is proportional to $-j^2/n$. So if we want the probability above to converge to a constant as $n$ grows large, we have to use a normalizing factor of order $\sqrt{n}$.
Using modern (post de Moivre) mathematical tools, you can see the approximation mentioned above by noticing that the sought probability is
$$P(j) = \frac{{n \choose n/2+j}}{2^n} = \frac{n!}{2^n(n/2+j)!(n/2-j)!}$$
which we approximate by Stirling's formula
$$ P(j) \approx \frac{n^n e^{n/2+j} e^{n/2-j}}{2^n e^n (n/2+j)^{n/2+j} (n/2-j)^{n/2-j} } = \left(\frac{1}{1+2j/n}\right)^{n/2+j} \left(\frac{1}{1-2j/n}\right)^{n/2-j}. $$
$$ \log(P(j)) \approx -\left(\tfrac{n}{2}+j\right) \log\left(1+\tfrac{2j}{n}\right) - \left(\tfrac{n}{2}-j\right) \log\left(1-\tfrac{2j}{n}\right) = -\frac{2j^2}{n} + O\!\left(\frac{j^4}{n^3}\right) \propto -j^2/n.$$
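This scaling is easy to check exactly (my sketch, not de Moivre's): computing $\log(P(j)/P(0))$ from exact binomial coefficients — the ratio cancels the slowly varying Stirling prefactor — and normalizing by $j^2/n$ gives a value that settles to the constant $-2$:

```python
from math import comb, log

# Sketch: exact check that log(P(j) / P(0)) behaves like -2 * j**2 / n,
# where P(j) = C(n, n/2 + j) / 2**n is the probability that n independent
# +/-1 steps sum to 2j.  Dividing by P(0) cancels the Stirling prefactor.
def log_ratio(n, j):
    return log(comb(n, n // 2 + j)) - log(comb(n, n // 2))

for n in (10**3, 10**4, 10**5):
    j = int(round(n**0.5))            # deviations of order sqrt(n)
    val = n * log_ratio(n, j) / j**2
    print(n, val)                     # tends to -2
```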
There is a nice theory of what kind of distributions can be limiting distributions of sums of random variables. The nice resource is the following book by Petrov, which I personally enjoyed immensely.
It turns out that if you are investigating limits of this type $$\frac{1}{a_n}\sum_{i=1}^nX_i-b_n, \quad (1)$$ where $X_i$ are independent random variables, the distributions of the limits are only certain distributions.
There is a lot of mathematics going on there, which boils down to several theorems that completely characterize what happens in the limit. One such theorem is due to Feller:
Theorem Let $\{X_n;n=1,2,...\}$ be a sequence of independent random variables, $V_n(x)$ be the distribution function of $X_n$, and $a_n$ be a sequence of positive constants. In order that
$$\max_{1\le k\le n}P(|X_k|\ge\varepsilon a_n)\to 0, \text{ for every fixed } \varepsilon>0$$
and
$$\sup_x\left|P\left(a_n^{-1}\sum_{k=1}^nX_k<x\right)-\Phi(x)\right|\to 0$$
it is necessary and sufficient that
$$\sum_{k=1}^n\int_{|x|\ge \varepsilon a_n}dV_k(x)\to 0 \text{ for every fixed }\varepsilon>0,$$
$$a_n^{-2}\sum_{k=1}^n\left(\int_{|x|<a_n}x^2dV_k(x)-\left(\int_{|x|<a_n}xdV_k(x)\right)^2\right)\to 1$$
and
$$a_n^{-1}\sum_{k=1}^n\int_{|x|<a_n}xdV_k(x)\to 0.$$
This theorem then gives you an idea of what $a_n$ should look like.
The general theory in the book is constructed in such a way that the norming constant is not restricted in any way, but the final theorems, which give necessary and sufficient conditions, do not leave any room for a norming constant other than $\sqrt{n}$.
$s_n$ represents the sample standard deviation of the sample mean. $s_n^2$ is the sample variance of the sample mean and it equals $S_n^2/n$, where $S_n^2$ is the sample estimate of the population variance. Since $s_n = S_n/\sqrt{n}$, that explains how $\sqrt{n}$ appears in the first formula. Note there would be a $\sigma$ in the denominator if the limit were $N(0,1)$, but the limit is given as $N(0, \sigma^2)$. Since $S_n$ is a consistent estimate of $\sigma$, it is used in the second equation to take $\sigma$ out of the limit.
Intuitively, if $Z_n \to \mathcal N(0, \sigma^2)$ for some $\sigma^2$ we should expect that $\mbox{Var}(Z_n)$ is roughly equal to $\sigma^2$; it seems like a pretty reasonable expectation, though I don't think it is necessary in general. The reason for the $\sqrt n$ in the first expression is that the variance of $\bar X_n - \mu$ goes to $0$ like $\frac 1 n$ and so the $\sqrt n$ is inflating the variance so that the expression just has variance equal to $\sigma^2$. In the second expression, the term $s_n$ is defined to be $\sqrt{\sum_{i = 1} ^ n \mbox{Var}(X_i)}$ while the variance of the numerator grows like $\sum_{i = 1} ^ n \mbox{Var}(X_i)$, so we again have that the variance of the whole expression is a constant ($1$ in this case).
Essentially, we know something "interesting" is happening with the distribution of $\bar X_n := \frac 1 n \sum_i X_i$, but if we don't properly center and scale it we won't be able to see it. I've heard this described sometimes as needing to adjust the microscope. If we don't blow up (e.g.) $\bar X - \mu$ by $\sqrt n$ then we just have $\bar X_n - \mu \to 0$ in distribution by the weak law; an interesting result in its own right but not as informative as the CLT. If we inflate by any factor $a_n$ which is dominated by $\sqrt n$, we still get $a_n(\bar X_n - \mu) \to 0$ while any factor $a_n$ which dominates $\sqrt n$ gives $a_n(\bar X_n - \mu) \to \infty$. It turns out $\sqrt n$ is just the right magnification to be able to see what is going on in this case (note: all convergence here is in distribution; there is another level of magnification which is interesting for almost sure convergence, which gives rise to the law of iterated logarithm). |
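The "adjusting the microscope" picture can be illustrated numerically (my sketch, not part of the answer): a magnification dominated by $\sqrt n$, say $n^{1/4}$, still collapses the spread toward zero, while $\sqrt n$ keeps it at a constant $\sigma$:

```python
import numpy as np

# Sketch: compare the spread of a_n * (sample mean - mu) for two
# magnifications.  a_n = n**0.25 is too weak (spread shrinks with n);
# a_n = sqrt(n) is just right (spread stays near sigma).
rng = np.random.default_rng(1)
mu, sigma = 0.0, 1.0
reps = 2000

for n in (64, 4096):
    means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    weak = (n**0.25) * (means - mu)     # std ~ n**(-1/4): shrinks
    right = np.sqrt(n) * (means - mu)   # std ~ sigma: constant
    print(n, weak.std(), right.std())
```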
Arithmetic Mean:
Here we will learn the Arithmetic Mean Formula With Examples. The arithmetic mean is defined as the sum of all values divided by the total number of values. The arithmetic mean is also called the arithmetic average. It is the most commonly used measure of central tendency.
Arithmetic averages are of two types: simple arithmetic mean and weighted arithmetic mean.
Calculation of Simple Arithmetic Mean/Average
We calculate simple arithmetic mean for the Individual Series, Discrete Series and Continuous Series. Following are the methods for finding arithmetic mean:
Direct Method
Individual Series
Let X be a variable which takes the values x_1, x_2, x_3, …, x_n over ‘n’ observations. Then the arithmetic mean, or simply the mean of X, denoted by a bar over the variable X, is given by
\[\overline{X}=\frac{{{x}_{1}}+{{x}_{2}}+{{x}_{3}}+\cdots+{{x}_{n}}}{n}=\frac{\sum\limits_{i=1}^{n}{{{x}_{i}}}}{n}\]
Discrete Series
\[\overline{X}=\frac{\sum{fx}}{N}\]
∑fx = Sum of the product of the values and their corresponding frequencies
N = Sum of the frequencies, i.e., ∑f, or total number of observations.
Continuous Series
\[\overline{X}=\frac{\sum{fm}}{N}\]
∑fm = Sum of the product of mid values and their corresponding frequencies
N = Sum of the frequencies i.e., ∑f or total number of observations.
Example 01 Calculate the mean of the following data. Marks obtained by 6 students: 20, 15, 23, 22, 25, 20. Solution:
Mean marks,
\[\overline{X}=\frac{{{x}_{1}}+{{x}_{2}}+…+{{x}_{n}}}{n}\]
\[=\frac{20+15+23+22+25+20}{6}\]
\[=\frac{125}{6}=20.83\]
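The direct-method calculation above is easy to reproduce in code (a short sketch; the Python is my addition, not part of the original tutorial):

```python
# Sketch: arithmetic mean of an individual series by the direct method.
marks = [20, 15, 23, 22, 25, 20]
mean = sum(marks) / len(marks)
print(round(mean, 2))  # 20.83
```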
Example 02 The six-month income of a departmental store is given below. Find the mean income of the store.
Month:      Jan    Feb    Mar    Apr    May    June
Income ($): 25000  30000  45000  20000  25000  20000
Solution:
n = Total number of items (observations) = 6
Total income = ∑x_i = $(25000 + 30000 + 45000 + 20000 + 25000 + 20000) = $165000
Mean Income,
\[=\frac{\sum{{{x}_{i}}}}{n}=\frac{$ 165000}{6}=$ 27500\]
Example 03 Calculate the arithmetic mean by direct method from the following data
Wage:            10  20  30  40  50
No. of Workers:  4   5   3   2   5
Solution:
Let us denote the wages by x and number of workers by f.
Wage (x)   No. of Workers (f)   fx
10         4                    40
20         5                    100
30         3                    90
40         2                    80
50         5                    250
           ∑f = 19              ∑fx = 560
Hence, mean wage of the workers,
\[\overline{X}=\frac{\sum{fx}}{\sum{f}}=\frac{560}{19}=29.47\]
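The same discrete-series computation as a short sketch in Python (my addition, not part of the original tutorial):

```python
# Sketch: arithmetic mean of a discrete series, mean = sum(f*x) / sum(f).
wages = [10, 20, 30, 40, 50]
workers = [4, 5, 3, 2, 5]

total_fx = sum(f * x for f, x in zip(workers, wages))  # 560
total_f = sum(workers)                                 # 19
print(round(total_fx / total_f, 2))  # 29.47
```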
Example 04 Calculate the missing value, given that the mean is 115.86.
Wage:            110  112  113  117  –   125  128  130
No. of Workers:  25   17   13   15   14  8    6    2
Solution: Let us denote the wages by x, the number of workers by f and the missing item by ‘a’.
Wage (x)   No. of Workers (f)   fx
110        25                   2750
112        17                   1904
113        13                   1469
117        15                   1755
a          14                   14a
125        8                    1000
128        6                    768
130        2                    260
           ∑f = 100             ∑fx = 9906 + 14a
\[\overline{X}=\frac{\sum{fx}}{\sum{f}}\]
\[\Rightarrow 115.86=\frac{9906+14a}{100}\]
\[\Rightarrow 115.86\times 100=9906+14a\]
\[\Rightarrow 14a=11586-9906\]
\[\Rightarrow 14a=1680\]
\[\Rightarrow a=\frac{1680}{14}\]
\[\therefore a=120\]
Example 05 Find the mean for the following distribution by using direct method.
Class Interval:  84-90  90-96  96-102  102-108  108-114
Frequency:       8      12     15      10       5
This is the case of continuous series. Let ‘m’ be the mid-value and ‘f’ be the frequency.
Class interval   Mid-value (m)   Frequency (f)   fm
84-90            87              8               696
90-96            93              12              1116
96-102           99              15              1485
102-108          105             10              1050
108-114          111             5               555
                                 ∑f = 50         ∑fm = 4902

Hence, the mean,
\[\overline{X}=\frac{\sum{fm}}{\sum{f}}=\frac{4902}{50}=98.04\]
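A sketch of the continuous-series calculation in Python (my addition; the class mid-value is (lower + upper)/2, so the mid-value of the class 84–90 is 87):

```python
# Sketch: mean of a continuous (grouped) series using class midpoints.
classes = [(84, 90), (90, 96), (96, 102), (102, 108), (108, 114)]
freqs = [8, 12, 15, 10, 5]

midpoints = [(lo + hi) / 2 for lo, hi in classes]      # 87, 93, 99, 105, 111
total_fm = sum(f * m for f, m in zip(freqs, midpoints))
mean = total_fm / sum(freqs)
print(round(mean, 2))  # 98.04
```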
Short Cut Method
Individual Series
The arithmetic mean can also be calculated by taking deviations from any arbitrary points. Steps of this method are given below.
Step 1: Assume any one value as the mean; this is called the arbitrary average (A).
Step 2: Find the difference (deviation) of each value from the arbitrary average: d = x_i – A.
Step 3: Add all the differences to get ∑d.
Step 4: Use the following equation to compute the mean value.
\[\overline{X}=A+\frac{\sum{d}}{n}\]
Discrete Series
\[\overline{X}=A+\frac{\sum{fd}}{\sum{f}}\]
Continuous Series
\[\overline{X}=A+\frac{\sum{fd}}{\sum{f}}\]
Where,
n = Total number of observations.
∑d = Total deviation value.
d = Deviation of an item from the assumed mean.
A = Assumed mean.
∑fd = Sum of the products of the deviations and their corresponding frequencies.
Example 01 Determine the average salary of a staff from the following data relating to the monthly salaries of the teaching staff of a college by using short cut method.
Salary ($):    2200  2500  3000  3700  4500
No. of Staff:  5     10    15    7     3
Solution: Let the assumed mean A = 3000
x      f    d = (x – A)   fd
2200   5    -800          -4000
2500   10   -500          -5000
3000   15   0             0
3700   7    700           4900
4500   3    1500          4500
       ∑f = 40            ∑fd = 400
We have,
\[\overline{X}=A+\frac{\sum{fd}}{\sum{f}}=3000+\frac{400}{40}=3000+10=3010\]
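The short cut method gives exactly the same answer as the direct method; a small Python sketch (my addition) verifies this for the salary data:

```python
# Sketch: short cut method, mean = A + sum(f*d)/sum(f) with d = x - A.
salaries = [2200, 2500, 3000, 3700, 4500]
staff = [5, 10, 15, 7, 3]
A = 3000  # assumed mean

total_fd = sum(f * (x - A) for f, x in zip(staff, salaries))  # 400
mean = A + total_fd / sum(staff)
print(mean)  # 3010.0

# Agrees with the direct method:
direct = sum(f * x for f, x in zip(staff, salaries)) / sum(staff)
assert mean == direct
```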
Example 02 Calculate the average marks obtained by BCA students in the Mathematics paper by the short cut method.
Class of Marks:   0-10  10-20  20-30  30-40  40-50
No. of Students:  5     3      7      25     20
Solution: Let A = 25
Marks   No. of Students (f)   Mid-value (m)   d = m – 25   fd
0-10    5                     5               -20          -100
10-20   3                     15              -10          -30
20-30   7                     25              0            0
30-40   25                    35              10           250
40-50   20                    45              20           400
        ∑f = 60                                            ∑fd = 520
\[\overline{X}=A+\frac{\sum{fd}}{\sum{f}}\]
\[=25+\frac{520}{60}=25+8.67=33.67\]
Step Deviation Method
This method is an extension of the short cut method. It is used when the deviation figures are large and divisible by a common factor.
Individual Series
\[\overline{X}=A+\frac{\sum{d'}}{n}\times c\]
Discrete Series
\[\overline{X}=A+\frac{\sum{fd'}}{\sum{f}}\times c\]
Continuous Series
\[\overline{X}=A+\frac{\sum{fd'}}{\sum{f}}\times c\]
Where,
c = the common factor by which each deviation is divided.
d' = the deviation from the assumed average divided by the common factor.
Example 01 Calculate the arithmetic mean by means of the step deviation method.
Marks:            0-10  10-20  20-30  30-40  40-50
No. of Students:  20    24     40     36     20
Solution:
Let Assumed average A = 25
Marks   Mid-value (m)   f     d = m – 25   d' = d/10   fd'
0-10    5               20    -20          -2          -40
10-20   15              24    -10          -1          -24
20-30   25 = A          40    0            0           0
30-40   35              36    10           1           36
40-50   45              20    20           2           40
                        ∑f = 140                       ∑fd' = 12
\[\overline{X}=A+\frac{\sum{f{{d}^{‘}}}}{\sum{f}}\times c\]
\[=25+\frac{12}{140}\times 10=25.86\]
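A Python sketch of the step deviation method (my addition; here c = 10 is the class width):

```python
# Sketch: step deviation method, mean = A + (sum(f*d')/sum(f)) * c,
# where d' = (m - A) / c and m is the class midpoint.
midpoints = [5, 15, 25, 35, 45]
freqs = [20, 24, 40, 36, 20]
A, c = 25, 10

total_fd_prime = sum(f * (m - A) / c for f, m in zip(freqs, midpoints))  # 12
mean = A + (total_fd_prime / sum(freqs)) * c
print(round(mean, 2))  # 25.86
```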
Weighted Arithmetic Average
In the case of the simple arithmetic mean, equal importance is given to every item of the series. But there may be cases where not all items are equally important, so different weights are given to different items in accordance with the nature and purpose of the study. A weighted average is always advisable for comparative studies.
Direct Method
\[\overline{{{X}_{W}}}=\frac{\sum{Wx}}{\sum{W}}\]
Short Cut Method
\[\overline{{{X}_{W}}}=A+\frac{\sum{Wd}}{\sum{W}}\]
Step Deviation Method
\[\overline{{{X}_{W}}}=A+\frac{\sum{Wd'}}{\sum{W}}\times c\]
Where,
∑Wx = Sum of the products of the values and their corresponding weights.
∑W = Sum of weights.
A = Assumed average.
∑Wd = Sum of the products of deviations from the assumed average and their corresponding weights.
c = Common factor.
Example 01 Calculate the weighted mean of the following data:
Items:    81  76  74  58  70  73
Weights:  2   3   6   7   3   7
Solution:
From the above data we have,
x    W    Wx
81   2    162
76   3    228
74   6    444
58   7    406
70   3    210
73   7    511
     ∑W = 28   ∑Wx = 1961
\[\overline{{{X}_{W}}}=\frac{\sum{Wx}}{\sum{W}}=\frac{1961}{28}=70.04\] |
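And a matching Python sketch for the weighted mean (my addition, not part of the original tutorial):

```python
# Sketch: weighted arithmetic mean = sum(W*x) / sum(W).
items = [81, 76, 74, 58, 70, 73]
weights = [2, 3, 6, 7, 3, 7]

weighted_mean = sum(w * x for w, x in zip(weights, items)) / sum(weights)
print(round(weighted_mean, 2))  # 70.04
```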
Now let's look at a mathematical approach to resource theories. As I've mentioned, resource theories let us tackle questions like these:
Our first approach will only tackle question 1. Given \(y\), we will only ask: is it possible to get \(x\)? This is a yes-or-no question, unlike questions 2-4, which are more complicated. If the answer is yes we will write \(x \le y\).
So, for now our resources will form a "preorder", as defined in Lecture 3.
Definition. A preorder is a set \(X\) equipped with a relation \(\le\) obeying:

reflexivity: \(x \le x\) for all \(x \in X\),

transitivity: \(x \le y\) and \(y \le z\) imply \(x \le z\) for all \(x,y,z \in X\).
All this makes sense. Given \(x\) you can get \(x\). And if you can get \(x\) from \(y\) and get \(y\) from \(z\) then you can get \(x\) from \(z\).
What's new is that we can also combine resources. In chemistry we denote this with a plus sign: if we have a molecule of \(\text{H}_2\text{O}\) and a molecule of \(\text{CO}_2\) we say we have \(\text{H}_2\text{O} + \text{CO}_2\). We can use almost any symbol we want; Fong and Spivak use \(\otimes\) so I'll often use that. We pronounce this symbol "tensor". Don't worry about why: it's a long story, but you can live a long and happy life without knowing it.
It turns out that when you have a way to combine things, you also want a special thing that acts like "nothing". When you combine \(x\) with nothing, you get \(x\). We'll call this special thing \(I\).
Definition. A monoid is a set \(X\) equipped with:

a binary operation \(\otimes : X \times X \to X\) and an element \(I \in X\),

such that these laws hold:

the associative law: \( (x \otimes y) \otimes z = x \otimes (y \otimes z) \) for all \(x,y,z \in X\),

the left and right unit laws: \(I \otimes x = x = x \otimes I\) for all \(x \in X\).
You know lots of monoids. In mathematics, monoids rule the world! I could talk about them endlessly, but today we need to combine the monoids and preorders:
Definition. A monoidal preorder is a set \(X\) with a relation \(\le\) making it into a preorder, an operation \(\otimes : X \times X \to X\) and element \(I \in X\) making it into a monoid, and obeying:
$$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$

This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg and a slice of bread into a fried egg and a piece of toast!
You know lots of monoidal preorders, too! Many of your favorite number systems are monoidal preorders:
The set \(\mathbb{R}\) of real numbers with the usual \(\le\), the binary operation \(+: \mathbb{R} \times \mathbb{R} \to \mathbb{R} \) and the element \(0 \in \mathbb{R}\) is a monoidal preorder.
Same for the set \(\mathbb{Q}\) of rational numbers.
Same for the set \(\mathbb{Z}\) of integers.
Same for the set \(\mathbb{N}\) of natural numbers.
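As a quick executable illustration (my sketch, not from the lecture), we can spot-check the compatibility law \(x \le x'\) and \(y \le y'\) imply \(x \otimes y \le x' \otimes y'\) for \((\mathbb{Z}, \le, +, 0)\) on a small range of integers:

```python
from itertools import product

# Sketch: spot-check the monoidal preorder law for (Z, <=, +, 0)
# on a small sample of integers.
xs = range(-3, 4)

def law_holds(x, xp, y, yp):
    # If x <= x' and y <= y', then x + y must be <= x' + y'.
    if x <= xp and y <= yp:
        return x + y <= xp + yp
    return True  # the law only constrains the case where the hypotheses hold

assert all(law_holds(x, xp, y, yp)
           for x, xp, y, yp in product(xs, repeat=4))
print("monoidal preorder law holds on the sample")
```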
Money is an important resource: outside of mathematics, money rules the world. We combine money by addition, and we often use these different number systems to keep track of money. In fact it was bankers who invented negative numbers, to keep track of debts! The idea of a "negative resource" was very radical: it took mathematicians over a century to get used to it.
But sometimes we combine numbers by multiplication. Can we get monoidal preorders this way?
Puzzle 60. Is the set \(\mathbb{N}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\) and the element \(1 \in \mathbb{N}\) a monoidal preorder?

Puzzle 61. Is the set \(\mathbb{R}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}\) and the element \(1 \in \mathbb{R}\) a monoidal preorder?

Puzzle 62. One of the questions above has the answer "no". What's the least destructive way to "fix" this example and get a monoidal preorder?

Puzzle 63. Find more examples of monoidal preorders.

Puzzle 64. Are there monoids that cannot be given a relation \(\le\) making them into monoidal preorders?

Puzzle 65. A monoidal poset is a monoidal preorder that is also a poset, meaning $$ x \le y \textrm{ and } y \le x \textrm{ imply } x = y $$ for all \(x ,y \in X\). Are there monoids that cannot be given any relation \(\le\) making them into monoidal posets?
Puzzle 66. Are there posets that cannot be given any operation \(\otimes\) and element \(I\) making them into monoidal posets? |
I have a One-Versus-All classification task with 80 different labels. In order to parallelize the problem to take advantage of multiple nodes on a computer cluster, I first trained 80 binary SVM classifiers in parallel with MATLAB's frontend of libSVM. All of them use the default RBF kernel. The models are cross-validated and all get dumped to disk, so I can load them at any time later on and run predictions on testing data. The idea is to run every model on the testing data and assign every testing point the label of the classifier that provides me with the most confident response, in classic OVA fashion.
Unfortunately, for binary classification tasks, the function svm_predict does not output confidences of any form, for instance distance from the margin or probabilities. I quote the following from the general-purpose (top-level) README of libSVM:
`svm-predict' Usage
Usage: svm-predict [options] test_file model_file output_file
options:
-b probability_estimates: whether to predict probability estimates, 0 or 1 (default 0);
for one-class SVM only 0 is supported
This tells us that we cannot use probability estimates for binary SVMs, which would be fine if we had some other estimate of the classifier's confidence for every test point. For multi-class SVMs, svm_predict outputs such a confidence value, as this quote from the MATLAB readme (libsvm-root/matlab/README) of libsvm suggests:
Result of Prediction
The function 'svmpredict' has three outputs. The first one, predictd_label, is a vector of predicted labels. The second output, accuracy, is a vector including accuracy (for classification), mean squared error, and squared correlation coefficient (for regression).
The third is a matrix containing decision values or probability estimates (if '-b 1' is specified). If k is the number of classes in training data, for decision values, each row includes results of predicting k(k-1)/2 binary-class SVMs. For classification, k = 1 is a special case. Decision value +1 is returned for each testing instance, instead of an empty vector. For probabilities, each row contains k values indicating the probability that the testing instance is in each class. Note that the order of classes here is the same as the 'Label' field in the model structure.
So, not only does this give me decision values in an All-Versus-All fashion (since each row contains $k \choose 2$ values) instead of my desired One-Versus-All fashion, but for k=1 this row is not even well-defined, giving me a decision value of +1. Now, if I used a linear kernel instead, computing the distance from the hyperplane for a test point $x$ would be very easy: I would just have to compute $w \cdot x + b$. This post from Stats shows that this is possible:
And the question-answer pair "How could I generate the primal variable w of linear SVM" from the libsvm FAQ:
shows how one can retrieve the primal variable $\vec{w}$ from the dual $\vec{\alpha}$ by using variables stored in the trained model. In the case of the RBF kernel, however, I am not sure at all how I can find such a decision value. Is there any way I can calculate this without having to repeat all my experiments again with a linear kernel instead? |
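For what it's worth, the decision value can also be written for any kernel directly from the dual form, $f(x) = \sum_i \alpha_i y_i K(x_i, x) - \rho$, using the support vectors and coefficients stored in the trained model. Below is a minimal NumPy sketch of that formula for the RBF kernel; the field names (`SVs`, `sv_coef`, `rho`, `gamma`) mirror libSVM's model structure, and the toy numbers are hypothetical, so treat this as an illustration rather than a drop-in replacement for `svmpredict`:

```python
import numpy as np

# Sketch: RBF-kernel decision value from the dual coefficients.
# f(x) = sum_i sv_coef[i] * exp(-gamma * ||SV_i - x||^2) - rho
# (sv_coef is assumed to already contain alpha_i * y_i).
def rbf_decision_value(x, SVs, sv_coef, rho, gamma):
    sq_dists = np.sum((SVs - x) ** 2, axis=1)
    return float(sv_coef @ np.exp(-gamma * sq_dists) - rho)

# Toy model (hypothetical numbers, for illustration only):
SVs = np.array([[0.0, 0.0], [1.0, 1.0]])
sv_coef = np.array([1.0, -1.0])
f0 = rbf_decision_value(np.array([0.0, 0.0]), SVs, sv_coef, rho=0.0, gamma=0.5)
print(f0)  # 1 - exp(-1), about 0.632
```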
Can anyone suggest a book/article explaining how and why the quantity named 'Energy' made its way into physics?
I have gone through "Lectures on Physics" (Vol. 1) by R.P. Feynman and have been convinced that at first scientists were in search of a quantity which remains constant with respect to any internal change in a closed system. The quantity later turned out to be 'Force x displacement'.
But we know that momentum is also conserved in closed systems. So why don't we invent a scalar 'mass x speed' (in order to fix the problem that momentum is a vector) and use this instead of 'Energy'? We could even take advantage of the fact that 'mass x speed' is much simpler than '1/2 (mass x square of velocity)'.
One of my teachers, on being asked this question, said that energy is more fundamental than momentum. And giving the example of a field force, he wrote:
$$\vec{F}=-\left(\frac{\partial \varphi}{\partial x}\hat{\imath}+\frac{\partial \varphi}{\partial y}\hat{\jmath}+\frac{\partial \varphi}{\partial z}\hat{k}\right)$$
And he showed that the quantity $\varphi$ turns out to be the potential energy of a particle (e.g. a point mass in case of a gravitational field) in the field at the point $\vec r = (x,y,z)$. But what is the thought behind the approach of finding a quantity whose change with respect to position describes the force? And at what point does 'Force x displacement' become more fundamental than momentum? |
This was shown by Hardy back in 1903/1904.
A mention of it can be found here: Quarterly Journal Of Pure And Applied Mathematics, Volume 35, Page 203, which is somewhere in the middle of a long paper.
Here is a snapshot in case that link does not work:
Note, the integral is slightly different, but I suppose it won't be too hard to convert it into the form you have.
See also Hardy's response to Ramanujan here: http://books.google.com/books?id=Of5G0r6DQiEC&pg=PA46. Note: 1b.
(Edit:) Since the journal itself has no reliable electronic copies, and the proof is actually somewhat more involved than just the excerpt shown above, I'll give a quick description of the proof that Hardy provided.
First is the concept of reciprocal functions of the first and second kind introduced by Cauchy. Two functions $\phi$ and $\psi$ defined on the positive real line are called reciprocal functions of the first kind if $$\phi(y) = \sqrt{\frac{2}{\pi}} \int_0^\infty \cos(y x) \psi(x) dx$$ and also the same formula with $\phi$ and $\psi$ swapped. They are called reciprocal functions of the second kind if the $\cos$ in the formula above is replaced by $\sin$. Cauchy gave several examples of each, and also examples of functions which are their own reciprocal function of the first kind (but not for the second), and proved that those functions have the following property: whenever $\alpha \beta = \pi$ $$\sqrt\alpha \left( \frac12 \phi(0) + \phi(\alpha) + \phi(2\alpha) + \cdots\right) = \sqrt\beta \left(\frac12 \psi(0) + \psi(\beta) + \psi(2\beta) + \cdots \right)$$
In the article linked above, Hardy proved the following two facts (among others).
The function $f(x) = e^{x^2/2}\int_x^\infty e^{-t^2/2}dt$ is its own reciprocal function of the second kind. (That proof is about 3 pages long, condensed in the typical Hardy fashion.) If $\phi$ and $\psi$ are reciprocal functions of the second kind, the following summation formula (analogue of the one above for functions of the first kind) holds: when $\lambda \mu = 2\pi$, one has $$ \sqrt\lambda \sum_0^\infty (-1)^n \phi\left( (n + \frac12)\lambda\right) = \sqrt\mu \sum_0^\infty (-1)^n \psi\left( (n+\frac12)\mu\right)$$ This expression being the one termed equation (9) in the screenshot above.
Hardy provided two proofs of the formula asked about above in the question. The first proof proceeds by giving the series expansion $$\int_0^\infty \frac{e^{-\alpha x^2}}{\cosh \pi x} dx = \frac{2}{\pi} \sum (-1)^n F\left( (n + \frac12)\alpha\right)$$ where $$F(x) = \sqrt\pi e^{x^2}\int_x^\infty e^{-t^2}dt$$ and using equation (9) above. The second proof is shown in section 10 in the image above: he obtained a different series expansion of the expression we want on the left hand side, which can be shown to be term by term equal to the first series expansion of the expression on the right hand side, avoiding the need to invoke equation (9). |
ISSN:
1078-0947
eISSN:
1553-5231
Discrete & Continuous Dynamical Systems - A
June 2010, Volume 28, Issue 2
A special issue dedicated to Louis Nirenberg on the Occasion of his 85th Birthday, Part I
Abstract:
"One of the wonders of mathematics is you go somewhere in the world and you meet other mathematicians, and it is like one big family. This large family is a wonderful joy."
Louis Nirenberg, in an interview in the
Notices of the AMS, April 2002.
Louis Nirenberg was born in Hamilton, Ontario on February 28, 1925. He was attracted to physics as a high school student in Montreal while attending the Baron Byng School. He completed a major in Mathematics and Physics at McGill University. Having met Richard Courant, he went to graduate school at NYU and what would become the Courant Institute. There he completed his PhD degree under the direction of James Stoker. He was then invited to join the faculty and has been there ever since. He was one of the founding members of the Courant Institute of Mathematical Sciences and is now an Emeritus Professor.
For more information please click the “Full Text” above.
Abstract:
A representation of the sharp constant in a pointwise estimate of the gradient of a harmonic function in a multidimensional half-space is obtained under the assumption that function's boundary values belong to $L^p$. This representation is concretized for the cases $p=1, 2,$ and $\infty$.
Abstract:
In this paper, we study the convexity, interior gradient estimate, Liouville type theorem and asymptotic behavior at infinity of translating solutions to mean curvature flow as well as the nonlinear flow by powers of the mean curvature.
Abstract:
We show how hypotheses for many problems can be significantly reduced if we employ the monotonicity method. We apply it to problems for the semilinear wave equation, where infinite dimensional methods are needed.
Abstract:
We prove a representation theorem for Palais-Smale sequences involving the p-Laplacian and critical nonlinearities. Applications are given to a critical problem.
Abstract:
We study strong ratio limit properties of the quotients of the heat kernels of subcritical and critical operators which are defined on a noncompact Riemannian manifold.
Abstract:
We prove the validity of the Euler-Lagrange equation for a solution $u$ to the problem of minimizing $\int_{\Omega}L(x,u(x),\nabla u(x))dx$, where $L$ is a Carathéodory function, convex in its last variable, without assuming differentiability with respect to this variable.
Abstract:
We analyze the possible nucleation of cracked surfaces in materials in which changes in the material texture have a prominent influence on the macroscopic mechanical behavior. The geometry of crack patterns is described by means of stratified families of curvature varifolds with boundary. Possible non-local actions of the microstructures are accounted for. We prove existence of ground states of the energy in terms of deformation, descriptors of the microstructure and varifolds.
Abstract:
In this paper we discuss some extensions to a fully nonlinear setting of results by Y.Y. Li and L. Nirenberg [25] about gradient estimates for non-negative solutions of linear elliptic equations. Our approach relies heavily on methods developed by L. Caffarelli in [3] and [4].
Abstract:
Given $\Omega,\Lambda \subset \R^n$ two bounded open sets, and $f$ and $g$ two probability densities concentrated on $\Omega$ and $\Lambda$ respectively, we investigate the regularity of the optimal map $\nabla \varphi$ (the optimality referring to the Euclidean quadratic cost) sending $f$ onto $g$. We show that if $f$ and $g$ are both bounded away from zero and infinity, we can find two open sets $\Omega'\subset \Omega$ and $\Lambda'\subset \Lambda$ such that $f$ and $g$ are concentrated on $\Omega'$ and $\Lambda'$ respectively, and $\nabla\varphi:\Omega' \to \Lambda'$ is a (bi-Hölder) homeomorphism. This generalizes the $2$-dimensional partial regularity result of [8].
Abstract:
It is well known through the work of Majumdar, Papapetrou, Hartle, and Hawking that the coupled Einstein and Maxwell equations admit a static multiple black hole solution representing a balanced equilibrium state of finitely many point charges. This is a result of the exact cancellation of gravitational attraction and electric repulsion under an explicit condition on the mass and charge ratio. The resulting system of particles, known as an extremely charged dust, gives rise to examples of spacetimes with naked singularities. In this paper, we consider the continuous limit of the Majumdar-Papapetrou-Hartle-Hawking solution modeling a space occupied by an extended distribution of extremely charged dust. We show that for a given smooth distribution of matter of finite ADM mass there is a continuous family of smooth solutions realizing asymptotically flat space metrics.
Abstract:
We discuss some recent developments of the theory of $BV$ functions and sets of finite perimeter in infinite-dimensional Gaussian spaces. In this context the concepts of Hausdorff measure, approximate continuity, rectifiability have to be properly understood. After recalling the known facts, we prove a Sobolev-rectifiability result and we list some open problems.
Abstract:
We present some results on the local solvability of the Nirenberg problem on $\mathbb S^2$. More precisely, an $L^2(\mathbb S^2)$ function near $1$ is the Gauss curvature of an $H^2(\mathbb S^2)$ metric on the round sphere $\mathbb S^2$, pointwise conformal to the standard round metric on $\mathbb S^2$, provided its $L^2(\mathbb S^2)$ projection into the space of spherical harmonics of degree $2$ satisfies a matrix invertibility condition, and the ratio of the $L^2(\mathbb S^2)$ norms of its projections into the space of spherical harmonics of degree $1$ vs the spaces of spherical harmonics of degrees other than $1$ is sufficiently small.
Abstract:
In continuation of [20], we analyze the properties of spectral minimal $k$-partitions of an open set $\Omega$ in $\mathbb R^3$ which are nodal, i.e. produced by the nodal domains of an eigenfunction of the Dirichlet Laplacian in $\Omega$. We show that such a partition is necessarily a nodal partition associated with a $k$-th eigenfunction. Hence we have in this case equality in Courant's nodal theorem.
Abstract:
In this paper we study the existence and multiplicity of radial solutions for Neumann problems in a ball and in an annular domain, associated to pendulum-like perturbations of mean curvature operators in Euclidean and Minkowski spaces and of the $p$-Laplacian operator. Our approach relies on the Leray-Schauder degree and the upper and lower solutions method.
Abstract:
For double-periodic and Dirichlet-periodic boundary conditions, we prove the existence of solutions to a forced semilinear wave equation with asymptotically linear nonlinearity, no resonance, and non-monotone nonlinearity when the forcing term is not flat on characteristics. The solutions are in $L^{\infty}$ when the forcing term is in $L^{\infty}$ and continuous when the forcing term is continuous. This is in contrast with the results in [4], where the non-existence of continuous solutions is established even when the forcing term is of class $C^{\infty}$ but is flat on a characteristic.
Abstract:
We present the concept of sc-smoothness for Banach spaces, which leads to new models of spaces having locally varying dimensions called M-polyfolds. We present detailed proofs of the technical results needed for the applications, in particular, to the Symplectic Field Theory. We also outline a very general Fredholm theory for bundles over M-polyfolds. The concepts are illustrated by holomorphic mappings between conformal cylinders which break apart as the modulus tends to infinity.
Abstract:
The arguments in paper [2] have been refined to prove a microscopic convexity principle for fully nonlinear elliptic equations under a more natural structure condition. We also consider the corresponding result for the partial convexity case.
Abstract:
Two indices, which are similar to the Krasnoselskii genus on the sphere, are defined on the product of spheres. They are applied to investigate multiple non-semi-trivial solutions for elliptic systems. Both constrained and unconstrained problems are studied.
Abstract:
In this paper we study the existence of solutions $u\in H^1(\mathbb R^N)$ for the problem $-\Delta u+a(x)u=|u|^{p-2}u$, where $N\ge 2$ and $p$ is superlinear and subcritical. The potential $a(x)\in L^\infty(\mathbb R^N)$ is such that $a(x)\ge c>0$ but is not assumed to have a limit at infinity. Considering different kinds of assumptions on the geometry of $a(x)$ we obtain two theorems stating the existence of positive solutions. Furthermore, we prove that there are no nontrivial solutions when a direction exists along which the potential is increasing.
Abstract:
We prove the existence of the principal eigenvalues for the Pucci operators in bounded domains with boundary condition $\frac{\partial u}{\partial\vec n}=\alpha u$ corresponding respectively to positive and negative eigenfunctions and study their asymptotic behavior when $\alpha$ goes to $+\infty$.
Please read this introduction first before looking through the solutions. Here’s a quick index to all the problems in this section.
If $P’$, $Q’$, and $R’$ are, respectively, the images of three collinear points, $P$, $Q$ and $R$, under an affine transformation, then as the directions, and hence the slopes, of the segments $\overline{PQ}$ and $\overline{PR}$ are the same, using Theorem 3, we get
$$\frac{\overline{P’Q’}}{\overline{PQ}} = \frac{\overline{Q’R’}}{\overline{QR}}$$ $$\implies \frac{\overline{P’Q’}}{\overline{Q’R’}} = \frac{\overline{PQ}}{\overline{QR}}$$
Hence, the ratio in which $Q$ divides the segment $\overline{PR}$ is equal to the ratio in which $Q’$ divides the segment $\overline{P’R’}$.
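The ratio-preservation fact above can be sanity-checked numerically. The sketch below uses an affine map and a division parameter that are arbitrary illustrative choices of mine, not values from the text:

```python
# Illustration: an affine map preserves the ratio in which a collinear
# point divides a segment. All coefficients are arbitrary example values.
def affine(p, a11=2.0, a12=1.0, a21=0.5, a22=3.0, a13=4.0, a23=-1.0):
    x, y = p
    return (a11 * x + a12 * y + a13, a21 * x + a22 * y + a23)

P, R = (1.0, 2.0), (5.0, 6.0)
t = 0.3  # Q divides PR so that PQ : QR = t : (1 - t)
Q = (P[0] + t * (R[0] - P[0]), P[1] + t * (R[1] - P[1]))

Pp, Qp, Rp = affine(P), affine(Q), affine(R)

# Recover the division parameter of Q' on segment P'R' (via x-coordinates).
t_image = (Qp[0] - Pp[0]) / (Rp[0] - Pp[0])
print(t_image)  # same ratio as before the transformation
```

The recovered parameter equals the original $t$, confirming the ratio is an affine invariant.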
2. Prove that an affine transformation always transforms an ellipse into an ellipse, a parabola into a parabola, and a hyperbola into a hyperbola.
From Exercise 6 Sec 2.3, we know that a conic in the euclidean plane is a hyperbola, a parabola, or an ellipse according as its representation in $E_2^+$ intersects the ideal line in two points, one point, or no (real) points.
As the ideal line is invariant under an affine transformation and incidence is preserved under any collineation, the number of intersections of a conic section with the ideal line also remains unchanged. Hence the classification of the transformed conic section in the euclidean plane is the same as its preimage.
Multiplying two matrices representing two different affine transformations clearly demonstrates that the group of affine transformations is not a commutative group.
Type I
$\begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
Type II
$\begin{pmatrix} 2 & 3 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
Type III
$\begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
Type IV
$\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 0 & 0 & 1 \end{pmatrix}$
Type V
$\begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{pmatrix}$
Type VI
$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
5. Give a proof of Theorem 3 based on the invariance of the cross ratio under an arbitrary collineation.
Let $\Omega$ be the ideal point on line $PQR$. Then its image, $\Omega’$, will be another point on the ideal line as the ideal line is invariant under an affine transformation.
By the invariance of the cross ratio under a collineation, we have
$$℞(P, R, Q, \Omega) = ℞(P’, R’, Q’, \Omega’)$$
From the Exercise 11, Sec 4.7, we know that
$$℞(P, R, Q, \Omega) = -\frac{(PQ)}{(RQ)}$$
where $(AB)$ denotes the length of the segment $\overline{AB}$.
Combining the two results above, we get $$\frac{(PQ)}{(RQ)} = \frac{(P’Q’)}{(R’Q’)}$$ $$\implies \frac{(PQ)}{(P’Q’)} = \frac{(RQ)}{(R’Q’)}$$
Hence the length of each segment on a line is scaled by the same factor under an affine transformation.
This is as far as I could go with the proof; I couldn’t prove that the scaling factor depends only on the direction of the line and the transformation. Maybe I’ll get back to it someday when I have the time.
6. Prove that the images of all circles under a given affine transformation are similar ellipses whose major axes are all parallel.
This one is the most unconventional use of Theorem 1 in my opinion.
We already know from #2 that the image of an ellipse will be an ellipse under an affine transformation. As a circle is an ellipse, its image will also be an ellipse under an affine transformation.
Consider a general circle $\Gamma$ that gets transformed into an ellipse $\Gamma’$. The diameter of $\Gamma$ that gets transformed into the major axis of $\Gamma’$ is also the direction in which scaling factor of the affine transformation is the largest (by the definition of the major axis of an ellipse).
Using Theorem 1, under a given affine transformation this scaling factor only depends on the direction of the diameter. Hence the diameter of any other circle in the same direction will also get transformed to be the major axis of its image ellipse. Finally, as affine transformations preserve parallelism, the major axes of the image ellipses will be parallel (in the same direction).
Consider a general affine transformation $$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ 0 & 0 & 1 \end{pmatrix}$$
$$A^2 = \begin{pmatrix} a_{11}^2 + a_{12}a_{21} & a_{12}a_{22} + a_{11}a_{12} & a_{12}a_{23} + a_{11}a_{13} + a_{13} \\ a_{21}a_{22} + a_{11}a_{21} & a_{22}^2 + a_{12}a_{21} & a_{22}a_{23} + a_{23} + a_{21}a_{13} \\ 0 & 0 & 1 \end{pmatrix}$$
For $A$ to be periodic, the following must hold. $$a_{11}^2 + a_{12}a_{21} = 1$$ $$a_{12}a_{22} + a_{11}a_{12} = 0$$ $$a_{12}a_{23} + a_{11}a_{13} + a_{13} = 0$$ $$a_{21}a_{22} + a_{11}a_{21} = 0$$ $$a_{22}^2 + a_{12}a_{21} = 1$$ $$a_{22}a_{23} + a_{23} + a_{21}a_{13} = 0$$
This implies $$a_{11} = -a_{22}$$ $$a_{21} = \frac{1 - a_{11}^2}{a_{12}}$$ $$a_{13} = \frac{-a_{12}a_{23}}{1 + a_{11}}$$
Writing $\frac{a_{23}}{1 + a_{11}}$ as $k$, for $A$ to be periodic it must have the form
$$A = \begin{pmatrix} a_{11} & a_{12} & -ka_{12} \\ \frac{1 - a_{11}^2}{a_{12}} & -a_{11} & k(1 + a_{11}) \\ 0 & 0 & 1 \end{pmatrix}$$
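A quick numerical check of this family (with arbitrarily chosen parameter values of my own, not from the text) confirms that such a matrix squares to the identity:

```python
# Verify that the derived family of period-2 affine matrices satisfies A*A = I
# for one arbitrary choice of the free parameters a11, a12 and k.
a11, a12, k = 0.5, 2.0, 1.0
a21 = (1 - a11 ** 2) / a12

A = [[a11,  a12, -k * a12],
     [a21, -a11,  k * (1 + a11)],
     [0.0,  0.0,  1.0]]

def matmul(X, Y):
    return [[sum(X[i][r] * Y[r][j] for r in range(3)) for j in range(3)]
            for i in range(3)]

A2 = matmul(A, A)
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
ok = all(abs(A2[i][j] - I[i][j]) < 1e-12 for i in range(3) for j in range(3))
print(ok)  # True
```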
8. Are there any directions such that the length of segments in these directions are unaltered by a given affine transformation?
Setting the scaling factor to 1 we get
$$(a_{12}^2 + a_{22}^2)m^2 + 2(a_{11}a_{12} + a_{21}a_{22})m + a_{11}^2 + a_{21}^2 = 1 + m^2$$ $$\implies(a_{12}^2 + a_{22}^2 - 1)m^2 + 2(a_{11}a_{12} + a_{21}a_{22})m + a_{11}^2 + a_{21}^2 - 1 = 0$$
The roots of this quadratic equation in $m$ are the two directions in which length of segments are unaltered for a given affine transformation.
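As a concrete check of this quadratic, the sketch below solves it for a shear matrix chosen arbitrarily by me and verifies that segments in the two root directions really keep their length:

```python
import math

# Directions (slopes m) whose segment lengths are preserved by the linear part
# [[a11, a12], [a21, a22]]. Coefficients follow the quadratic derived above.
# The shear below is an arbitrary example of my own choosing.
a11, a12, a21, a22 = 1.0, 0.5, 0.0, 1.0

qa = a12 ** 2 + a22 ** 2 - 1
qb = 2 * (a11 * a12 + a21 * a22)
qc = a11 ** 2 + a21 ** 2 - 1

disc = qb ** 2 - 4 * qa * qc
roots = [(-qb + s * math.sqrt(disc)) / (2 * qa) for s in (1, -1)]

# Direct check: a vector (1, m) keeps its length under the map.
for m in roots:
    before = 1 + m ** 2
    after = (a11 + a12 * m) ** 2 + (a21 + a22 * m) ** 2
    assert abs(before - after) < 1e-9
print(sorted(roots))  # the two unaltered directions
```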
9. Show that there is a maximum and minimum to the value of the factor by which an affine transformation multiplies the length of an arbitrary segment.
Consider the scaling factor $$\frac{(a_{12}^2 + a_{22}^2)m^2 + 2(a_{11}a_{12} + a_{21}a_{22})m + a_{11}^2 + a_{21}^2}{1 + m^2}$$
Dividing both numerator and denominator by $m^2$, we get $$\frac{(a_{12}^2 + a_{22}^2) + \frac{2(a_{11}a_{12} + a_{21}a_{22})}{m} + \frac{a_{11}^2 + a_{21}^2}{m^2}}{\frac{1}{m^2} + 1}$$
As $m \rightarrow \pm \infty$, this expression tends to the finite value $a_{12}^2 + a_{22}^2$. The scaling factor is therefore a continuous function on the set of directions, which is compact (a circle), so it is bounded and attains a maximum and a minimum value.
Another way to look at this is that under any affine transformation, a circle is transformed into an ellipse. This means there exist directions corresponding to the maximum and minimum scaling factors which are the preimages of the major and minor axes of the ellipse respectively. If one of the extrema didn’t exist, the image would be a parabola and if both didn’t exist, it would be a hyperbola which is not the case.
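The extreme scaling factors can also be computed explicitly: they are the singular values of the $2\times2$ linear part of the transformation (a standard linear-algebra fact, stated here as a supplement to the argument above; the shear matrix is my own arbitrary example):

```python
import math

# Greatest/least length-scaling factors of an affine map = singular values of
# its 2x2 linear part, computed in closed form via the eigenvalues of A^T A.
a11, a12, a21, a22 = 1.0, 0.5, 0.0, 1.0  # arbitrary shear example

tr = a11 ** 2 + a21 ** 2 + a12 ** 2 + a22 ** 2  # trace of A^T A
det = (a11 * a22 - a12 * a21) ** 2              # determinant of A^T A
root = math.sqrt(tr * tr - 4 * det)
s_max = math.sqrt((tr + root) / 2)  # maximum scaling factor
s_min = math.sqrt((tr - root) / 2)  # minimum scaling factor

# Every direction's scaling factor lies between s_min and s_max.
for deg in range(0, 180, 5):
    th = math.radians(deg)
    vx, vy = math.cos(th), math.sin(th)  # unit direction vector
    scale = math.hypot(a11 * vx + a12 * vy, a21 * vx + a22 * vy)
    assert s_min - 1e-9 <= scale <= s_max + 1e-9
print(round(s_max, 4), round(s_min, 4))
```

For an area-preserving map such as this shear, the product of the two extreme factors is $1$.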
10. Prove that the directions in which an affine transformation multiplies lengths by the greatest and least factors are perpendicular.
This is similar to the proof above.
Under any affine transformation a circle is transformed into an ellipse with the major and minor axes being the directions of most and least scaling respectively. We know that the major and minor axes of an ellipse are perpendicular to each other. Hence, the directions in which an affine transformation multiplies lengths by the greatest and least factors are perpendicular.
11. Let $\Gamma’: a’(x’_1)^2 + 2b’x’_1x’_2 + c’(x’_2)^2 + 2d’x’_1x’_3 + 2e’x’_2x’_3 + f’(x’_3)^2 = 0$ be the image of the conic $\Gamma: a(x_1)^2 + 2bx_1x_2 + c(x_2)^2 + 2dx_1x_3 + 2ex_2x_3 + f(x_3)^2 = 0$ under an arbitrary affine transformation. What relation, if any, exists between the quantities $b’^2 - a’c’$ and $b^2 - ac$? What relation, if any, exists between the discriminants of $\Gamma$ and $\Gamma’$?
This is pretty straightforward; it’s not even related to affine transformations specifically.
Under the collineation $X’ = AX$, a general conic $X^TBX = 0$ gets transformed to $(A^{-1}X’)^TB(A^{-1}X’) = 0$, or $X’^TA^{-T}BA^{-1}X’ = 0$. So the new matrix of the quadratic form is $B’ = A^{-T}BA^{-1}$. This means $B = A^TB’A$.
$b^2 - ac$ and $b’^2 - a’c’$ are the $2\times2$ subdeterminants of $B$ and $B’$ respectively. Expanding $B = A^TB’A$ and calculating the value of these subdeterminants, it’s easy to see that
$$b^2 - ac = (a_{11}a_{22} - a_{12}a_{21})^2(b’^2 - a’c’)$$
As the first term is a square, the signs of both $b^2 - ac$ and $b’^2 - a’c’$ terms are the same. This once again proves that an affine transformation maintains the euclidean classification of conics.
The discriminant of a conic is the determinant of its quadratic form. From $B = A^TB’A$, using the multiplication rule for determinants, we get $$|B| = |A^T||B’||A|$$ As the determinant of the transpose equals the determinant of the matrix, we get $$|B| = |A|^2|B’|$$
This shows that nonsingular conics remain nonsingular under an affine transformation.
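The determinant relation can be verified numerically from the form $B = A^TB'A$ derived above (which avoids inverting $A$); the particular matrices below are arbitrary illustrative choices of mine:

```python
# Numerical check of |B| = |A|^2 |B'| via B = A^T B' A.
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def matmul(X, Y):
    return [[sum(X[i][r] * Y[r][j] for r in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(M):
    return [list(row) for row in zip(*M)]

A  = [[2.0, 1.0, 3.0], [0.5, 3.0, -1.0], [0.0, 0.0, 1.0]]  # affine collineation
Bp = [[1.0, 2.0, 0.0], [2.0, -1.0, 1.0], [0.0, 1.0, 4.0]]  # symmetric conic matrix B'

B = matmul(transpose(A), matmul(Bp, A))
print(abs(det3(B) - det3(A) ** 2 * det3(Bp)) < 1e-9)  # True
```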
12. Let $\bigtriangleup PQR$ and $\bigtriangleup P’Q’R’$ be two finite triangles. Prove that there is a unique affine transformation which maps $P$ into $P’$, $Q$ into $Q’$, and $R$ into $R’$.
As we have four sets of lines and their images, no three of which are concurrent, $PQ \rightarrow P’Q’$, $QR \rightarrow Q’R’$, $PR \rightarrow P’R’$ and the invariant ideal line, by the dual of Theorem 4, Sec 5.8, we can uniquely determine an affine transformation to perform this mapping.
13. Using the fact that the diagonals of a parallelogram bisect each other, give a geometric proof of the fact that if $P’$, $Q’$ and $R’$ are, respectively, the images of three collinear points $P$, $Q$, $R$ under an affine transformation, and if $Q$ is the midpoint of the segment $\overline{PR}$, then $Q’$ is the midpoint of the segment $\overline{P’R’}$.
As an affine transformation preserves parallelism between lines, a parallelogram will transform into a parallelogram under an affine transformation. Also, as a general collineation preserves incidence, the point of intersection of the two diagonals of the original parallelogram will be transformed into the point of intersection of the two diagonals of the transformed parallelogram.
Hence, if we consider two diagonally opposite vertices of the original parallelogram to be $P$ and $R$ with the point of intersection of its diagonals being $Q$, it follows that $Q’$ will be the midpoint of $P’R’$ as it will be the point of intersection of $P’R’$ and the other diagonal of the transformed parallelogram.
14. Using the appropriate property of a general parallelogram, prove that if $P’$, $Q’$, $R’$ are, respectively the images of three collinear points $P$, $Q$, $R$ under an affine transformation, and if $Q$ divides the segment $\overline{PR}$ in the ratio $\frac{1}{3}$ then $Q’$ divides the segment $\overline{P’R’}$ in the ratio $\frac{1}{3}$.
Let $P$ and $R$ be diagonally opposite sides of a parallelogram $PSRT$ with $Q$ dividing $\overline{PR}$ in the ratio $1:3$. Let $B$ be the point where the two diagonals intersect.
Consider $\bigtriangleup PTS$. $PB$ is one of the medians of the triangle and as $Q$ divides it in the ratio $2:1$, $TA$ must also be a median of the triangle and $A$ must be the midpoint of $PS$.
As we have already proven in #13 that the midpoint of a segment will be transformed to the midpoint of the image of that segment under an affine transformation, $T’A’$ and $P’B’$ will remain medians of $\bigtriangleup P’T’S’$. This means the intersection of the medians, $Q’$, will divide $P’B’$ in the ratio $2:1$. Hence $Q’$ will divide $P’R’$ in the ratio $1:3$.
15. Can Exercise 14 be generalized to provide a proof of the fact that if $P’$, $Q’$, $R’$ are respectively the images of three collinear points $P$, $Q$, $R$ under an affine transformation, and if $Q$ divides the segment $\overline{PR}$ in the ratio $\frac{1}{n}$ then $Q’$ divides the segment $\overline{P’R’}$ in the ratio $\frac{1}{n}$?
From the solution to the previous exercise, we know that the line connecting one of the vertices of the parallelogram to the midpoint of the opposite vertex intersects the opposite diagonal in a point that divides the diagonal in the ratio $1:3$. This statement can be generalized to the following statement.
The line joining a vertex of the parallelogram to the point that divides the opposite side in the ratio $1: n-1$ intersects the diagonal opposite to the vertex at a point that divides the diagonal in the ratio $1:n$.
I didn’t prove this statement but had a hunch and tried it out in geogebra to find, to my pleasant surprise, that it’s correct. Geogebra is so useful. I also want to acknowledge my engineering background that gave me the nudge to just “try it out”. Maybe when I get more time I’ll come back to figure out the proof.
In this parallelogram,
$BC:BC = 1:1 \implies BG:GD = 1:2$
$BE:EC = 1:2 \implies BL:LD = 1:3$
$BJ:JC = 1:3 \implies BH:HD = 1:4$
$BF:FC = 1:4 \implies BI:ID = 1:5$
Using the general statement above we can generalize the proof of exercise 14 to prove that if $P’$, $Q’$, $R’$ are respectively the images of three collinear points $P$, $Q$, $R$ under an affine transformation, and if $Q$ divides the segment $\overline{PR}$ in the ratio $\frac{1}{n}$, then $Q’$ divides the segment $\overline{P’R’}$ in the ratio $\frac{1}{n}$.
As it is known from the classification of finite subgroups of $SO(3)$ (for example, here on page 3), there are such groups $G$ and their orders $e_i$ of stabilizer subgroups of point of set $S$ which contains points that are fixed by some nontrivial element of $G$:
Cyclic: $n$, $n$
Dihedral: $2$, $2$, $n/2$
Tetrahedral: $2$, $3$, $3$
Octahedral: $2$, $3$, $4$
Icosahedral: $2$, $3$, $5$
It is easy to check that there are all possible cases of orders $e_i$ using Riemann-Hurwitz formula for ramified covering $X \rightarrow X/G$ by this subgroup ($n=|G|$ and $g=g(X/G)$):
$$2 = n (2-2g) - \sum \frac{n}{e_i}(e_i-1)$$
Here $\frac{n}{e_i}$ is the size of the orbit and $e_i$ is the ramification index, equal to the size of the stabilizer subgroup.
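The claim that these are exactly the possible stabilizer patterns can be spot-checked by plugging each family into the Riemann-Hurwitz identity with $g = 0$; the group orders below are the standard ones for each family:

```python
# Check 2 = n(2 - 2g) - sum (n/e_i)(e_i - 1), with g = 0, for each family of
# finite subgroups of SO(3) and its stabilizer orders e_i.
def rh_lhs(n, orders, g=0):
    return n * (2 - 2 * g) - sum((n // e) * (e - 1) for e in orders)

cases = []
for n in (2, 3, 4, 5, 6, 12):   # cyclic groups of order n
    cases.append((n, (n, n)))
for n in (4, 8, 12, 20):        # dihedral groups of (even) order n
    cases.append((n, (2, 2, n // 2)))
cases += [(12, (2, 3, 3)),      # tetrahedral
          (24, (2, 3, 4)),      # octahedral
          (60, (2, 3, 5))]      # icosahedral

print(all(rh_lhs(n, orders) == 2 for n, orders in cases))  # True
```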
I am interested in whether it is possible to prove that there is only one group with the given orders of stabilizers. My plan is like this:
Let $X$, $Y$ be two Riemann spheres with finite subgroups $G$, $H$ with equal orbit sizes.
Then both $X/G$ and $Y/H$ are of genus $0$, so Riemann spheres.
If there are three orbits, there are three branch points on $X/G$ and $Y/H$, so one can identify these points using the 3-transitivity of $PSL(2,\mathbb C)$.
I suppose that one can also identify two ramified coverings of the Riemann sphere by another Riemann sphere with equal ramification, but I have no idea how.
Then $G$ is the group of deck transformations of this covering.
Keith Winstein,
EDIT: Just to clarify, this answer describes the example given in Keith Winstein Answer on the King with the cruel statistical game. The Bayesian and Frequentist answers both use the same information, which is to ignore the information on the number of fair and unfair coins when constructing the intervals. If this information is not ignored, the frequentist should use the integrated Beta-Binomial Likelihood as the sampling distribution in constructing the Confidence interval, in which case the Clopper-Pearson Confidence Interval is not appropriate, and needs to be modified. A similar adjustment should occur in the Bayesian solution.
EDIT: I have also clarified the initial use of the Clopper-Pearson interval.
EDIT: alas, my alpha is the wrong way around, and my Clopper-Pearson interval is incorrect. My humblest apologies to @whuber, who correctly pointed this out, but whom I initially disagreed with and ignored.
The CI using the Clopper-Pearson method is very good
If you only get one observation, then the Clopper-Pearson interval can be evaluated analytically. Suppose the coin comes up as "success" (heads); you need to choose $\theta$ such that
$$[Pr(Bi(1,\theta)\geq X)\geq\frac{\alpha}{2}] \cap [Pr(Bi(1,\theta)\leq X)\geq\frac{\alpha}{2}]$$
When $X=1$ these probabilities are $Pr(Bi(1,\theta)\geq 1)=\theta$ and $Pr(Bi(1,\theta)\leq 1)=1$, so the Clopper-Pearson CI implies that $\theta\geq\frac{\alpha}{2}$ (and the trivially always true $1\geq\frac{\alpha}{2}$) when $X=1$. When $X=0$ these probabilities are $Pr(Bi(1,\theta)\geq 0)=1$ and $Pr(Bi(1,\theta)\leq 0)=1-\theta$, so the Clopper-Pearson CI implies that $1-\theta \geq\frac{\alpha}{2}$, or $\theta\leq 1-\frac{\alpha}{2}$, when $X=0$. So for a 95% CI we get $[0.025,1]$ when $X=1$, and $[0,0.975]$ when $X=0$.
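The single-observation interval derived above has a simple closed form, which can be written down directly:

```python
# Closed-form Clopper-Pearson interval for one Bernoulli observation, as
# derived above: [alpha/2, 1] for heads (X=1), [0, 1 - alpha/2] for tails (X=0).
def clopper_pearson_single(x, alpha=0.05):
    if x == 1:
        return (alpha / 2, 1.0)
    return (0.0, 1.0 - alpha / 2)

print(clopper_pearson_single(1))  # (0.025, 1.0)
print(clopper_pearson_single(0))  # (0.0, 0.975)
```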
Thus, one who uses the Clopper-Pearson confidence interval will never be beheaded. Upon observing the interval, it is basically the whole parameter space. But the C-P interval is doing this by giving 100% coverage to a supposedly 95% interval! Basically, the frequentist "cheats" by giving a 95% confidence interval more coverage than he/she was asked to give (although who wouldn't cheat in such a situation? If it were me, I'd give the whole $[0,1]$ interval). If the king asked for an exact 95% CI, this frequentist method would fail regardless of what actually happened (perhaps a better one exists?).
What about the Bayesian interval? (specifically the Highest Posterior Density (HPD) Bayesian interval)
Because we know a priori that both heads and tails can come up, the uniform prior is a reasonable choice. This gives a posterior distribution of $(\theta|X)\sim Beta(1+X,2-X)$. Now, all we need to do is create an interval with 95% posterior probability. Similar to the Clopper-Pearson CI, the cumulative Beta distribution is analytic here also, so that $Pr(\theta \geq \theta^{e} | x=1) = 1-(\theta^{e})^{2}$ and $Pr(\theta \leq \theta^{e} | x=0) = 1-(1-\theta^{e})^{2}$. Setting these to $0.95$ gives $\theta^{e}=\sqrt{0.05}\approx 0.224$ when $X=1$ and $\theta^{e}= 1-\sqrt{0.05}\approx 0.776$ when $X=0$. So the two credible intervals are $(0,0.776)$ when $X=0$ and $(0.224,1)$ when $X=1$.
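The HPD endpoints above follow directly from the Beta posterior's CDF, as a short computation shows:

```python
import math

# One-sided 95% HPD endpoints of the Beta(1+X, 2-X) posterior, matching the
# closed forms above: sqrt(0.05) for X = 1, and 1 - sqrt(0.05) for X = 0.
def hpd_endpoint(x, level=0.95):
    if x == 1:  # posterior Beta(2, 1): P(theta >= t) = 1 - t^2
        return math.sqrt(1 - level)
    return 1 - math.sqrt(1 - level)  # posterior Beta(1, 2), by symmetry

print(round(hpd_endpoint(1), 3))  # 0.224
print(round(hpd_endpoint(0), 3))  # 0.776
```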
Thus the Bayesian will be beheaded for his HPD credible interval in the case when he gets the bad coin and the bad coin comes up heads, which will occur with a chance of $\frac{1}{10^{12}+1}\times\frac{1}{10}\approx 0$.
First observation, the Bayesian Interval is smaller than the confidence interval. Another thing is that the Bayesian would be closer to the actual coverage stated, 95%, than the frequentist. In fact, the Bayesian is just about as close to the 95% coverage as one can get in this problem. And contrary to Keith's statement, if the bad coin is chosen, 10 Bayesians out of 100 will on average lose their head (not all of them, because the bad coin must come up heads for the interval to not contain $0.1$).
Interestingly, if the CP-interval for 1 observation was used repeatedly (so we have N such intervals, each based on 1 observation), and the true proportion was anything between $0.025$ and $0.975$, then coverage of the 95% CI will always be 100%, and not 95%! This clearly depends on the true value of the parameter! So this is at least one case where repeated use of a confidence interval does not lead to the desired level of confidence.
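The claimed 100% coverage can be computed exactly, since for a single observation there are only two possible intervals:

```python
# Exact coverage of the single-observation 95% Clopper-Pearson interval as a
# function of the true theta:
#   P(X=1) * [theta in [0.025, 1]] + P(X=0) * [theta in [0, 0.975]]
def coverage(theta, alpha=0.05):
    in_heads_interval = alpha / 2 <= theta <= 1.0        # interval when X = 1
    in_tails_interval = 0.0 <= theta <= 1.0 - alpha / 2  # interval when X = 0
    return theta * in_heads_interval + (1 - theta) * in_tails_interval

print(coverage(0.5))   # 1.0 -- 100% coverage for theta in (0.025, 0.975)
print(coverage(0.99))  # below 1: the X = 0 interval misses this theta
```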
For a genuine 95% confidence interval, by definition there should be some cases (i.e. at least one) of the observed interval which do not contain the true value of the parameter. Otherwise, how can one justify the 95% tag? Would it not be just as valid or invalid to call it a 90%, 50%, 20%, or even 0% interval?
I do not see how simply stating "it actually means 95% or more" without a complementary restriction is satisfactory. This is because the obvious mathematical solution is the whole parameter space, and the problem is trivial. Suppose I want a 50% CI? If it only bounds the false negatives, then the whole parameter space is a valid CI using only this criterion.
Perhaps a better criterion is (and this is what I believe is implicit in the definition by Keith) "as close to 95% as possible, without going below 95%". The Bayesian interval would have a coverage closer to 95% than the frequentist (although not by much), and would not go under 95% in the coverage ($\text{100%}$ coverage when $X=0$, and $100\times\frac{10^{12}+\frac{9}{10}}{10^{12}+1}\text{%} > \text{95%}$ coverage when $X=1$).
In closing, it does seem a bit odd to ask for an interval of uncertainty, and then evaluate that interval using the true value which we were uncertain about. A "fairer" comparison, for both confidence and credible intervals, to me seems like the truth of the statement of uncertainty given with the interval.
Bristlecone's native operation is the CZ, not CNOTs. However, you can transform between the two with Hadamard gates so this is sort of a trivial difference. Bristlecone can perform a CZ between any adjacent pair of qubits on a grid. You can see the grid by installing cirq and printing out the Bristlecone device:
$ pip install cirq
$ python
>>> ...
From the original blog post presenting the Bristlecone quantum chip, here is the connectivity map of the chip: each cross represents a qubit, with nearest-neighbour connectivity. If you number the qubits from left to right, top to bottom (just like how you read English), starting at $0$, then the connectivity map would be given by:
connectivity_map = {i ...
GridQubit has comparison methods defined, so sorted will give you a list of the qubits in row-major order:
>>> sorted(cirq.google.Foxtail.qubits)
[GridQubit(0, 0), GridQubit(0, 1), [...] GridQubit(1, 9), GridQubit(1, 10)]
Once you have that, you're one list comprehension away:
>>> [(q.row, q.col) for q in sorted(cirq.google.Foxtail....
Take a look again at the Hamiltonian, which is$$H = \sum_{\langle i, j \rangle} J_{i j} Z_i Z_j + \sum_{i} h_i Z_i$$Then notice that ZPowGate is generated by the Pauli Z operator, and CZPowGate is equivalent to an operator generated by $Z \otimes Z$ up to single-qubit rotations. The idea is that Step 2 of the ansatz corresponds to applying a pulse ...
Yes, it is possible to create controlled gates with an exponent in Cirq. For the specific case of the Z gate, Cirq includes a dedicated CZ gate that can be raised to a power:
cs = cirq.CZ**0.5
More generally, cirq.ControlledGate works on any gate. It's a bit clunkier than the dedicated gates, but it does support being raised to a power (as long as the ...
Cirq uses numpy's pseudo random number generator to pick measurement results, e.g. here is code from XmonStepper.simulate_measurement:
def simulate_measurement(self, index: int) -> bool:
    [...]
    prob_one = np.sum(self._pool.map(_one_prob_per_shard, args))
    result = bool(np.random.random() <= prob_one)
    [...]
Cirq ...
In Cirq v0.5.0 and later you can use the controlled_by method on any Operation:
op = cirq.X(target_qubit).controlled_by(control_qubit)
You can also use controlled_by on any Gate, i.e. before specifying the target qubits:
op = cirq.X.controlled_by(control_qubit).on(target_qubit)
And you can also make a controlled version of the gate with an initially ...
Cirq distinguishes between "running" a circuit, which is generally supposed to act like hardware would (e.g. only getting samples), and "simulating" a circuit, which has more freedom.Most "simulate" methods, like cirq.Simulator().simulate(...) have a parameter initial_state which can either be a computational basis state (specified as an integer e.g. ...
This is actually very easy in Cirq. The controlled_by method can be used to automatically make any given gate controlled by an arbitrary number of control qubits. Here is a simple example for creating an X gate with 5 controls:
import cirq
qb = [cirq.LineQubit(i) for i in range(6)]
cnX = cirq.X.controlled_by(qb[0], qb[1], qb[2], qb[3], qb[4])
circuit = ...
When using a simulator, it doesn't really matter what kind of qubit you refer to. You can even mix-and-match the types. The type of qubit only becomes relevant when you intend to run on a device, because devices have qubits at specific locations.For example, if you wanted to run on Bristlecone, you would limit yourself to GridQubit instances that actually ...
If you are looking for a more complete implementation of a quantum variational algorithm in the context of Cirq, I would recommend looking at the second example in the OpenFermion-Cirq notebook found here. It uses a custom ansatz for hydrogen in a minimal basis, but makes a bit more explicit all the required pieces. Another good example, perhaps without ...
The Fourier transform part (everything from the swaps onward) looks correct. The initialization (column of Hadamards) looks correct. But the part where you do controlled modular multiplications doesn't, because there's no operations controlled on the 2nd through fifth qubits that you are QFT-ing.You also seem to expect the output to be the period, when ...
Looking at the documentation and the GitHub, there is something called ControlledGate. This class is said to augment existing gates with a control qubit. You can look at the test file. I can see line 72:
cxa = cirq.ControlledGate(cirq.X**cirq.Symbol('a'))
Could you try:
gate = cirq.ControlledGate(cirq.X**0.5) ?
This is going to change somewhat radically in the next version of cirq, so I'll give an answer for both versions. In v0.3, in order for a simulator to understand a custom gate, the gate must implement either cirq.CompositeGate or cirq.KnownMatrix. For your case, the simplest is to implement the matrix:
# assuming cirq v0.3
import cirq
import numpy as np
...
I searched for doing a custom gate on the Cirq documentation and here are the results:
Gate sets
The xmon simulator is designed to work with operations that are either a GateOperation applying an XmonGate, a CompositeOperation that decomposes (recursively) to XmonGates, or a 1-qubit or 2-qubit operation with a KnownMatrix. By default the ...
The current version of PyQuil provides an "ISA" object that houses the information that you want about Rigetti's quantum processors, but it isn't formatted as you request. I'm a poor Python programmer, so you'll have to excuse my non-Pythonic-ness, but here's a snippet that will take a device_name and reformat the pyQuil ISA into one of your dictionaries:...
You can test stand alone the a modular multiplication circuit. In this case $\text{base} = 2$ and $N = 3$. However the smallest useful composite $N = 15 = 3 \times 5$.Let's take a well known Multiplication by 7 modulo 15 circuitWe start with input $$\ |1\rangle \text{ gives } |7\rangle$$$$\ |7\rangle \text{ gives } |4\rangle$$$$\ |4\rangle \text{ gives ...
The endian-ness of the qubits is the answer. Both QFT and phase estimation rely on certain endianness of the register, and the representations used in the controlled-unitary part has to match the endianness used in the QFT part (and in the answer). This circuit produces the expected outcome with the inverse QFT block:
In the current release of Cirq (0.4.0) there is a strong limitation on symbols: you can't scale them or add them (Why? We were worried about being pulled down the rabbit hole of implementing a whole symbolic algebra system.). Making matters worse, Cirq internally works in radians divided by pi to avoid some minor sources of floating point error. So when you ...
This is the matrix for $Z^t$:$$Z^t = \begin{bmatrix}1&0\\0&(-1)^t\end{bmatrix} = \begin{bmatrix}1&0\\0&e^{i \pi t}\end{bmatrix}$$This is the matrix for $R_Z(\pi t)$:$$R_Z(\pi t) = e^{-iZt/2} = \begin{bmatrix}e^{-i \pi t / 2}&0\\0&e^{+i \pi t / 2}\end{bmatrix} = e^{-i \pi t/2} Z^t$$Which means that$$Z^t \equiv R_Z(\pi ...
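The global-phase relation between the two matrices above can be verified numerically for a sample exponent (the value of $t$ below is an arbitrary choice of mine):

```python
import cmath

# Verify entrywise that Z**t = exp(i*pi*t/2) * Rz(pi*t) for a sample t.
t = 0.37  # arbitrary exponent
Zt = [[1, 0], [0, cmath.exp(1j * cmath.pi * t)]]
Rz = [[cmath.exp(-1j * cmath.pi * t / 2), 0],
      [0, cmath.exp(1j * cmath.pi * t / 2)]]
phase = cmath.exp(1j * cmath.pi * t / 2)

ok = all(abs(Zt[i][j] - phase * Rz[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True
```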
I am definitely biased (writing a book on quantum computing with Python and Q#), but I am a Pythonista and love using Q#. The design of the language is good for long term quantum computing development; it allows you to think more at the algorithmic level, not at the assembly level as many other quantum programming languages are targeting. It has a Jupyter ...
I would suggest to start with Quirk as it offers a drag-and-drop circuit model. Furthermore, Quirk offers some subroutines such as basic arithmetic operation (on integers) and allows to easily define new subroutines. (All drag an drop!) It can simulate up to 17 (?) qubits.Once you want to go beyond an "easy" circuit representation I suggest Microsofts Q#. ...
In my opinion, at the moment, qiskit is the most suitable one for learning and teaching.For a basic introductory material (we have used it in 14 two-day or three-day workshops), I recommend the following repo:https://gitlab.com/qkitchen/basics-of-quantum-computingthe link to workshops: https://qsoftware.lu.lv/index.php/workshops/
Cirq's simulator is a state vector simulator, which cannot be told to focus on the amplitude of a specific output state or combination of output states.Some tensor network based simulators can get benefit from focusing on a specific output state, but there isn't such a simulator in Cirq.
If you want a random computational basis state, set the input state to the integer random.randint(0, 2**qubits-1).If you want a random superposition sampled from the Haar measure, there is a method cirq.testing.random_superposition(dim=2**qubits).Once you have created your initial state, you pass it into the simulator like cirq.Simulator().simulate(...
The following code snippet will do almost what you want:class MyLayerGate(cirq.Gate):def _decompose_(self, qubits):a, b, c = qubitsreturn my_layer(a, b, c)# [will be unnecessary in v0.5.0] workaround for cirq.unitary ignoring _decompose_:def _unitary_(self):return cirq.unitary(cirq.Circuit.from_ops(...
This looks right, although I would emphasise that it is not really best practice to have to ask this question at this stage. The whole point of doing a particularly simple example is so that you can confirm that it's doing what you've already calculated analytically. It's quite important to do the analytic bit first to avoid confirmation bias.Anyway, let's ... |
An algebraic structure $\mathbf{A}$ has the congruence extension property (CEP) if for any algebraic substructure $\mathbf{B}\le\mathbf{A}$ and any congruence relation $\theta$ on $\mathbf{B}$ there exists a congruence relation $\psi$ on $\mathbf{A}$ such that $\psi\cap(B\times B)=\theta$.
A class of algebraic structures has the congruence extension property if each of its members has the congruence extension property.
For a class $\mathcal{K}$ of algebraic structures, a congruence $\theta$ on an algebra $\mathbf{B}$ is a $\mathcal{K}$-congruence if $\mathbf{B}/\theta\in\mathcal{K}$. If $\mathbf{B}$ is a subalgebra of $\mathbf{A}$, we say that a $\mathcal{K}$-congruence $\theta$ of $\mathbf{B}$ can be extended to $\mathbf{A}$ if there is a $\mathcal{K}$-congruence $\psi$ on $\mathbf{A}$ such that $\psi\cap(B\times B)=\theta$.
Note that if $\mathcal{K}$ is a variety and $B\in\mathcal{K}$ then every congruence of $\mathbf{B}$ is a $\mathcal{K}$-congruence.
A class $\mathcal{K}$ of algebraic structures has the (principal) relative congruence extension property ((P)RCEP) if for every algebra $\mathbf{A}\in\mathcal{K}$ any (principal) $\mathcal{K}$-congruence of any subalgebra of $\mathbf{A}$ can be extended to $\mathbf{A}$.
W. J. Blok and D. Pigozzi, On the congruence extension property, Algebra Universalis 38 (1997), 391–394, show that for a quasivariety $\mathcal{K}$, PRCEP implies RCEP.
What is Standard Deviation?
Standard deviation is the square root of the sum of the squared deviations divided by their number. It is also called 'mean error deviation', 'mean square error deviation', or 'root mean square deviation'. It is the second moment of dispersion. Since the sum of squared deviations is a minimum when the deviations are taken from the mean, the deviations are measured from the mean only (not from the median or mode).

The standard deviation is the Root Mean Square (RMS) average of all the deviations from the mean. It is denoted by sigma (σ).
Standard Deviation Formula
For discrete series without frequency it is given by:
\[Variance=\frac{\sum{{{\left( X-\overline{X} \right)}^{2}}}}{N}\quad \ldots (1)\]
\[\sigma =\sqrt{Variance}\]

For a discrete series with frequency, it is given by:

\[Variance=\frac{\sum{f{{\left( X-\overline{X} \right)}^{2}}}}{\sum{f}}\quad \ldots (2)\]
\[\sigma =\sqrt{Variance}\]

where $X$ is the mid value of the class interval for a continuous series.

For grouped data, the alternative (shortcut) forms of (1) and (2) are:

\[For\ (1)\to Variance=\frac{\sum{{{d}^{2}}}}{N}-{{\left( \frac{\sum{d}}{N} \right)}^{2}},\quad d=X-A\]
\[\sigma =\sqrt{Variance}\]
\[For\ (2)\to Variance=\left( \frac{\sum{f{{d}^{2}}}}{\sum{f}}-{{\left( \frac{\sum{fd}}{\sum{f}} \right)}^{2}} \right)\times {{C}^{2}},\quad d=\frac{X-A}{C}\]
\[\sigma =\sqrt{Variance}\]

where $A$ is the assumed mean and $C$ is the class width.

Note: The square of the standard deviation is called the variance. It is denoted by $\sigma^2$.

Properties of Standard Deviation
1. It is independent of origin but not independent of scale.
2. Standard deviation is always a non-negative value.
3. It is the least of all root-mean-square deviations.

Combined Standard Deviation
Suppose the mean of $n_1$ values is $\overline{X}_1$, that of $n_2$ values is $\overline{X}_2$, and the standard deviations of the $n_1$ and $n_2$ values are $\sigma_1$ and $\sigma_2$ respectively. Then the combined standard deviation of all the values is given by:

\[Variance=\frac{{{n}_{1}}\left( {{\sigma }_{1}}^{2}+{{d}_{1}}^{2} \right)+{{n}_{2}}\left( {{\sigma }_{2}}^{2}+{{d}_{2}}^{2} \right)}{{{n}_{1}}+{{n}_{2}}}\]
\[\sigma =\sqrt{Variance}\]
\[where\ {{d}_{1}}=\overline{X}-\overline{{{X}_{1}}}\ and\ {{d}_{2}}=\overline{X}-\overline{{{X}_{2}}},\]
\[\overline{X}=\text{combined mean of the } {{n}_{1}} \text{ and } {{n}_{2}} \text{ values}\]

Advantages of Standard Deviation
It is:
1. Rigidly defined
2. Based on all values
3. Capable of further algebraic treatment
4. Not very much affected by sampling fluctuations

Disadvantages of Standard Deviation
It:
1. Is difficult to understand
2. Gives undue weightage to extreme values
3. Cannot be calculated for classes with open-ended intervals
Example 01

The means of two samples of sizes 50 and 100 are 54.1 and 50.3 respectively, and their standard deviations are 8 and 7 respectively. Obtain the SD of the combined group.

Solution:
\[Given,{{n}_{1}}=50,\overline{{{X}_{1}}}=54.1,{{\sigma }_{1}}=8\]
\[{{n}_{2}}=100,\overline{{{X}_{2}}}=50.3,{{\sigma }_{2}}=7\]
\[Now,\overline{X}=\frac{{{n}_{1}}\overline{{{X}_{1}}}+{{n}_{2}}\overline{{{X}_{2}}}}{{{n}_{1}}+{{n}_{2}}}\]
\[\Rightarrow \overline{X}=\frac{\left( 50\times 54.1 \right)+\left( 100\times 50.3 \right)}{50+100}=51.57\]
\[{{\sigma }^{2}}=\frac{{{n}_{1}}\left( {{\sigma }_{1}}^{2}+{{d}_{1}}^{2} \right)+{{n}_{2}}\left( {{\sigma }_{2}}^{2}+{{d}_{2}}^{2} \right)}{{{n}_{1}}+{{n}_{2}}}\]
\[Where,{{d}_{1}}=\overline{X}-\overline{{{X}_{1}}},and,{{d}_{2}}=\overline{X}-\overline{{{X}_{2}}}\]
\[\therefore {{d}_{1}}=51.57-54.1=-2.53,\quad {{d}_{2}}=51.57-50.3=1.27\]
\[\Rightarrow {{d}_{1}}^{2}=6.40,{{d}_{2}}^{2}=1.61\]
\[\therefore {{\sigma }^{2}}=\frac{50\left( {{8}^{2}}+6.40 \right)+100\left( {{7}^{2}}+1.61 \right)}{50+100}\]
\[\Rightarrow {{\sigma }^{2}}=\frac{3520+5061}{150}=\frac{8581}{150}=57.21\]
\[\therefore \sigma =\sqrt{57.21}=7.56\]

Therefore, the standard deviation is 7.56.
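As a quick check, the combined-SD formula with the numbers from Example 01 can be evaluated in a few lines of Python (a verification sketch, not part of the original solution):

```python
import math

# Numbers from Example 01
n1, m1, s1 = 50, 54.1, 8.0
n2, m2, s2 = 100, 50.3, 7.0

# Combined mean and the deviation of each group mean from it
mean = (n1 * m1 + n2 * m2) / (n1 + n2)
d1, d2 = mean - m1, mean - m2

# Combined variance and standard deviation
var = (n1 * (s1**2 + d1**2) + n2 * (s2**2 + d2**2)) / (n1 + n2)
sigma = math.sqrt(var)
print(round(mean, 2), round(sigma, 2))  # 51.57 7.56
```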
Example 02

Ten students of a class have obtained the following marks in a particular subject out of 100. Calculate the SD: 5, 10, 20, 25, 40, 42, 45, 48, 70, 80.

Solution:
\[\overline{X}=\frac{\sum{X}}{N}=\frac{385}{10}=38.5\]
| Sl. No. | Marks (X) | $d=X-\overline{X}$ | ${{\left( X-\overline{X} \right)}^{2}}$ |
|---|---|---|---|
| 1 | 5 | -33.5 | 1122.25 |
| 2 | 10 | -28.5 | 812.25 |
| 3 | 20 | -18.5 | 342.25 |
| 4 | 25 | -13.5 | 182.25 |
| 5 | 40 | 1.5 | 2.25 |
| 6 | 42 | 3.5 | 12.25 |
| 7 | 45 | 6.5 | 42.25 |
| 8 | 48 | 9.5 | 90.25 |
| 9 | 70 | 31.5 | 992.25 |
| 10 | 80 | 41.5 | 1722.25 |
| | $\sum X=385$ | | $\sum{{{\left( X-\overline{X} \right)}^{2}}}=\sum{{{d}^{2}}}=5320.50$ |
\[\therefore Variance=\frac{\sum{{{\left( X-\overline{X} \right)}^{2}}}}{N}\]
\[\Rightarrow \sigma =\sqrt{\frac{\sum{{{\left( X-\overline{X} \right)}^{2}}}}{N}}=\sqrt{\frac{5320.50}{10}}=23.066\]
Hence the SD is 23.066.
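The same computation, applying formula (1) directly, can be verified in Python (a sketch, not part of the original solution):

```python
import math

marks = [5, 10, 20, 25, 40, 42, 45, 48, 70, 80]
n = len(marks)
mean = sum(marks) / n                          # 38.5
var = sum((x - mean) ** 2 for x in marks) / n  # 532.05
sigma = math.sqrt(var)
print(round(sigma, 3))  # 23.066
```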
Example 03

The diastolic blood pressures of men are distributed as shown in the table. Find the SD and variance.

| Pressure | 78-80 | 80-82 | 82-84 | 84-86 | 86-88 | 88-90 |
|---|---|---|---|---|---|---|
| No. of Men | 3 | 15 | 26 | 23 | 9 | 4 |

Solution: The following table gives the quantities required for calculating the standard deviation.
| Class Interval | Mid Value (X) | Frequency (f) | $X-83$ | $d=\frac{X-83}{2}$ | $d^2$ | $fd$ | $fd^2$ |
|---|---|---|---|---|---|---|---|
| 78-80 | 79 | 3 | -4 | -2 | 4 | -6 | 12 |
| 80-82 | 81 | 15 | -2 | -1 | 1 | -15 | 15 |
| 82-84 | 83 | 26 | 0 | 0 | 0 | 0 | 0 |
| 84-86 | 85 | 23 | 2 | 1 | 1 | 23 | 23 |
| 86-88 | 87 | 9 | 4 | 2 | 4 | 18 | 36 |
| 88-90 | 89 | 4 | 6 | 3 | 9 | 12 | 36 |
| Total | | $\sum f=80$ | | | | $\sum fd=32$ | $\sum f{{d}^{2}}=122$ |
\[\therefore {{\sigma }^{2}}=\left( \frac{\sum{f{{d}^{2}}}}{\sum{f}}-{{\left( \frac{\sum{fd}}{\sum{f}} \right)}^{2}} \right)\times {{C}^{2}}\]
\[\Rightarrow {{\sigma }^{2}}=\left( \frac{122}{80}-{{\left( \frac{32}{80} \right)}^{2}} \right)\times {{2}^{2}}=\left( 1.525-0.16 \right)\times 4=5.46\]

where $C=2$ is the class width.

Therefore, Variance = 5.46 and SD = σ = √5.46 = 2.337.
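Example 03 can likewise be verified with the step-deviation shortcut; the list d below holds the values of (X − 83)/2 from the frequency table:

```python
import math

# Frequency table from Example 03
f = [3, 15, 26, 23, 9, 4]   # number of men per class
d = [-2, -1, 0, 1, 2, 3]    # step deviations d = (X - 83) / 2
width = 2                   # class width

n = sum(f)                                         # 80
sum_fd = sum(fi * di for fi, di in zip(f, d))      # 32
sum_fd2 = sum(fi * di * di for fi, di in zip(f, d))  # 122
var = (sum_fd2 / n - (sum_fd / n) ** 2) * width ** 2
print(round(var, 2), round(math.sqrt(var), 3))  # 5.46 2.337
```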
So, I was doing a Calculus problem a few minutes ago and just recalled something that my real analysis professor said during a lecture years ago...
To provide context, take the function $f$ defined by $f(x) = x+4$ for example.
Let's, for example, show that $f$ is continuous at $x = 3$.
Find $\lim\limits_{x \to 3}(x+4)$ by plugging in $3$ for $x$: you get $7$.
Since $\lim\limits_{x \to 3}f(x)= f(3)$, $f$ is continuous at $x = 3$.
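For what it's worth, the limit computation in this example can be checked symbolically, e.g. with SymPy (assuming it is available):

```python
import sympy as sp

x = sp.symbols('x')
f = x + 4

# The limit as x -> 3 agrees with the value f(3), which is
# exactly what continuity at x = 3 asserts.
assert sp.limit(f, x, 3) == 7
assert f.subs(x, 3) == 7
```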
Specifically, here's what I recall my professor saying:
The way continuity is taught in Calculus requires circular logic.
Clearly, I used circular logic in my example, since I assumed I could plug in $3$ to get the limit.
With a polynomial, I don't see this being too much of a problem. If I recall, there is a proof given early on in a Calculus I class which states that if $p$ is a polynomial defined by $p(x) = a_nx^{n} + a_{n-1}x^{n-1} + \cdots + a_1x +a_0$ for some positive integer $n$, then $\lim\limits_{x \to a}p(x) = p(a)$ - which I
believe is proved before discussing continuity in Calculus. (We can use anything like Stewart's Calculus book as a textbook for a "typical" Calculus course.)
But how about trigonometric functions? Exponential functions $a^{x}$ for some constant $a > 0$? Natural logarithms? Powers of $x$, i.e. $x^{b}$, where $b$ isn't a positive integer?
How can one bypass these problems in a Calculus I course? Furthermore, is there a way to do this without using $\delta$-$\epsilon$ and just using limit theorems?
[I am willing to move this to Math Educators SE if desired, and if the question is deemed to be too broad, I can delete this.]
Claim: If $f \in {\mathscr R[a,b]}$, then $f$ has infinitely many points of continuity.
1.) I read that it is a corollary of the Lebesgue integrability criterion. Is it possible to prove the claim without invoking the concept of measure (or using less abstraction)?
2.) Here is my attempt: Given $\epsilon > 0, \exists$ a partition $P = \left\{ x_0 =a,...,x_n =b \right\} $ of $[a,b]$ such that $\sum^n_{i=1} (M_i -m_i)\Delta x_i < (b-a)\epsilon$, where $M_i= \sup \left\{ f(x) : x \in \Delta x_i \right\}$ and $ m_i= \inf \left\{ f(x) : x \in \Delta x_i \right\} $
Let $(M_j -m_j)=\min \left\{ M_i -m_i : i=0,...,n \right\} $. $ \implies (M_j-m_j)(b-a) \leq \sum^n_{i=1} (M_i -m_i)\Delta x_i < (b-a)\epsilon$ $ \implies (M_j-m_j)<\epsilon. $
Let $ c\in (x_{j-1},x_j)$ and $\delta$ be any positive number such that $(c-\delta, c+\delta) \subseteq (x_{j-1}, x_j)$.
It follows that $ \left| f(x) - f(c) \right| < \epsilon, $ whenever $ \left| x - c \right| < \delta $.
Since $c$ is arbitrary and $[x_{j-1},x_j]$ is an interval, there are infinitely many points of continuity.
There is something wrong with my proof since Thomae's function is a counterexample. Could anyone point out the mistakes in my proof? Thank you.