Apart from the ones you mentioned, another application of (a modified) Grover's algorithm that I'm aware of is solving the collision problem in complexity theory; the resulting procedure is known as the BHT algorithm.
Introduction:
The collision problem most often refers to the 2-to-1 version, which was described by Scott Aaronson in his PhD thesis. Given an even $n$ and a function $f:\{1,...,n\}\to\{1,...,n\}$, we know beforehand that $f$ is either 1-to-1 or 2-to-1. We are only allowed to make queries about the value of $f(i)$ for any $i\in\{1,2,...,n\}$. The problem then asks how many queries we need to make to determine with certainty whether $f$ is 1-to-1 or 2-to-1.
Solving the 2-to-1 version deterministically requires $n/2+1$ queries, and in general distinguishing $r$-to-1 functions from 1-to-1 functions requires $n/r+1$ queries.
Deterministic classical solution:
This is a straightforward application of the pigeonhole principle: if a function is $r$-to-1, then after $n/r+1$ queries we are guaranteed to have found a collision. If a function is 1-to-1, then no collision exists. If we are unlucky, then $n/r$ queries could return distinct answers, so $n/r+1$ queries are necessary.
Randomized classical solution:
If we allow randomness, the problem is easier. By the birthday
paradox, if we choose (distinct) queries at random, then with high
probability we find a collision in any fixed 2-to-1 function after
$\Theta(\sqrt{n})$ queries.
Quantum BHT solution:
Intuitively, the algorithm combines the square root speedup from the
birthday paradox
using (classical) randomness with the square root speedup from
Grover's (quantum) algorithm.
First, $n^{1/3}$ inputs to $f$ are selected at random and $f$ is
queried at all of them. If there is a collision among these inputs,
then we return the colliding pair of inputs. Otherwise, all these
inputs map to distinct values by $f$. Then Grover's algorithm is used
to find a new input to $f$ that collides. Since there are only
$n^{2/3}$ such inputs to $f$, Grover's algorithm can find one (if it
exists) by making only
$\mathcal{O}(\sqrt{n^{2/3}})=\mathcal{O}(n^{1/3})$ queries to $f$.
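To make the two phases concrete, here is a classical Python sketch of the BHT structure (an illustration only: the second phase is a plain classical scan standing in for Grover's search, so the quantum speedup is not reproduced, and the demo function f is an assumption):

import random

def bht_structure(f, n):
    # classical illustration of the two BHT phases (no quantum speedup here)
    k = max(1, round(n ** (1 / 3)))          # phase 1: sample ~n^(1/3) inputs
    sample = set(random.sample(range(1, n + 1), k))
    table = {}
    for i in sample:
        v = f(i)
        if v in table:
            return table[v], i               # collision already inside the sample
        table[v] = i
    # Phase 2: Grover's algorithm would search for an input outside the sample
    # whose value lies in `table`, using O(n^(1/3)) quantum queries; here we scan.
    for i in range(1, n + 1):
        if i not in sample and f(i) in table:
            return table[f(i)], i
    return None                              # no collision found: f is 1-to-1

n = 64
f = lambda i: (i + 1) // 2                   # a 2-to-1 demo function (assumption)
print(bht_structure(f, n))                   # prints a colliding pair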
Sources:
https://en.wikipedia.org/wiki/Collision_problem
https://en.wikipedia.org/wiki/BHT_algorithm
Quantum Algorithm for the Collision Problem - Gilles Brassard, Peter Hoyer, Alain Tapp
Free keywords: General Relativity and Quantum Cosmology (gr-qc); Astrophysics, Cosmology and Extragalactic Astrophysics (astro-ph.CO); Astrophysics, High Energy Astrophysical Phenomena (astro-ph.HE); Astrophysics, Instrumentation and Methods for Astrophysics (astro-ph.IM)
Abstract: We employ gravitational-wave radiometry to map the stochastic gravitational-wave background expected from a variety of contributing mechanisms and to test the assumption of isotropy using data from Advanced LIGO's first observing
run. We also search for persistent gravitational waves from point sources with
only minimal assumptions over the 20 - 1726 Hz frequency band. Finding no
evidence of gravitational waves from either point sources or a stochastic
background, we set limits at 90% confidence. For broadband point sources, we
report upper limits on the gravitational wave energy flux per unit frequency in
the range $F(f, \Theta) < (0.1 - 56) \times 10^{-8}$ erg cm$^{-2}$ s$^{-1}$
Hz$^{-1}$ (f/25 Hz)$^{\alpha-1}$ depending on the sky location $\Theta$ and the
spectral power index $\alpha$. For extended sources, we report upper limits on
the fractional gravitational wave energy density required to close the Universe
of $\Omega(f,\Theta) < (0.39-7.6) \times 10^{-8}$ sr$^{-1}$ (f/25 Hz)$^\alpha$
depending on $\Theta$ and $\alpha$. Directed searches for narrowband
gravitational waves from astrophysically interesting objects (Scorpius X-1,
Supernova 1987 A, and the Galactic Center) yield median frequency-dependent
limits on strain amplitude of $h_0 <$ (6.7, 5.5, and 7.0) $\times 10^{-25}$
respectively, at the most sensitive detector frequencies between 130 - 175 Hz.
This represents a mean improvement of a factor of 2 across the band compared to
previous searches of this kind for these sky locations, considering the
different quantities of strain constrained in each case.
ISSN:
1078-0947
eISSN:
1553-5231
Discrete & Continuous Dynamical Systems - A
January 2007, Volume 17, Issue 1
Abstract:
We study the first positive Neumann eigenvalue $\mu_1$ of the Laplace operator on a planar domain $\Omega$. We are particularly interested in how the size of $\mu_1$ depends on the size and geometry of $\Omega$. A notion of the intrinsic diameter of $\Omega$ is proposed and various examples are provided to illustrate the effect of the intrinsic diameter and its interplay with the geometry of the domain.
Abstract:
We discuss one parameter families of unimodal maps, with negative Schwarzian derivative, unfolding a saddle-node bifurcation. We show that there is a parameter set of positive but not full Lebesgue density at the bifurcation, for which the maps exhibit absolutely continuous invariant measures which are supported on the largest possible interval. We prove that these measures converge weakly to an atomic measure supported on the orbit of the saddle-node point. Using these measures we analyze the intermittent time series that result from the destruction of the periodic attractor in the saddle-node bifurcation and prove asymptotic formulae for the frequency with which orbits visit the region previously occupied by the periodic attractor.
Abstract:
This paper studies questions regarding the local and global asymptotic stability of analytic autonomous ordinary differential equations in $\mathbb{R}^n$. It is well-known that such stability can be characterized in terms of Liapunov functions. The authors prove similar results for the more geometrically motivated Dulac functions. In particular it holds that any analytic autonomous ordinary differential equation having a critical point which is a global attractor admits a Dulac function. These results can be used to give criteria of global attraction in two-dimensional systems.
Abstract:
In this work we characterize those shift spaces which can support a 1-block quasi-group operation and show the analogue of Kitchens' result: any such shift is conjugate to a product of a full shift with a finite shift. Moreover, we prove that every expansive automorphism on a compact zero-dimensional quasi-group that satisfies the medial property and commutativity and has period 2 is isomorphic to the shift map on a product of a finite quasi-group with a full shift.
Abstract:
Using the characteristic equation approach, the problem of asymptotic stability of linear neutral systems with multiple time delays is investigated in this paper. New delay-independent stability criteria are derived in terms of the spectral radius of corresponding modulus matrices. The structure information of the system matrices is taken into consideration in the proposed stability criteria; thus the conservatism found in the literature can be significantly reduced. The explicit nature of the construction permits us to directly express the algebraic criteria in terms of the plant parameters, so checking stability by our criteria can be carried out rather simply. Numerical examples are given to demonstrate the validity of the new criteria and to compare them with previous results.
Abstract:
The aim of this paper is to define and study a new kind of entropy-like invariant in the case of a probability space and a compact metric topological group of continuous endomorphisms. These new invariants are only non-zero for non-invertible maps, but many propositions can be described and the analogue of the well-known variational principle can be established.
Abstract:
We study a Schrödinger equation with a nonlocal nonlinearity, which has been considered as a model for ultra-short laser pulses. An interesting feature of this equation is that the underlying dynamical system possesses a bounded, non-compact global attractor, actually a ball in $L^2(\mathbb{R})$. Existence and instability of standing waves are also proved.
Abstract:
Motivated by the study of actions of $\mathbb{Z}^{2}$ and more general groups, and their non-cocompact subgroup actions, we investigate entropy-type invariants for deterministic systems. In particular, we define a new isomorphism invariant, the entropy dimension, and look at its behaviour on examples. We also look at other natural notions suitable for processes.
Abstract:
In this paper we study a boundary value problem with the one-dimensional $p$-Laplacian. Assuming complete resonance at $+\infty$ and partial resonance at $0^+$, the existence of at least one positive solution is proved. By strengthening our assumptions we can guarantee strict positivity of the obtained solution.
Abstract:
Existence of the global attractor is proved for the strong solutions to the 3D viscous Primitive Equations (PEs) modeling large scale ocean and atmosphere dynamics. This result is obtained under the natural assumption that the external heat source $Q$ is square integrable. Furthermore, it is shown in [20] that the fractal and Hausdorff dimensions of the global attractor for the 3D viscous PEs are both finite.
Abstract:
In this paper the global well-posedness in $L^2$ and $H^m$ of the Cauchy problem is proved for nonlinear Schrödinger-type equations. This we do by establishing regular Strichartz estimates for the corresponding linear equations and some nonlinear a priori estimates in the framework of Besov spaces. We further establish the regularity of the $H^m$-solution to the Cauchy problem.
Abstract:
For conformal hyperbolic flows, we establish explicit formulas for the Hausdorff dimension and for the pointwise dimension of an arbitrary invariant measure. We emphasize that these measures are not necessarily ergodic. The formula for the pointwise dimension is expressed in terms of the local entropy and of the Lyapunov exponents. We note that this formula was obtained before only in the special case of (ergodic) equilibrium measures, and these always possess a local product structure (which is not the case for arbitrary invariant measures). The formula for the pointwise dimension allows us to show that the Hausdorff dimension of a (nonergodic) invariant measure is equal to the essential supremum of the Hausdorff dimension of the measures in an ergodic decomposition.
Abstract:
We study the changes of the Bowen-Ruelle-Sinai measures along an arc that starts at an Anosov diffeomorphism on a two-torus and reaches the boundary of its stability component while a flat homoclinic tangency or a first cubic heteroclinic tangency is happening. The outermost diffeomorphisms of such arcs are not hyperbolic but are conjugate to the original Anosov diffeomorphism and share similar ergodic traits. In particular, the torus is a global attractor with a fully supported physical measure.
Abbreviation:
CRng$_1$
A commutative ring with identity is a ring with identity $\mathbf{R}=\langle R,+,-,0,\cdot,1\rangle$ such that $\cdot$ is commutative: $x\cdot y=y\cdot x$.
Let $\mathbf{R}$ and $\mathbf{S}$ be commutative rings with identity. A morphism from $\mathbf{R}$ to $\mathbf{S}$ is a function $h:R\rightarrow S$ that is a homomorphism:
$h(x+y)=h(x)+h(y)$, $h(x\cdot y)=h(x)\cdot h(y)$, $h(1)=1$
Remark: It follows that $h(0)=0$ and $h(-x)=-h(x)$.
Example 1: $\langle\mathbb{Z},+,-,0,\cdot,1\rangle$, the ring of integers with addition, subtraction, zero, multiplication, and one.
$0$ is a zero for $\cdot$: $0\cdot x=0$ and $x\cdot 0=0$.
$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &1\\ f(4)= &4\\ f(5)= &1\\ f(6)= &1\\ \end{array}$
Cold dark matter is thought to fill our galactic neighborhood with a density $\rho$ of about 0.3 GeV/cm${}^3$ and with a velocity $v$ of roughly 200 to 300 km/s. (The velocity dispersion is much debated.) For a given dark matter mass $m$ and nucleon scattering cross section $\sigma$, this will lead to a constant collision rate of roughly
$r \sim \rho v \sigma / m$
for every nucleon in normal matter. The kinetic energy transferred to the nucleon (which is essentially at rest) will be roughly
$\Delta E \sim 2 v^2 \frac{M m^2}{(m+M)^2}$,
where $M \approx 1$ amu $\approx 1$ GeV/c${}^2$ is the mass of a nucleon. The limits for light ($m \ll M$) and heavy ($m \gg M$) dark matter are
$\Delta E_\mathrm{light} \sim 2 v^2 \frac{m^2}{M}$ and $\Delta E_\mathrm{heavy} \sim 2 v^2 M$.
This leads to an apparent intrinsic heat production in normal matter
$\tilde{P} \sim r \Delta E / M$,
which is measured in W/kg. The limits are
$\tilde{P}_\mathrm{light} \sim 2 \rho v^3 \sigma m / M^2$ and $\tilde{P}_\mathrm{heavy} \sim 2 \rho v^3 \sigma / m$.
What existing experiment or observation sets the upper limit on $\tilde{P}$?
(Note that $\tilde{P}$ is only sensibly defined on samples large enough to hold onto the recoiling nucleon. For tiny numbers of atoms--e.g. laser trap experiments--the chance of any of the atoms colliding with dark matter is very small, and those that do will simply leave the experiment.)
The best direct limit I could find looking around the literature comes from dilution refrigerators. The NAUTILUS collaboration (resonant-mass gravitational wave antenna) cooled a 2350 kg aluminum bar down to 0.1 K and estimated that the bar provided a load of no more than 10 $\mu$W to the refrigerator. Likewise, the (state-of-the-art?) Triton dilution refrigerators from Oxford Instruments can cool a volume of (240 mm)${}^3$ (which presumably could be filled with lead for a mass of about 150 kg) down to ~8 mK. Extrapolating the cooling power curve just a bit, I estimated it handled about $10^{-7}$ W at that temperature.
In both cases, it looked like the direct limit on intrinsic heating is roughly $\tilde{P} < 10^{-9}$W/kg.
However, it looks like it's also possible to use the Earth's heat budget to set a better limit. Apparently, the Earth produces about 44 TW of power, of which about 20 TW is unexplained. Dividing this by the mass of the Earth, $6 \times 10^{24}$ kg,
limits the intrinsic heating to $\tilde{P} < 3 \times 10^{-12}$W/kg.
Is this Earth-heat budget argument correct? Is there a better limit elsewhere?
To give an example, the CDMS collaboration searches for (heavy) dark matter in the range 1 to 10${}^3$ GeV/c${}^2$ with sensitivities to cross sections greater than 10${}^{-43}$ to 10${}^{-40}$ cm${}^2$ (depending on mass). A 100 GeV dark matter candidate with a cross-section of 10${}^{-43}$ cm${}^2$ would be expected to generate $\tilde{P} \sim 10^{-27}$ W/kg, which is much too small to be observed.
On the other hand, a 100 MeV dark matter particle with a cross-section of $10^{-27}$ cm${}^2$ (which, although not nearly as theoretically motivated as heavier WIMPs, is not excluded by direct detection experiments) would be expected to generate $\tilde{P} \sim 10^{-10}$ W/kg. This would have shown up in measurements of the Earth's heat production.
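As a sanity check on these last two numbers, here is a short Python calculation of $\tilde{P}$ from the formulas above (a sketch; the velocity of 250 km/s is an assumed value inside the quoted 200-300 km/s range):

GeV = 1.602e-10          # one GeV in joules
rho = 0.3                # GeV / cm^3
v = 2.5e7                # cm / s (~250 km/s, assumed)
c = 3.0e10               # cm / s
M = 1.0                  # nucleon mass in GeV/c^2
m_nucleon_kg = 1.66e-27  # nucleon mass in kg

def heating(m, sigma):
    # P~ in W/kg for dark-matter mass m [GeV/c^2] and cross section sigma [cm^2]
    r = rho * v * sigma / m                      # collision rate per nucleon, 1/s
    dE = 2 * (v / c)**2 * M * m**2 / (m + M)**2  # energy transfer per collision, GeV
    return r * dE * GeV / m_nucleon_kg

print(heating(100.0, 1e-43))  # heavy 100 GeV WIMP: ~1e-27 W/kg
print(heating(0.1, 1e-27))    # light 100 MeV case: ~1e-10 W/kg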
EDIT: So it looks like I completely neglected the effects of coherent scattering, which has the potential to change some of these numbers by 1 to 2 orders of magnitude. Once I learn more about this, I will update the question.
Up till the quotient $\frac{\rho^2 \cos^2 \theta\sin^2 \theta}{\cos^2 \theta + \rho^2\sin^4 \theta}$ you are fine.
Now when $\rho \to 0$ the above function is dependent on $\theta$.
All right, but the dependency is not good enough to prevent the limit being zero!
there is a chance of a $0/0$ at $\theta = k\pi - \frac{\pi}{2}$.
Indeed, note that at such values of $\theta$, the denominator is not zero because the sine term is not zero! Whatever path we approach $(0,0)$ from, the denominator cannot be zero, because one of the two terms will always be non-zero.
The function will assume different values in different directions, meaning that the limit does not exist.
Once again,
not different enough to prevent convergence to zero. Yes, it behaves "very differently" if we approach along the lines $\theta = 15^\circ$ and $75^\circ$, but it doesn't matter: convergence to zero will happen.
The point is that you haven't done anything wrong, but just panicked, thinking that the function could do different things for different $\theta$. Yes, it does do different things, but near $(0,0)$ those things aren't far off each other. What's the reason?
What you've got to see is this:
what is the dominant term of the expression? Which term of the expression exerts the most influence on the expression when your point is close to $(0,0)$?
Well, $\sin \theta$ and $\cos \theta$ are always confined to $[-1,1]$, so their behaviour is always limited, in that you can always bound their behaviour by the constants $-1$ and $1$. Let us use this fact.
How do we use it? Let us make a small transformation, where we take the $\rho^2$ to the denominator.$$\frac{\rho^2 \cos^2 \theta \sin^2 \theta}{\cos^2 \theta + \rho^2 \sin^4 \theta} = \frac{\cos^2 \theta \sin^2 \theta}{\sin^4 \theta + \color{blue}{\rho^{-2}}\cos^{2} \theta}$$
Now, we look at the numerator.
Regardless of the value of $\theta$, the numerator stays between $-1$ and $1$. So the numerator doesn't really move here: all the big stuff is in the denominator.
In the denominator, the point is that
$\rho$ is super super small, so $\rho^{-2}$ is super super big. Therefore, if $\cos \theta \neq 0$, the contribution of $\rho^{-2}\cos^{2} \theta$ can be made much much larger than the contribution of $\sin^4 \theta$ by simply taking $\rho$ small enough, which crushes the quotient down to zero.
If $\cos \theta = 0$ then $\sin^4 \theta = 1$, so the denominator is $1$; but then the numerator is zero, so the given expression is zero.
Either way, you see, at least geometrically, that
independent of $\theta$, the given expression behaves very much like $0$ for $\rho$ chosen sufficiently small.
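For whatever reassurance numerics can give, here is a quick Python check (a sketch; the sample angles, including ones near the worrying $\cos\theta = 0$, are arbitrary):

import numpy as np

def g(rho, theta):
    # the quotient in polar coordinates
    return (rho**2 * np.cos(theta)**2 * np.sin(theta)**2
            / (np.cos(theta)**2 + rho**2 * np.sin(theta)**4))

thetas = np.deg2rad([15.0, 75.0, 89.9, 90.0])   # includes angles with cos(theta) ~ 0
for rho in (1e-1, 1e-3, 1e-5):
    print(rho, np.max(np.abs(g(rho, thetas))))  # the maximum shrinks with rho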
Of course, other answers will suggest how to approach this rigorously, but your takeaway should be this: as long as there's a "controlling/dominating term" (imagine a policeman trying to prevent people getting out of their houses during a curfew), there's not much that varying $\theta$ can get you: the misbehaviour of the changing $\theta$ is only so much.
Which, of course, is also the content of the famous squeeze principle, or sandwich theorem, of limits. From a rigorous point of view, the answer to this sort of question is to identify bounds for the non-dominating terms and use the squeeze principle, as the others have done.
I have posted a previous question, this is related but I think it is better to start another thread. This time, I am wondering how to generate uniformly distributed points inside the 3-d unit sphere and how to check the distribution visually and statistically too? I don't see the strategies posted there directly transferable to this situation.
The easiest way is to sample points uniformly in the corresponding cube and discard those that do not lie within the sphere. In 3D the discard happens a bit less than half the time: the cube has volume 1, while the inscribed ball has volume $\frac{4}{3}\pi r^3 = 0.523...$ for $r = 1/2$.
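A minimal sketch of this rejection step in Python (here I take the cube to be $[-1,1]^3$ around the unit ball, which gives the same $\approx 52\%$ acceptance rate):

import random

def sample_ball():
    # draw from the cube [-1, 1]^3 until the point falls inside the unit ball
    while True:
        x, y, z = (random.uniform(-1, 1) for _ in range(3))
        if x*x + y*y + z*z <= 1:   # accepted with probability pi/6 ~ 0.524
            return (x, y, z)

points = [sample_ball() for _ in range(10000)]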
You can also do this in spherical coordinates, in which case there is no rejection. First you generate the radius and the two angles at random, then you use the transition formula to recover $x$, $y$ and $z$ ($x = r \sin \theta \cos \phi$, $y = r \sin \theta \sin \phi$, $z = r \cos \theta$).
You generate $\phi$ uniformly between $0$ and $2\pi$. The radius $r$ and the inclination $\theta$ are not uniform, though. The probability that a point is inside the ball of radius $r$ is $r^3$, so the probability density function of $r$ is $3 r^2$. You can easily check that the cube root of a uniform variable has exactly this distribution, so this is how you can generate $r$. The probability that a point lies within the spherical cone of inclination at most $\theta$ is $(1-\cos\theta)/2$, valid for all $\theta \in [0, \pi]$, so the density of $\theta$ is $\sin(\theta)/2$. You can check that the arccosine of a uniform variable on $[-1,1]$ has the proper density.
Or more simply, we can simulate the cosine of $\theta$ uniformly between $-1$ and $1$.
In R this would look as shown below.
n <- 10000  # for example, n = 10,000
phi <- runif(n, max = 2*pi)
r <- runif(n)^(1/3)
cos_theta <- runif(n, min = -1, max = 1)
x <- r * sqrt(1 - cos_theta^2) * cos(phi)
y <- r * sqrt(1 - cos_theta^2) * sin(phi)
z <- r * cos_theta
In the course of writing and editing this answer, I realized that the solution is less trivial than I thought.
I think that the easiest and computationally most efficient method is to follow @whuber's method to generate $(x,y,z)$ on the unit sphere as shown on this post and scale them with $r$.
xyz <- matrix(rnorm(3*n), ncol = 3)
lambda <- runif(n)^(1/3) / sqrt(rowSums(xyz^2))
xyz <- xyz * lambda
In my opinion, the easiest option, which also generalizes to higher-dimensional balls (which is not the case for spherical coordinates, and even less so for rejection sampling), is to generate random points as the product of two random variables, $P = \frac{N}{\|N\|}\, U^{1/n}$, where $N$ is a Gaussian random vector (isotropic, i.e. pointing in any direction uniformly) normalized so that it lies on the sphere, and $U^{1/n}$ is a uniform random variable in $[0,1]$ raised to the power $1/n$, $n$ being the dimensionality of the data, which takes care of the radius.
Et voilà!
Faddeeva Package
Faddeeva / complex error function
Steven G. Johnson has written free/open-source C++ code (with wrappers for other languages) to compute the scaled complex error function $w(z) = e^{-z^2}\,\mathrm{erfc}(-iz)$, also called the Faddeeva function (and also the plasma dispersion function), for arbitrary complex arguments $z$ to a given accuracy. Download the source code from: http://ab-initio.mit.edu/Faddeeva_w.cc (updated 30 October 2012)
Given the Faddeeva function, one can easily compute Voigt functions, the Dawson function, and similar related functions. Our implementation includes special-case optimizations for purely real or imaginary $z$, making the performance competitive with specialized implementations of the Dawson function, erfcx, and erfi.
Usage
To use the code, add the following declaration to your C++ source (or header file):
#include <complex>
extern std::complex<double> Faddeeva_w(std::complex<double> z, double relerr=0);
The function Faddeeva_w(z, relerr) computes $w(z)$ to a desired relative error relerr. Omitting the relerr argument, or passing relerr=0 (or any relerr less than machine precision $\varepsilon \approx 10^{-16}$), corresponds to requesting machine precision, and in practice a relative error $< 10^{-13}$ is usually achieved. Specifying a larger value of relerr may improve performance (at the expense of accuracy).
You should also compile Faddeeva_w.cc and link it with your program, of course.
In terms of $w(z)$, some other important functions are:
$\mathrm{erfcx}(z) = e^{z^2} \mathrm{erfc}(z) = w(iz)$ (scaled complementary error function)
$\mathrm{erfc}(z) = e^{-z^2} w(iz)$ (complementary error function)
$\mathrm{erf}(z) = 1 - e^{-z^2} w(iz)$ (error function)
$\mathrm{erfi}(z) = -i\,\mathrm{erf}(iz) = -i[e^{z^2} w(z) - 1]$; for real $x$, $\mathrm{erfi}(x) = e^{x^2} \mathrm{Im}[w(x)] = \frac{\mathrm{Im}[w(x)]}{\mathrm{Re}[w(x)]}$ (imaginary error function)
$F(z) = \frac{i\sqrt{\pi}}{2} \left[ e^{-z^2} - w(z) \right]$; for real $x$, $F(x) = \frac{\sqrt{\pi}}{2}\mathrm{Im}[w(x)]$ (Dawson function)
Note that in the case of erf and erfc, we provide different equations for positive and negative Re($z$), in order to avoid numerical problems arising from multiplying exponentially large and small quantities. For erfi and $F$, there are simplifications that occur for real $x$ as noted. Furthermore, if you want to compute the Dawson function $F$ for real $x$, you can obtain the imaginary part of $w(x)$ directly, without computing the real part, by calling:
extern double ImFaddeeva_w(double x);
which computes Im[$w(x)$].
Wrappers: Matlab, GNU Octave, and Python
Wrappers are available for this function in other languages.
Matlab (also available here): A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_mex.cc (along with the help file Faddeeva_w.m). Compile it into a MEX file with:
mex -output Faddeeva_w -O Faddeeva_w_mex.cc Faddeeva_w.cc
GNU Octave: A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_oct.cc. Compile it into an Octave plugin with:
mkoctfile -DMPICH_SKIP_MPICXX=1 -DOMPI_SKIP_MPICXX=1 -s -o Faddeeva_w.oct Faddeeva_w_oct.cc Faddeeva_w.cc
Python: Our code is used to provide scipy.special.wofz in SciPy starting in version 0.12.0 (see here).
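For instance, once SciPy ≥ 0.12.0 is available, the identities listed above can be exercised directly through scipy.special.wofz (a small sketch; the test point is arbitrary):

import numpy as np
from scipy.special import wofz   # wofz(z) = w(z)

x = 1.5
w = wofz(x)
dawson = np.sqrt(np.pi) / 2 * w.imag   # F(x) = sqrt(pi)/2 * Im[w(x)]
erfi   = w.imag / w.real               # erfi(x) = Im[w(x)] / Re[w(x)]
erfcx  = wofz(1j * x).real             # erfcx(x) = w(ix) for real x
print(dawson, erfi, erfcx)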
Algorithm
This implementation uses a combination of different algorithms. For sufficiently large $|z|$, we use a continued-fraction expansion for $w(z)$ similar to those described in:
Walter Gautschi, "Efficient computation of the complex error function," SIAM J. Numer. Anal. 7(1), pp. 187–198 (1970).
G. P. M. Poppe and C. M. J. Wijers, "More efficient computation of the complex error function," ACM Trans. Math. Soft. 16(1), pp. 38–46 (1990); this is TOMS Algorithm 680.
Unlike those papers, however, we switch to a completely different algorithm for smaller $|z|$:
Mofreh R. Zaghloul and Ahmed N. Ali, "Algorithm 916: Computing the Faddeyeva and Voigt Functions," ACM Trans. Math. Soft. 38(2), 15 (2011). Preprint available at arXiv:1106.0151.
(I initially used this algorithm for all $z$, but the continued-fraction expansion turned out to be faster for larger $|z|$. On the other hand, Algorithm 916 is competitive or faster for smaller $|z|$, and appears to be significantly more accurate than the Poppe & Wijers code in some regions, e.g. in the vicinity of $|z|=1$ [although comparison with other compilers suggests that this may be a problem specific to gfortran]. Algorithm 916 also has better relative accuracy in Re[$z$] for some regions near the real-$z$ axis. You can switch back to using Algorithm 916 for all $z$ by changing USE_CONTINUED_FRACTION to 0 in the code.)
Note that this is SGJ's independent re-implementation of these algorithms, based on the descriptions in the papers only. In particular, we did not refer to the authors' Fortran or Matlab implementations (respectively), which are under restrictive "semifree" ACM copyright terms and are therefore unusable in free/open-source software.
Algorithm 916 requires an external complementary error function erfc($x$) for real arguments $x$ to be supplied as a subroutine. More precisely, it requires the scaled function $\mathrm{erfcx}(x) = e^{x^2} \mathrm{erfc}(x)$. Here, we use an erfcx routine written by SGJ that uses a combination of two algorithms: a continued-fraction expansion for large $x$ and a lookup table of Chebyshev polynomials for small $x$. (I initially used an erfcx function derived from the DERFC routine in SLATEC, modified by SGJ to compute erfcx instead of erfc, but the new erfcx routine is much faster.)
Similarly, we also implement special-case code for real $z$, where the imaginary part of $w$ is Dawson's integral. Like erfcx, this is also computed by a continued-fraction expansion for large $|x|$, a lookup table of Chebyshev polynomials for small $|x|$, and finally a Taylor expansion for very small $|x|$.
Test program
To test the code, a small test program is included at the end of Faddeeva_w.cc, which tests $w(z)$ against several known results (from Wolfram Alpha) and prints the relative errors obtained. To compile the test program, #define FADDEEVA_W_TEST in the file (or compile with -DFADDEEVA_W_TEST on Unix) and compile Faddeeva_w.cc. The resulting program prints SUCCESS at the end of its output if the errors were acceptable.
License
The software is distributed under the "MIT License", a simple permissive free/open-source license:
Copyright © 2012 Massachusetts Institute of Technology
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Note
This is a review of equilibrium statistical mechanics. Though I called it a review, it is more like a list of keywords at this moment.
For a system with \(N\) particles of \(r\) degrees of freedom each, we can always describe the microstates of the system by looking at the state of each particle. There are at least two different points of view: the \(\mu\) space (mu space) and the \(\Gamma\) space (Gamma space).
The \(\mu\) space is an \(r\)-dimensional space where each dimension corresponds to one degree of freedom of the particle. Thus a point in the \(\mu\) space represents the state of one particle. To represent the microstate of the whole system, we need \(N\) points in the \(\mu\) space.
The \(\Gamma\) space is an \(rN\)-dimensional space. In the \(\Gamma\) space, we take a holistic view: each point in the \(\Gamma\) space represents the state of all the particles. For example, we use the first \(r\) dimensions out of the \(rN\) to represent the state of the first particle, the next \(r\) dimensions to represent the state of the second particle, and so on.
Physical systems are usually composed of a large amount of particles. In principle, we could calculate the observable quantities if we know the exact motions of the particles. For example, we only need the momentum transfer per unit area to know the pressure of the gas and momentum transfer could be calculated if we know the motion of the particles.
This method is obviously unrealistic given the number of particles that we are dealing with. Alternatively, we could figure out the probability of each possible value of the observable quantities, i.e., the probability of the system being at each point in the \(\Gamma\) space. For each microscopic state, we can calculate the thermodynamic observables corresponding to it; however, many microscopic states are degenerate, sharing the same combination of thermodynamic observables.
The probability distribution of the microscopic states of the system, \(p(\{O_i\})\), is needed to estimate the observables \(\{O_i\}\). For example, to estimate the energy of the system, we take the statistical average using the distribution, \(\int E p(E) \mathrm dE\).
However the microscopic state of the system is not known in general. We have to apply some assumptions and tricks.
There are two famous approaches developed in statistical mechanics: Boltzmann's approach uses the most probable distribution, while Gibbs' approach uses ensembles. They differ not only in the way of estimating the probabilities of the states but also philosophically.
As mentioned in Description of the Microstates, many microstates have the same observables, such as energy \(E\). For each value of energy, we can figure out the number of microstates, i.e., the distribution of microstates \(\Omega(E, \cdots)\). What makes this distribution powerful is that we can obtain the total number of microstates by integrating or summing over all energies, \(\int \Omega(E, \cdots) \mathrm d E \mathrm d\cdots\). The total number of microstates is closely related to the probability of this distribution, as will be discussed below. Meanwhile, we can calculate the thermodynamic observables using the distribution.
In statistical physics, we will be focusing on the
distribution of the microstates with respect to thermodynamic variables.
In Boltzmann statistics, we follow these guidelines.
Two postulates:
Occurrence of states in phase space (Equal A Priori Probability): all microstates have the same probability of occurrence. This means that the most probable distribution over energies \(\Omega(E, \cdots)\) is the one with the largest total number of microstates, \(\int \Omega(E, \cdots) \mathrm d E \mathrm d\cdots\).
Which state the equilibrium system stays in: the most probable microstate. This means that the most probable distribution discussed in 1 will be the actual distribution of the system.
We find the most probable distribution by maximizing the total number of microstates. The Boltzmann distribution and the Boltzmann factor are derived from this.
Partition function makes it easy to calculate the observables.
Density of states \(g(E)\);
Partition function \(Z = \int g(E) \exp(-\beta E) \mathrm dE\); the variable of integration can be changed;
For a system of \(3N\) independent, identical degrees of freedom, \(Z = Z_1^{3N}\).
Macroscopic observables are calculated by taking specific transformations such as derivatives of the partition function.
Observable
Assumptions about free energy: \(A = - k_B T\ln Z\); combining this with the thermodynamic potential relations, we can calculate the entropy and then everything else.
Internal energy \(U = \langle E \rangle = - \partial_\beta \ln Z\); all quantities can be extracted from the partition function except those that serve as variables of the internal energy.
Heat capacity \(C = \partial_T U\)
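As a standard worked example of this pipeline (a textbook result, included for concreteness), take a single quantum harmonic oscillator of frequency \(\omega\):
\[ Z = \sum_{n=0}^{\infty} e^{-\beta\hbar\omega(n+1/2)} = \frac{e^{-\beta\hbar\omega/2}}{1-e^{-\beta\hbar\omega}}, \qquad U = -\partial_\beta \ln Z = \frac{\hbar\omega}{2} + \frac{\hbar\omega}{e^{\beta\hbar\omega}-1}, \qquad C = \partial_T U = k_B (\beta\hbar\omega)^2 \frac{e^{\beta\hbar\omega}}{(e^{\beta\hbar\omega}-1)^2}. \]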
Ensembles
Density of states; Liouville equation; Von Neumann equation
Equilibrium
Three ensembles
Observables
The Boltzmann factor appears many times in thermodynamics and statistical mechanics: in Boltzmann's most-probable-distribution approach, in ensemble theory, etc.
Theories of chains of oscillators in different dimensions are very useful. In fact, the fun thing is that most of the analytically solvable models in physics are harmonic oscillators.
A nice practice problem of this kind is to calculate the heat capacity of a diatomic chain: a chain of \(N\) atoms with alternating masses \(M\) and \(m\), interacting only through nearest neighbors.
The plan for this problem is as follows (a numerical sketch is given after the list):
Write down the equation of motion for the whole system;
Fourier transform the system to decouple the modes (by finding the eigen modes);
Solve the eigen modes;
Calculate the partition function of each mode;
Sum over each mode.
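Here is a hedged numerical sketch of steps 2-5 in Python (the dispersion relation is the standard nearest-neighbor result; the parameter values, the lattice constant of 1, and units with \(k_B = \hbar = 1\) are all assumptions):

import numpy as np

# Diatomic chain: omega^2 = K(1/m + 1/M) +/- K sqrt((1/m + 1/M)^2 - 4 sin^2(k/2) / (m M))
K, m, M = 1.0, 1.0, 2.0                      # assumed spring constant and masses
Ncells = 500                                 # unit cells, giving 2*Ncells modes
k = (np.arange(Ncells) + 0.5) * 2 * np.pi / Ncells - np.pi  # midpoint grid avoids k = 0

s = K * (1.0/m + 1.0/M)
root = np.sqrt(s**2 - 4.0 * K**2 * np.sin(k/2)**2 / (m * M))
omega = np.concatenate([np.sqrt(s - root),   # acoustic branch
                        np.sqrt(s + root)])  # optical branch

def heat_capacity(T):
    # sum of quantum harmonic-oscillator heat capacities over all modes
    x = omega / T
    return np.sum(x**2 * np.exp(x) / np.expm1(x)**2)

for T in (0.05, 0.5, 5.0):
    print(T, heat_capacity(T) / (2 * Ncells))  # per mode: tends to 1 (equipartition) at high T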
The problem is that we usually cannot solve the problem exactly, so we turn to Debye theory. Debye theory assumes a continuous spectrum even though the boundary conditions quantize the spectrum. So we need to turn the summation into an integration using the density of states (DoS), obtained by any of the several methods below. Finally, we analyze the different limits to get the low-temperature or high-temperature behavior.
Hint
Here are several methods to obtain DoS.
To do!
Classical theory: equipartition theorem;
Einstein theory: all modes of oscillations are the same;
Debye theory: difference between modes of oscillations are considered.
The Gibbs mixing paradox was important for the advent of quantum statistical mechanics.
Mean Field Theory is the idea of treating interactions between particles as interactions between particles and a mean field.
Now let's look at a mathematical approach to resource theories. As I've mentioned, resource theories let us tackle questions like these:
Our first approach will only tackle question 1. Given \(y\), we will only ask: is it possible to get \(x\)? This is a yes-or-no question, unlike questions 2-4, which are more complicated. If the answer is yes we will write \(x \le y\).
So, for now our resources will form a "preorder", as defined in Lecture 3.
Definition. A preorder is a set \(X\) equipped with a relation \(\le\) obeying:
reflexivity: \(x \le x\) for all \(x \in X\);
transitivity: \(x \le y\) and \(y \le z\) imply \(x \le z\) for all \(x,y,z \in X\).
All this makes sense. Given \(x\) you can get \(x\). And if you can get \(x\) from \(y\) and get \(y\) from \(z\) then you can get \(x\) from \(z\).
What's new is that we can also
combine resources. In chemistry we denote this with a plus sign: if we have a molecule of \(\text{H}_2\text{O}\) and a molecule of \(\text{CO}_2\) we say we have \(\text{H}_2\text{O} + \text{CO}_2\). We can use almost any symbol we want; Fong and Spivak use \(\otimes\) so I'll often use that. We pronounce this symbol "tensor". Don't worry about why: it's a long story, but you can live a long and happy life without knowing it.
It turns out that when you have a way to combine things, you also want a special thing that acts like "nothing". When you combine \(x\) with nothing, you get \(x\). We'll call this special thing \(I\).
Definition. A monoid is a set \(X\) equipped with:
an operation \(\otimes : X \times X \to X\), and
an element \(I \in X\),
such that these laws hold:
the associative law: \( (x \otimes y) \otimes z = x \otimes (y \otimes z) \) for all \(x,y,z \in X\),
the left and right unit laws: \(I \otimes x = x = x \otimes I\) for all \(x \in X\).
You know lots of monoids. In mathematics, monoids rule the world! I could talk about them endlessly, but today we need to combine the monoids and preorders:
Definition. A monoidal preorder is a set \(X\) with a relation \(\le\) making it into a preorder, an operation \(\otimes : X \times X \to X\) and element \(I \in X\) making it into a monoid, and obeying:
$$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$
This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg and a slice of bread into a fried egg and a piece of toast!
You know lots of monoidal preorders, too! Many of your favorite number systems are monoidal preorders:
The set \(\mathbb{R}\) of real numbers with the usual \(\le\), the binary operation \(+: \mathbb{R} \times \mathbb{R} \to \mathbb{R} \) and the element \(0 \in \mathbb{R}\) is a monoidal preorder.
Same for the set \(\mathbb{Q}\) of rational numbers.
Same for the set \(\mathbb{Z}\) of integers.
Same for the set \(\mathbb{N}\) of natural numbers.
Money is an important resource: outside of mathematics, money rules the world. We combine money by addition, and we often use these different number systems to keep track of money. In fact it was bankers who invented negative numbers, to keep track of debts! The idea of a "negative resource" was very radical: it took mathematicians over a century to get used to it.
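If you want to experiment, the compatibility condition in the definition above is easy to spot-check on a finite sample, say for \(\mathbb{N}\) with \(+\) (a small Python sketch):

from itertools import product

# Spot-check: x <= x' and y <= y' imply x + y <= x' + y', on {0,...,9}.
sample = range(10)
ok = all(x + y <= x2 + y2
         for x, x2, y, y2 in product(sample, repeat=4)
         if x <= x2 and y <= y2)
print(ok)   # True, consistent with (N, <=, +, 0) being a monoidal preorder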
But sometimes we combine numbers by multiplication. Can we get monoidal preorders this way?
Puzzle 60. Is the set \(\mathbb{N}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\) and the element \(1 \in \mathbb{N}\) a monoidal preorder?
Puzzle 61. Is the set \(\mathbb{R}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}\) and the element \(1 \in \mathbb{R}\) a monoidal preorder?
Puzzle 62. One of the questions above has the answer "no". What's the least destructive way to "fix" this example and get a monoidal preorder?
Puzzle 63. Find more examples of monoidal preorders.
Puzzle 64. Are there monoids that cannot be given a relation \(\le\) making them into monoidal preorders?
Puzzle 65. A monoidal poset is a monoidal preorder that is also a poset, meaning
$$ x \le y \textrm{ and } y \le x \textrm{ imply } x = y $$
for all \(x ,y \in X\). Are there monoids that cannot be given any relation \(\le\) making them into monoidal posets?
Puzzle 66. Are there posets that cannot be given any operation \(\otimes\) and element \(I\) making them into monoidal posets?
A neutron is a neutral particle some 1800 times more massive than an electron. What makes it so unstable outside the nucleus that it has a half-life of only about 12 minutes?
How long is long?
So "half life only of about 12 min" is actually really a strange idea to most of your readers. 12 minutes is a very long time, atomically speaking! Like, the charged pions have a half-life of 18 nanoseconds, the uncharged one is 58 nano-nanoseconds (attoseconds). You might say "well those are mesons, not baryons like the proton and neutron," but actually the first new baryon ever discovered, the $\Lambda^0$, had a half-life of 0.18 ns and this was considered so
strange (in the sense of being so much longer than expected!) that the newly discovered particle was thought to have a quality called strangeness and this eventually became the name of the relevant quark; it is still today called the "strange quark." The mass difference
The neutron decays to the proton for a simple reason: a proton is made of two ups and a down, a neutron is made of two downs and an up, and the down quark is
intrinsically more massive than the up quark. Now there is a subtlety: the vast majority of the proton's and neutron's masses comes from their strong-force binding energy via $E=mc^2,$ which is why they have basically the exact same mass when fully assembled, a little over 930 MeV. (An electron volt, or eV, is the amount of energy that an electron gains when it goes through one volt of potential difference; it corresponds to a certain mass after dividing by $c^2.$) But the up quarks in these particles are about 2 MeV lighter than down quarks are (we actually don't know the real masses 100%, but the story seems to be about right), and the point is that this ~2 MeV gap is big enough that even after creating an electron (0.5 MeV) and neutrino and accounting for the greater electromagnetic self-repulsion, the proton is still 1.3 MeV lighter overall. Lighter means lower-energy, which means the total energy is spread out more across the universe, and in some sense we're talking about entropy and statistics again.
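A quick arithmetic check of that 1.3 MeV figure, from the measured masses (values quoted from standard tables, so treat them as approximate):

m_n, m_p, m_e = 939.565, 938.272, 0.511   # MeV/c^2
print(m_n - m_p)          # ~1.29 MeV: how much heavier the neutron is
print(m_n - m_p - m_e)    # ~0.78 MeV left for the electron and antineutrino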
You might wonder why this argument doesn't go one step further, to a particle with three ups. This particle exists and is called the $\Delta^{++}.$ However, this fact that "most of the mass is binding energy" comes back to bite us, because some of that binding energy, it turns out, lives in the spin configuration of the quarks that make up the nucleon. This comes down to the "Pauli exclusion principle": a down and an up-quark, being different particles, can be in "the same state" but two up-quarks must be "in different states". In the details, this exclusion principle takes the form that the up/down "flavor" configuration and spin configuration must either both be symmetric or antisymmetric, since the color-charge state is antisymmetric and the overall state must be antisymmetric. Well the up-up-up state of the $\Delta^{++}$ and down-down-down state of the $\Delta^-$ can't help but be symmetric; so the spin-state must be symmetric too, and the spin-symmetric state has a higher energy than the spin-antisymmetric state by 200-300 MeV. By contrast there are two (1u,2d) and (2u, 1d) configurations, the ones that are flavor-antisymmetric and spin-antisymmetric have total spin 1/2 and are the proton and neutron; the ones that are flavor-symmetric and spin-symmetric have total spin 3/2 and are the $\Delta^+$ and $\Delta^0.$ Anyway the point is that the extra energy which needs to be bound in this state to keep the extra spin in the system is very high, so that's why you don't see these particles in nature.
Quantum tunneling
So neutrons are a higher energy-state than protons, and quantum mechanics says that if there ever is a lower-energy state, and there is any process which can transfer energy out, then eventually the system will come to be in that lower-energy state. But, this could take a while if the transfer-process requires more energy than the system has, in which case quantum mechanics has to "tunnel" through the higher-energy state which takes some time due to time-energy uncertainty. That's what makes this process take
so long for neutrons; the only pathway involves creating a $W^-$ boson which eventually decays into an electron and an antineutrino, but the boson in the middle has a very large mass -- 80,000 MeV or so -- and there is therefore nowhere near enough mass to create one of these. QM has to tunnel through this $W$-boson state.
How does the presence of other nucleons stabilize neutrons?
On the flip side, when these baryons are within a nucleus, the attraction of the different baryons can create a force which "holds together" neutrons, in the sense that the decay of a neutron would increase the energy of the whole, formed nucleus. This actually occurs by the exact same mechanism that makes that $\Delta^{++}$ baryon cost energy, that Pauli exclusion.
So if you have dealt with atoms you know that two uncharged
atoms will still "stick" to each other by the van der Waals forces, which just have to do with "even though the total charge is 0, there is still some charge-distribution structure here, which matters a lot at short distances." The nucleons within atoms actually have a very similar property even though the color charge is more complicated than the electric charge. Basically, these protons and neutrons are being held internally together with these gluons into color-charge-neutral particles; but they can still "stick" to each other through the strong force, generally by exchanging virtual pions. The pions are mesons: combinations of a quark and an antiquark with opposite color charges, so they end up being color-neutral as well. In this case the up-antidown meson is called $\pi^+$ while the down-antiup meson is called $\pi^-$ and there are two very short-lived $\pi^0$ mesons between them, up-antiup and down-antidown. These were predicted by Yukawa a long time before we knew anything about quarks: they were, in fact, our first jump down the rabbit hole! But anyway, there are these short-lived pions that "stick" protons and neutrons together at short ranges.
Now Pauli exclusion comes in and says "hey, these protons and neutrons are
also identical spin-1/2 particles, so I demand that they be in different states." This picture is much more like the electron-shell model of the atom: there are some energy "shells" for the protons and an almost-identical set of shell levels for the neutrons; the proton levels are a little higher in energy because the electromagnetic force says that like charges repel. Imagine these are laid out side by side, left column protons, right column neutrons. If a neutron wants to become a proton by emitting an electron and antineutrino, it may need to pay an extra "cost" if there is no corresponding proton state to the left; and those levels also see a non-negligible splitting based on spin due to a strong spin-orbit interaction. In fact these effects are already enough to keep a neutron together in the case of deuterium, one proton bound to one neutron by these pions. Add one more neutron, and this becomes weakly unstable tritium with a half-life of 12 years; add one more neutron and the result is severely unstable.
Actually there is a balance here where the energy gain from being able to "drop" down several energy shells can drive a nucleus with too few neutrons and too many protons to emit a positron (an anti-electron) in reverse-beta decay, turning into a neutron in order to "drop" a few shells down in energy. Those nuclei are very useful in medicine, because the positron then usually annihilates with an electron to produce two gamma rays going in opposite directions, and detecting these gamma rays is how the PET scanner works. So you say "drink this positron-emitting fluid!" and then you can map out with the PET scanner where all of these atoms have gone in the body.
It is a main principle of physical observations that everything goes to the lowest energy state, if it can. A neutron has about an MeV more mass than the proton, and it can decay to a proton through the weak interaction.
This is the Feynman diagram of the decay:
Note the virtual W. If real, it has a mass close to 100 GeV, so it is very much off mass shell in the integrals. The two weak-interaction vertices and the large off-shell mass of the virtual W give the observed decay lifetime of about 15 minutes.
Of course one must keep in mind that neutron decays explored and defined the weak interaction; the argument would be circular if we did not have a plethora of other weak interactions which agree with the model as incorporated in the Standard Model of particle physics.
I have two questions that I didn't find in books. When calculating the time rates of change of the expectation values $\langle x \rangle$ or $\langle p_x \rangle$, why are $x$ and $p_x$ not differentiated with respect to time?
Now I want to show that if $\Psi(r, t)$ is a square-integrable wave function normalised to unity, then:
$ \frac{d}{dt} \langle x^2 \rangle = \frac{1}{m} [ \langle xp_x \rangle + \langle p_xx \rangle]$
Using Ehrenfest's theorem:
$ \frac{d}{dt} \langle x^2 \rangle = \frac{d}{dt} \int \Psi^*(r,t)x^2 \Psi(r,t) dr $
Differentiating $\Psi^*(r,t)$ and $\Psi(r,t)$ with respect to time and using the Schrödinger equation:
$ \frac{d}{dt} \langle x^2 \rangle = \frac{1}{2m i\hbar} \int \Psi^*[x^2 p_x^2 - p_x^2 x^2 ]\Psi dr $
Inside the brackets is the commutator $[x^2,p_x^2]$. I don't know how to calculate this commutator; I believe that it is not necessary, so:
$ \frac{d}{dt} \langle x^2 \rangle = \frac{1}{2m i\hbar} \int \Psi^*[x(xp_x)p_x - p_x(p_xx)x ]\Psi dr $
I can use $xp_x - p_xx = i \hbar$, but how, given that $x$ and $p_x$ do not commute?
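For the record, here is a sketch of how this route finishes: the commutator $[x^2, p_x^2]$ is needed after all, but it reduces easily using $[x, p_x] = i\hbar$:
$[x, p_x^2] = p_x[x, p_x] + [x, p_x]p_x = 2i\hbar\, p_x$,
$[x^2, p_x^2] = x[x, p_x^2] + [x, p_x^2]\,x = 2i\hbar\,(x p_x + p_x x)$,
so that
$ \frac{d}{dt} \langle x^2 \rangle = \frac{1}{2m i\hbar} \langle [x^2, p_x^2] \rangle = \frac{1}{m} [ \langle x p_x \rangle + \langle p_x x \rangle ]$, as required.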
An algebra is congruence regular if each congruence relation of the algebra is determined by any one of its congruence classes, i.e. $\forall a,b\ \big([a]_{\theta}=[b]_{\psi}\Longrightarrow\theta =\psi\big)$.
A class of algebras is congruence regular if each of its members is congruence regular.
Congruence regularity holds for many 'classical' varieties such as groups, rings and vector spaces.
This property can be characterized by a Mal'cev condition …
Abbreviation:
MQgrp
A medial quasigroup is a quasigroup $\mathbf{A}=\langle A,\cdot,\backslash,/\rangle$ such that
$\cdot$ is medial: $(xy)(zw)=(xz)(yw)$
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be medial quasigroups. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \cdot y)=h(x) \cdot h(y)$, $h(x \backslash y)=h(x) \backslash h(y)$, $h(x / y)=h(x) / h(y)$.
An
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[...]] subvariety
[[...]] expansion
[[quasigroups]] supervariety
With this problem the salient feature is that the first guess which is
$$\sum_{q=m}^n {n\choose q} (-1)^{m-q} (n-q)^k$$
does not produce the correct answer. The underlying poset has nodes for each subset $P$ of the set $Q$ of $n$ cells, where the node represents cells from $P$ plus possibly additional cells being empty, ordered by the superset relation, with the node of all cells $Q$ being empty (which does not contain any configurations) at the bottom. (With $P_1$ a superset of $P_2$, the configurations of the former constitute a subset of those of the latter.) Now a configuration that has exactly $p$ empty cells, where $m\le p\le n$, receives total weight (the sum of the weights of all nodes where it is included)
$$\sum_{q=m}^p {p\choose q} (-1)^{m-q}.$$
While this yields a weight of one for $p=m$, it is not equal to zero for $p\gt m$ and hence cannot be used to count configurations with exactly $m$ empty cells. A better approach is to choose the $m$ empty cells first and use a poset where the nodes $P$ represent the extra empty cells in addition to the $m$ already selected, which no longer participate in the inclusion-exclusion. This yields
$${n\choose m} \sum_{q=0}^{n-m} {n-m\choose q} (-1)^q(n-m-q)^k.$$
Here we are interested in the count of zero extra empty cells. The single node with $q=n-m$ is at the bottom. With this approach the weight of a configuration on the remaining $n-m$ cells with exactly $p$ extra empty cells, where $0\le p\le n-m$, is given by
$$\sum_{q=0}^{p} {p\choose q} (-1)^q.$$
This evaluates to one when $p=0$ and is zero otherwise, which is precisely the weights that we require for this problem. Observe that we have a Stirling number here, which can be seen by writing
$$\frac{n!}{m!} \frac{1}{(n-m)!}\sum_{q=0}^{n-m} {n-m\choose q} (-1)^{n-m-q} q^k$$
which yields
$$\bbox[5px,border:2px solid #00A000]{\frac{n!}{m!} {k\brace n-m}.}$$
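A brute-force check of the boxed formula in Python for small parameters (the values $n=5$, $m=2$, $k=7$ are arbitrary):

from itertools import product
from math import comb, factorial

def stirling2(k, j):
    # Stirling number of the second kind, via inclusion-exclusion
    return sum((-1)**q * comb(j, q) * (j - q)**k for q in range(j + 1)) // factorial(j)

n, m, k = 5, 2, 7
brute = sum(1 for f in product(range(n), repeat=k)
            if n - len(set(f)) == m)                 # exactly m empty cells
formula = factorial(n) // factorial(m) * stirling2(k, n - m)
print(brute, formula)                                # both print 18060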
If we had not been asked to use inclusion-exclusion, the answer would have been ${n\choose m} \times {k\brace n-m} (n-m)!$ by inspection.
There is no necessary relation between the implementation of the compiler and the output of the compiler. You could write a compiler in a language like Python or Ruby, whose most common implementations are very slow, and that compiler could output highly optimized machine code capable of outperforming C. The compiler itself would take a long time to run, ...
How can a machine built by a man be stronger than a man? This is exactly the same question. The answer is that the output of the compiler depends on the algorithms implemented by that compiler, not on the language used to implement it. You could write a really slow, inefficient compiler that produces very efficient code. There's nothing special about a ...
I want to make one point against a common assumption which is, in my opinion, fallacious to the point of being harmful when choosing tools for a job. There is no such thing as a slow or fast language.¹ On our way to the CPU actually doing something, there are many steps². At least one programmer with certain skillsets. The (formal) language they program ...
There are a number of well-studied strategies; which is best in your application depends on circumstance. Improve worst case runtime: using problem-specific insight, you can often improve the naive algorithm. For instance, there are $O(c^n)$ algorithms for Vertex Cover with $c < 1.3$ [1]; this is a huge improvement over the naive $\Omega(2^n)$ and might ...
The best algorithm that is known is to express the factorial as a product of prime powers. One can quickly determine the primes as well as the right power for each prime using a sieve approach. Computing each power can be done efficiently using repeated squaring, and then the factors are multiplied together. This was described by Peter B. Borwein, On the ...
On the contrary. At the same time that hardware is getting cheaper, several other developments take place. First, the amount of data to be processed is growing exponentially. This has led to the study of quasilinear time algorithms, and the area of big data. Think for example about search engines - they have to handle large volumes of queries, process large ...
Another perspective on "efficiency" is that polynomial time allows us to define a notion of "efficiency" that doesn't depend on machine models. Specifically, there's a variant of the Church-Turing thesis called the "effective Church-Turing Thesis" that says that any problem that runs in polynomial time on one kind of machine model will also run in polynomial ...
All the complexities you provided are true; however, they are given in Big O notation, so all additive values and constants are omitted. To answer your question we need to focus on a detailed analysis of those two algorithms. This analysis can be done by hand, or found in many books. I'll use results from Knuth's Art of Computer Programming. Average number of ...
I really like the example from the Introduction to Algorithms book, which illustrates the significance of algorithm efficiency: let's compare two sorting algorithms, insertion sort and merge sort. Their complexity is $O(n^2) = c_1n^2$ and $O(n\log n) = c_2n \lg n$ respectively. Typically merge sort has a bigger constant factor, so let's assume $c_1 < c_2$. To ...
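To see the crossover numerically, here is a tiny sketch with made-up constants $c_1 = 2$ and $c_2 = 50$ (purely illustrative, not taken from the book):

import math

c1, c2 = 2.0, 50.0     # hypothetical constants: insertion sort vs merge sort
for n in (10, 100, 1_000, 10_000):
    insertion = c1 * n * n
    merge = c2 * n * math.log2(n)
    print(n, "insertion wins" if insertion < merge else "merge wins")

Insertion sort wins for the two small sizes and loses for the two large ones, which is exactly the point of the example.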
I see four main ways to solve this problem, with different running times: $O(n^2)$ solution: this would be the solution that you propose. Note that, since the arrays are unsorted, deletion takes linear time. You carry out $n$ deletions; therefore, this algorithm takes quadratic time. $O(n \log n)$ solution: sort the arrays beforehand; then, perform a ...
The confusion arises from the difference between the conceptual description of the algorithm and its implementation. Logically merge sort is described as splitting up the array into smaller arrays, and then merging them back together. However, "splitting the array" doesn't imply "creating an entirely new array in memory", or anything like that - it could be ...
We count the number of array element reads and writes. To do bubble sort, you need $1 + 4n$ accesses (the initial write to the end, then, in the worst case, two reads and two writes to do $n$ swaps). To do the binary search, we need $2\log n + 2n + 1$ ($2\log n$ for binary search, then, in the worst case, $2n$ to shift the array elements to the right, then 1 ...
There is one forgotten thing about optimisation here. There was a longish debate about Fortran outperforming C. Putting aside the malformed debate: the same code was written in C and Fortran (as the testers thought) and performance was tested on the same data. The problem is, these languages differ: C allows pointer aliasing, while Fortran does not. So the codes ...
Quick answer: never, for practical purposes. It is not currently of any practical use. First, let's separate out "practical" compositeness testing from primality proofs. The former is good enough for almost all purposes, though there are different levels of testing people feel are adequate. For numbers under 2^64, no more than 7 Miller-Rabin tests, or ...
What you are looking for is "approximate near neighbor search" (ANNS) in the Levenshtein/edit distance. From a theoretical perspective, edit distance has so far turned out to be relatively hard for near-neighbor searches, afaik. Still, there are many results, see the references in this Ostrovsky and Rabani paper. If you are willing to consider alternative ...
Keep in mind that the factorial function grows so fast that you'll need arbitrary-sized integers to get any benefit from techniques more efficient than the naive approach. The factorial of 21 is already too big to fit in a 64-bit unsigned long long int. As far as I know, there is no algorithm to compute $n!$ (factorial of $n$) which is faster than doing the ...
The (asymptotically) most efficient deterministic primality testing algorithm is due to Lenstra and Pomerance, running in time $\tilde{O}(\log^6 n)$. If you believe the Extended Riemann Hypothesis, then Miller's algorithm runs in time $\tilde{O}(\log^4 n)$. There are many other deterministic primality testing algorithms, for example Miller's paper has an $\...
Yes, Grover's algorithm shows you can use a quantum algorithm to find an element in an unordered database of size $N$ with high probability by querying the database only $O(\sqrt{N})$ times. Any classical solution that succeeds with high probability requires $\Omega (N)$ queries to the database.
The problem you are asking about is a well-known algorithmic problem. It is actually still open how hard this problem exactly is. You should also know that there are different incarnations of this problem. In contrast to what you are asking for, usually only the distances are returned, whereas you are asking for the actual shortest paths. Notice that these ...
The $\Theta(n)$ difference-of-sums solution proposed by Tobi and Mario can in fact be generalized to any other data type for which we can define a (constant-time) binary operation $\oplus$ that is: total, such that for any values $a$ and $b$, $a \oplus b$ is defined and of the same type (or at least of some appropriate supertype of it, for which the ...
Regular language membership can be decided in $\cal{O}(n)$ time by simulating the language's (minimal) DFA (which has been precomputed).Context free language membership can be decided in $\cal{O}(n^3)$ by the CYK Algorithm.There are decidable languages that are not in $\sf{P}$, such as those in $\sf{EXPTIME}\setminus \sf{P}$.
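To make the first claim concrete, here is a minimal sketch (my own, with a hypothetical example DFA) of the $\cal{O}(n)$ membership test by simulating a precomputed DFA:

def accepts(delta, start, accepting, word):
    # one table lookup per input symbol: O(len(word)) overall
    state = start
    for symbol in word:
        state = delta.get((state, symbol))
        if state is None:          # missing transition: reject
            return False
    return state in accepting

# Hypothetical DFA for binary strings with an even number of 1s
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
print(accepts(delta, "even", {"even"}, "1101"))   # False: three 1s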
There are two answers, depending on how you define efficient. Compactness of representation: telling more with less, NFAs are more efficient. Converting a DFA to an NFA is straightforward and does not increase the size of the representation. However, there are regular languages for which the smallest DFA is exponentially bigger than the smallest NFA. A ...
Element = Sum(Array2) - Sum(Array1). I sincerely doubt this is the most optimal algorithm, but it's another way to solve the problem, and it is the simplest way to solve it. Hope it helps. If the number of added elements is more than one, this won't work. My answer has the same run time complexity for the best, worst, and average case. EDIT: After some thinking, ...
The different dimensions are independent. What you can do is compute, for each dimension $j$, how many different walks there are in just that dimension which take $t$ steps. Let us call that number $W(j,t)$. From your question, you already know how to compute these numbers with dynamic programming. Now, it's easy to count the number of walks that take $t_i$ ...
The problem, as you have probably noticed, is quite difficult. Checking the web will lead to some complex instances that you probably will not need. Here is a solution, as required (i.e. you don't need to recalculate everything from scratch). For the case of adding an edge $(u,v)$, using your already-built distance matrix, do the following: ...
I'd post this as a comment on Tobi's answer, but I don't have the reputation yet. As an alternative to calculating the sum of each list (especially if they are large lists or contain very large numbers that might overflow your data type when summed) you can use xor instead. Just calculate the xor-sum (i.e. x[0]^x[1]^x[2]...x[n]) of each list and then xor ...
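A minimal sketch of both tricks side by side (my addition; it assumes the second array is the first plus exactly one extra element):

from functools import reduce
from operator import xor

def extra_by_sum(arr1, arr2):
    # the difference-of-sums idea; Python ints cannot overflow
    return sum(arr2) - sum(arr1)

def extra_by_xor(arr1, arr2):
    # paired elements cancel under xor, only the extra element survives
    return reduce(xor, arr1 + arr2, 0)

a = [3, 1, 4, 1, 5]
b = [1, 5, 9, 4, 1, 3]
print(extra_by_sum(a, b), extra_by_xor(a, b))   # 9 9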
Other answers have addressed this from a more theoretical perspective. Here is a more practical approach. For "typical" NP-complete decision problems ("does there exist a thingy that satisfies all these constraints?"), this is what I would always try first: write a simple program that encodes your problem instance as a SAT instance. Then take a good SAT ...
The data structures you are interested in are metric trees. That is, they support efficient searches in metric spaces. A metric space is formed by a set of objects and a distance function defined among them satisfying the triangle inequality. The goal is then, given a set of objects and a query element, to retrieve those objects close enough to the query....
The knowledge of algorithms is much more than how to write fast algorithms. It also gives you problem solving methods (e.g. divide and conquer, dynamic programming, greedy, reduction, linear programming, etc.) that you can then apply when approaching a new and challenging problem. Having a suitable approach usually leads to code which is simpler and much ... |
2. Series 29. Year
(2 points)1. rat on ice
A rat is running on ice with speed $v$. Suddenly he decides to turn by $90°$ so that he keeps running with the same speed in the new direction. What is the least amount of time he needs for such a turn? Suppose that the rat's feet can move independently. The coefficient of friction between the rat's feet and the ice is $f$.
Xellos got into a skid.
(2 points)2. numismatic
Once in a while, a situation may occur that the nominal value of coins is lower than their manufacturing costs. Assume we have two coins made of a gold-silver alloy. The first one has diameter $d_{1}=1\;\mathrm{cm}$, the second one $d_{2}=2\;\mathrm{cm}$, and both have thickness $h=2\;\mathrm{mm}$. If we submerge them in mercury, the smaller one sinks to the bottom, whilst the larger one rises to the surface. If we submerge both coins, the smaller one on top of the larger, they neither rise nor sink. Assuming the smaller coin is made of pure gold, determine the fraction of silver in the larger coin (in percent of mass).
Bonus: How would the result change if the smaller one could contain silver as well?
Mirek prefers coins to banknotes.
(3 points)3. fatal fall
From a spaceship on a circular orbit at height $h=2000\;\mathrm{km}$ above the surface of the Earth, a screwdriver is thrown with speed $v=5\;\mathrm{km}\cdot \mathrm{h}^{-1}$ relative to the rocket, towards the centre of the Earth. Determine when the screwdriver will hit the surface.
Karel doesn't like screwdrivers.
(5 points)4. mirrorception
Consider an optical system composed of three semitransparent mirrors placed behind each other along one axis. Every mirror by itself reflects half of the incident light and lets the other half pass. Determine what fraction of light passes through the system of mirrors.
Bonus: Solve the problem for $n$ such mirrors.
Karel was looking at himself in the mirror.
(5 points)5. round it up
Mirek felt that during winter it is a little bit too dark for reading in his room, so he decided to make a hole in his wall for another window. He went to a glass-works first to buy the glass panes. There was one nice round piece, but before he would buy it, he needed to check whether it was not too uneven (specifically convex). He placed the pane on the glass desk of the glass-works and saw rainbow circles around the centre of the pane, caused by interference of the perpendicularly incident white light in the thin space between the two glasses. Mirek randomly chose two red circles ($λ≈700\;\mathrm{nm}$) and measured their diameters with a ruler: $d_{k}=(10.5±0.5)\;\mathrm{mm}$ and $d_{k+1}=(13.0±0.5)\;\mathrm{mm}$. Based on these measurements he managed to determine the radius of curvature of the pane. Calculate it as well and think about its errors.
Mirek doesn't want to ruin his eyes.
(5 points)P. parental
Imagine that an intelligent seven-year-old turns to you with a question: "What exactly is this superconductivity?" What would you have to explain and teach him first, in order to reasonably clarify this phenomenon without using "lies-to-children" (the term was first described in The Science of Discworld novel; it describes an explanation that helps explain a complex subject by simplifying elementary explanations so that they are understandable, though technically wrong, e.g. imagining atoms as tiny solid balls)? Try to elaborate the answer as much as possible.
Kiki was tutoring and couldn't manage without ball-shaped electrons.
(8 points)E. let's do some Fizzics!
Buy any effervescent (i.e. fizzy) tablets and measure the time it takes for a tablet to fully dissolve in water as a function of the temperature of this water. Discuss the possible causes and propose why the relation is the way it is.
Aleš Podolník was dying of a cold.
(6 points)S. serial
Which types of processes (isobaric, isochoric, isothermal and adiabatic) can be reversible? Take the relation
$$T=\frac{pV}{nR}\,,$$
where $n=1\;\mathrm{mol}$, $p=100\;\mathrm{kPa}$ and $V=22\;\mathrm{l}$. How will $T$ change if we change both $p$ and $V$ by 10 %, by 1 % or by 0.1 %? Calculate it in two ways: precisely and by using the relation $$\;\mathrm{d} T=T_{,p}\, \mathrm{d} p + T_{,V}\, \mathrm{d} V .$$
What is the difference between the results?
d gymnastics: Show that
$$\;\mathrm{d} (C f(x)) = C \mathrm{d} f(x)\,,$$
where $C$ is a constant.
Calculate
$$\;\mathrm{d} (x^2) \quad \text{and} \quad \mathrm{d} (x^3).$$
Show that
$$\;\mathrm{d}\left( \frac 1x \right)= -\frac {\mathrm{d} x}{x^2}$$
from the definition, that is $$\;\mathrm{d} \left(\frac 1x \right)= \frac {1}{x + \mathrm{d} x} - \frac 1x\,.$$
This might be handy: $(x + \mathrm{d} x)(x-\mathrm{d} x) = x^2 - (\mathrm{d} x)^2 = x^2\,.$
Bonus: It holds that $$\sin \mathrm{d} \vartheta = \mathrm{d} \vartheta \quad \text{and} \quad \cos \mathrm{d} \vartheta = 1,$$ and you have the addition formula as well: $$\sin (\alpha + \beta ) = \sin \alpha \cos \beta + \cos \alpha \sin \beta.$$ Prove that $$\;\mathrm{d}\left( \sin \vartheta \right)= \mathrm{d} \vartheta \cos \vartheta .$$ Bonus: Similarly show
$$\;\mathrm{d} \left(\ln x \right)= \frac{\mathrm{d}x}{x}$$
using $$\ln (1 + \mathrm{d} x) = \mathrm{d} x.$$
Explain why the isobaric temperature increase is lower than the isochoric one. |
ISSN:
1078-0947
eISSN:
1553-5231
Discrete & Continuous Dynamical Systems - A
August 2015, Volume 35, Issue 8
Abstract:
Two-point boundary value problems of Dirichlet type are investigated for an Ermakov-Painlevé II equation which arises out of a reduction of a three-ion electrodiffusion Nernst-Planck model system. In addition, it is shown how Ermakov invariants may be employed to solve a hybrid Ermakov-Painlevé II triad in terms of a solution of the single component integrable Ermakov-Painlevé II reduction. The latter is related to the classical Painlevé II equation.
Abstract:
We estimate the Hausdorff dimension of hyperbolic Julia sets of maps from the well-known family $F_{\lambda,n}(z) = z^n + \lambda/z^n$, $n \ge 2$, $\lambda \in \mathbb{C} \setminus \{0\}$. In particular, we show that $\dim_H J(F_{\lambda,n}) = \mathcal O (1/\ln |\lambda|)$ for large $|\lambda|$, and $\dim_H J(F_{\lambda,n}) = 1 + \mathcal O (1/\ln n)$ for large $n$ in the three cases: when $J(F_{\lambda,n})$ is a Cantor set, a Cantor set of quasicircles and a Sierpiński curve.
Abstract:
Let $M$ be a compact $n$-dimensional Riemannian manifold, End($M$) the set of the endomorphisms of $M$ with the usual $\mathcal{C}^0$ topology and $\phi:M\to\mathbb{R}$ continuous. We prove, extending the main result of [2], that there exists a dense subset $\mathcal{A}$ of End($M$) such that, if $f\in\mathcal{A}$, there exists an $f$-invariant measure $\mu_{\max}$ supported on a periodic orbit that maximizes the integral of $\phi$ among all $f$-invariant Borel probability measures.
Abstract:
In this paper, we study the global existence and regularity of Hölder continuous solutions for a series of nonlinear partial differential equations describing nonlinear waves.
Abstract:
We continue our study initiated in [4] of the interaction of a ground state with a potential considering here a class of trapping potentials. We track the precise asymptotic behavior of the solution if the interaction is weak, either because the ground state moves away from the potential or is very fast.
Abstract:
Given $s\in(0,1)$, we consider the problem of minimizing the fractional Gagliardo seminorm in $H^s$ with prescribed condition outside the ball and under the further constraint of attaining zero value in a given set $K$.
We investigate how the energy changes in dependence of such set. In particular, under mild regularity conditions, we show that adding a set $A$ to $K$ increases the energy of at most the measure of $A$ (this may be seen as a perturbation result for small sets $A$).
Also, we point out a monotonicity feature of the energy with respect to the prescribed sets and the boundary conditions.
Abstract:
In this paper, we consider the following Schrödinger equation with critical growth $$-\Delta u+(\lambda a(x)-\delta)u=|u|^{2^*-2}u \quad \hbox{ in } \mathbb{R}^N, $$ where $N\geq 5$, $2^*$ is the critical Sobolev exponent, $\delta>0$ is a constant, $a(x)\geq 0$ and its zero set is not empty. We will show that if the zero set of $a(x)$ has several isolated connected components $\Omega_1,\cdots,\Omega_k$ such that the interior of $\Omega_i (i=1, 2, ..., k)$ is not empty and $\partial\Omega_i (i=1, 2, ..., k)$ is smooth, then for any non-empty subset $J\subset \{1,2,\cdots,k\}$ and $\lambda$ sufficiently large, the equation admits a solution which is trapped in a neighborhood of $\bigcup_{j\in J}\Omega_j$. Our strategy to obtain the main results is as follows: By using local mountain pass method combining with penalization of the nonlinearities, we first prove the existence of single-bump solutions which are trapped in the neighborhood of only one isolated component of zero set. Then we construct the multi-bump solution by summing these one-bump solutions as the first approximation solution. The real solution will be obtained by delicate estimates of the error term, this last step is done by using Contraction Image Principle.
Abstract:
We study the large-time behavior of the globally coupled Winfree model in a large coupling regime. The Winfree model is the first mathematical model for the synchronization phenomenon in an ensemble of weakly coupled limit-cycle oscillators. For the dynamic formation of phase-locked states, we provide a sufficient framework in terms of geometric conditions on the coupling functions and coupling strength. We show that in the proposed framework, the emergent phase-locked state is the unique equilibrium state and it is asymptotically stable in an $l^1$-norm; further, we investigate its configurational structure. We also provide several numerical simulations, and compare them with our analytical results.
Abstract:
We consider the Cauchy problem for incompressible viscoelastic fluids in the whole space $\mathbb{R}^d$ ($d=2,3$). By introducing a new decomposition via Helmholtz's projections, we first provide an alternative proof on the existence of global smooth solutions near equilibrium. Then under additional assumptions that the initial data belong to $L^1$ and their Fourier modes do not degenerate at low frequencies, we obtain the optimal $L^2$ decay rates for the global smooth solutions and their spatial derivatives. At last, we establish the weak-strong uniqueness property in the class of finite energy weak solutions for the incompressible viscoelastic system.
Abstract:
This paper is concerned with degenerate chemotaxis-Navier-Stokes systems with position-dependent sensitivity on a two dimensional bounded domain. It is known that in the case without a position-dependent sensitivity function, Tao-Winkler (2012) constructed a globally bounded weak solution of a chemotaxis-Stokes system with any porous medium diffusion, and Winkler (2012, 2014) succeeded in proving global existence and stabilization of classical solutions to a chemotaxis-Navier-Stokes system with linear diffusion. The present work shows global existence and boundedness of weak solutions to a chemotaxis-Navier-Stokes system with position-dependent sensitivity for any porous medium diffusion.
Abstract:
In the dynamics of a rotation of the unit circle by an irrational angle $\alpha\in(0,1)$, we study the evolution of partitions whose atoms are finite unions of left-closed right-open intervals with endpoints lying on the past trajectory of the point $0$. Unlike the standard framework, we focus on partitions whose atoms are disconnected sets. We show that the refinements of these partitions eventually coincide with the refinements of a preimage of the Sturmian partition, which consists of two intervals $[0,1-\alpha)$ and $[1-\alpha,1)$. In particular, the refinements of the partitions eventually consist of connected sets, i.e., intervals. We reformulate this result in terms of Sturmian subshifts: we show that for every non-trivial factor mapping from a one-sided Sturmian subshift, satisfying a mild technical assumption, the sliding block code of sufficiently large length induced by the mapping is injective.
Abstract:
In this paper, we investigate the quasilinear Keller-Segel equations (q-K-S): \[ \left\{ \begin{split} &n_t=\nabla\cdot\big(D(n)\nabla n\big)-\nabla\cdot\big(\chi(n)\nabla c\big)+\mathcal{R}(n), \qquad x\in\Omega,\,t>0,\\ &\varrho c_t=\Delta c-c+n, \qquad x\in\Omega,\,t>0, \end{split} \right. \] under homogeneous Neumann boundary conditions in a bounded domain $\Omega\subset\mathbb{R}^N$. For both $\varrho=0$ (parabolic-elliptic case) and $\varrho>0$ (parabolic-parabolic case), we will show the global-in-time existence and uniform-in-time boundedness of solutions to equations (q-K-S) with both non-degenerate and
degenerate diffusions on the non-convex domain $\Omega$, which provide a supplement to the dichotomy boundedness vs. blow-up in parabolic-elliptic/parabolic-parabolic chemotaxis equations with degenerate diffusion, nonlinear sensitivity and logistic source. In particular, we improve the recent results obtained by Wang-Li-Mu (2014, Disc. Cont. Dyn. Syst.) and Wang-Mu-Zheng (2014, J. Differential Equations).
Abstract:
We construct two families of non-localized standing waves for the hyperbolic cubic nonlinear Schrödinger equation \[iu_t+u_{xx}-u_{yy}+|u|^2u=0.\] The first family of standing waves consists of solutions which correspond to some generalized breathers for each fixed time $t$, while solutions in the second family are periodic both in $x$ and $y$. The second family of solutions were numerically observed by Vuillon, Dutykh and Fedele in a recent preprint [17].
Abstract:
This paper analyzes the heat equation with memory in the case of kernels that are linear combinations of Gamma distributions. In this case, it is possible to rewrite the non-local equation as a local system of partial differential equations of hyperbolic type. Stability is studied in detail by analyzing the corresponding dispersion relation, providing a sufficient stability condition for the general case and sharp instability thresholds in the case of linear combinations of the first three Gamma functions.
Abstract:
We consider in this work a class of strongly perturbed semilinear heat equations with Sobolev sub-critical power nonlinearity. We first derive a Lyapunov functional in similarity variables and then use it to derive the blow-up rate. We also classify all possible asymptotic behaviors of the solution when it approaches the singularity. Finally, we describe precisely the blow-up profiles corresponding to these behaviors.
Abstract:
Motivated by some nonlinear models recently arising in Micro-Electro-Mechanical System (MEMS) and new progress on one-dimensional mean curvature type problems, we investigate the existence and exact numbers of positive solutions for a class of boundary value problems with $\varphi$-Laplacian $$ -(\varphi(u'))'=\lambda f(u)\; on (-L, L),\quad u(-L)=u(L)=0, $$ when the parameters $\lambda$ and $L$ vary. Various exact multiplicity results as well as global bifurcation diagrams are obtained. These results include the applications to one-dimensional MEMS equations with fringing field as well as mean curvature type problems. We also extend and improve one of the main results of Korman and Li [
Proc. Roy. Soc. Edinburgh Sect. A, 140(6):1197--1215, 2010] (Theorem 3.4). With the aid of numerical simulations, we find many interesting new examples, which reveal the striking complexity of bifurcation patterns for the problem.
Abstract:
We study linearly degenerate hyperbolic systems of rich type in one space dimension. It is shown that such a system admits exact traveling wave solutions after a finite time, provided that the initial data are of Riemann type outside a space interval. We prove the convergence of entropy solutions toward traveling waves in the $L^1$ norm as time goes to infinity. The traveling waves are determined explicitly in terms of the initial data and the system. We also obtain the stability of entropy solutions in $L^1$. Applications concern physical models such as the generalized extremal surface equations, the Born-Infeld system and the augmented Born-Infeld system.
Abstract:
It is a big problem to distinguish between integrable and non-integrable Hamiltonian systems. We provide a new approach to prove the non-integrability of homogeneous Hamiltonian systems with two degrees of freedom. The homogeneous degree can be taken from real values (not necessarily integer). The proof is based on the blowing-up theory which McGehee established in the collinear three-body problem. We also compare our result with Morales-Ramis theory, which is the strongest theory in this field.
Abstract:
We consider the exact controllability problem for some uncoupled semilinear wave equations with proportional, but different principal operators in a bounded domain. The control is locally distributed, and its support satisfies the geometric control condition of Bardos-Lebeau-Rauch. First, we examine the case of a nonlinearity that is asymptotically linear; using a combination of the Bardos-Lebeau-Rauch observability result for a single wave equation and a new unique continuation result for uncoupled wave equations, we solve the underlying linear control problem. The linear controllability result thus established, generalizes to higher space dimensions an earlier result of Haraux established in the one-dimensional setting. Then, applying a fixed point argument, we derive the controllability of the nonlinear problem. Afterwards, we use an iterative approach to prove a local controllability result when the nonlinearity is super-linear. Finally, we discuss some extensions of our results and some open problems.
Abstract:
In this paper, we introduce concepts of pathwise random almost periodic and almost automorphic solutions for dynamical systems generated by non-autonomous stochastic equations. These solutions are pathwise stochastic analogues of deterministic dynamical systems. The existence and bifurcation of random periodic (random almost periodic, random almost automorphic) solutions have been established for a one-dimensional stochastic equation with multiplicative noise.
Abstract:
We consider the following anisotropic boundary value problem $$\nabla (a(x)\nabla u) + a(x)u^p = 0, \;\; u>0 \ \ \mbox{in} \ \Omega, \quad u = 0 \ \ \mbox{on} \ \partial\Omega,$$ where $\Omega \subset \mathbb{R}^2$ is a bounded smooth domain, $p$ is a large exponent and $a(x)$ is a positive smooth function. We investigate the effect of anisotropic coefficient $a(x)$ on the existence of concentrating solutions. We show that at a given strict local maximum point of $a(x)$, there exist arbitrarily many concentrating solutions.
Abstract:
Long time behavior of solutions for weakly damped gKdV equations on the real line is studied. With some weak regularity assumptions on the force $f$, we prove the existence of global attractor in $H^s$ for any $s\geq 1$. The asymptotic compactness of solution semigroup is shown by Ball's energy method and Goubet's high-low frequency decomposition if $s$ is an integer and not an integer, respectively.
Abstract:
In this paper we develop the
continuous averaging method of Treschev to work on the simultaneous Diophantine approximation and apply the result to give a new proof of the Nekhoroshev theorem. We obtain a sharp normal form theorem and explicit estimates of the stability constants appearing in the Nekhoroshev theorem.
Abstract:
In this paper, we study the following nonlinear problem of Kirchhoff type: \begin{equation}\label{(0.1)} \left\{% \begin{array}{ll} -\left(a+b\int\limits_{\mathbb{R}^3}|\nabla u|^2\right)\Delta u+V(x)u=f(u), & \hbox{$x\in \mathbb{R}^3$}, \\ u>0, & \hbox{$x\in \mathbb{R}^3$}, \\ \end{array}% \right.\end{equation} where $a,$ $b>0$ are constants, $V:\mathbb{R}^3\rightarrow\mathbb{R}$ and $f(t)$ is subcritical and superlinear at infinity. Under certain assumptions on the non-constant potential $V$, we prove the existence of positive high energy solutions by using a linking argument with a barycenter map restricted on a Nehari-Pohožaev type manifold.
Our main result has solved Kirchhoff equation (0.1) with superlinear nonlinearities, which has not been studied, and can be viewed as a partial extension of a recent result of He and Zou in [9] concerning Kirchhoff equations with 4-superlinear nonlinearities.
|
4. Series 29. Year
(2 points)1. Kofola's
Let's have a Kofola (a Czech soft drink) with energy content $Q_{k}=1360\;\mathrm{kJ/kg}$ and temperature $t_{k}=24\;\mathrm{°C}$, and another Kofola, this time sugar-free, with energy content $Q_{free}=14.4\;\mathrm{kJ/kg}$ and temperature $t_{free}=4\;\mathrm{°C}$. If we assume all other behaviour and constants are very similar to those of water, what temperature would a mixture of these two have for which the total energy gain is zero?
(2 points)2. Brain in a microwave
How far from a base transceiver station (BTS) does a person have to be for the exposure to be fully comparable with that of a mobile phone right next to the head? Assume the BTS broadcasts uniformly into a half-space with emission power 400 W. The emission power of a mobile phone is 1 W.
(3 points)3. Save the woods
We have a toilet paper roll with diameter $R=8\;\mathrm{cm}$ and an inside hollow tube of diameter $r=2\;\mathrm{cm}$. Every layer of the paper has thickness $d=200\;\mathrm{µm}$ and the layers lie perfectly on top of each other. By how much does the number of pieces of paper differ if we use pieces of length $l_{1}=9\;\mathrm{cm}$ instead of $l_{2}=13\;\mathrm{cm}$? A part of the solution has to be an estimate of the approximation error (if you use one).
Bonus: Calculate the precise length of the spiral the toilet paper makes.
(5 points)5. Slide
There are two identical blocks, each with mass $m$ and sides of length $l$, on a horizontal plane. The distance between their two closest faces is $2x_{0}$. Suddenly we start pouring water between them with volume flow $Q$. At two sides of the blocks there are barriers keeping the water in the space between the two blocks. The coefficient of static friction between a block and the plane is $f_{0}$ and that of kinetic friction is $f$. There is no friction between the barriers and the blocks. What is the condition on $f_{0}$ that would keep the blocks in place? In the case of sufficiently small $f_{0}$, determine the acceleration of the blocks as a function of position and also the distance at which the blocks eventually stop moving. Consider all movement of the water to be slow enough that no eddies appear, no heating of the water due solely to its motion takes place, and the water possesses no significant kinetic energy. For the same reason of very small $Q$, we can assume there is no contribution from water added after the point where the blocks start moving.
Bonus: Find a condition for turning the block over.
(6 points)S. serial
From the inequality
$$\Delta S_{tot} \ge 0$$
and given the equation from the text of the serial
$$\Delta S_{tot} = \frac{-Q}{T_H} + \frac{Q-W}{T_C}$$
express $W$ and derive this way the inequality for work
$$W\le Q\left( 1 - \frac {T_C}{T_H} \right).$$
Calculate the efficiency of the Carnot cycle without the use of entropy. Hint: Write out 4 equations connecting 4 vertices of the Carnot cycle
$$p_1 V_1 = p_2 V_2 $$
$$p_2 V_2^{\kappa} = p_3V_3^{\kappa}$$
$$p_3V_3 = p_4V_4$$ $$p_4V_4^{\kappa} = p_1V_1^{\kappa}$$
and multiply all of them together. By modifying this equation you should be able to get
$$\frac {V_2}{V_1} = \frac {V_3}{V_4}.$$
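For completeness, here is the multiplication step spelled out (my filling-in of the hint, not part of the original text). Multiplying the four equations gives
$$p_1V_1\, p_2V_2^{\kappa}\, p_3V_3\, p_4V_4^{\kappa} = p_2V_2\, p_3V_3^{\kappa}\, p_4V_4\, p_1V_1^{\kappa}\,,$$
and after cancelling all the pressures,
$$V_1 V_2^{\kappa} V_3 V_4^{\kappa} = V_2 V_3^{\kappa} V_4 V_1^{\kappa} \quad\Longrightarrow\quad (V_2V_4)^{\kappa-1} = (V_1V_3)^{\kappa-1} \quad\Longrightarrow\quad \frac{V_2}{V_1} = \frac{V_3}{V_4}\,.$$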
Next step is using the equation for the work done in an isothermal process: when going from the volume $V_{A}$ to the volume $V_{B}$, the work done on a gas is
$nRT\,\;\mathrm{ln}\left(\frac{V_A}{V_B}\right)$.
Now the last thing we need to realize is that the work done in an isothermal process is equal to the heat (with the correct sign); then calculate the work done by the gas (there is no contribution from the adiabatic processes) and the heat taken away.
For the correct solution, you only need to fill in the details.
In the last problem you worked with a $pV$ and a $Tp$ diagram. Do the same with a $TS$ diagram, i.e. sketch there the isothermal, isobaric, isochoric and adiabatic processes. In addition, sketch the path of the Carnot cycle, including the direction and the labeling of the individual processes. Sometimes it is important to check whether we give or receive heat, because this fact can change during the process. One example is the process
$p=p_0\;\mathrm{e}^{-\frac{V}{V_0}}$,
where $p_{0}$ and $V_{0}$ are constants. Show for which values of $V$ (during the expansion) the heat is going into the gas and for which out of it. |
I've been studying for my exam and came across the following problem:
Suppose that $X_1,\ldots,X_n$ is a random sample from a Poisson distribution with mean $\lambda$. (a) Find the maximum likelihood estimator (MLE) of $\eta = P(X_1 = 2\mid \lambda)$. (b) Find the UMVUE of $\eta$.
I am not sure about my solution, so I would like to ask for some feedback.
Sol: (a). Recall that the MLE for $\lambda$ is the mean of the sample, $\overline{X}$, thus $\eta_{MLE}= \frac{e^{-\overline{X}}\overline{X}^2}{2!}$.
(b). We see that the joint pmf $f(x\mid\lambda)=e^{-n\lambda}\prod_{i=1}^{n}\frac{1}{x_i!}e^{\sum_ix_i\ln\lambda}$ belongs to the exponential family. Thus, $Y=\sum_ix_i$ is a complete and sufficient statistic for $\lambda$. Therefore, $Y^*=\frac{e^{-Y}Y^2}{2!}$ is complete and sufficient for $\eta$. Further, let $W=1$ if $X_1=2$ and $0$ otherwise. $E(W)=1\cdot P(X_1=2)=\eta$. Thus, $W$ is unbiased for $\eta$.
We construct next the Rao-Blackwell estimator: $\eta^*=E(W\mid Y^*=y^*)=E(1_{\{X_1=2\}}\mid Y^*=y^*)=P\left(X_1=2\,\middle|\,\frac{e^{-Y}Y^2}{2!}=y^*\right)=\frac{P\left(X_1=2,\,\frac{e^{-Y}Y^2}{2!}=y^*\right)}{P\left(\frac{e^{-Y}Y^2}{2!}=y^*\right)}$
I am not sure if my thinking is on the right track. I would appreciate any help. Thanks!
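As a numeric sanity check (my addition, not part of the original question): conditioning on $Y=\sum_i X_i$ itself, the standard fact that $X_1\mid Y=y \sim \mathrm{Binomial}(y,\,1/n)$ gives the candidate estimator $E(W\mid Y)=\binom{Y}{2}(1/n)^2(1-1/n)^{Y-2}$, whose unbiasedness a short simulation can check:

import numpy as np
from math import comb, exp

rng = np.random.default_rng(0)
lam, n, reps = 1.7, 10, 200_000
eta = exp(-lam) * lam ** 2 / 2          # true P(X1 = 2)

X = rng.poisson(lam, size=(reps, n))
Y = X.sum(axis=1)
xbar = Y / n

mle = np.exp(-xbar) * xbar ** 2 / 2     # plug-in MLE, slightly biased
rb = np.array([comb(int(y), 2) * (1 / n) ** 2 * (1 - 1 / n) ** (y - 2)
               for y in Y])             # Rao-Blackwellized W, given Y
print(eta, mle.mean(), rb.mean())       # rb.mean() should be close to eta

This only checks unbiasedness numerically, of course, not minimality of the variance. |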
We’ve derived the 1D heat diffusion equation, so now let’s take a look at one of the solutions. This solution is for a symmetric rod. Simply, imagine a rod of some length $L$. This rod is insulated across its length so that no heat is lost to the atmosphere across that length. The only place where the rod is not insulated is at the ends which will change temperature. We want to calculate how the temperature changes across the length of the rod in time when the temperature at the ends of the rod change.
Let’s recall the heat diffusion equation that we derived:
We want to find a solution for the temperature profile of the rod in time and space. This would take the form of an equation:
We want to find an equation for $T(x,t)$ where we can plug in the location within the rod and the time after the temperature change to find the temperature.
We will assume that the entire rod starts off at a uniform temperature $T_i$ at $t=0$, that is to say: $T_i=T(x,0)$. Then at an infinitesimal time after $t=0$, the ends of the rod immediately change temperature to $T_f$. Therefore $T_f = T(0,t>0) = T(L,t>0)$. One can imagine that after a long time the rod will reach one uniform temperature; because the side of the rod is insulated and no heat can be lost, the entire rod must reach thermal equilibrium with the rod ends. Therefore we can also say: $T_f = T(x,\infty)$.
Notice we have just defined our boundary conditions. Let’s take another look:
Our temporal (initial) condition is the temperature of the rod at $t=0$, and the two spatial boundary conditions refer to the two ends of the rod.
We can rewrite these another way which may make more sense:
At $t=0$, $T=T_i$ (temporal)
At $x=0$, $T=T_f$ (spatial)
At $x=L$, $T=T_f$ (spatial)
In order to solve the differential equation for this problem it helps to non-dimensionalise our variables, which are $x$, time and temperature. We define our non-dimensional variables as follows:
I will not discuss how to non-dimensionalise the variables here; instead you can see this article.
The PDE now looks like this: $$\frac{\partial^2 \hat{T}}{\partial \hat{x}^2} = \frac{\partial \hat{T}}{\partial \hat{t}}$$
Why did we just do this? It makes solving the PDE mathematically easier, because now the initial temperature is $\hat{T_i} = 1$ and the final temperature is $\hat{T_f} = 0$; the rod runs from $\hat{x} = 0$ to $\hat{x}=1$ and the initial time is $\hat{t} = 0$.
Let’s rewrite our boundary conditions in non-dimensional form:
At $\hat{t}=0$, $\hat{T}=1$ (temporal)
At $\hat{x}=0$, $\hat{T}=0$ (spatial)
At $\hat{x}=1$, $\hat{T}=0$ (spatial)
We will now use a technique called separation of variables to solve this PDE in non-dimensional form. This means that we are going to assume that there exists a solution to the differential equation which is the product of two functions, each a function only of either space or time.
Let’s unpack that. Recall that we want a solution which describes temperature in space and time like so: $\hat{T} = \hat{T}(\hat{x},\hat{t})$. We are going to assume that we can write $\hat{T}(\hat{x},\hat{t})$ (which is a two-variable function) as the product of two one-variable functions like so:
Where $\beta (\hat{x})$ is some function of $\hat{x}$ alone, and $\gamma(\hat{t})$ is a function only of $\hat{t}$. Multiplying these two functions together gives our two-variable function which is our solution.
We are now going to look at the left hand side of our PDE: $\frac{\partial^2\hat{T}}{\partial \hat{x}^2}$, notice that this is what we get if we differentiate $\hat{T}(\hat{x},\hat{t})$ twice with respect to $\hat{x}$ whilst keeping $\hat{t}$ constant:
Because $\gamma(\hat{t})$ is being treated as a constant, we can easily apply the product rule when differentiating, and so we get
Where $\beta''(\hat{x}) = \frac{d^2 \beta(\hat{x})}{d\hat{x}^2}$. Notice it's a normal derivative, because $\beta$ can only be differentiated with respect to $\hat{x}$, as it is a function of $\hat{x}$ only.
Moving on to the right hand side of the PDE we see that
Again applying the product rule we get
Substituting both of these results into the original PDE we get an interesting equation composed only of single-variable functions:
Which can be re-written as:
Why did I equate this equation to a constant $-\lambda^2$? Well we know that each side of the equation is actually a constant because the left hand side of the equation is a function of $\hat{t}$ only, meaning that changing that side cannot affect the right hand side of the equation which is a function only of $\hat{x}$ and vice versa. Okay then, well why is the constant negative? Why is it squared? You will see further on why choosing a negative constant is the only one which can give you an answer which makes sense in the real world, and squaring it simply makes the maths more convenient.
Anyway - we now have two equations which are easy to solve independently:
$\frac{\gamma'(\hat{t})}{\gamma(\hat{t})} = -\lambda^2$ $\frac{\beta''(\hat{x})}{\beta (\hat{x})} = -\lambda^2$
Let’s solve the first by re-arranging and integrating directly:
Combining the constant and exponentiating:
Integrating the second function is a little bit more involved. Let’s rearrange it:
This is a homogeneous 2nd order ordinary differential equation with constant coefficients, which means we can guess a solution of the form $\beta = e^{k\hat{x}}$. Let's plug this solution into the ODE:
Notice how the exponential terms drop out and we are left with:
Therefore
We can now take linear combinations of the solution as we have two of them (the $\pm i \lambda$)
We can write
Recall Euler’s identity which states that $e^{i\theta}=\cos(\theta)+i\sin(\theta)$
We can substitute this into our general solution for $\beta$ and we get the following:
Noting sin and cos properties:
Collecting terms:
And collecting constants into new ones:
We now have a general solution for $\hat{T}$!
Now we must substitute in our boundary conditions to find the final solution.
Let’s apply the boundary conditions: At $\hat{x}=0$, $\hat{T}=0$ (spacial)
For this to be satisfied, $C_2 = 0$. Therefore we now have
At $\hat{x}=1$, $\hat{T}=0$ (spatial):
Therefore $C_3\sin(\lambda) = 0$ because the exponential cannot equal 0. This is true when $\lambda=n\pi$. Taking linear combinations of solutions we get
And finally: At $\hat{t}=0$, $\hat{T}=1$ (temporal)
Therefore $C_1 = 1$. We get a final solution of:
Now we must find the constants $C_n$; we will use the property that:
Now we apply the temporal (initial) boundary condition:
Multiplying each side of this equation by $\sin(n\pi \hat{x})$ and integrating between limits yields:
Notice that the only terms in this infinite sum that do not integrate to zero are those where the two sine indices are equal; therefore we get a formula for the unknown coefficients:
For the denominator recall that $\sin^2(x)=\frac{1}{2}[1-\cos(2x)]$ and therefore
If you evaluate the right hand integral you find that:
So
So it can be seen that the coefficients are:
Note that $C_1$ is not the same constant mentioned earlier.
Substituting these coefficients into the final solution we get our (more) final solution: $$\hat{T}(\hat{x},\hat{t}) = \sum_{n=1}^{\infty} \frac{2\left(1-\cos(n\pi)\right)}{n\pi}\, \sin(n\pi \hat{x})\, \mathrm{e}^{-(n\pi)^2 \hat{t}}$$
Or, keeping only the odd terms (the coefficient vanishes for even $n$): $$\hat{T}(\hat{x},\hat{t}) = \sum_{n\;\mathrm{odd}} \frac{4}{n\pi}\, \sin(n\pi \hat{x})\, \mathrm{e}^{-(n\pi)^2 \hat{t}}$$
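A minimal numerical sketch (my addition) evaluating a truncation of this series; it shows the non-dimensional profile relaxing from 1 toward 0 while the ends stay pinned at 0:

import numpy as np

def T_hat(x, t, terms=200):
    # truncated series solution of the non-dimensional heat equation
    n = np.arange(1, terms + 1)[:, None]
    C = 2 * (1 - np.cos(n * np.pi)) / (n * np.pi)   # 4/(n*pi) for odd n
    return (C * np.sin(n * np.pi * x)
              * np.exp(-(n * np.pi) ** 2 * t)).sum(axis=0)

x = np.linspace(0, 1, 5)
for t in (0.001, 0.01, 0.1):
    print(t, T_hat(x, t).round(3))

At $\hat{t}=0$ the truncated sum shows the usual Gibbs oscillations around 1, as expected when a constant is expanded in a sine series. |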
5. Series 29. Year
(2 points)1. let it flow
A thin wire with resistance $R=100\;\mathrm{m\Omega}$ and length $l=1\;\mathrm{m}$, connected to a DC source with voltage $U=3\;\mathrm{V}$, contains in its volume $N=10^{22}$ free electrons which contribute to the electric current. Determine the average speed (more accurately, the drift velocity) of these electrons in the wire.
(2 points)2. multiparticular
Let's have a container that is split by an imaginary plane into two disjoint parts A and B, identical in size. There are $n$ particles in the container and each of them has a probability of 50 % to be in part A and a probability of 50 % to be in part B. Figure out the probabilities of part A containing $n_{A}=0.6n$ or $n_{A}=1+n/2$ particles, respectively. Solve it for $n=10$ and $n=N_{A}$, where $N_{A}≈6\cdot 10^{23}$ is Avogadro's constant.
(3 points)3. egyptian gate
Ancient Egyptians could build a gate, but they hadn't invented the portcullis yet, so they closed the gate with nilans (limestone blocks). There are 150 slaves of mass $m=60\;\mathrm{kg}$ each, who are at the moment slowly opening a gate closed with a nilan of mass $M=8\;\mathrm{t}$. The nilan fits precisely (air-tightly) into a structure above the gate whose inner dimensions are $a=3\;\mathrm{m}$, $b=0.5\;\mathrm{m}$ and $c=3\;\mathrm{m}$. The original pressure inside the structure is $p_{0}=100\;\mathrm{kPa}$ and the original temperature is $T_{0}=300\;\mathrm{K}$. The structure is situated at height $H=3\;\mathrm{m}$ above the ground. Find out how high the slaves are able to lift the nilan using only their own weight, if the air temperature stays constant.
(4 points)4. safe ride
A car is approaching a wall along a trajectory perpendicular to the wall. The driver, however, wishes to approach the wall safely. Find the car's speed as a function of time such that the distance between the car and the wall is, at every moment, the same as the distance the car would travel at its instantaneous speed in $T=2\;\mathrm{s}$.
(5 points)5. rolling stones
There is a sphere with an inhomogeneous distribution of density on an inclined plane. We know the angle of inclination of the plane $α$, the radius of the sphere $R$ and the distance $t$ of the centre of mass from its geometrical centre. If we label the centre of the sphere $S$, the point of contact with the plane $D$ and the centre of mass $T$, then we can define the angle $φ_{0}=∠DST$ as the initial angle (before any motion begins). We also know that the centre of mass is situated in the plane given by the line segment $DS$ (the normal to the inclined plane) and the down-sloping direction. Considering all the parameters given, carefully describe the time evolution of the sphere's state of motion. The sphere does not slip.
(5 points)P. underground
As we all know, it is always a little bit chilly in the caves of central Europe, usually about 4 °C. Why, on the other hand, is it always warm in the underground (subway, metro) throughout the whole year? Is more heat produced by the people present or by the technology?
(6 points)S. naturally variant
Use the relation for the entropy of an ideal gas from the solution of the third serial problem
$$S(U, V, N) = \frac{s}{2}n R \ln \left( \frac{U V^{\kappa -1}}{\frac{s}{2}R n^{\kappa} } \right) + nR s_0$$
and the relation for the change of the entropy
$$\;\mathrm{d} S = \frac{1}{T}\,\mathrm{d} U + \frac{p}{T}\, \mathrm{d} V - \frac{\mu}{T}\, \mathrm{d} N$$
to calculate the chemical potential as a function of $U$, $V$ and $N$. Modify it further to get a function of $T$, $p$ and $N$.
Hint: The coefficients like $1/T$ in front of $\mathrm{d}U$ can be calculated as partial derivatives of $S(U,V,N)$ with respect to $U$. Don't forget that $\ln(a/b)=\ln a-\ln b$ and that $n=N/N_{A}$. Bonus: Express similarly the temperature and the pressure as functions of $U$, $V$ and $N$. Eliminate the pressure dependence to get the equation of state. Is the chemical potential of an ideal gas positive or negative? (Assume $s_{0}$ is negligible.) What will happen with a gas in a piston if the gas is connected to a reservoir of temperature $T_{r}$? The piston can move freely and there is nothing acting on it from the other side. Describe what happens if we allow only quasistatic processes. How much work can we extract? Is it true that the free energy is minimized? Hint: To calculate the work, this equation can be useful:
$$\int _{a}^{b} \frac{1}{x} \;\mathrm{d}x = \ln \frac{b}{a}.$$
We defined the enthalpy as $H=U+pV$ and the Gibbs free energy as $G=U-TS+pV$. What are the natural variables of these two potentials? What other thermodynamic quantities do we obtain by differentiating these potentials with respect to their natural variables? Calculate the change of the grand canonical potential $\mathrm{d}\Omega$ from its definition $\Omega=F-\mu N$. |
Abbreviation:
MouQgrp
A Moufang quasigroup is a quasigroup $\mathbf{A}=\langle A,\cdot,\backslash,/\rangle$ such that
$\cdot$ satisfies the Moufang law: $ye=y\Longrightarrow ((xy)z)x = x(y((ez)x))$
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be … . A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x ... y)=h(x) ... h(y)$
An
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[Moufang loops]] expanded type [[quasigroups]] |
6. Series 29. Year
(2 points)1. It's about what's inside of us
In the year 2015, the Nobel Prize in Physics was awarded for the experimental confirmation of neutrino oscillations. You have probably already heard about neutrinos, and maybe you know that they interact with matter very weakly, so they can pass without any deceleration through the Earth and similarly large objects. Try to find out, using available literature and Internet sources, how many neutrinos there are at any given moment in an average person. Don't forget to reference the sources.
(4 points)3. Going downhill
We are going up and down the same hill with slope $α$, driving at the same speed $v$ and in the same gear (and therefore with the same RPM of the engine), in a car of mass $M$. What is the difference between the power of the engine going up the hill (propulsive power) and going down the hill (braking power)?
(4 points)4. Fire in the hole
Neutral particle beams are used in various fusion devices to heat up plasma. In a device like that, ions of deuterium are accelerated to high energy before they are neutralized, keeping almost the initial speed. Particles coming out of the neutralizer of the COMPASS tokamak have energy 40 keV and the current in the beam just before the neutralization is 12 A. What is the force acting on the beam generator? What is its power?
(5 points)5. Particle race
Two particles, an electron with mass $m_{e}=9.1\cdot 10^{-31}\;\mathrm{kg}$ and charge $-e=-1.6\cdot 10^{-19}\;\mathrm{C}$, and an alpha particle with mass $m_{He}=6.6\cdot 10^{-27}\;\mathrm{kg}$ and charge $2e$, are following circular trajectories in the $xy$ plane in a homogeneous magnetic field $\textbf{B}=(0,0,B_{0})$, $B_{0}=5\cdot 10^{-5}\;\mathrm{T}$. The radius of the orbit of the electron is $r_{e}=2\;\mathrm{cm}$ and the radius of the orbit of the alpha particle is $r_{He}=200\;\mathrm{m}$. Suddenly, a small homogeneous electric field $\textbf{E}=(0,0,E_{0})$, $E_{0}=5\cdot 10^{-5}\;\mathrm{V}\cdot \mathrm{m}^{-1}$, is introduced. Determine the length of the trajectories of these particles during the time $t=1\;\mathrm{s}$ after the electric field comes into action. Assume that the particles are far enough from each other and that they don't emit any radiation.
(6 points)P. iApple
Think up and describe a device that can deduce its orientation relative to gravitational acceleration and convert this information to an electrical signal. Come up with as many designs as you can. (An accelerometer-like device that is in most smart phones.)
(8 points)E. Malicious coefficient of restitution
If we drop a bouncing ball or any other elastic ball on an appropriate surface, it starts to bounce. During every hit on the surface some of the kinetic energy of the ball is dissipated (into heat, sound, etc.) and the ball doesn't return to its initial height. We define the coefficient of restitution as the ratio of the kinetic energy after and before the hit. Is there any dependence between the coefficient of restitution and the height from which the ball fell? Choose one suitable ball and one suitable surface (or several if you want) for which you determine the relation between the coefficient of restitution and the height of the fall. Describe the experiment properly and perform a sufficient number of measurements.
(6 points)S. A closing one
Find, in literature or online, the change of enthalpy and Gibbs free energy in the following reaction
$$2\,\mathrm{H}_2 \mathrm{O}_2\longrightarrow 2\,\mathrm{H}_2\mathrm{O} + \mathrm{O}_2,$$
where both the reactants and the products are gases at standard conditions. Find the change of entropy in this reaction. Give the results per mole.
Power flux in a photon gas is given by
$j=\frac{3}{4}\frac{k_\mathrm{B}^4\pi^2}{45\hbar^3c^3}cT^4$.
Substitute the values of the constants and compare the result with the Stefan-Boltzmann law.
Calculate the internal energy and the Gibbs free energy of a photon gas. Use the internal energy to write the temperature of a photon gas as a function of its volume for an adiabatic expansion (a process with $δQ=0$). Hint: The law for an adiabatic process with an ideal gas was derived in the second part of this series (Czech only). Considering a photon gas, show that if $δQ/T$ is given by
$$\delta Q / T = f_{,T} \,\mathrm{d} T + f_{,V}\, \mathrm{d} V\,,$$
then functions $f_{,T}$ and $f_{,V}$ obey the necessary condition for the existence of entropy, that is
$$\frac{\partial f_{,T}(T, V)}{\partial V} = \frac{\partial f_{,V}(T, V)}{\partial T} $$ |
I was working out some problems from Rick Durrett's Probability: Theory and Examples (2010 edition) when I came across a very unusual question (reproduced here verbatim):
If $X_n$ is
ANY sequence of random variables, there are constants $c_n \to \infty$ so that$$\frac{X_n}{c_n} \to 0 \quad \mbox{a.s}$$
You will find it in Chapter 2: Laws of Large Numbers, under the section on the Borel-Cantelli Lemma. What makes this question unusual is that there are
NO assumptions made on the random variables. My attempt: I rephrased the question equivalently as:
If $X_n$ is
ANY sequence of random variables, there are constants $a_n \to 0$ so that we need to show$$a_nX_n \to 0 \quad \mbox{a.s}$$
Then I showed that it is sufficient to assume $a_n > 0$ and $X_n \geq 0$. To see this: since the limit is going to $0$, the sign of the constants does not matter, hence positivity of $a_n$ can be assumed w.l.o.g. As for an arbitrary r.v., any r.v. can be written as:
$$X_n = X_n^+ - X_n^-$$
where $X_n^+ = \max(X_n,0)$ and $X_n^- = \max(-X_n,0)$
Suppose we prove the result for non-negative random variables, then say we have $$b_n X_n^+ \to 0 \quad c_n X_n^- \to 0 \quad \mbox{a.s}$$
Then pick $a_n = \min(b_n,c_n)$. Then this will ensure $$a_nX_n \to 0 \quad \mbox{a.s}$$
For the non-negative case, I was able to prove the result for simple functions: if $X_n$ is simple, with $X_n = \sum_{k=1}^n s_k 1_{A_k}$, then taking $a_n = \frac{1}{2^{n} \sum_{k=1}^n s_k}$ would work, but only if all the functions were simple.
Now the tough part: handling the case of a general non-negative measurable function. I had difficulty here.
Hence my question is:
I'd like a hint/answer (preferably a hint) on how to solve this particular case. Right now I am looking at manipulating the lemma that every non-negative measurable function can be approximated by a monotone sequence of simple functions.
Thank you.
Note: I use measurable functions and random variables interchangeably. But note that the space is a probability space. Additionally, I didn't find any similar question (I typed convergence random variables). If it has been answered, kindly provide the link. |
First, terminologically, "axiom" and "inference rule" are often used as roughly interchangeable as they tend to serve similar purposes. There are technical distinctions, which themselves can vary slightly, but outside the study of formal logic or related systems, these distinctions aren't that important.
In the context of formal logic, an axiom is a formula of the logic that is a theorem by definition. A rule of inference is a way of deducing new theorems from old theorems. Rules are always part of the logic, while axioms are separated into logical axioms, which are viewed as part of the logic, and non-logical axioms, which are viewed as part of the particular theory you are studying within the logic. Many logical axioms can be presented as rules and vice versa. For example, the axiom $P\supset(Q\supset P)$ roughly corresponds to the rule of weakening $\cfrac{\Gamma\vdash P}{\Gamma,Q\vdash P}$.
From a meta-logical perspective, rules of inference can be viewed as (definitional) axioms
about the logic being studied (not to be confused with the "formal" axioms within the logic). Philosophically, "believing" in the conclusions of a logical theory requires "believing" the rules of inference just as much as "believing" the axioms.
So are Armstrong's "Axioms" axioms or rules of inference? It depends on how you formalize them. The "axioms" strongly suggest a deductive system, i.e. a system of rules, and that deductive system is quite useful in actually calculating functional dependencies. But, we could just as well take $\to$ and $\subseteq$ as relation symbols in a first-order theory and then the "axioms" really would be non-logical axioms of that theory. This could be viewed as meta-logical axioms about the deductive system, but it can just as reasonably be viewed as axiomatizing what "functional dependency" means without any deductive system in mind.
As for how to determine whether $Y\subseteq X$, if we do indeed take a deductive-system view of Armstrong's axioms: in this context $X$ and $Y$ are meta-variables that stand for sets of attributes. They would be instantiated with expressions like $\{\mathsf{name},\mathsf{address}\}$. So for the rule you referenced, you'd simply be able to calculate that $\{\mathsf{name}\}\subseteq\{\mathsf{name},\mathsf{address}\}$ and $\{\mathsf{age}\}\not\subseteq\{\mathsf{name},\mathsf{address}\}$. An algorithm to calculate this could be formalized as a collection of inference rules itself, but there is no reason to do this. We can just assume that it can, in fact, be calculated. Obviously it can be in the intended use-case where the "sets" are explicitly presented, finite sets of elements with decidable equality.
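To make the "just calculate the subset relation" point concrete, here is a minimal sketch (my own illustration, not from any particular textbook) of those set checks together with the standard attribute-closure computation driven by Armstrong-style rules:

def closure(attrs, fds):
    # fds: iterable of (lhs, rhs) pairs of frozensets of attributes
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # if lhs is already determined, transitivity determines rhs
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

fds = [(frozenset({"name"}), frozenset({"address"})),
       (frozenset({"address"}), frozenset({"age"}))]

# reflexivity: X -> Y holds whenever Y is a subset of X, a plain set check
print({"name"} <= {"name", "address"})   # True
print({"age"} <= {"name", "address"})    # False
print(closure({"name"}, fds))            # {'name', 'address', 'age'}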
Even for modus ponens, we have $P$ and $Q\supset R$, and we can apply modus ponens if $P=Q$; but checking the (syntactic) equality of two formulas, while not hard, is not completely trivial. Indeed, weakening is often presented with a rule like $\cfrac{\Gamma\vdash P\qquad \Gamma\subseteq\Delta}{\Delta\vdash P}$ and it is just understood that you can, in fact, calculate whether one set of formulas is a subset of another. It may also be presented as $\cfrac{\Gamma\vdash P}{\Gamma\cup\Delta\vdash P}$ where, again, it is just assumed that you have some way of calculating $\Gamma\cup\Delta$. |
(redirected from H1s2sResearch.RydbergProject)
News: Shrinking the proton again!
Our most recent results from laser spectroscopy of the 2S-4P transition in atomic hydrogen will be published in Science in the October 6th issue. After more than six years of work, we have succeeded in measuring the transition frequency with an uncertainty of 2.3 kHz, corresponding to a relative uncertainty of 4 parts in {$10^{12}$}. This is the second-best frequency measurement in hydrogen after our previous measurement of the 1S-2S transition. From these two measurements, we derive new values for the Rydberg constant and the proton root mean square (RMS) radius, {$R_\infty=10973731.568076(96)\,\mathrm{m}^{-1}$} and {$r_\mathrm{p}=0.8335(95)\,\mathrm{fm}$}, respectively. Our results are in excellent agreement with the results from laser spectroscopy of muonic hydrogen, but are 5% smaller than the hydrogen world data, with which they disagree by 3.3 standard deviations. More... Original publication
Beyer, A., Maisenbacher, L., Matveev, A., Pohl, R., Khabarova, K., Grinin, A., Lamour, T., Yost, D. C., Hänsch, T. W., Kolachevsky, N., Udem, Th.
The Rydberg constant and proton size from atomic hydrogen. Science, 358:79, DOI: 10.1126/science.aah6677. Supplementary materials.
Press coverage:
Nature: "Proton-size puzzle deepens"
Science: "The proton radius revisited"
Science News: "Proton size still perplexes despite a new measurement"
NZZ: "Der Protonenradius ist und bleibt ein Rätsel" (in German)
Spektrum.de: "Wie groß ist das Proton wirklich?" (in German)
Nature Physics: "Proton puzzle: Agreement in disagreement"
Physik Journal: "Radius und Interferenz" (in German)
Press release
2S-nP spectroscopy: The Rydberg constant and proton radius from hydrogen spectroscopy
The 2S-nP project is currently the main focus of our research. This project is aiming for a new determination of the Rydberg constant {$R_\infty$} and the proton root mean square (RMS) charge radius {$r_\mathrm{p}$} from precision spectroscopy of atomic hydrogen (H). The precise extraction of these parameters from different experiments is an essential ingredient for stringent tests of the consistency of quantum electrodynamics (QED). The Rydberg constant plays a special role in these tests because it connects multiple fundamental constants:
\begin{equation} R_\infty=\frac{m_e \alpha^2c}{2h}. \end{equation}
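For illustration (added here; CODATA 2018 constants, not the values determined by this experiment), the relation can be evaluated directly:

```python
# Evaluate R_infty = m_e * alpha^2 * c / (2 h) with CODATA 2018 values.
m_e   = 9.1093837015e-31   # electron mass, kg
alpha = 7.2973525693e-3    # fine-structure constant
c     = 299792458.0        # speed of light, m/s
h     = 6.62607015e-34     # Planck constant, J s

R_inf = m_e * alpha**2 * c / (2 * h)
print(f"R_infty = {R_inf:.2f} m^-1")   # roughly 10973731.57 m^-1
```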
Figure 1. Rydberg constant and proton RMS charge radius determined from hydrogen (H) spectroscopy. Our result from H 2S-4P spectroscopy (green diamond) agrees with the result from muonic hydrogen (µp; pink bar and violet square), but disagrees with the H world data (blue bar and blue triangle), the average of previous results from electronic H. The H world data include 15 different spectroscopic measurements of H (black squares: microwave measurements; black circles: optical measurements). The CODATA 2014 value (gray hexagon) additionally includes results from elastic electron scattering and deuterium spectroscopy.
To extract {$R_\infty$} and {$r_\mathrm{p}$} from H spectroscopy, at least two distinct transition frequencies are needed as input. Because the 1S-2S transition frequency is known with much higher precision than any of the other transition frequencies, it serves as a cornerstone in this determination. Fig. 1 shows the result of this determination, using the different available results from H spectroscopy. Note that since {$R_\infty$} and {$r_\mathrm{p}$} are highly correlated, the values can be shown in one plot with two axes. The results from H spectroscopy are compatible with each other, with a weighted average of these results giving the value shown as H world data. There is, however, a four standard deviation ({$\sigma$}) discrepancy between this H world data value and the value determined from laser spectroscopy of muonic hydrogen (µp) [1]. An even larger discrepancy of 5.6{$\sigma$} is obtained when elastic electron scattering and deuterium spectroscopy data are included in the analysis (CODATA 2014 in Fig. 1). So far, it is unclear what causes this so-called "proton size puzzle", with suggested solutions covering the entire spectrum from experimental errors up to physics beyond the standard model.
Laser spectroscopy of the 2S-4P transitions
This situation calls for additional experimental data. Utilizing the 1S-2S beam apparatus as a well-controlled and reliable cryogenic source of hydrogen atoms in the metastable 2S state, we are currently measuring transition frequencies from the 2S state to higher lying P-states in H. In particular, we are studying the 2S-4P transition at 486 nm, since this laser wavelength corresponds to twice the wavelength needed for the 1S-2S experiment and thus a well-characterized laser is available to us.
Achieving the desired accuracy of a few parts in {$10^{12}$} needed to improve on previous measurements is a technologically and experimentally challenging task. In particular, this accuracy corresponds to determining the center of the 2S-4P atomic resonance to an uncertainty on the order of {$10^{-4}$} of its observed width of 20 MHz (resulting from the natural line width {$\Gamma = 2\pi \times 12.9\,\mathrm{MHz}$} and additional broadening mechanisms). Such a high resolution requires both a well-understood apparatus and a deep theoretical understanding of the atomic dynamics involved.
A schematic view of our experimental setup for 2S-4P spectroscopy [2] is shown in Fig. 2. H thermalizes at the inner walls of a copper nozzle held at 5.8 K by a cryostat. The emerging atomic beam is collimated by two apertures and overlaps with 243 nm radiation from a preparation laser circulating in an enhancement cavity. This radiation allows for a Doppler-free two-photon excitation of the 1S-2S transition, resulting in H in the 2S state. In contrast to electron-impact excitation, the standard scheme of 2S excitation for the optical measurements shown in Fig. 1, this optical excitation scheme preserves the atoms' low thermal velocity and almost exclusively populates one of the Zeeman sublevels (the {$\mathrm{2S}_{1/2}^{F=0}$} level).
Excitation of the 2S-4P transition takes place in a separated region. Here, light from the spectroscopy laser at 486 nm crosses the beam of 2S atoms at an angle close to 90°. In this way, the first-order Doppler shift due to motion of the atoms relative to the propagation direction of the laser light is minimized. To further suppress the Doppler shift, which constitutes the biggest source of uncertainty for this measurement, we developed the active fiber-based retroreflector (AFR) [3] scheme. In this scheme, the transition is simultaneously driven by two actively-stabilized, antiparallel phase-retracing laser beams, resulting in Doppler shifts of equal magnitude, but opposite signs and thus no net shift of the resulting line shape. The 4P state rapidly decays back to the 1S ground state, emitting a Lyman-{$\gamma$} photon. The photoelectrons ejected by these energetic photons from our graphite-coated detector walls are detected in channel electron multipliers CEM1 and CEM2, and the output of these detectors is our signal. By scanning the frequency of the spectroscopy laser, the 2S-4P resonance can be recorded, with typical examples of the resulting data shown in Fig. 3. We periodically switch off the preparation laser, and thus the production of 2S atoms, using a chopper wheel and record the signal as a function of delay time. Different delay times then correspond to the sampling of different atomic velocity groups (see different curves in Fig. 3). With this, we can experimentally confirm the validity of the Doppler shift suppression by evaluating the resonance position as a function of atomic velocity.
In order to determine the transition frequency to a precision of a few parts in {$10^{12}$}, the atomic resonance has to be sampled many thousands of times and the results averaged.
References and further reading
[1] Pohl, R. et al. (2010) The size of the proton. Nature, 466:213.
[2] Beyer, A., Maisenbacher, L., Khabarova, K., Matveev, A., Pohl, R., Udem, Th., Hänsch, T. W. and Kolachevsky, N. (2015) Precision spectroscopy of 2S-nP transitions in atomic hydrogen for a new determination of the Rydberg constant and the proton charge radius. Physica Scripta, T165:014030.
[3] Beyer, A., Maisenbacher, L., Matveev, A., Pohl, R., Khabarova, K., Chang, Y., Grinin, A., Lamour, T., Shi, T., Yost, D. C., Udem, Th., Hänsch, T. W. and Kolachevsky, N. (2016) Active fiber-based retroreflector providing phase-retracing anti-parallel laser beams for precision spectroscopy. Optics Express, 24(15):17470.
[4] Beyer, A., Maisenbacher, L., Matveev, A., Pohl, R., Khabarova, K., Grinin, A., Lamour, T., Yost, D. C., Hänsch, T. W., Kolachevsky, N. and Udem, Th. (2017) The Rydberg constant and proton size from atomic hydrogen. Science, 358(6359):79. Supplementary materials. |
Except for the undecidable unary languages, I have no idea whether there is anything in the gap between $\mathsf{P/poly}$ and $\mathsf{P}$.
Take a language $L$ which is not in $\mathsf{E} = \bigcup_{c=1}^\infty \mathsf{TIME}(2^{cn})$. Now consider the language $L' = \{1^m : m \in L\}$. Then $L'$ is clearly in $\mathsf{P/poly}$, but it's not in $\mathsf{P}$: if it were decidable in time $O(m^k)$, then we could decide $L$ in time $O((2^n)^k)$, and so $L$ would be in $\mathsf{E}$. Our decision procedure works as follows: on input $m$ of length $n = \log m$, we run the algorithm for $L'$ on the input $1^m$. This runs in time $O(m^k) = O((2^n)^k)$.
It remains to ensure that $L'$ is decidable. To that end, all we need to do is to choose some $L \notin \mathsf{E}$ which is decidable, for that makes $L'$ trivially decidable: given an input, if it's not of the form $1^m$, reject; otherwise, answer according to whether $m \in L$.
The existence of a decidable language $L \notin \mathsf{E}$ is guaranteed by the time hierarchy theorem. |
How can one construct a dense subset, say $A$, of the real numbers other than the rationals? By dense I mean that there should be an element of $A$ between any two real numbers.
First note that you can "cheat" by taking any subset and take a union with the rationals.
Second note that you can always cheat by taking some real number $x$ and considering the set $\{x+q\mid q\in\mathbb Q\}$. If $x$ is irrational then the set is not the rationals.
Now more seriously, you can note that the irrationals ($\mathbb R\setminus\mathbb Q$) are dense, as are all the irrational algebraic numbers ($\sqrt2$ and such). More interestingly, the set $\{\sin n\mid n\in\mathbb N\}$ is dense in $[-1,1]$, so it can be stretched and translated into a dense subset of $\mathbb R$.
However an important fact is that every countable dense linear order is isomorphic to the rationals, so if your dense set is countable it will not differ too much from the rationals.
Let's construct dense sets in $[0,1]$. We can then get a dense set in $\Bbb R$ by taking the union of dense sets for the intervals $[n, n+1]$ (a dense set for $[n,n+1]$ can be obtained from a dense set of $[0,1]$ by shifting). To make things more interesting, we will find dense sets for which, given any two distinct elements in the set there is a number between them that is not in the set.
For a dense set in $[0,1]$, you can take:
The irrationals in $[0,1]$.
Or: take $[0,1]$. Take its midpoint $1/2$ to be an element of the dense set under construction. Then take the midpoints of $(0,1/2)$ and $(1/2,1)$ to be elements of the dense set. Then take the midpoints of the four intervals obtained by splitting the two prior intervals in two...
Or: use any similar, carefully done, construction similar to the preceding example. For instance, you could successively split $[0,1]$ as above (always splitting the previous sets in half), but choose irrationals in each piece.
Take any irrational number $\alpha$ and consider the set $E = \{n\alpha \bmod 1\ : n \in \mathbb{N}\}$. By the equidistribution theorem this set is uniformly distributed (and thus must be dense) on $[0,1]$. For a set dense on all of $\mathbb{R}$ take $\cup_{n \in \mathbb{Z}} (n + E)$.
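For illustration (added; the choice $\alpha=\sqrt2$ is arbitrary), one can watch the density emerge numerically:

```python
import math

# Fractional parts n*alpha mod 1 for irrational alpha fill [0,1] densely;
# numerically, the largest gap between sorted points shrinks as n grows.
alpha = math.sqrt(2)
points = sorted((n * alpha) % 1.0 for n in range(1, 10001))
gaps = [b - a for a, b in zip(points, points[1:])]
print(f"largest gap among 10^4 points: {max(gaps):.2e}")
```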
Let $a_n$ be a positive sequence tending to zero; then $\{p\, a_n \mid p\in \mathbb{Z},\ n\in\mathbb{N}\}$ is dense in $\mathbb{R}$.
I guess you have a countable dense set $A$ in mind, since otherwise you could just put $A:={\mathbb R}$. I don't know what your intuition of the real numbers is, but I assume that you are happy with the idea that a real number is an infinite decimal, like $34.5210071856\ldots\ $.
The set $$A\ :=\ \bigcup_{r=0}^\infty \left\{{k\over 10^r}\ \bigm| k\in{\mathbb Z}\right\}$$ of finite decimal fractions is a union of countable sets, therefore it is countable. Given any two real numbers $$\alpha:=a_0.a_1\, a_2\, a_3\,\ldots,\qquad\ a_0\in{\mathbb Z},\quad a_k\in\{0,1,\ldots,9\}\ \ (k\geq 1)$$ and $$\beta:=b_0.b_1\, b_2\, b_3\,\ldots,\qquad\ b_0\in{\mathbb Z},\quad b_k\in\{0,1,\ldots,9\}\ \ (k\geq 1)$$ with $\alpha<\beta$, there is a minimal $k\geq0$ with $a_k\neq b_k$, and there necessarily $a_k<b_k$. Since we have assumed $\alpha<\beta$, the case $$(a_k, a_{k+1}, a_{k+2}\ldots)=(a_k,9,9,9,\ldots)\quad\wedge\quad (b_k , b_{k+1}, b_{k+2}\ldots)=(a_k+1,0,0,0,\ldots)$$ is excluded. There are a few cases to be distinguished, but all in all it is easy to produce a finite decimal expansion $$\xi=x_0.x_1\, x_2\, x_3\,\ldots x_{k-1}\, x_k\ \in A$$ such that $\alpha<\xi<\beta$.
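For illustration (added; a sketch for exact rational inputs, with an illustrative function name), the last step can be carried out mechanically:

```python
from fractions import Fraction

def finite_decimal_between(alpha, beta):
    """Find k, r with alpha < k/10^r < beta, assuming alpha < beta."""
    alpha, beta = Fraction(alpha), Fraction(beta)
    r = 0
    while True:
        k = (alpha * 10**r) // 1 + 1   # least integer strictly above alpha*10^r
        if Fraction(k, 10**r) < beta:
            return k, r                # the finite decimal k / 10^r
        r += 1

print(finite_decimal_between("1.4142", "1.41421357"))  # (141421, 5), i.e. 1.41421
```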
The Liouville numbers are dense in $\mathbb{R}$.
Take every real number with infinite number of occurences of the string '56787773' in the decimal expansion.
Use the axiom of choice to choose one element from each open interval. (I suppose that with this method there's no guarantee you won't get the rationals.)
For any fixed $s>0$, $$\Bigg\{\sum_{n=1}^\infty\frac{a_n}{n^s}: \{a_n\}_{n=1}^\infty \text{ is a periodic sequence of integers} \Bigg\}$$ is a countable set which is dense in the reals. To demonstrate this we can apply checkmath's answer. |
Discrete & Continuous Dynamical Systems - A
August 2009, Volume 24, Issue 3
ISSN: 1078-0947; eISSN: 1553-5231
A special issue dedicated to Peter W. Bates on the occasion of his 60th birthday
Abstract:
This special issue of Discrete and Continuous Dynamical Systems - A is dedicated to Peter W. Bates on the occasion of his 60th birthday, and in recognition of his outstanding contributions to infinite dimensional dynamical systems and the mathematical theory of phase transitions.
Peter Bates was born in Manchester, England on December 27, 1947. He graduated from the University of London in mathematics in 1969, after which he moved to the United States with his family. Later, he attended the University of Utah and received his Ph.D. in 1976. Following his graduation, Peter moved to Texas and taught at the University of Texas - Pan American and Texas A&M University. He returned to Utah in 1984 and taught at Brigham Young University until 2004. He is currently a professor of mathematics at Michigan State University.
For more information please click the “Full Text” above.
Abstract:
We prove that a boundary value problem for a semilinear wave equation with smooth nonlinearity, smooth forcing, and no resonance cannot have continuous solutions. Our proof shows that this is due to the non-monotonicity of the nonlinearity.
Abstract:
The uniqueness and stability of traveling wave solutions for a system of nonlocal evolution equations with bistable nonlinearity are established. It is also proved that traveling waves are monotone and exponentially asymptotically stable, up to translation.
Abstract:
The purpose of this paper is to introduce the model reference control method (MRC) into systems biology. We review the main framework of MRC based on neural networks and some research issues. Model reference control of some model biological systems is considered.
Abstract:
Quadratic perturbations of a one-parameter family of quadratic reversible systems with two centers (without other singularities in finite plane) are studied. The exact upper bound of the number of limit cycles, the configurations of limit cycles, and the bifurcation diagrams for different range of the parameter are given.
Abstract:
We study a system of elliptic equations arising from biology with a chemotaxis term. This system is non-variational. Using a reduction argument, we show that the system has solutions with peaks near the boundary and inside the domain.
Abstract:
In this paper, we study a model of insect and animal dispersal where both density-dependent diffusion and nonlinear rate of growth are present. We analyze the existence of bounded traveling wave solution under certain parametric conditions by using the qualitative theory of dynamical systems. An explicit traveling wave solution is obtained by means of the first integral method. Traveling wave solutions in parametric forms for three particular cases are established by the Lie symmetry method.
Abstract:
The problem of discerning key features of steady turbulent flow adjacent to a wall has drawn the attention of some of the most noted fluid dynamicists of all time. Standard examples of such features are found in the mean velocity profiles of turbulent flow in channels, pipes or boundary layers. The aim of this article is to explain and further develop the recent concept of the scaling patch for the time-averaged equations of motion of incompressible flow made highly turbulent by friction at a fixed boundary (introduced in recent papers by Wei et al, Fife et al, and Klewicki et al). Besides outlining ways to identify the patches, which provide the scaling structure of mean profiles, a critical comparison will be made between that approach and more traditional ones.
Our emphasis will be on the question of how and how well these arguments supply insight into the structure of the mean flow profiles. Although empirical results may initiate the search for explanations, they will be viewed simply as means to that end.
Abstract:
We consider wavefronts that arise in a mathematical model for high Lewis number combustion processes. An efficient method for the proof of the existence and uniqueness of combustion fronts is provided by geometric singular perturbation theory. The fronts supported by the model with very large Lewis numbers are small perturbations of the front supported by the model with infinite Lewis number. The question of stability for the fronts is more complicated. Besides discrete spectrum, the system possesses essential spectrum up to the imaginary axis. We show how a geometric approach which involves construction of the Stability Index Bundles can be used to relate the spectral stability of wavefronts with high Lewis numbers to the spectral stability of the front in the case of infinite Lewis number. We discuss the implication for nonlinear stability of fronts with high Lewis numbers. This work builds on the ideas developed by Gardner and Jones [12] and generalized in the papers by Bates, Fife, Gardner and Jones [3, 4].
Abstract:
For the near-Hamiltonian system $\dot{x}=y+\varepsilon P(x,y)$, $\dot{y}=x-x^2+\varepsilon Q(x,y)$, where $P$ and $Q$ are polynomials in $x,y$ of degree 3 with varying coefficients, we obtain 5 limit cycles.
Abstract:
In this paper we investigate critical periods for a planar cubic differential system with a periodic annulus linking to equilibria at infinity. The monotonicity of the period function is decided by the sign of the second order derivative of an Abelian integral. We derive a Picard-Fuchs equation from a system of Abelian integrals and further give an induced Riccati equation for a ratio of derivatives of Abelian integrals. The number of critical points of the period function for the periodic annulus is determined by discussing a planar autonomous system, the orbits of which describe solutions of the Riccati equation.
Abstract:
The current paper is devoted to the study of pullback attractors for general nonautonomous and random parabolic equations on non-smooth domains $D$. Mild solutions are considered for such equations. We first extend various fundamental properties of solutions of smooth parabolic equations on smooth domains to solutions of general parabolic equations on non-smooth domains, including continuous dependence on parameters, monotonicity, and compactness, which are of great importance in their own right. Under certain dissipative conditions on the nonlinear terms, we prove that mild solutions with initial conditions in $L_q(D)$ exist globally for $q \gg 1$. We then show that pullback attractors for nonautonomous and random parabolic equations on non-smooth domains exist in $L_q(D)$ for $1 \le q < \infty$.
Abstract:
Consider a reaction-diffusion model for a microbial flow reactor with two competing species. Suppose that the nutrient is input at a constant velocity at one end of the flow reactor and is washed out at the other end of the reactor. We study the dynamical behavior of the population growth of these two species. In particular, we are interested in the coexistence of traveling waves that best describes the long time dynamical behavior. By developing a shooting method and a continuation argument with the aid of an appropriately chosen Lyapunov function, we obtain sufficient conditions for the coexistence of traveling waves as well as the minimum wave speed.
Abstract:
This paper gives a family of nonlinear wave equations, which can yield so-called loop solutions, cusp wave solutions and solitary wave solutions depending on the value of the parameter $A$. For two third-order systems, the dynamical behavior of these solutions is considered. The exact explicit parametric representations of solitary wave solutions and periodic wave solutions are given. This concerns the properties of singular traveling wave systems.
Abstract:
The purpose of this paper is to analyze the asymptotic properties of collision orbits of Newtonian $N$-body problems. We construct new coordinates and a time transformation that regularize the singularities of simultaneous binary collisions in the collinear four-body problem. The motion in the new coordinates and time scale is at least $C^2$ across simultaneous binary collisions. The explicit formulae are given in detail for the transformations and the extension of solutions. Furthermore, we study the behavior of the motion approaching, across and after the simultaneous binary collision. Numerical simulations have been conducted for the special case in which the bodies are distributed symmetrically about the center of mass.
Abstract:
This paper concerns the lowest eigenvalue $\mu(b\N^Q)$ of the Schrödinger operator in three dimensions with a magnetic potential $b\N^Q$, where the vector field $\N^Q$ depends on a matrix $Q$ varying in $SO(3)$ and $b$ is a real parameter. The eigenvalue variation problem is to minimize the lowest eigenvalue among all $Q$ in $SO(3)$. This problem arises in the phase transitions of smectic liquid crystals. We give an estimate of the minimum value $\inf\{\mu(b\N^Q):~Q\in SO(3)\}$ for large $b$, and examine its dependence on the geometry of the domain surface.
Abstract:
A shell-like structure is sought as a solution of a free boundary problem derived from the Ohta-Kawasaki theory of diblock copolymers. The boundary of the shell satisfies an equation that involves its mean curvature and the location of the entire shell. A variant of the Lyapunov-Schmidt reduction process is performed that rigorously reduces the free boundary problem to a finite dimensional problem. The finite dimensional problem is solved numerically. The problem has two parameters: $a$ and $\gamma$. When $a$ is small, there are a lower bound and a sequence such that if $\gamma$ is greater than the lower bound and stays away from the sequence, there is a shell-like solution.
Abstract:
In this paper, an effective existence theorem for periodic Markov processes is first established. Using the theorem, we consider a class of periodic Itô stochastic functional differential equations, and some sufficient conditions for the existence of periodic solutions of the equations are given. To overcome the difficulties created by the special features possessed by periodic stochastic differential equations with delays, as one will see, several lemmas are introduced. These existence theorems are rather general and therefore have great power in applications. Especially, our results are natural generalizations of some classical periodic theorems for the model without stochastic perturbation. An example is worked out to demonstrate the advantages of our results.
Abstract:
We study the existence and uniqueness, as well as various qualitative properties of periodic traveling waves for a reaction-diffusion equation in infinite cylinders. We also investigate the spectrum of the operator obtained by linearizing with respect to such a traveling wave. A detailed description of the spectrum is obtained.
|
That is the right paper, but there are actually several equivalent embeddings.
The book Basic Proof Theory by Troelstra and Schwichtenberg gives two such embeddings. Here's one. If $P$ is atomic but not $\bot$:
$$P^\circ := P$$
$$\bot^\circ := \bot$$
$$(A \wedge B)^\circ := A^\circ \wedge B^\circ$$
$$(A \vee B)^\circ := \square A^\circ \vee \square B^\circ$$
$$(A \rightarrow B)^\circ := \square A^\circ \rightarrow B^\circ$$
$$(\exists x A)^\circ := \exists x \square A^\circ$$
$$(\forall x A)^\circ := \forall x A^\circ$$
Here's the other:
$$P^\square := P$$
$$\bot^\square := \bot$$
$$(A \wedge B)^\square := A^\square \wedge B^\square$$
$$(A \vee B)^\square := A^\square \vee B^\square$$
$$(A \rightarrow B)^\square := \square (A^\square \rightarrow B^\square)$$
$$(\exists x A)^\square := \exists x A^\square$$
$$(\forall x A)^\square := \square \forall x A^\square$$
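For the propositional fragment, both translations are short programs (an illustrative sketch added here; quantifiers are omitted and source formulas are assumed box-free):

```python
from dataclasses import dataclass

# Tiny formula AST for the propositional fragment.
@dataclass(frozen=True)
class Atom: name: str
@dataclass(frozen=True)
class Bot: pass
@dataclass(frozen=True)
class And: l: object; r: object
@dataclass(frozen=True)
class Or: l: object; r: object
@dataclass(frozen=True)
class Imp: l: object; r: object
@dataclass(frozen=True)
class Box: f: object

def circ(f):
    """First translation: box each disjunct and each antecedent."""
    if isinstance(f, (Atom, Bot)): return f
    if isinstance(f, And): return And(circ(f.l), circ(f.r))
    if isinstance(f, Or):  return Or(Box(circ(f.l)), Box(circ(f.r)))
    if isinstance(f, Imp): return Imp(Box(circ(f.l)), circ(f.r))

def square(f):
    """Second translation: box whole implications instead."""
    if isinstance(f, (Atom, Bot)): return f
    if isinstance(f, And): return And(square(f.l), square(f.r))
    if isinstance(f, Or):  return Or(square(f.l), square(f.r))
    if isinstance(f, Imp): return Box(Imp(square(f.l), square(f.r)))

p_imp_q = Imp(Atom("P"), Atom("Q"))
print(circ(p_imp_q))    # antecedent gets boxed: (□P) → Q
print(square(p_imp_q))  # whole implication gets boxed: □(P → Q)
```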
They are equivalent in the sense that $S4 \vdash A^\circ \leftrightarrow A^\square$, and the embeddings are sound and faithful. The proofs are left as an exercise, or you can dig out the book. |
Okay, now we've got all the machinery set up to study co-design diagrams with feedback. Today let's consider a very simple one.
I'll start without feedback. I seem to like examples from business and economics for these purposes:
This describes someone who buys bread and then sells it, perhaps at a higher price. This is described by the composite of two feasibility relations:
$$ \mathrm{Purchase} \colon \mathbb{N} \nrightarrow \mathbb{N} $$ and
$$ \mathrm{Sell} \colon \mathbb{N} \nrightarrow \mathbb{N} $$ where \(\mathbb{N}\) is the set of natural numbers given its usual ordering \(\le\).
Be careful about which way these feasibility relations go:
\( \mathrm{Purchase}(j,k) = \texttt{true}\) if you can purchase \(j\) loaves of bread for \(k\) dollars.
\( \mathrm{Sell}(i,j) = \texttt{true} \) if you can make \(i\) dollars selling \(j\) loaves of bread.
The variable at right is the 'resource', while the variable at left describes what you can obtain using this resource. For example, in purchasing bread, \( \mathrm{Purchase}(j,k) = \text{true}\) if starting with \(k\) dollars as your 'resource' you can buy \(j\) loaves of bread. This is an arbitrary convention, but it's the one in the book!
When we compose these we get a feasibility relation
$$ \mathrm{Purchase}\,\mathrm{Sell} \colon \mathbb{N} \nrightarrow \mathbb{N} $$ (and again, there's an annoying arbitrary choice of convention in the order here).
I haven't said what the feasibility relations \( \mathrm{Purchase}\) and \( \mathrm{Sell}\) actually are: they could be all sorts of things. But let's pick something specific, so you can do some computations with them. Let's keep it very simple: let's say you can buy a loaf of bread for \( $ 2\) and sell it for \( $ 3\). Puzzle 218. Write down a formula for the feasibility relation \(\mathrm{Purchase}.\) Puzzle 219. Write down a formula for the feasibility relation \(\mathrm{Sell}.\) Puzzle 220. Compute the composite feasibility relation \( \mathrm{Purchase} \mathrm{Sell}\). (Hint: we discussed composing feasibility relations in Lecture 58.)
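Here is a sketch of one consistent reading of these puzzles (added; the specific formulas are my own, and the composition convention is my reading of Lecture 58):

```python
# Feasibility relations as boolean predicates on natural numbers.
def purchase(j, k):
    """Purchase(j, k): can buy j loaves with k dollars, at $2 per loaf."""
    return 2 * j <= k

def sell(i, j):
    """Sell(i, j): can make i dollars selling j loaves, at $3 per loaf."""
    return i <= 3 * j

def compose(Phi, Psi, mids):
    """Composite: some middle value j makes both relations feasible."""
    return lambda i, k: any(Psi(i, j) and Phi(j, k) for j in mids)

# Money made from money spent: i dollars are obtainable from k dollars
# iff i <= 3 * (k // 2), witnessed by buying j = k // 2 loaves.
purchase_sell = compose(purchase, sell, mids=range(100))
print(purchase_sell(6, 4))   # True: buy 2 loaves for $4, sell them for $6
print(purchase_sell(7, 4))   # False: $4 only buys 2 loaves, worth $6 at most
```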
That was just a warmup. Now let's introduce feedback!
Now you can reinvest some of the money you make to buy more loaves of bread! That creates a 'feedback loop'. Obviously this changes things dramatically: now you can start with a little money and keep making more. But how does the mathematics work now?
First, you'll notice this feedback loop has a cap at left and a cup at right. I defined these last time.
But this feedback loop also involves two feasibility relations called \(\hat{\textstyle{\sum}}\) and \(\check{\textstyle{\sum}}\). We use the one at left,
$$ \hat{\textstyle{\sum}} \colon \mathbb{N} \times \mathbb{N} \nrightarrow \mathbb{N} ,$$ to say that the money we reinvest (which loops back), plus the money we take as profit (which comes out of the diagram at left), equals the money we make by selling bread.
We use the one at right,
$$ \check{\textstyle{\sum}} \colon \mathbb{N} \nrightarrow \mathbb{N} \times \mathbb{N} ,$$ to say that the money we have reinvested (which has looped around), plus the new money we put in (which comes into the diagram at right), equals the money we use to purchase bread.
These two feasibility relations are both built from the monotone function
$$ \textstyle{\sum} \colon \mathbb{N} \times \mathbb{N} \to \mathbb{N} $$ defined in the obvious way:
$$ \textstyle{\sum}(m,n) = m + n .$$ Remember, we saw in Lecture 65 that any monotone function \(F \colon \mathcal{X} \to \mathcal{Y} \) gives two feasibility relations, its 'companion' \(\hat{F} \colon \mathcal{X} \nrightarrow \mathcal{Y}\) and its 'conjoint' \(\check{F} \colon \mathcal{Y} \nrightarrow \mathcal{X}\).
Puzzle 221. Give a formula for the feasibility relation \( \hat{\textstyle{\sum}} \colon \mathbb{N} \times \mathbb{N} \nrightarrow \mathbb{N} \). In other words, say when \(\hat{\textstyle{\sum}}(a,b,c) = \texttt{true}\). Puzzle 222. Give a formula for the feasibility relation \( \check{\textstyle{\sum}} \colon \mathbb{N} \nrightarrow \mathbb{N} \times \mathbb{N} \).
And now finally for the big puzzle that all the others were leading up to:
Puzzle 223. Give a formula for the feasibility relation described by this co-design diagram:
You can guess the answer, and then you can work it systematically by composing and tensoring the feasibility relations defined by the boxes, the cap and the cup! This is a good way to make sure you understand everything I've been talking about lately. |
I can get the proper answer, but I don't quite know why.
I am supposed to find $dy/dt$ for the function $y = \sqrt{2x +1}$ if $dx/dt = 3$ when $x=4$.
For the derivative I get $$ \frac {dy}{dt} = \frac {1}{2} (2x + 1)^{-1/2} \frac{dx}{dt},$$ which then gives me $$ \frac {dy}{dt} = \frac {1}{2} (9)^{-1/2} \cdot 3 = \frac{1}{2}, $$
which is wrong. I can also do
$$ \frac {dy}{dt} = \frac {1}{2} (9)^{-1/2} \cdot 2 \frac {dx}{dt},$$
which gives me $1$, which is the proper answer, but I am not sure why I get that. I know that the derivative of the inner function will be $2$, but the problem defines it as being $3$, so do I just multiply the two?
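For reference, here is the step the question is circling (an added note, not part of the original post): by the chain rule, $$\frac{dy}{dt} = \frac{dy}{dx}\,\frac{dx}{dt}, \qquad \frac{dy}{dx} = \frac{1}{2}(2x+1)^{-1/2}\cdot 2 = (2x+1)^{-1/2},$$ so at $x=4$ this gives $\frac{dy}{dt} = \frac{1}{3}\cdot 3 = 1$. The factor $2$ is the derivative of the inner function $2x+1$ and the factor $3$ is the given rate $dx/dt$; both appear, and they are multiplied. |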
Abbreviation:
AAlg
$\cdot$ is associative: $(xy)z=x(yz)$
Remark: This is a template. If you know something about this class, click on the ``Edit text of this page'' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be … . A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x ... y)=h(x) ... h(y)$
A
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[...]] subvariety [[...]] expansion [[...]] supervariety [[...]] subreduct |
Please, is this proof correct? Let $\{\Omega_i\}_{i\in I}$ be a family of connected sets such that $$\forall i,j\in I, \Omega_i\cap\Omega_j\neq\emptyset.$$ I want to prove that $\bigcup_{i\in I} \Omega_i$ is connected.
If I suppose that $\bigcup \Omega_i$ is not connected, then there exist two non-empty open sets $A,B$ of $\bigcup \Omega_i$ such that $$ \begin{cases} \bigcup \Omega_i= A\cup B\\ A\cap B=\emptyset \end{cases} $$ We have $\forall i\in I, \Omega_i\subset \bigcup_{i\in I}\Omega_i=A\cup B$, so by the connectedness of $\Omega_i$, $$\forall i\in I, [\Omega_i\subset A ~\text{or}~ \Omega_i\subset B]$$
As $\forall i,j\in I, \Omega_i\cap \Omega_j\neq \emptyset$, we deduce that $$\forall i\in I, \Omega_i\subset A~\text{or}~ \forall i\in I,\Omega_i\subset B$$ It follows that $A=\emptyset$ or $B=\emptyset$, which is a contradiction.
Please, if I change the condition $$\forall i,j\in I, \Omega_i\cap\Omega_j\neq\emptyset$$ to $$\exists i_0\in I,\ \forall j\in I,\ \Omega_{i_0}\cap \Omega_j\neq \emptyset,$$ how should I proceed?

Let $\{\Omega_i\}_{i\in I}$ be a family of connected sets such that $$\exists i_0\in I,\ \forall j\in I,\ \Omega_{i_0}\cap \Omega_j\neq \emptyset.$$ I want to prove that $\bigcup_{i\in I} \Omega_i$ is connected.
If I suppose that $\bigcup \Omega_i$ is not connected, then there exist two non-empty open sets $A,B$ of $\bigcup \Omega_i$ such that $$ \begin{cases} \bigcup \Omega_i= A\cup B\\ A\cap B=\emptyset \end{cases} $$ We have $\forall i\in I, \Omega_i\subset \bigcup_{i\in I}\Omega_i=A\cup B$, so $\Omega_{i_0}\subset A\cup B$; as it is connected we have $$\Omega_{i_0}\subset A~\text{or} ~ \Omega_{i_0}\subset B$$ If we suppose that $\Omega_{i_0}\subset A$, then $$\forall j\in I, \Omega_{j}\cap A\neq \emptyset $$ By the connectedness of each $\Omega_j$ we deduce that $$\forall j\in I, \Omega_{j}\cap B=\emptyset$$ then $$\forall j\in I, \Omega_j\subset A$$ thus $$\bigcup_{j\in I}\Omega_j\subset A$$ so $B=\emptyset$, a contradiction. In the same way, if we suppose that $\Omega_{i_0}\subset B$, we find that $A=\emptyset$.
thank you |
When we take a dot product between two vectors of a vector space, we actually "act" by a 1-form (dual vector) on a vector. So why do most books define the dot product between vectors? Of course, with the help of a metric we can take a dot product between two vectors, but technically the metric converts one of the two vectors to a 1-form.
An inner product is definitely a certain bilinear map $$\langle \cdot, \cdot \rangle: V \times V \longrightarrow \Bbb R$$ which takes two vectors as its arguments. If you're thinking in terms of a metric tensor $g$, then $g$ is a type $(0,2)$-tensor, which means it has two vector arguments (no covector arguments).
What you're thinking of is the fact that a choice of inner product $\langle \cdot, \cdot \rangle$ on a vector space $V$ gives an isomorphism $$\varphi: V \longrightarrow V^\ast, \qquad v \mapsto \langle v, \cdot \rangle.$$ In this sense we can identify the inner product with the action of a covector on a vector: $$\langle u, v \rangle = \varphi(u)(v).$$ In terms of components of the metric tensor, this is equivalent to $$\langle u, v \rangle = g_{ij} u^i v^j = u_j v^j.$$ An element of $V^\ast$ acts on $V$ in the same way no matter what basis we choose, but there is no canonical identification of $V$ with $V^\ast$. Without a metric, the lowered-index components $u_i$ of a vector $u$ make no sense. |
Abbreviation:
CPoSgrp
A commutative partially ordered semigroup is a partially ordered semigroup $\mathbf{A}=\langle A,\cdot,\le\rangle$ such that $\cdot$ is commutative: $xy=yx$.
Remark: This is a template. If you know something about this class, click on the ``Edit text of this page'' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be commutative partially ordered semigroups. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is an order-preserving homomorphism: $h(x \cdot y)=h(x) \cdot h(y)$ and $x\le y\Longrightarrow h(x)\le h(y)$.
A
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[Commutative partially ordered monoids]] expansion [[Partially ordered semigroups]] supervariety [[Commutative semigroups]] subreduct |
I bought the eighth edition of Stewart Calculus (metric version) and I'm up to the section about limits. It's been pretty easy so far, but I've come across a class of limit problems that don't seem solvable with mere algebra. The following is fairly representative of them:
$$\lim_{t\to 0} \frac {\sqrt{1+t}-\sqrt{1-t}}t$$
I tried rationalizing the numerator by multiplying by its conjugate, but I still ended up with a denominator that tends towards 0, and thus I was forced to conclude that the limit did not exist. However, the book kindly gave me the answer of 1, and I can't for the life of me work out how to get to that point via algebraic manipulation.
Have I just misled myself with regard to the quotient limit law? That is, I have been tacitly assuming that the failure of said law means the expression has no limit overall, and now that I type this, it seems like a rather stupid assumption. Does that mean that problems such as these require numerical/graphical methods to solve?
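For reference, the rationalization does go through (a worked line added here): $$\frac{\sqrt{1+t}-\sqrt{1-t}}{t}\cdot\frac{\sqrt{1+t}+\sqrt{1-t}}{\sqrt{1+t}+\sqrt{1-t}} = \frac{2t}{t\left(\sqrt{1+t}+\sqrt{1-t}\right)} = \frac{2}{\sqrt{1+t}+\sqrt{1-t}} \xrightarrow[t\to0]{} \frac{2}{1+1} = 1,$$ since the factor $t$ cancels before the limit is taken.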
I realize that I might have just answered my own question, but still, I'd like to know if my reflection is accurate. Also, I apologize for the lack of formatting; it's quite late here and as such I found the MathJax instructions... impenetrable. |
Positive Integer Greater than 1 has Prime Divisor/Proof 2 Lemma Proof
Let $S = \set {n \in \Z: n > 1, \neg \exists p \in \Bbb P: p \divides n}$.
That is:
$S = \set {\text {all integers greater than $1$ not divisible by any prime} }$ Aiming for a contradiction, suppose $S$ is non-empty, and let $n \in S$ be the smallest of these.
As $S$ is bounded below by $1$, this is bound to exist, by Set of Integers Bounded Below by Integer has Smallest Element.
So:
$\neg \exists x \in S: x < n$
Now $n$ cannot be prime itself, since $\paren {\paren {n \in \Bbb P} \land \paren {n \divides n} } \implies n \notin S$; as $n \in S$, it follows that $n \notin \Bbb P$.
Hence $\exists r, s \in \Z: n = r s$, with $1 < r < n$ and $1 < s < n$.
There are two possibilities:
$(1):\quad$ Neither $r$ nor $s$ has a prime divisor;
$(2):\quad$ At least one of $r$ and $s$ has a prime divisor.
If either $r$ or $s$ has a prime divisor, then: $\exists p \in \Bbb P: \paren {p \divides r} \lor \paren {p \divides s} \implies p \divides n$
This contradicts the defining property of $S$, that $n$ is not divisible by any prime.
However, if neither $r$ nor $s$ has a prime divisor, it follows that $r, s \in S$.
But as $r, s < n$, this contradicts our choice of $n$ as the smallest element of $S$.
$\blacksquare$ |
Answer
$\lim\limits_{x\to 0}\frac{\sin 3x}{x}=3$
Work Step by Step
To find the limit using the table, we must find the value that $f(x)$ approaches as $x$ approaches a given value. Here, we are asked to find the limit as $x\to0$. As $x$ approaches $0$ from both sides, it is clear that the values of $f(x)$ approach $3$. Thus, $\lim\limits_{x\to 0}\frac{\sin 3x}{x}=3$. |
I'm new to Matrix Calculus. Recently I've been working on that and have a question. Please see the following:
$J=J(\mathbf{z})$
$\mathbf{z}=\mathbf{W}\mathbf{a}$
Where $J: R^m \rightarrow R$, $\mathbf{z}$ is $m\times1$ vector, $\mathbf{a}$ is $n\times1$ vector and $\mathbf{W}$ is $m\times n$ matrix.
I want to calculate $\frac{\partial{J}}{\partial{\mathbf{W}}}$. A reference paper tells me I need to turn $\mathbf{W}$ into a vector by stacking its columns:
$\frac{\partial{J}}{\partial{vec(\mathbf{W})}} = \frac{\partial J}{\partial \mathbf{z}}\cdot\frac{\partial \mathbf{z}}{\partial vec(\mathbf{W})}$
let $\delta^T = \frac{\partial J}{\partial \mathbf{z}}$ and it is a $1\times m$ vector (numerator layout).
$\frac{\partial \mathbf{z}}{\partial vec(\mathbf{W})} = \frac{\partial \mathbf{W}\mathbf{a}}{\partial vec(\mathbf{W})}=\frac{\partial vec(\mathbf{W}\mathbf{a})}{\partial vec(\mathbf{W})} = \frac{\partial (\mathbf{a}^T \otimes I_{mm})vec(\mathbf{W})}{\partial vec(\mathbf{W})}=\mathbf{a}^T \otimes I_{mm}$, $\otimes$ is Kronecker product.
$\mathbf{a}^T \otimes I_{mm}$ is an $m\times mn$ matrix.
$\delta^T\cdot(\mathbf{a}^T \otimes I_{mm}) = [\delta_1\cdot \mathbf{a}^T, \delta_2\cdot \mathbf{a}^T, ..., \delta_m\cdot \mathbf{a}^T]$. If I recover this to a matrix by inverting the column stacking, the result is strange: $$ \begin{matrix} \delta_1a_1 & \delta_{1}a_{m+1} & \dots \\ \delta_1a_2 & \delta_{1}a_{m+2} & \dots \\ \vdots & \vdots & \dots \\ \delta_1a_m & \delta_{2}a_{2m-n} & \dots \end{matrix} $$
This is obviously wrong; it looks strange. And I found another reference saying the result should be $\mathbf{\delta}\cdot \mathbf{a}^T$. So I think the inverse $vec(\cdot)$ should stack rows instead; is that right?
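A quick numerical check (an added sketch; here $vec(\cdot)$ is column stacking, i.e. numpy's order='F' flattening) suggests that column stacking is already consistent: $\delta^T(\mathbf{a}^T \otimes I_{m}) = [a_1\delta^T, \dots, a_n\delta^T]$, and undoing the column stacking recovers $\delta\cdot\mathbf{a}^T$.

```python
import numpy as np

# vec(.) is column stacking throughout (numpy order='F').
m, n = 3, 4
rng = np.random.default_rng(0)
delta = rng.standard_normal(m)      # dJ/dz, treated as a 1-D array
a = rng.standard_normal(n)
W = rng.standard_normal((m, n))

K = np.kron(a[None, :], np.eye(m))  # a^T kron I_m, shape m x (m*n)
# Identity used in the question: vec(W a) = (a^T kron I_m) vec(W)
assert np.allclose(K @ W.flatten(order='F'), W @ a)

g = delta @ K                       # dJ/dvec(W), length m*n
G = g.reshape((m, n), order='F')    # undo the column stacking
assert np.allclose(G, np.outer(delta, a))  # recovers delta a^T
print("column un-stacking recovers delta a^T")
```
|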
I encountered a problem as follows:
Find a $3\times 3$ real matrix $A$ such that
$$A^4=\left(\begin{array}{ccc}3&0&0\\0&3&1\\0&0&0\end{array}\right)$$
Well, this problem is not difficult: one can first find $B=\left(\begin{array}{ccc}\sqrt3&0&0\\0&\sqrt3&x\\0&0&0\end{array}\right)$ such that $B^2=\left(\begin{array}{ccc}3&0&0\\0&3&1\\0&0&0\end{array}\right)$, and then repeat the idea to find $A$ with $A^2=B$.
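A quick numerical check of this construction (added; the values $x=3^{-1/2}$ and $y=3^{-3/4}$ are worked out here, not given in the original):

```python
import numpy as np

# B^2 = X forces x = 3**-0.5; A^2 = B then forces y = 3**-0.75.
X = np.array([[3, 0, 0], [0, 3, 1], [0, 0, 0]], dtype=float)
x = 3 ** -0.5
B = np.array([[3 ** 0.5, 0, 0], [0, 3 ** 0.5, x], [0, 0, 0]])
y = 3 ** -0.75
A = np.array([[3 ** 0.25, 0, 0], [0, 3 ** 0.25, y], [0, 0, 0]])

assert np.allclose(B @ B, X)
assert np.allclose(np.linalg.matrix_power(A, 4), X)
print("A^4 == X verified")
```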
My problem is:
Let $m,n$ be two positive integers. For which $n\times n$ real matrices $X$ does there exist a real matrix $A$ such that $A^m=X$? Is there a general method or theorem characterizing all such $X$ and $A$?
Maybe there does not exist a general answer; then, how about $n=3$ or $4$?
Thanks a lot! |
[I'm sorry, I've already posted the same question in the physics community, but I haven't received an answer yet.]
I'm approaching the study of Bell's inequalities, and I understood the reasoning behind Bell's theorem (ON THE EINSTEIN PODOLSKY ROSEN PARADOX (PDF)) and how the postulate of locality is assumed at the start of the demonstration.
However, I find it problematic to arrive at the equivalence $$ E(\vec{a},\vec{b}) = \int_{\Lambda}d\lambda\, \rho(\lambda)A(\vec{a},\lambda)B(\vec{b},\lambda),$$
starting from the point of view expressed by the Clauser and Horne definition of locality.
CH claimed that a system is local if there is a parameter $\lambda$ and joint conditional probabilities that can be written as follows: $$p(a,b|x,y,\lambda) = p(a|x,\lambda)p(b|y,\lambda),$$ and $$p(a,b|x,y) = \int_\Lambda d\lambda\, \rho(\lambda)\, p(a|x,\lambda)p(b|y,\lambda),$$ which makes sense, since it affirms that the probability of obtaining the value $a$ depends only on the measurement $x = \vec{\sigma}\cdot\vec{x} $ and the value of $\lambda$.
However, if I use this expression to write down the expectation value of the products of the two components $\vec{\sigma}\cdot\vec{a}$ and $\vec{\sigma}\cdot\vec{b}$, I obtain as follows:
$$ E (\vec{a},\vec{b}) = \sum_{i,j}a_ib_j\,p(a,b|x,y) = \sum_{i,j}a_ib_j \int_\Lambda d\lambda\, \rho(\lambda)\, p(a|x,\lambda)p(b|y,\lambda) = \int_\Lambda d\lambda\, \rho(\lambda) \Big(\sum_{i}a_i\,p(a|x,\lambda)\Big)\Big(\sum_{j}b_j\,p(b|y,\lambda)\Big), $$ where in the last equality I've used the fact that if the measurements are independent their covariance must be equal to $0$.
At this point, the terms in brackets on the RHS are equal to: $$ \sum_{i}a_i\,p(a|x,\lambda) = E(a,\lambda) \stackrel{?}{=} A(\vec{a},\lambda), \qquad \sum_{j}b_j\,p(b|y,\lambda) = E(b,\lambda) \stackrel{?}{=} B(\vec{b},\lambda).$$
That is not the equivalence that I want to find.
In fact in the RHS of the first equation $A(\vec{a},\lambda)$ is, according to Bell original article, the result of measure $\vec{\sigma}\cdot\vec{a}$, and fixing both $\vec{a}$ and $\lambda$ it can assume only the values of $\pm1$. (The same is applied for $B(\vec{b},\lambda)$.)
Does any of you know where I fail? How can I obtain the original equivalence (which is then proved to be violated in the case of an entangled system) starting from the CH definition of locality?
Edit #1:
I've noted that I obtain the desired equivalence only if I assume that $p(a,b|x,y,\lambda) = E(\vec{a},\vec{b})$, but is that possible? How can a conditional probability be linked to the mean value of the product of two components?
Edit #2:
Surfing the internet I found an article (https://arxiv.org/abs/1709.04260, page 2, right at the top) which reports the same CH local condition (to be accurate, the article presents the discrete version) and then affirms that:
"The central realization of Bell's theorem is the fact that there are quantum correlations obtained by local measurements ($M_a^x$ and $M_b^y$) on distant parts of a joint entangled state $\varrho$, that according to quantum theory are described as: $$p_{Q}(a,b|x,y) = \operatorname{Tr}\big(\varrho\,(M_a^x\otimes M_b^y)\big), $$ and cannot be decomposed in the LHV form (i.e. the CH condition for locality)."
So why is $p_Q(a,b|x,y)$ seen as a measure of quantum correlation (which by definition is the mean of the product of the possible outputs)? Isn't it a joint probability distribution (as stated when obtaining the LHV form)? Is there a link between the classical correlation $E(\vec{a},\vec{b})$ and the joint probability distribution $p(a,b|x,y,\lambda)$? |
Last time we took a peek at a 'co-design diagram':
This describes a big complicated feasibility relation built from smaller ones - the little boxes - in various ways. Now let me start explaining those various ways!
First of all, remember that a feasibility relation \(\Phi \colon X \nrightarrow Y \) from a preorder \(X\) to a preorder \(Y\) is a monotone function
$$ \Phi \colon X^{\text{op}} \times Y \to \lbrace \text{true}, \text{false} \rbrace . $$ In collaborative design we interpret \(\Phi(x,y) = \text{true}\) to mean "we can meet the requirements \(x\) given the resources \(y\)". In a codesign diagram we can draw \(\Phi\) as a box with one wire coming in at left labelled \(X\), and one wire going out at right labelled \(Y\):
Here are a few easy examples:
Puzzle 204. Suppose you are trying to buy a plane ticket, and the cheapest available ticket is $500. Describe this using a feasibility relation \(\Phi : \textbf{Bool} \nrightarrow [0,\infty) \) where we make \( [0,\infty) \), the set of nonnegative real numbers, into a poset with its usual ordering \(\le\). Puzzle 205. Suppose you are trying to buy either one or two loaves of bread - or perhaps none. Suppose bread costs $2 per loaf. Describe this using a feasibility relation \(\Psi : \lbrace 0,1,2\rbrace \nrightarrow [0,\infty) \). Here we make \( \lbrace 0,1,2\rbrace \) into a poset with its usual ordering. Puzzle 206. Suppose you are trying to feed hungry children with the loaves of bread you bought in the previous puzzle, and you can feed at most three children with each loaf of bread. Describe this using a feasibility relation \(\Phi : \mathbb{N} \nrightarrow \lbrace 0,1,2\rbrace \). Here \(\mathbb{N}\) is the set of natural numbers \( \lbrace 0,1,2,3,\dots \rbrace \) with its usual ordering.
Second, remember from Lecture 58 that we can compose feasibility relations \(\Phi \colon X \nrightarrow Y\) and \(\Psi \colon Y \nrightarrow Z\) to get a feasibility relation \(\Psi \Phi \colon X \nrightarrow Z\). We draw this as follows:
The box on the outside helps us think of \(\Psi\Phi\) as a single thing, but we could also leave it out.
The idea in codesign is that this describes two systems or processes stuck together, with the second providing the resources required for the first. Let's look at an easy example:
Puzzle 207. Suppose you buy loaves of bread and then use them to feed hungry children. Compose the feasibility relation \(\Psi : \lbrace 0,1,2\rbrace \nrightarrow [0,\infty) \) from Puzzle 205 and the feasibility relation \(\Phi : \mathbb{N} \nrightarrow \lbrace 0,1,2\rbrace \) from Puzzle 206 to get a feasibility relation \(\Psi \Phi : \mathbb{N} \nrightarrow [0,\infty) \) describing how many children you can feed for a certain amount of money (given the fact that you plan to buy at most two loaves).
Third, if we have a bunch of preorders \(X_1, \dots, X_m \) and \(Y_1, \dots, Y_n\), their products \(X_1 \times \cdots \times X_m\) and \(Y_1 \times \cdots \times Y_n \) are also preorders, so we can talk about a feasibility relation
$$ \Phi \colon X_1 \times \cdots \times X_m \nrightarrow Y_1 \times \cdots \times Y_n $$ We draw this as a box with a bunch of wires going in and a bunch going out, as follows:
The idea in codesign is that this describes a situation where a bunch of resources \(Y_1, \dots, Y_n\) are needed to meet a bunch of requirements \(X_1, \dots, X_m\).
Puzzle 208. Suppose you have some slices of bread, some slices of cheese and some slices of ham. You are trying to make sandwiches. You can make a cheese sandwich with two slices of bread and one slice of cheese. You can make a ham sandwich with two slices of bread and one slice of ham. Describe the feasibility relation \(\Theta : \mathbb{N} \times \mathbb{N} \nrightarrow \mathbb{N} \times \mathbb{N} \times \mathbb{N} \) where \(\Theta(m,n,i,j,k) = \text{true}\) if you can make \(m\) cheese sandwiches and \(n\) ham sandwiches from \(i\) slices of bread, \(j\) slices of cheese and \(k\) slices of ham.
Fourth, if we have feasibility relations \(\Phi \colon X \nrightarrow Y\) and \(\Psi \colon X' \nrightarrow Y'\), we can define a new feasibility relation
$$ \Phi \otimes \Psi \colon X \times X' \nrightarrow Y \times Y' $$ given by
$$ (\Phi \otimes \Psi)((x,x'),(y,y')) = \Phi(x,y) \wedge \Psi(x',y') $$ We call this way of combining feasibility relations tensoring, and we draw \(\Phi \otimes \Psi\) as follows:
This describes a situation where we can meet the requirements \((x,x')\) given resources \( (y,y') \) iff we can meet requirement \(x\) given \(y\) and meet requirement \(x'\) given \(y'\).
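Here is a small sketch of tensoring as code (added; the egg and toast relations are simplifying assumptions of mine, one raw egg per fried egg and one slice per toast):

```python
# Feasibility relations as boolean predicates; tensoring pairs them up.
def tensor(Phi, Psi):
    """(Phi ⊗ Psi)((x, x'), (y, y')) = Phi(x, y) and Psi(x', y')."""
    return lambda xx, yy: Phi(xx[0], yy[0]) and Psi(xx[1], yy[1])

fry   = lambda m, e: m <= e   # can fry m eggs given e raw eggs
toast = lambda n, s: n <= s   # can toast n slices given s slices
both = tensor(fry, toast)
print(both((2, 3), (2, 5)))   # True: 2 eggs from 2, 3 toasts from 5
print(both((2, 3), (1, 5)))   # False: only 1 egg available
```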
Here is a very simple example:
Puzzle 209. Suppose you are trying to fry some eggs and also toast some slices of bread. Describe each process separately as a feasibility relation from \(\mathbb{N}\) to \(\mathbb{N}\) and then tensor these relations. What is the result? Puzzle 210. Show that \(\Phi \otimes \Psi\) is really a feasibility relation if \(\Phi\) and \(\Psi\) are feasibility relations. Puzzle 211. What general mathematical result is Puzzle 209 an example of? Puzzle 212. We can get a feasibility relation by taking either the companion or the conjoint of a monotone map, thanks to the ideas in the puzzles of Lecture 65. Which of the feasibility relations in this lecture's puzzles are companions or conjoints? |
This question is experimentally accessible, despite the feebleness of the weak interaction, because the strong and electromagnetic interactions are symmetric under parity transformations and the weak interaction is not.
The contribution to the binding energy is small enough that it's not a good way to think of things. Better is to continue the process of trying to describe nuclear energy eigenstates as linear combinations of different spin-orbit states. For instance, the deuteron ground state has isospin zero and spin, parity $J^P=1^+$, and so must be a linear combination of the even-$L$ spin triplets $\left|{}^3S_1{}^{T=0}\right>$ and $\left|{}^3D_1{}^{T=0}\right>$; the d-wave component famously contributes about 4% of the wavefunction and was the first evidence for the tensor nature of the nuclear force. But because the weak interaction contributes to the nuclear interaction, the ground state isn't an
exact eigenstate of the parity operator (or, for that matter, of isospin) and there's a little bit of p-wave mixed in:$$\left|\text{deuteron}\right>=\sqrt{0.96}\left|{}^3S_1{}^{T=0}\right>+\sqrt{0.04}\left|{}^3D_1{}^{T=0}\right>+\epsilon_0\left|{}^3P_1{}^{T=0}\right>+\epsilon_1\left|{}^1P_1{}^{T=1}\right>$$
In the formation of deuterium by neutron capture on hydrogen, you get interference between parity-allowed capture to the $S$- and $D$-wave states and parity-forbidden capture to the $P$-wave states. These interferences manifest as asymmetries or spontaneous polarizations in the photons emitted during capture which are more or less linear in the amount of $P$-wave mixing; typical asymmetries are a few parts per billion.
In heavier nuclei (e.g. helium & beyond) you lose the luxury of a ground-state wavefunction which can be described in a paragraph, or even at all. However, a perturbation-theory way of describing the influence of the weak interaction is to say that a particular physical eigenstate with, say, positive parity $\left|\psi^+_\text{physical}\right>$ will be
mostly given by a strong-force eigenstate with definite parity, but contain contributions from nearby opposite-parity states due to the weak interaction:$$\left| \psi^+_\text{physical} \right>=\left| \psi^+ \right>+\sum_i\left| \psi^-_i \right>\frac{\left< \psi^-_i \middle|H_\text{weak}\middle| \psi^+ \right>}{E_i - E_+}$$
In heavy nuclei with a dense forest of excited states, you sometimes find same-spin, opposite-parity states which have very different lifetimes and very similar energies; these states are prime candidates to exhibit parity mixing due to the weak interaction.There's a famous excitation in lanthanum which decays by emitting photons with a 10% parity-forbidden directional asymmetry.
Microscopically, your other answers are correct that the nucleus is too large and the overlap between nucleons too small for appreciable exchange of $W$ and $Z$ bosons. But you can of course say the same thing about nucleons and exchange of gluons. The effective theory of the weak interaction between nucleons models the nuclear force as an exchange of strong mesons (the $\pi,\rho,\omega$) where each nucleon-nucleon-meson vertex with a given set of quantum numbers has a particular parity-nonconserving amplitude. (There was some effort a few years ago to move into the twenty-first century and come up with an "effective field theory" which described the nucleon-nucleon weak interaction without mesons; a big pile of work seems to have produced a one-to-one relationship between the coupling constants in the modern effective field theory and the coupling constants in the old meson theory.)
This has been a pretty long-winded preparation for my answer to your question: the contribution of the weak interaction to the energy of any particular nuclear state is pretty small, for the same reason that the Coulomb-force contribution to the energies of light nuclei can generally be neglected. What's more interesting is to try and use the short-range nature of the weak interaction to peek at high-energy physics hiding inside of stable nuclei. |
From Gauss' law, one can easily derive that the electric field at some distance from an infinite sheet of charge density $\sigma$ is $E=\frac{\sigma}{2\epsilon _0}$. Now when one considers a conductor instead, because there is no electric field within the body of a conductor, the electric field somewhere above the surface of this infinite, flat conductor is $E=\frac{\sigma}{\epsilon _0}$. Now I am struggling to see which formula you can use in a physical situation.
In particular, I was considering a capacitor where the charge on each plate was $Q$. The well-known result says that the electric field within the capacitor is $E=\sigma / \epsilon _0$, and since the electric fields from each plate add, this must mean that each plate separately is being considered as in the first case: a sheet of charge density $\sigma$ and not a conductor itself. I do not see why we do not consider each plate as a conductor.
I am trying to reason this out by considering the fact that, since the plates are connected by a wire, technically to think of this as a single conductor I would have to think about the whole system. Then both of the plates would belong to the system/conductor, so above the surface of this whole system I could say that the electric field is $E=\sigma / \epsilon _0$, but to take both sheets into account I have to look between the plates.
But this argument is really unconvincing to me. I was wondering if anyone has any better explanation as to the lack of a factor of two in the expression for the electric field. |
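A short superposition check may make the bookkeeping explicit (this is my own sketch of the standard argument, not part of the original question). Treating each plate as a charged sheet that contributes $\sigma/2\epsilon_0$ on either side of itself:
$$E_\text{between} = \frac{\sigma}{2\epsilon_0} + \frac{\sigma}{2\epsilon_0} = \frac{\sigma}{\epsilon_0}, \qquad E_\text{outside} = \frac{\sigma}{2\epsilon_0} - \frac{\sigma}{2\epsilon_0} = 0,$$
so the conductor-surface value $E=\sigma/\epsilon_0$ between the plates emerges from two sheet contributions rather than from either plate alone.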
Computer Science > Machine Learning
Title: Finite Precision Stochastic Optimization -- Accounting for the Bias
(Submitted on 22 Aug 2019 (v1), last revised 26 Aug 2019 (this version, v2))
Abstract: We consider first order stochastic optimization where the oracle must quantize each subgradient estimate to $r$ bits. We treat two oracle models: the first where the Euclidean norm of the oracle output is almost surely bounded and the second where it is mean square bounded. Prior work in this setting assumes the availability of unbiased quantizers. While this assumption is valid in the case of almost surely bounded oracles, it does not hold true for the standard setting of mean square bounded oracles, and the bias can dramatically affect the convergence rate. We analyze the performance of standard quantizers from prior work in combination with projected stochastic gradient descent for both these oracle models and present two new adaptive quantizers that outperform the existing ones. Specifically, for almost surely bounded oracles, we first establish a lower bound for the precision needed to attain the standard convergence rate of $T^{-\frac 12}$ for optimizing convex functions over a $d$-dimensional domain. Our proposed Rotated Adaptive Tetra-iterated Quantizer (RATQ) is merely a factor of $O(\log \log \log^\ast d)$ away from this lower bound. For mean square bounded oracles, we show that a state-of-the-art Rotated Uniform Quantizer (RUQ) from prior work would need at least $\Omega(d\log T)$ bits to achieve the convergence rate of $T^{-\frac 12}$, using any optimization protocol. However, our proposed Rotated Adaptive Quantizer (RAQ) outperforms RUQ in this setting and attains a convergence rate of $T^{-\frac 12}$ using a precision of only $O(d\log\log T)$. For mean square bounded oracles, in the communication-starved regime where the precision $r$ is fixed to a constant independent of $T$, we show that RUQ cannot attain a convergence rate better than $T^{-\frac 14}$ for any $r$, while RAQ can attain convergence at rates arbitrarily close to $T^{-\frac 12}$ as $r$ increases.
Submission history: From: Himanshu Tyagi. [v1] Thu, 22 Aug 2019 04:57:22 GMT (49kb); [v2] Mon, 26 Aug 2019 04:56:31 GMT (49kb) |
An example of methylation analysis with simulated datasets
Part 2: Potential DMPs from the methylation signal
Methylation analysis with Methyl-IT is illustrated on simulated datasets of methylated and unmethylated read counts with relatively high average methylation levels: 0.15 and 0.286 for the control and treatment groups, respectively. In this part, potential differentially methylated positions (DMPs) are estimated following different approaches.

1. Background
Only a signal detection approach can detect real DMPs with high probability. Any statistical test not based on signal detection (e.g. Fisher's exact test) requires further analysis to distinguish the DMPs that occur naturally in the control group from those induced by the treatment. The analysis here is a continuation of Part 1.
2. Potential DMPs from the methylation signal using empirical distribution
As suggested by the empirical density graphics (above), the critical values $H_{\alpha=0.05}$ and $TV_{d_{\alpha=0.05}}$ can be used as cutpoints to select potential DMPs. After setting dist.name = "ECDF" and tv.cut = 0.926 in the Methyl-IT function getPotentialDIMP, potential DMPs are estimated using the empirical cumulative distribution function (ECDF) and the critical value $TV_{d_{\alpha=0.05}}=0.926$.
DMP.ecdf <- getPotentialDIMP(LR = divs, div.col = 9L, tv.cut = 0.926,
                             tv.col = 7, alpha = 0.05, dist.name = "ECDF")
3. Potential DMPs detected with Fisher’s exact test
In Methyl-IT, Fisher's exact test (FT) is implemented in the function FisherTest. In the current case, a pairwise group application of FT to each cytosine site is performed. The differences between the group means of read counts of methylated and unmethylated cytosines at each site are used for testing (pooling.stat = "mean"). Notice that only cytosine sites with critical values $TV_d > 0.926$ are tested (tv.cut = 0.926).
ft <- FisherTest(LR = divs, tv.cut = 0.926, pAdjustMethod = "BH",
                 pooling.stat = "mean", pvalCutOff = 0.05,
                 num.cores = 4L, verbose = FALSE, saveAll = FALSE)
ft.hd <- getPotentialDIMP(LR = ft, div.col = 9L, dist.name = "None",
                          tv.cut = 0.926, tv.col = 7, alpha = 0.05)
There is not a one-to-one mapping between $TV$ and $HD$. However, at each cytosine site $i$, these information divergences satisfy the inequality [1]:
$$TV(p^{tt}_i,p^{ct}_i)\leq \sqrt{2}\,H_d(p^{tt}_i,p^{ct}_i)$$
where $H_d(p^{tt}_i,p^{ct}_i) = \sqrt{\frac{H(p^{tt}_i,p^{ct}_i)}{w}}$ is the Hellinger distance and $H(p^{tt}_i,p^{ct}_i)$ is given by Eq. 1 in Part 1.
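As a quick numerical illustration (my own sketch, not part of the original post): using the standard Hellinger distance $H_d(p,q)=\sqrt{\tfrac12\sum_k(\sqrt{p_k}-\sqrt{q_k})^2}$ in place of the Methyl-IT normalization, the inequality can be checked in R for a pair of hypothetical site-level methylation distributions:

p <- c(0.15, 0.85)    # hypothetical control: (methylated, unmethylated)
q <- c(0.286, 0.714)  # hypothetical treatment
tv <- 0.5 * sum(abs(p - q))                  # total variation distance
hd <- sqrt(0.5 * sum((sqrt(p) - sqrt(q))^2)) # standard Hellinger distance
tv <= sqrt(2) * hd                           # TRUE: TV <= sqrt(2) * H_d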
So, potential DMPs detected with FT can be constrained with the critical value $H^{TT}_{\alpha=0.05}\geq114.5$.

4. Potential DMPs detected with Weibull 2-parameter model
Potential DMPs can be estimated using the critical values derived from the fitted Weibull 2-parameter models, which are obtained after the non-linear fit of the theoretical model on the genome-wide $HD$ values for each individual sample using the Methyl-IT function nonlinearFitDist [2]. As before, only cytosine sites with critical values $TV>0.926$ are considered DMPs. Notice that it is always possible to use other values of $HD$ and $TV$ as critical values, but whatever value is chosen will affect the final accuracy of the classification of DMPs into two groups, DMPs from control and DMPs from treatment (see below). So it is important to make a good choice of the critical values.
nlms.wb <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L)
# Potential DMPs from 'Weibull2P' model
DMPs.wb <- getPotentialDIMP(LR = divs, nlms = nlms.wb, div.col = 9L,
                            tv.cut = 0.926, tv.col = 7, alpha = 0.05,
                            dist.name = "Weibull2P")
nlms.wb$T1
##         Estimate   Std. Error  t value Pr(>|t|)      Adj.R.Square
## shape  0.5413711 0.0003964435 1365.570        0 0.991666592250838
## scale 19.4097502 0.0155797315 1245.833        0
##                     rho       R.Cross.val              DEV
## shape 0.991666258901194 0.996595712743823 34.7217494754823
## scale
##                     AIC               BIC    COV.shape     COV.scale
## shape -221720.747067975 -221694.287733122 1.571674e-07 -1.165129e-06
## scale                                    -1.165129e-06  2.427280e-04
##       COV.mu     n
## shape     NA 50000
## scale     NA 50000
5. Potential DMPs detected with Gamma 2-parameter model

As in the case of the Weibull 2-parameter model, potential DMPs can be estimated using the critical values derived from the fitted Gamma 2-parameter models; only cytosine sites with critical values $TV_d > 0.926$ are considered DMPs.
nlms.g2p <- nonlinearFitDist(divs, column = 9L, verbose = FALSE,
                             num.cores = 6L, dist.name = "Gamma2P")
# Potential DMPs from 'Gamma2P' model
DMPs.g2p <- getPotentialDIMP(LR = divs, nlms = nlms.g2p, div.col = 9L,
                             tv.cut = 0.926, tv.col = 7, alpha = 0.05,
                             dist.name = "Gamma2P")
nlms.g2p$T1
##         Estimate   Std. Error  t value Pr(>|t|)      Adj.R.Square
## shape  0.3866249 0.0001480347 2611.717        0 0.999998194156282
## scale 76.1580083 0.0642929555 1184.547        0
##                     rho       R.Cross.val                 DEV
## shape 0.999998194084045 0.998331895911125 0.00752417919133131
## scale
##                    AIC               BIC    COV.alpha     COV.scale
## shape -265404.29138371 -265369.012270572 2.191429e-08 -8.581717e-06
## scale                                   -8.581717e-06  4.133584e-03
##       COV.mu    df
## shape     NA 49998
## scale     NA 49998
Summary table:
data.frame(ft = unlist(lapply(ft, length)),
           ft.hd = unlist(lapply(ft.hd, length)),
           ecdf = unlist(lapply(DMP.ecdf, length)),
           Weibull = unlist(lapply(DMPs.wb, length)),
           Gamma = unlist(lapply(DMPs.g2p, length)))
##      ft ft.hd ecdf Weibull Gamma
## C1 1253   773   63     756   935
## C2 1221   776   62     755   925
## C3 1280   786   64     768   947
## T1 2504  1554  126     924  1346
## T2 2464  1532  124     942  1379
## T3 2408  1477  121     979  1354
6. Density graphic with a new critical value
The graphics for the empirical (black) and Gamma (blue) density distributions of the Hellinger divergence of methylation levels for sample T1 are shown below. The 2-parameter gamma model is built using the parameters estimated in the non-linear fit of the $H$ values from sample T1. The critical value estimated from the 2-parameter gamma distribution, $H^{\Gamma}_{\alpha=0.05}=124$, is more 'conservative' than the critical value based on the empirical distribution, $H^{Emp}_{\alpha=0.05}=114.5$. That is, according to the empirical distribution, for a methylation change to be considered a signal its $H$ value must satisfy $H\geq114.5$, while according to the 2-parameter gamma model any cytosine carrying a signal must satisfy $H\geq124$.
suppressMessages(library(ggplot2))
# Some information for graphic
dt <- data[data$sample == "T1", ]
coef <- nlms.g2p$T1$Estimate  # Coefficients from the non-linear fit
dgamma2p <- function(x) dgamma(x, shape = coef[1], scale = coef[2])
qgamma2p <- function(x) qgamma(x, shape = coef[1], scale = coef[2])
# 95% quantiles
q95 <- qgamma2p(0.95)                    # Gamma model based quantile
emp.q95 <- quantile(divs$T1$hdiv, 0.95)  # Empirical quantile
# Density plot with ggplot
ggplot(dt, aes(x = HD)) +
  geom_density(alpha = 0.05, bw = 0.2, position = "identity",
               na.rm = TRUE, size = 0.4) +
  xlim(c(0, 150)) +
  stat_function(fun = dgamma2p, colour = "blue") +
  xlab(expression(bolditalic("Hellinger divergence (HD)"))) +
  ylab(expression(bolditalic("Density"))) +
  ggtitle("Empirical and Gamma densities distributions of Hellinger divergence (T1)") +
  geom_vline(xintercept = emp.q95, color = "black", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = emp.q95 - 20, y = 0.16, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Emp==114.5)',
           family = "serif", color = "black", parse = TRUE) +
  geom_vline(xintercept = q95, color = "blue", linetype = "dashed", size = 0.4) +
  annotate(geom = "text", x = q95 + 9, y = 0.14, size = 5,
           label = 'bolditalic(HD[alpha == 0.05]^Gamma==124)',
           family = "serif", color = "blue", parse = TRUE) +
  theme(axis.text.x = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(1, 0, 1, 0, unit = "pt")),
        axis.text.y = element_text(face = "bold", size = 12, color = "black",
                                   margin = margin(0, 0.1, 0, 0, unit = "mm")),
        axis.title.x = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        axis.title.y = element_text(face = "bold", size = 13, color = "black", vjust = 0),
        legend.title = element_blank(),
        legend.margin = margin(c(0.3, 0.3, 0.3, 0.3), unit = 'mm'),
        legend.box.spacing = unit(0.5, "lines"),
        legend.text = element_text(face = "bold", size = 12, family = "serif"))
References
[1] Steerneman, Ton, K. Behnen, G. Neuhaus, Julius R. Blum, Pramod K. Pathak, Wassily Hoeffding, J. Wolfowitz, et al. 1983. "On the total variation and Hellinger distance between signed measures; an application to product measures." Proceedings of the American Mathematical Society 88 (4): 684–84. doi:10.1090/S0002-9939-1983-0702299-0.
[2] Sanchez, Robersy, and Sally A. Mackenzie. 2016. "Information Thermodynamics of Cytosine DNA Methylation." Edited by Barbara Bardoni. PLOS ONE 11 (3): e0150427. doi:10.1371/journal.pone.0150427. |
$f:[0,1]\times [0,1]\to\mathbb R$, defined by $$f(x,y)= \begin{cases}1, & y\in\mathbb R\setminus\mathbb Q,\\ 2x, & \text{otherwise.}\end{cases}$$
$1.1$: $\int_0^1f(x,y)dx$ exists for every $y\in[0,1]$ and is equal to $1$.
$1.2$: The iterated integral $\int_0^1(\int_0^1f(x,y)dx)dy$ exists and is $1$.
$1.3$: The double integral $\int_If(x,y)d(x,y)$ does not exist.
I am struggling with solving iterated integrals in general and with this one I don't even know where to start since the values kind of jump from 1 to 2x constantly.
Edit: I got an idea for 1.1: I split into two cases, one for irrational $y$ and one for the rest, giving me $\int_0^1 1\,dx$, which is $1$, and $\int_0^1 2x\,dx$, which is also $1$.
Could someone give me a short explanation about them and some hints on how to approach these exercises? |
I'm working on the book Quantum Effects in Biology by Mohseni et al. My question is however not biology related; it is about a section on quantum master equations in the weak system-bath coupling limit. To set up the problem, we consider the following:
We have a bath of harmonic oscillators $$H_B = \sum_\xi\hbar\omega_\xi( b_\xi^{\dagger}b_\xi+1/2)$$ and a linear system bath interaction $$H_{SB} = \sum_{l,\xi}\hbar\omega_\xi g_{\xi,l}(b_\xi^{\dagger}+b_\xi)S_l$$ where $S_l$ are some system operators, the exact form of which will not prove to be relevant.
Now, making some additional assumptions about the bosonic bath, one can significantly simplify his/her master equation by putting all of the information about the bath into a correlation function of the form $$C_{l l'}(t-t') = \sum_{\xi}{\omega^2_\xi g_{\xi,l}g_{\xi,l'}\mathrm{Tr}\left[(b_\xi e^{-i\omega_\xi t}+b^\dagger_\xi e^{i\omega_\xi t})(b_\xi e^{-i\omega_\xi t'}+b^\dagger_\xi e^{i\omega_\xi t'})\rho_B(0)\right]}$$ where $$\rho_B(0) = \frac{e^{-\beta H_b}}{\mathrm{Tr}(e^{-\beta H_b})}.$$
I mostly understand how this function is constructed, but the interested reader can find more on page 25 of the book.
My question however comes down to the next part. Without any calculations shown in between, the author equates the above to $$\int_0^\infty{d\omega S_{l l'}(\omega)\left[\coth(\beta \hbar \omega/2)\cos(\omega (t-t')) - i \sin(\omega (t-t'))\right]}$$ with $$S_{l l'}(\omega) = \sum_\xi{g_{\xi,l}g_{\xi,l'}\delta(\omega-\omega_\xi)\omega^2}.$$
What I am trying to do is first to reproduce the derivation. It is very unclear to me how one would continue. One can work out the product of the creation/annihilation operators to get four terms, but these then all have to act upon $e^{-\beta H_b}$, which is my first hindrance. How does this work, with the exponential containing creation/annihilation operators itself? And subsequently, one needs to go through with the trace and perform a bunch of summations. In between, the spectral density gets introduced too, although I would say that intuitively this part makes sense.
So primarily I am looking for some help in getting started with the first part, working out the annihilation/creation operations onto the exponential term, and then taking the trace.
Finally I should say that I am actually interested in re-doing the derivation for $t' = t$. The reason I am not starting with this is because it is not clear to me that this is possible; plugging this into the result definitely does not give anything interesting I'd argue. |
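For reference, here is a sketch of the standard bosonic thermal averages that drive the reduction (my own summary of textbook identities, not the book's text; it assumes the Gibbs state $\rho_B(0)$ above):
$$\langle b_\xi b_{\xi'}\rangle = \langle b^\dagger_\xi b^\dagger_{\xi'}\rangle = 0,\qquad \langle b^\dagger_\xi b_{\xi'}\rangle = \delta_{\xi\xi'}\,\bar n(\omega_\xi),\qquad \bar n(\omega)=\frac{1}{e^{\beta\hbar\omega}-1},$$
so each mode's correlator reduces to $(\bar n+1)e^{-i\omega\tau}+\bar n\, e^{i\omega\tau}$ with $\tau=t-t'$, and using $2\bar n(\omega)+1=\coth(\beta\hbar\omega/2)$ this equals $\coth(\beta\hbar\omega/2)\cos\omega\tau - i\sin\omega\tau$, which is exactly the bracket in the quoted result once the sum over $\xi$ is traded for an integral against $S_{ll'}(\omega)$.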
Say I simulate from a normal distribution $N(\mu, \sigma^2)$ 10,000 times using a couple of different methods.
I can calculate the standard error as $s/\sqrt{10000}$ where $s$ is the sample standard deviation.
What I don't understand is this: $s$ is the standard deviation of the sample, and the sample is generated by simulating a normal distribution. Hence, by that very fact, the sample standard deviation of all simulations will be equal to $\sigma$. So, for all my simulation methods, the standard error will be the same: roughly $\sigma /\sqrt{10000}$. So what is the point of calculating it? It is already known beforehand and is constant for all my simulations, as long as I am simulating the same distribution.
So what is the point of calculating this number? Let me give you an example.
Say you write in R:
sample_1 <- rnorm(10000, mu, sigma)
u <- runif(10000)
sample_2 <- qnorm(u, mean = mu, sd = sigma)
Now, both sample_1 and sample_2 are 10,000 simulations from a $N(\mu, \sigma^2)$ distribution. In either case, the standard error will roughly be $\sigma/\sqrt{10000}$. This number seems pointless to me. It tells me nothing about the relative performance of sample_1 vs sample_2; the standard error is the same? |
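For concreteness, the number in question is usually computed like this (a minimal sketch; mu and sigma are hypothetical placeholders, not values from the question):

mu <- 5; sigma <- 2              # hypothetical parameters
x  <- rnorm(10000, mu, sigma)    # one simulation run
se <- sd(x) / sqrt(length(x))    # Monte Carlo standard error of the sample mean
se                               # roughly sigma / sqrt(10000) = 0.02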
Advances in Mathematics of Communications
May 2015, Volume 9, Issue 2
ISSN: 1930-5346; eISSN: 1930-5338
Abstract:
In this paper a wide family of identifying codes over regular Cayley graphs of degree four which are built over finite Abelian groups is presented. Some of the codes in this construction are also perfect. The graphs considered include some well-known graphs such as tori, twisted tori and Kronecker products of two cycles. Therefore, the codes can be used for identification in these graphs. Finally, an example of how these codes can be applied for adaptive identification over these graphs is presented.
Abstract:
In this paper, a computation of the input-redundancy weight enumerator is presented. This is used to improve the theoretical approximation of the information-bit error rate, in terms of the channel bit-error rate, in a block transmission through a discrete memoryless channel. Since a bounded distance reproducing encoder is assumed, we introduce the here-called false positive, a decoding failure with no information-symbol error, and we estimate the probability that this event occurs. As a consequence, a new performance analysis of an MDS code is proposed.
Abstract:
We give deterministic polynomial time algorithms for two different decision versions of the modular inversion hidden number problem introduced by D. Boneh, S. Halevi and N. A. Howgrave-Graham in 2001. For example, for one of our algorithms we need to be given about $1/2$ of the bits of each inversion, while for the computational version the best known algorithm requires about $2/3$ of the bits and is probabilistic.
Abstract:
Cyclic orbit codes are constant dimension subspace codes that arise as the orbit of a cyclic subgroup of the general linear group acting on subspaces in the given ambient space. With the aid of the largest subfield over which the given subspace is a vector space, the cardinality of the orbit code can be determined, and estimates for its distance can be found. This subfield is closely related to the stabilizer of the generating subspace. Finally, with a linkage construction, larger and longer constant dimension codes can be derived from cyclic orbit codes without compromising the distance.
Abstract:
A class of quaternary sequences $\mathbb{S}_{\lambda}$ has been proven to be optimal for some special values of $\lambda$. In this note, $\mathbb{S}_{\lambda}$ is investigated for all $\lambda$ by virtue of the $\mathbb{Z}_4$-valued quadratic forms over Galois rings. As a consequence, a new class of quaternary sequences with low correlation is obtained and the correlation distribution is completely determined. It also turns out that the known optimal quaternary sequences $\mathbb{S}_{\lambda}$ for particular $\lambda$ can be easily obtained from our approach.
Abstract:
We examine the binary codes $C_2(A_i+I)$ from matrices $A_i+I$, where $A_i$ is an adjacency matrix of a uniform subset graph $\Gamma(n,3,i)$ of $3$-subsets of a set $\Omega$ of size $n$, with adjacency defined by subsets meeting in $i$ elements of $\Omega$, where $0 \le i \le 2$. Most of the main parameters are obtained; the hulls, the duals, and other subcodes of the $C_2(A_i+I)$ are also examined. We obtain partial PD-sets for some of the codes, for permutation decoding.
Abstract:
In this paper infinite families of linear binary nested completely regular codes are constructed. They have covering radius $\rho$ equal to $3$ or $4,$ and are $1/2^i$th parts, for $i\in\{1,\ldots,u\}$ of binary (respectively, extended binary) Hamming codes of length $n=2^m-1$ (respectively, $2^m$), where $m=2u$. In the usual way, i.e., as coset graphs, infinite families of embedded distance-regular coset graphs of diameter $D$ equal to $3$ or $4$ are constructed. This gives antipodal covers of some distance-regular and distance-transitive graphs. In some cases, the constructed codes are also completely transitive and the corresponding coset graphs are distance-transitive.
Abstract:
In the papers by Alvarez et al. and Pathak and Sanghi a non-commutative public key exchange is described. A similar version of it has also been patented (US7184551). In this paper we present a polynomial time attack that breaks the variants of the protocol presented in the two papers. Moreover, we show that breaking the patented cryptosystem US7184551 can be easily reduced to factoring. We also give some examples to show how efficiently the attack works.
|
Let $i : H \to G$ be the inclusion of a subgroup of finite index. The transfer map is a special homomorphism $V(i) : G^\mathrm{ab} \to H^\mathrm{ab}$. The usual ad hoc definition uses a set of representatives of $H$ in $G$, and then you have to check that it is independent of this choice and that it is a homomorphism at all. I think this definition is not enlightening at all (although it is, of course, useful for explicit calculations). A better one uses group homology. Namely, for a $G$-module $A$ there is a natural transformation $A_G \to \mathrm{res}^{G}_{H} A_H$, $[a] \mapsto \sum_{Hg \in H\backslash G} [ga]$, which extends to a natural transformation $H_*(G;A) \to H_*(H;\mathrm{res}^{G}_{H} A)$ (usually called corestriction or transfer). Now evaluate at $A = \mathbb{Z}$ and $* = 1$ to get $G^\mathrm{ab} \to H^\mathrm{ab}$. One can then calculate this map using the explicit isomorphisms and homotopy equivalences involved; but now you know by the general theory that it is a well-defined homomorphism.
It also follows directly that the transfer is actually a functor $V : \mathrm{Grp}_{mf} \to \mathrm{Ab}^{\mathrm{op}}$ with object function $G \mapsto G^{\mathrm{ab}}$, where $\mathrm{Grp}_{mf}$ is the category whose objects are groups and whose morphisms are monomorphisms of finite index.
I would like to know if there is an even more "abstract" definition. To be more precise: Is there a categorical characterization of the functor $V$ which only uses the adjunction $\mathrm{Grp} \rightleftarrows \mathrm{Ab}$?
Edit: There are many interesting answers so far which give, in fact, very "enlightening" definitions of the transfer. But I would also like to know if there is a pure categorical one, such as the one given by Ralph.
Edit: A very interesting note by Daniel Ferrand is A note on transfer. There a more general statement is proven (even in a topos setting): Let $G$ act freely on a set $X$ such that $X/G$ is finite with at least two elements. Then there is an isomorphism of abelian groups $(\mathrm{Ver},\mathrm{sgn}) : {\mathrm{Aut}_{G}(X)}^{\mathrm{ab}} \cong G^{\mathrm{ab}} \times \mathbb{Z}/2$. It is natural with respect to $G$-isomorphisms. Here again I would like to ask if it is possible to characterize this isomorphism by its properties (instead of writing it down via choices, whose independence has to be shown afterwards).
Proposition 7.1. in this paper includes the interpretation via determinants mentioned by Geoff in his answer, actually something more general: For w.l.o.g. abelian $G$ there is a commutative diagram
$$\begin{matrix} \mathrm{Aut}_{G}(X)^{\mathrm{ab}} & \cong & \mathrm{Aut}_{\mathbb{Z}G}(\mathbb{Z}X)^{\mathrm{ab}} \\ \downarrow & & \downarrow \\ G \times \mathbb{Z}/2 & \rightarrow & (\mathbb{Z} G)^{\times} \end{matrix}$$
Thus we may think of transfer and signature as the embedding of the standard units into the group ring. |
Journal of Industrial & Management Optimization
April 2010, Volume 6, Issue 2
ISSN: 1547-5816; eISSN: 1553-166X
Abstract:
The single machine semi-online scheduling problem with the objective of minimizing total completion time is investigated under the assumption that the ratio of the longest to the shortest processing time is not greater than a constant $\gamma$. A semi-online algorithm is designed and its competitive ratio is proven to be $1+ \frac{\gamma - 1}{1 + \sqrt {1 + \gamma (\gamma - 1)}}$. The competitive analysis method is as follows: it starts from an arbitrary instance and modifies the instance towards the possible structure of the worst-case instance with respect to the given online algorithm. The modification guarantees that the performance ratio does not decrease. Eventually, it arrives at a relatively simple instance with a special structure, whose performance ratio can be directly analyzed and serves as an upper bound on the competitive ratio.
Abstract:
This paper is devoted to the study of a one-sector stochastic growth model with the depreciation factor of the output and with bounded and unbounded utility, in which the shocks are allowed to be bounded or unbounded. Under certain assumptions, the existence of a unique optimal policy function for the model is shown to be true and the existence of an invariant distribution for the output process is confirmed.
Abstract:
This paper proposes a trial-and-error implementation of marginal-cost pricing on transportation networks in the absence of both demand functions and travel time functions. Assuming that the corresponding link flows for given trial tolls are observable and that the approximations of the exact travel time functions are provided, the new trial is obtained via solving a system of equations. The new trial-and-error implementation is proved to be convergent globally under mild assumptions, and its improvements over existing methods are verified by some numerical experiments.
Abstract:
This paper presents for the first time how to easily incorporate FACTS devices in an optimal active power flow model such that an efficient interior-point method may be applied. The optimal active power flow model is based on a network flow approach instead of the traditional nodal formulation, which allows the use of an efficient predictor-corrector interior point method sped up by sparsity exploitation. The mathematical equivalence between the network flow and the nodal models is addressed, as well as the computational advantages of the former considering the solution by interior point methods. The adequacy of the network flow model for representing FACTS devices is presented and illustrated on a small 5-bus system. The model was implemented using Matlab and its performance was evaluated with the 3,397-bus and 4,075-branch Brazilian power system, which shows the robustness and efficiency of the proposed formulation. The numerical results also indicate an efficient tool for optimal active power flow that is suitable for incorporating FACTS devices.
Abstract:
The D-gap function approach has been adopted for solving variational inequality problems. In this paper, we extend the approach to solving equilibrium problems. From the theoretical point of view, we study the convergence and global error bound of a D-gap function based Newton method.
A general equilibrium problem is first formulated as an equivalent unconstrained minimization problem using a new D-gap function. Then the conditions of "strict monotonicity" and "strong monotonicity" for equilibrium problems are introduced. Under the strict monotonicity condition, it is shown that a stationary point of the unconstrained minimization problem provides a solution to the original equilibrium problem. Without the assumption of Lipschitz continuity, we further prove that strong monotonicity condition guarantees the boundedness of the level sets of the new D-gap function and derive error bounds on the level sets. Combining the strict monotonicity and strong monotonicity conditions, we show the existence and uniqueness of a solution to the equilibrium problem, and establish the global convergence property of the proposed algorithm with a global error bound.
Abstract:
In this paper, we study multi-parametric sensitivity analysis for programming problems with a piecewise linear fractional objective function, using the concept of maximum volume in the tolerance region. We construct critical regions (the set of parameter values for which the coefficient matrix of the problem (PLFP) may vary while still retaining the same optimal basis B) for simultaneous and independent perturbations of one row or one column of the constraint matrix in the given problem. Necessary and sufficient conditions are derived to classify perturbation parameters as 'focal' and 'non-focal'. Non-focal parameters can be deleted from the analysis because of their low sensitivity in practice. Theoretical results are illustrated with the help of a numerical example.

Abstract:
Based on the KK smoothing function, we introduce a regularized one-parametric class of smoothing functions and show that it is coercive under suitable assumptions. By making use of the introduced regularized one-parametric class of smoothing functions, we investigate a smoothing Newton algorithm for solving the generalized complementarity problems over symmetric cones (GSCCP), where a nonmonotone line search scheme is used. We show that the algorithm is globally and locally superlinearly convergent under suitable assumptions. The theory of Euclidean Jordan algebras is a basic tool in our analysis.
Abstract:
This paper deals with higher-order sensitivity analysis in nonconvex vector optimization. By virtue of the higher-order adjacent derivatives introduced in (Aubin and Frankowska, Set-Valued Analysis, Birkhäuser, Boston, 1990), relationships between higher-order derivatives of a set-valued map and its profile map are discussed. Some results concerning higher-order sensitivity analysis are obtained in nonconvex vector optimization.
Abstract:
The gap constraint used in A. A. Moreb, "Spline technique for modeling roadway profile to minimize earthwork cost", J. Ind. Man. & Opt., 5(2) (2009), 275-283, introduces unnecessary errors, while the slope constraint may be violated for second- and higher-order splines. In this note we amend the gap constraint, while maintaining the linearity of the model. We also present an improved slope constraint for linear and quadratic splines, and show that it becomes nonlinear for cubic and higher-order splines. The improvements also apply to A. Moreb, M. Aljohani, "Quadratic representation for roadway profile that minimizes earthwork cost", J. Sys. Sci. & Sys. Eng., 13(2) (2004), 245-252.

Abstract:
We study the first-order behavior of the optimal value function in a parametric discrete optimal control problem with linear constraints and a nonconvex cost function. By establishing a new result on the Fréchet subdifferential of optimal value functions of parametric mathematical programming problems, we obtain some formulae on the Fréchet subdifferential of optimal value functions in parametric discrete optimal control problems which complement results due to Kien et al. [3].
Abstract:
We consider the generalized complementarity problem GCP$(f,g)$ when the underlying functions $f$ and $g$ are $H$-differentiable. We describe $H$-differentials of some GCP functions and their merit functions. We give some conditions on the $H$-differentials of the given functions under which minimizing a merit function corresponding to such functions leads to a solution of the generalized complementarity problem. Further, we give some conditions on the functions $f$ and $g$ to get a solution of GCP$(f,g)$ by introducing the concepts of relative monotonicity and P-property and their variants. Our results give a unified and generalized treatment of such results for the nonlinear complementarity problem when the underlying function is $C^1$, semismooth, and locally Lipschitzian.

Abstract:
The traditional economic order quantity (EOQ) and/or economic production quantity (EPQ) have been extensively examined and continually modified so as to accommodate specific business needs and market environments. In this paper, the learning effect of setup costs is incorporated into an inventory replenishment system where the demand is assumed to be deterministically constant in a finite planning horizon. The inventory replenishment system with learning considerations of setup costs is formulated as a mixed-integer cost minimization problem in which the number of replenishments and the replenishment time points in the planning horizon are regarded as decision variables. We first show that the time interval between any two successive replenishments should be equal. Then, the conditions of the optimal solution for the proposed problem are derived. Finally, numerical examples are provided to illustrate the features of the proposed problem.
Abstract:
A two-stage hybrid meta-heuristic for the pickup and delivery vehicle routing problem with time windows and multiple vehicles is considered in this paper. The first stage uses a simulated annealing algorithm to decrease the number of used vehicles; the second stage uses tabu search to decrease the total travel cost. Experimental results show the effectiveness of the algorithm, which has produced many new best solutions on problems with up to 600 customers. In particular, it has improved 45% and 81.7% of the best solutions on the 200- and 600-customer benchmarks, sometimes by as much as 3 vehicles. These results further confirm the benefits of two-stage approaches in vehicle routing problems.
|
On the DNA Computer Binary Code
In any finite set we can define a binary operation and a partial order in different ways. But here, a partial order is defined on the set of four DNA bases in such a manner that a Boolean lattice structure is obtained. A Boolean lattice is an algebraic structure that captures essential properties of both set operations and logic operations. This partial order is defined based on the physico-chemical properties of the DNA bases: the hydrogen bond number and the chemical type, purine {A, G} or pyrimidine {U, C}. This physico-mathematical description permits the study of the genetic information carried by DNA molecules as a computer binary code of zeros (0) and ones (1).

1. Boolean lattice of the four DNA bases
In any four-element Boolean lattice every element is comparable to every other, except two of them that are, nevertheless, complementary. Consequently, to build a four-base Boolean lattice it is necessary for the bases with the same number of hydrogen bonds in the DNA molecule and of different chemical types to be complementary elements in the lattice. In other words, the complementary bases in the DNA molecule (G≡C and A=T, or A=U in mRNA) should be complementary elements in the Boolean lattice. Thus, there are four possible lattices, each one with a different base as the maximum element.

2. Boolean (logic) operations in the set of DNA bases
The Boolean algebra on the set of elements X will be denoted by $(B(X), \vee, \wedge)$. Here the operators $\vee$ and $\wedge$ represent the classical "OR" and "AND", term by term. From the Boolean algebra definition it follows that this structure is (among other things) a partially ordered set in which any two elements $\alpha$ and $\beta$ have upper and lower bounds. In particular, the greatest lower bound of the elements $\alpha$ and $\beta$ is the element $\alpha\wedge\beta$ and the least upper bound is the element $\alpha\vee\beta$. This equivalent partially ordered set is called a Boolean lattice. In every Boolean algebra $(B(X), \vee, \wedge)$, for any two elements $\alpha,\beta \in X$ we have $\alpha \le \beta$ if and only if $\neg\alpha\vee\beta=1$, where the symbol "$\neg$" stands for logical negation. If the last equality holds, then it is said that $\beta$ is deduced from $\alpha$. Furthermore, if $\alpha \le \beta$ or $\alpha \ge \beta$, the elements $\alpha$ and $\beta$ are said to be comparable. Otherwise, they are said to be not comparable.
In the set of four DNA bases, we can build twenty-four isomorphic Boolean lattices [1]. Herein, we focus our attention on the one described in reference [2], where the DNA bases C and G are taken as the maximum and minimum elements, respectively, in the Boolean lattice. The logic operations in this DNA computer code are given in the following tables:
OR:                 AND:
∨ | G A U C         ∧ | G A U C
G | G A U C         G | G G G G
A | A A C C         A | G A G A
U | U C U C         U | G G U U
C | C C C C         C | G A U C
It is well known that all Boolean algebras with the same number of elements are isomorphic. Therefore, our algebra $(B(X), \vee, \wedge)$ is isomorphic to the Boolean algebra $(\mathbb{Z}_2^2, \vee, \wedge)$, where $\mathbb{Z}_2 = \{0,1\}$. Then, we can represent this DNA Boolean algebra by means of the correspondence: $G \leftrightarrow 00$; $A \leftrightarrow 01$; $U \leftrightarrow 10$; $C \leftrightarrow 11$. So, in accordance with the operation tables:
$A \vee U = C \leftrightarrow 01 \vee 10 = 11$
$U \wedge G = G \leftrightarrow 10 \wedge 00 = 00$
$G \vee C = C \leftrightarrow 00 \vee 11 = 11$

A Boolean lattice has in correspondence a directed graph called a Hasse diagram, where two nodes (elements) $\alpha$ and $\beta$ are connected with a directed edge from $\alpha$ to $\beta$ if, and only if, $\alpha \le \beta$ and there is no other element between $\alpha$ and $\beta$.

3. The Genetic code Boolean Algebras
Boolean algebras of codons are, explicitly, derived as the direct product $C(X) = B(X) \times B(X) \times B(X)$. These algebras are isomorphic to the dual Boolean algebras $(\mathbb{Z}_2^6, \vee, \wedge)$ and $(\mathbb{Z}_2^6, \wedge, \vee)$ induced by the isomorphism $B(X) \cong \mathbb{Z}_2^2$, where $X$ runs over the twenty four possibles ordered sets of four DNA bases [1]. For example:
CAG $\vee$ AUC = CCC $\leftrightarrow$ 110100 $\vee$ 011011 = 111111
ACG $\wedge$ UGA = GGG $\leftrightarrow$ 011100 $\wedge$ 100001 = 000000
$\neg$ (CAU) = GUA $\leftrightarrow$ $\neg$ (110110) = 001001
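Since the whole algebra reduces to two-bit arithmetic, these identities are easy to check mechanically. Below is a minimal sketch (my own illustration, not from the article) using base R's bitwise operations; codon-level operations act base by base, as in the direct product $C(X)$:

# Encoding G <-> 00, A <-> 01, U <-> 10, C <-> 11
base2bits <- c(G = 0L, A = 1L, U = 2L, C = 3L)
bits2base <- names(base2bits)

dna_or  <- function(x, y) bits2base[bitwOr(base2bits[[x]],  base2bits[[y]]) + 1L]
dna_and <- function(x, y) bits2base[bitwAnd(base2bits[[x]], base2bits[[y]]) + 1L]

# Codon operations apply the base operation position by position
codon_op <- function(x, y, op)
  paste(mapply(op, strsplit(x, "")[[1]], strsplit(y, "")[[1]]), collapse = "")

dna_or("A", "U")                 # "C"   (01 | 10 = 11)
dna_and("U", "G")                # "G"   (10 & 00 = 00)
codon_op("CAG", "AUC", dna_or)   # "CCC" (110100 | 011011 = 111111)
codon_op("ACG", "UGA", dna_and)  # "GGG" (011100 & 100001 = 000000)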
The Hasse diagram for the corresponding Boolean algebra, derived from the direct product of the Boolean algebra of the four DNA bases given in the operation tables above, is shown in the figure.
In the Hasse diagram, chains and anti-chains can be identified. A subset of a Boolean lattice is called a chain if any two of its elements are comparable; if, on the contrary, no two of its elements are comparable, the subset is called an anti-chain. In the Hasse diagram of codons shown in the figure, all chains of maximal length have the same minimum element GGG and maximum element CCC. Two codons are in the same chain of maximal length if and only if they are comparable, for example the chain: GGG $\leftrightarrow$ GAG $\leftrightarrow$ AAG $\leftrightarrow$ AAA $\leftrightarrow$ AAC $\leftrightarrow$ CAC $\leftrightarrow$ CCC
The Hasse diagram symmetry reflects the role of hydrophobicity in the distribution of codons assigned to each amino acid. In general, codons that code for amino acids with extreme hydrophobic differences lie in different chains of maximal length. In particular, codons with U as a second base will appear in chains of maximal length whereas codons with A as a second base will not. For that reason, it is impossible to obtain hydrophobic amino acids with codons having U in the second position through deductions from hydrophilic amino acids with codons having A in the second position.
There are twenty-four Hasse diagrams of codons, corresponding to the twenty-four genetic-code Boolean algebras. These algebras form a group isomorphic to the symmetric group of degree four, $S_4$ [1]. In summary, the DNA binary code is not arbitrary, but subject to logic operations with underlying biophysical meaning.
References
[1] Sanchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527–60.
[2] Sánchez R, Morgado E, Grau R. A genetic code Boolean structure. I. The meaning of Boolean deductions. Bull Math Biol, 2005, 67:1–14. |
Application of integrals:

At this stage, you should be aware of the formulae for calculating the areas of simple geometrical figures, including triangles, rectangles, trapeziums and circles. For such simple figures, we use the formulae of elementary geometry. So why do we need to study the application of integrals?

Have you ever thought about figures that are not covered by elementary geometry, say, regions bounded by curves? How do we calculate the areas of such figures?

Elementary geometry is inadequate for calculating the areas enclosed by curves. For that we need some concepts of integral calculus. We need the application of integrals to find the area under simple curves, and the area between lines and arcs of circles, parabolas and ellipses (standard forms only).
Special Terminology:

Elementary strip: For a simple curve we can think of the area under the curve as composed of a large number of very thin vertical strips. Consider an arbitrary strip of height y and width dx; then dA (the area of the elementary strip) = y dx, where y = f(x).

Application of Integral Formula:

1. Area bounded by a curve and lines: The area of the region bounded by the curve y = f(x), the x-axis and the lines x = a and x = b (where b > a) is given by the application of integral formula as:
\(\LARGE Area = \int_{a}^{b}ydx = \int_{a}^{b}f(x)dx\)

2. The area of the region bounded by the curve x = φ(y), the y-axis and the lines y = c, y = d is given by the application of integral formula as:
\(\LARGE Area = \int_{c}^{d}xdy = \int_{c}^{d}φ(y)dy\)
3. The area of the region enclosed between two curves y = f (x), y = g (x) and the lines x = a, x = b is given by the application of integral formula as:
\(Area = \int\limits_{a}^b [f(x) – g(x)] dx\), where f(x) ≥ g(x) in [a, b].

4. If f(x) ≥ g(x) in [a, c] and f(x) ≤ g(x) in [c, b], with a < c < b, then
\(Area = \int\limits_{a}^c [f(x) – g(x)] dx + \int\limits_{c}^b [g(x) – f(x)] dx \)

Application of Integral Formula Examples:

Example 1: Find the area of the region bounded by the two parabolas \(y = x^2\) and \(y^2 = x\).
Solution: The points of intersection of these two parabolas are O(0, 0) and A(1, 1), as in the figure. Here we take \(y^2 = x\), i.e. \(y = \sqrt{x} = f(x)\), and \(y = x^2 = g(x)\), where f(x) ≥ g(x) in [0, 1]. So the required area of the shaded region is:
\(A = \int\limits_{0}^1 [f(x) – g(x)] dx = \int\limits_{0}^1 [\sqrt{x}– {x}^2] dx = [\frac{2}{3}\:x^{\frac{3}{2}} – \frac{{x}^3}{3}{]}_{0}^{1} = \frac{2}{3}- \frac{1}{3} = \frac{1}{3}\) sq. units.

Example 2: Find the area of the parabola \(y^2 = 4ax\) bounded by its latus rectum.
Solution: From the figure, the equation of the latus rectum LSL′ is x = a, and the parabola is symmetrical about the x-axis. The required area of the region OLL′O = 2 (area of the region OLSO):
\(Area = 2\int\limits_{0}^a y\,dx = 2 \int\limits_{0}^a \sqrt{4ax}\,dx = 4\sqrt{a} \int\limits_{0}^a \sqrt{x}\,dx = \frac{8}{3}a^2\)

For integral formulas, read Math Formulae. |
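As a quick numerical sanity check of Example 1 (my own snippet, not part of the original lesson):

# Area between y = sqrt(x) and y = x^2 on [0, 1]
integrate(function(x) sqrt(x) - x^2, lower = 0, upper = 1)
# approximately 0.3333333, i.e. 1/3 sq. units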
I didn't feel MO was the best place to ask this question, so apologies for this, but when I asked it at https://math.stackexchange.com/questions/2297837/why-is-this-cubic-polynomial-generic-for-cyclic-field-extensions, I didn't get enough information. I would really like to understand this example, so I will try to streamline the question and ask it here.
I am trying to understand the circumstances in which a finite cover of $\mathbb{P}^1_K$ of Galois group $G$ can be twisted to contain a field extension $L/K$ of group $G$. This construction comes from p1 of Serre's Topics in Galois Theory.
Suppose we have a field $K$, which I would like to think of as $\mathbb{Q}$, and take the curve $Y = \mathbb{P}^1_K$ and a finite subgroup $G \subset \mathrm{Aut}(Y)$, where $G\cong \mathbb{Z}/3\mathbb{Z}$. Now treat $Y$ as a finite branched cover of $\mathbb{P}^1$ via the quotient $Y \to Y/G$.
If $L/K$ is a Galois extension also with group $G$, then we get a map $\phi:G_K \to G \to \mathrm{Aut}(Y)$, which we can view as a 1-cocycle $\Big($because with trivial action $H^1(G_K,\mathrm{Aut}(Y)) = \mathrm{Hom}(G_K,\mathrm{Aut}(Y))$, right?$\Big)$.
(1) Why is the extension $L/K$ given by a rational point on $\mathbb{P}^1_K/G$ if and only if the twist of $Y$ by this cocycle has a rational point not invariant by G?
(2) How explicitly can I understand the twist of $Y$ by a cocycle? Can I get equations for it?
Thanks.
[Edit:] If I can't get an explanation, I will be content with a reference or two that can help me along. According to Serre the fact (1) is "a general property of Galois twists". |
Homework Statement: Two identical audio speakers, connected to the same amplifier, produce monochromatic sound waves with a frequency that can be varied between 300 and 600 Hz. The speed of sound is 340 m/s. You find that, where you are standing, you hear minimum-intensity sound.
a) Explain why you hear minimum-intensity sound.
b) If one of the speakers is moved 39.8 cm toward you, the sound you hear has maximum intensity. What is the frequency of the sound?
c) How much closer to you from the position in part (b) must the speaker be moved to the next position where you hear maximum intensity?
Homework Equations: interference
I have no idea on how to proceed
I started with
## f = \frac{v_\text{sound}}{\lambda} = \frac{340\ \text{m/s}}{\lambda} ##
then
## d \sin\alpha = \frac{\lambda}{2} ##
but now I'm stuck.
Any help please? |
Fall 2019
By the end of the course, you should know how to:

Monday: skim/read chapter; start DataCamp
Tuesday: lecture (possible quiz)
Wednesday: reread chapter; go through chapter code
Thursday: ask questions in lab, practice coding
Friday-Sunday: complete exercises, submit via Sakai
How do we identify the effect of a treatment (cause) on an outcome?

They find a negative effect of motherhood on the probability of a call back.

Assuming successful randomization to treatment and control, you know it's the treatment that's causing the effect.
\(T\): a binary treatment variable
\(Y\): the value of the outcome we observe
\(Y^0\): the value the outcome would take if \(T=0\)
\(Y^1\): the value the outcome would take if \(T=1\)

Let's think about the last two a bit more carefully…
Subject     \(Y^0\)  \(Y^1\)  \(T\)  \(Y\)
Andrew      2        3        ?      ?
Barb        3        4        ?      ?
Catherine   3        4        ?      ?
David       2        3        ?      ?
What do these numbers mean?
Subject     \(Y^0\)  \(Y^1\)  \(T\)  \(Y\)
Andrew      ?        3        1      3
Barb        3        ?        0      3
Catherine   ?        4        1      4
David       2        ?        0      2
\[ Y = TY^1+(1-T)Y^0 \]
\(Y = Y^1\) for \(T = 1\)
\(Y = Y^0\) for \(T = 0\)
We can't know \(Y^1\) for those who are \(T=0\).
We can't know \(Y^0\) for those who are \(T=1\).
\(Y^0\) and \(Y^1\) are potential outcomes.
In the real world, \(T\) is either 1 or 0 for each case.
We see \(Y^1\) or \(Y^0\), but never both.
When \(T=0\), \(Y^1\) is counterfactual.
When \(T=1\), \(Y^0\) is counterfactual.
We really care about the difference between \(Y^0\) and \(Y^1\). (Why?)
Let \(\delta_i = y^1_i - y^0_i\)
\(E[\delta]=E[Y^1-Y^0]\)
\(E[\delta]=E[Y^1]-E[Y^0]\)
Subject     \(Y^0\)  \(Y^1\)  \(T\)  \(Y\)
Andrew      2        3        ?      ?
Barb        3        4        ?      ?
Catherine   3        4        ?      ?
David       2        3        ?      ?

Subject     \(Y^0\)  \(Y^1\)  \(T\)  \(Y\)
Andrew      ?        3        1      3
Barb        3        ?        0      3
Catherine   ?        4        1      4
David       2        ?        0      2
\(T \bot Y^0\)
\(T \bot Y^1\)
\(E[Y^0 | T = 0] = E[Y^0 | T = 1 ]\)
\(E[Y^1 | T = 0] = E[Y^1 | T = 1 ]\)
In a properly executed experiment, there is no association between the potential outcome variables and treatment assignment.
\(E[Y^0 | T = 0] \simeq E[Y^0]\)
\(E[Y^1 | T = 1] \simeq E[Y^1]\)
So…
\(E[\delta] = E[Y|T=1]-E[Y|T=0]\)
The difference between the treatment average and the control average
\(E[\delta]\) is the expected value (mean) of the difference between each unit's value of \(Y^1\) and \(Y^0\). It is the average treatment effect (ATE). In a sample, this is the sample average treatment effect (SATE).
Even though the individual differences are unobservable (because either \(Y^0\) or \(Y^1\) will be counterfactual for each unit), we can estimate the mean difference via experiment.
\[ \text{SATE} = \frac{1}{n}\sum_{i=1}^{n}(y^1_i - y^0_i) \]
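To make the identification argument concrete, here is a small simulation sketch (my own illustration, not from the course materials) in which randomization recovers the ATE from observed data alone:

set.seed(1)
n  <- 10000
y0 <- rnorm(n, mean = 2)           # potential outcome under control
y1 <- y0 + 1                       # potential outcome under treatment; true ATE = 1
t  <- rbinom(n, 1, 0.5)            # random assignment to treatment
y  <- t * y1 + (1 - t) * y0        # switching equation: only one outcome observed
mean(y[t == 1]) - mean(y[t == 0])  # close to 1, the (S)ATE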
Experiments identify the SATE because cases are randomly assigned to the treatment and control group and are, therefore, identical on average on all pre-treatment characteristics.

Experiments are sometimes called randomized controlled trials (or RCTs).

Internal validity: the extent to which causal assumptions are satisfied in the study.
External validity: the extent to which the conclusions can be generalized beyond a particular study.
Weisshaar, K. (2018). "From Opt Out to Blocked Out: The Challenges for Labor Market Re-entry after Family-Related Employment Lapses." American Sociological Review, 83(1), 34–60. |
Suppose $X$ and $Y$ are two $n$-circulants (Cayley graphs for $\mathbb{Z}_n$) with adjacency matrices $A_X$ and $A_Y$. Since they are circulants, both $X$ and $Y$ lie in some symmetric association schemes (details below). Let $\mathcal{C}_X$ and $\mathcal{C}_Y$ be the smallest association schemes containing $A_X$ and $A_Y$ respectively. Suppose that $\phi: \mathcal{C}_X \to \mathcal{C}_Y$ is an isomorphism of association schemes (a bijective linear map that preserves matrix multiplication, entrywise multiplication, and transposition) that also satisfies $\phi(A_X) = A_Y$. Under these conditions, is it possible for $X$ and $Y$ to be NOT isomorphic? By the results of this paper, it seems that this cannot happen if one of the graphs (and thus also the other) is distance regular.
If I understand correctly, the condition imposed on $X$ and $Y$ is equivalent to the Weisfeiler-Lehman algorithm NOT being able to distinguish them.
Details on association schemes:
A symmetric association scheme $\mathcal{C}$ is a matrix subalgebra of the real $n \times n$ matrices that contains the identity and the all-ones matrix, is closed under entrywise multiplication and transposition, and contains only symmetric matrices. Note that the last condition also implies that all matrices in $\mathcal{C}$ commute with each other. Since it is closed under entrywise multiplication, any symmetric association scheme has an orthogonal basis of 01-matrices $A_0, \ldots, A_d$ that sum up to the all-ones matrix. One of these is necessarily the identity matrix (so WLOG we let $A_0 = I$) and thus the other $A_i$ are symmetric 01-matrices with 0 on the diagonal. We can think of these as adjacency matrices of graphs, and we say that a graph lies in $\mathcal{C}$ if its adjacency matrix is the sum of some of the $A_i$. It is easy to see that the intersection of two association schemes is an association scheme, and thus if a graph lies in an association scheme, there is some unique minimal association scheme that contains it.
For a given $n \in \mathbb{N}$, let $A_s$ be the $n \times n$ matrix whose $ij$-entry is 1 if $i - j \equiv \pm s \text{ mod } n$, and is 0 elsewhere. It is not difficult to see that these $A_s$ matrices are the 01-basis of a symmetric association scheme, and that any $n$-circulant lies in this scheme. Thus any circulant graph lies in a symmetric association scheme. |
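For what it's worth, the 01-basis described above is easy to build explicitly; here is a minimal sketch in R (my own illustration, not from the post), checking the two defining properties for a small hypothetical $n$:

# Basis A_s of the circulant scheme: (A_s)_{ij} = 1 iff i - j = +/- s mod n
n <- 7
A <- lapply(0:(n %/% 2), function(s) {
  outer(0:(n - 1), 0:(n - 1),
        function(i, j) as.integer((i - j) %% n == s | (j - i) %% n == s))
})
all(Reduce(`+`, A) == matrix(1, n, n))  # TRUE: the basis sums to the all-ones matrix
all(A[[1]] == diag(n))                  # TRUE: s = 0 gives the identity
# Any circulant graph on Z_n is then a sum of some of the A_s with s > 0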
On page 34, it talks about the periods of a closed $2$-form. Consider the following path integral,
$$\int\mathcal{D}B\;\exp{\left\{-\frac{i}{8\pi}\int\left(\bar{\tau}^{\prime}F^{\prime +}\wedge\star F^{\prime +}-\tau^{\prime}F^{\prime -}\wedge\star F^{\prime -}\right)-\frac{1}{2\pi}\int F^{\prime}\wedge dB\right\}}$$
where $F^{\prime}$ is an arbitrary $2$-form, $F^{\prime\pm}_{ab}=\frac{1}{2}\left(F_{ab}^{\prime}\pm\frac{i}{2}\epsilon_{abcd}F^{cd}\right)$, and $B$ is a $U(1)$-gauge field. It says that path-integral produces a delta functional $\delta[dF^{\prime}]$.
My first question is: how does the path integral produce a delta functional? Shouldn't there be an extra factor of $i$ in front of the second integral $\int F^{\prime}\wedge dB$, so that
$$\int\mathcal{D}B\;\exp{\left\{\frac{i}{2\pi}\int F^{\prime}\wedge dB\right\}}=\int\mathcal{D}B\;\exp{\left\{\frac{-i}{2\pi}\int dF^{\prime}\wedge B\right\}}=\delta[dF^{\prime}]$$
It then says that the $2$-form $F^{\prime}$ is closed and has periods $2\pi\mathbb{Z}$, and so it is the field strength of some gauge field $B^{\prime}$.
What is the definition of the periods of a closed $2$-form? Is that related to the fact that $F^{\prime}$ belongs to the second cohomology class $F^{\prime}\equiv F^{\prime}+d\xi$? Why is it $2\pi\mathbb{Z}$? |
Consider a corner reflector with angle $\alpha$ between its semi-planes:
Let a plane wave come from the bottom into this reflector (possible at an angle). The objective is to find the total wave including this incoming one and all the reflections — so as to satisfy some boundary conditions on the reflecting surfaces, e.g. homogeneous Dirichlet boundary condtions.
Physical intuition (educated by quantum mechanics) suggests: try viewing the wave as a particle with momentum proportional to the wave vector $\vec k$, taking into account all the possible reflections of such a particle. Then represent all the straight parts of the trajectory by plane waves, find amplitudes for these waves that make each wave satisfy the boundary conditions at the surface of (single!) reflection, and combine all the waves with the corresponding amplitudes accumulated over multiple reflections.
Although one might have some objections to the suggestion above, this appears to work out nicely for $\alpha=\frac\pi n$ where $n\in\mathbb N$ (which corresponds to the case when the reflector becomes a reversing or non-reversing mirror, depending on parity of $n$). E.g. for $\alpha=\frac\pi4$ we have the following possible reflections (red color marks incoming "wave"):
Adding all the plane waves with the wave vectors matching directions of these reflections (skipping duplicate ones), we get for the example incoming wave $u_0$ defined as
$$u_0(x,y)=\exp\left(i\left(\frac3{10}x+y\right)\right)$$
the final incoming+reflected wave:
$$\begin{align} u_{f}(x,y) &= \exp\left( i \left(\frac3{10} x + y\right)\right) + \exp\left(-i \left(\frac3{10} x + y\right)\right) +\\ &+ \exp\left( i \left(x - \frac3{10} y\right)\right) + \exp\left(-i \left(x - \frac3{10} y\right)\right) -\\ &- \exp\left( i \frac{7x + 13y}{10 \sqrt2}\right) - \exp\left(-i \frac{7x + 13y}{10 \sqrt2}\right) -\\ &- \exp\left( i \frac{13x - 7y}{10 \sqrt2}\right) - \exp\left(-i \frac{13x - 7y}{10 \sqrt2}\right), \end{align} $$
which simplifies to a real-valued standing wave (because this mirror is non-reversing) containing only $\cos$ terms. Here's what it looks like (green lines show the reflector boundaries):
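For what it's worth, here is a rough sketch (my own construction, not from the original post) of how the image-wave sum for $\alpha=\pi/4$ can be generated mechanically from the group of wall reflections. The wall angles $\pi/8$ and $3\pi/8$ are my guess at the figure's orientation, and the Dirichlet condition is modeled as a sign flip per reflection, i.e. an amplitude $\det(g)$:

alpha <- pi / 4
walls <- c(pi / 8, pi / 8 + alpha)  # assumed wall orientations
k0 <- c(3 / 10, 1)                  # incoming wave vector from the example

# Reflection matrix about a line through the origin at angle theta
refl <- function(theta) matrix(c(cos(2 * theta), sin(2 * theta),
                                 sin(2 * theta), -cos(2 * theta)), 2, 2)

seen <- function(h, L) any(sapply(L, function(x) all(abs(x - h) < 1e-9)))

# Close {identity} under the two wall reflections (a dihedral group)
G <- list(diag(2))
repeat {
  new <- list()
  for (g in G) for (th in walls) {
    h <- refl(th) %*% g
    if (!seen(h, c(G, new))) new <- c(new, list(h))
  }
  if (length(new) == 0) break
  G <- c(G, new)
}
length(G)  # 8 image waves for alpha = pi/4

# Total field at a point: one plane wave per group element, signed by det(g)
u <- function(x, y) Re(sum(sapply(G, function(g) {
  k <- g %*% k0
  det(g) * exp(1i * (k[1] * x + k[2] * y))
})))
u(0.7, 0.9)  # sample evaluation inside the reflector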
The problem is that this simple intuition doesn't work out for angles not of the form $\frac\pi n$, when outgoing wavevectors are not parallel. We can still find the solution as a finite superposition of plane waves in the case of $\alpha=\frac mn \pi$ (with $m\in\mathbb N$ and $n>m$) by finding the solution for the case $\alpha=\frac \pi n$ and rotating it so as to match the zeros of actual boundaries. But the intuition of reflections appears broken here. E.g. when $\alpha=\frac34\pi$, we have only one reflection per side:
But the solution satisfying the boundary conditions requires two additional, "virtual", reflections from the $\alpha=\frac\pi4$ reflector (i.e. the solution for $\alpha=\frac34\pi$ is the same as for $\alpha=\frac\pi4$).
My question is: what is the intuition for these "virtual" reflections in the case of $\alpha=\frac mn \pi$? |
Absolute Value of Integer is not less than Divisors
Theorem

$\forall c \in \Z_{\ne 0}: a \divides c \implies a \le \size a \le \size c$

In particular, let $a, b \in \Z_{>0}$ be (strictly) positive integers such that $a \divides b$. Then $a \le b$.

Proof
Suppose $a \divides c$ for some $c \ne 0$.
Then:
$$a \divides c \implies \exists q \in \Z: c = a q \quad \text{(Definition of Divisor of Integer)}$$
$$\implies \size c = \size a \size q$$
Since $c \ne 0$ we have $q \ne 0$, so $\size q \ge 1$ and therefore:
$$\size c = \size a \size q \ge \size a \times 1 = \size a$$
Combining this with $a \le \size a$ gives:
$$a \le \size a \le \size c$$
$\blacksquare$
Also see: Non-Zero Integer has Finite Number of Divisors (a direct corollary).
Sources:
1958: Martin Davis: Computability and Unsolvability: Appendix $1$: Some Results from the Elementary Theory of Numbers: Corollary $2$
1978: Thomas A. Whitelaw: An Introduction to Abstract Algebra: $\S 11.2$: The division algorithm
1980: David M. Burton: Elementary Number Theory (revised ed.): Chapter $2$: Divisibility Theory in the Integers: $2.2$ The Greatest Common Divisor: Theorem $2 \text{-} 2 \ (6)$ |
Large time behavior in the logistic Keller-Segel model via maximal Sobolev regularity
1. Institute of Mathematical Sciences, Renmin University, Beijing 100872, China
2. Institut für Mathematik, Universität Paderborn, 33098 Paderborn, Germany
$$\left\{ \begin{array}{ll} u_t=\Delta u-\chi\nabla\cdot(u\nabla v)+\kappa u-\mu u^2, &(x,t)\in \Omega\times (0,T),\\ \tau v_t=\Delta v-v+u, &(x,t)\in\Omega\times (0,T), \end{array} \right. \qquad (\star)$$
Mathematics Subject Classification: Primary: 35B40, 35K45.
Citation: Xinru Cao. Large time behavior in the logistic Keller-Segel model via maximal Sobolev regularity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (9): 3369-3378. doi: 10.3934/dcdsb.2017141
|
Abstract
The oxidative stability of the ramyon prepared with ricebran oil fortified with ${\alpha}$-tocopherol, BHA, TBHQ, and ascorbyl palmitate+citric acid, or blended with palm oil, was studied to assess the suitability of the oil as a frying oil. The antioxidants were each added to the ricebran oil at the 0.02% level, while blended oils were prepared by adding palm oil to the ricebran oil at ratios of 3:7, 5:5, and 7:3. Ramyon samples were prepared by frying steamed noodles with the oils. They were stored in the dark at $35.0{\pm}0^{\circ}C$ for 90 days. The peroxide, acid, and iodine values, dielectric constant, and fatty acid composition of the oils extracted from the samples were determined regularly. The oxidative stability of the extracted oils and the storage stability of the samples were estimated from the results of these determinations. ${\alpha}$-Tocopherol did not exert any appreciable antioxidant effect on the extracted oil, while BHA demonstrated some effect. Ascorbyl palmitate with citric acid, and especially TBHQ, exerted a considerable effect. The storage stability of the samples fried with the oil fortified with TBHQ was as good as that of the samples prepared with the palm oil. The stability of the samples improved as the palm oil content in the frying oil increased. The stability of the samples fried with the blended oil containing 70% palm oil was comparable to that of the samples prepared with the pure palm oil.
Keywords
ricebran oil;frying oil;ramyon;antioxidants;blending |
With the following circuits as examples :
and
How will the current know how much to flow? Would some wave travel through the circuit first and then come back to say how much current should flow?
Not sure if this is what you're asking, but yes, when the battery is connected, an electric field wave travels from the battery down the wires to the load. Part of the electrical energy is absorbed by the load (depending on Ohm's law), and the rest is reflected off the load and travels back to the battery, some is absorbed by the battery (Ohm's law again) and some reflects off the battery, etc. Eventually the combination of all the bounces reaches the stable steady-state value that you would expect.
We usually don't think of it this way, because in most circuits it happens too quickly to measure. For long transmission lines it is measurable and important, however. No, the current does not "know" what the load is until the wave reaches it. Until that time, it only knows the characteristic impedance or "surge impedance" of the wires themselves. It doesn't yet know if the other end is a short circuit or an open circuit or some impedance in between. Only when the reflected wave returns can it "know" what's at the other end.
See Circuit Reflection Example and Transmission line effects in high-speed logic systems for examples of lattice diagrams and a graph of how the voltage changes in steps over time.
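To see those bounces settle numerically, here is a rough Python sketch of the classic bounce (lattice) calculation for a step launched into a line; all component values are made up for illustration.

# Step of V volts from a source with resistance Rs into a line of
# characteristic impedance Z0, terminated by a load RL.
V, Rs, Z0, RL = 1.0, 10.0, 50.0, 200.0

gamma_load = (RL - Z0) / (RL + Z0)   # reflection coefficient at the load
gamma_src = (Rs - Z0) / (Rs + Z0)    # reflection coefficient at the source

wave = V * Z0 / (Rs + Z0)            # amplitude of the launched wave
v_load = 0.0
for bounce in range(10):
    v_load += wave * (1 + gamma_load)  # incident plus reflected at the load
    wave *= gamma_load * gamma_src     # one round trip later
    print(f"after bounce {bounce}: {v_load:.6f} V")

print("steady state per Ohm's law:", V * RL / (Rs + RL))

The printed voltages oscillate around and converge to the value Ohm's law predicts for the resistors alone.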
And in case you don't understand it, in your first circuit, the current is equal at every point in the circuit. A circuit is like a loop of pipework, all filled with water. If you cause the water to flow with a pump at one point, the water at every other point in the loop has to flow at the same rate.
The electric field waves I'm talking about are analogous to pressure/sound waves traveling through the water in the pipe. When you move water at one point in the pipe, the water on the other end of the pipes doesn't change instantly; the disturbance has to propagate through the water at the speed of sound until it reaches the other end.
Since the theory has been covered, I'll go with a rough analogy (Hopefully I'm understanding what you are asking properly, it's not so clear)
Anyway, if you imagine a pump (the battery), some pipes filled with water (the wires), and a section where the pipe narrows (the resistor)
The water is always there, but when you start the pump it creates pressure (voltage) and makes the water flow around the circuit (current). The narrowing of the pipe (resistor) restricts the flow (current) to a certain amount and causes a pressure drop across it (the voltage across the resistor, in this case equal to the battery voltage).
With the second circuit (two resistors in parallel) it is reasonably clear that the same amount of current that flows into the top junction must flow out from the bottom junction (see Kirchhoff). If the resistors are the same, then they will share the current equally. This can be thought of as one large pipe (wire) splitting into two narrower pipes (resistors) and then fusing back into one large pipe again. If they are unequal, then one will take more flow (current) than the other, but the total out will always add up to the total in.
You could ask the same question with the water analogy - how does the water "know" how much to flow? Because it's limited by the pipes' widths and the pump's pressure.
EDIT - It seems the question being asked is a little different than I supposed initially. The trouble is there are a few different answers (as you can see) at different levels of abstraction, e.g. from Ohm's law to Maxwell to quantum physics. At the individual electron level I think you might have a problem due to particle-wave duality and the ambiguity between paths (see the double-slit experiment with photons) mentioned by Majenko.
Note that the reason I said above that "the water is always there" is because the electrons themselves do not flow at ~2/3 the speed of light around a circuit; rather, the energy from one is propagated to the next (sort of), and so on. A bit like balls bouncing around randomly and into one another, with an average tendency overall to bounce in the direction of the applied potential. A simpler way to think of it is like a line of snooker balls - if you hit the white ball into one end, the energy will be "transmitted" through all the balls (they will not actually change position, though), and the ball at the other end will break away. I have a feeling the quantum explanation might go something like: we can only predict the probability that an individual electron will "choose" one path (or be in one particular area), but the process would not be observable directly (i.e. theoretical physics).
Either way I think this is an excellent question and needs a good answer (I will try and improve this one if time allows), although at the lowest level it may be better dealt with on the physics stack.
At first, the current doesn't really know. Assuming a big cartoony switch in the line, when open, it represents a huge impedance. (Capacitive) charge builds up on either side of it; specifically, electrons crowd the negative terminal and the positive terminal lacks the same number of electrons from normal (image charge). Current flow is negligible (fA*), so there is no potential drop across the resistor. Electrons have no net movement or flow because the electrostatic repulsion with their neighbors, including the big bunch at the switch, is equal to the force from the external electric field bias.
When the switch is first closed, the extra electrons near the switch zip to the other contact, filling in the image charge. Now that there isn't a big bunch of bully electrons refusing to move and pushing back, the rest go ballistic and start to zip through the circuit. (Hah! Not actually, though.)
Those in and near the resistor meet ... resistance (c'mon; I had to). There aren't nearly as many free electrons or sites there, so, not unlike the very large impedance presented earlier by the switch, charge builds up on either end as the impatient buggers jostle for a spot in line. It continues to build up until equilibrium is reached: the electrostatic field from the bunch of electrons waiting to get through the resistor is equal to the external electric field bias.
At this point the current knows how much to flow, and won't change ['til you realize that you put in a 1.3-ohm resistor instead of the 1.3-kohm, and it fries and open circuits again].
If the source were totally removed from the system at first, there would be no initial capacitive charge. An instantaneous connection with the source (DPST switch) would lead to an electric field propagating along the wire near c, accelerating and dragging electrons along with it, and leading to the same leaving-the-football-stadium-type crowding at the resistors. In the case with parallel resistors, however, the doors of said stadium may be of different widths, so the equilibrium currents will differ.
How does the current in a river delta "know" which branch to take? "Current" in each case means the aggregate flow of water molecules or electrons, so first, replace the question with "How does each electron (or molecule) know which way to go?" It doesn't; it will just get swept along in the immediately local flow and, at the micro or atomic level, will take the place of the departing one just ahead of it. So, what happens right at the point where the flow diverges? To our macro eyes, the direction it takes is random, distributed as the ratio(s) of the branch currents. At the very lowest level, some tiny disturbance will nudge it one way or the other.
(Very rough description/analogies, I know - forgive the implied inaccuracies.)
"Knowing" how much to flow implies knowledge, which implies intelligence.
Current is not intelligent, and doesn't flow per se. Current is pulled, or "drawn" by the load - in this case the resistors.
The amount of current the load draws is determined by Ohm's law:
\$I=\dfrac{V}{R}\$
In the first circuit that is simple enough to calculate.
The second circuit is slightly more complex. Calculating \$I_S\$ is simple enough, as long as you can calculate the total resistance:
\$\dfrac{1}{R_T} = \dfrac{1}{R_1} + \dfrac{1}{R_2}\$
or
\$R_T = \dfrac{R_1 \times R_2}{R_1 + R_2}\$
The amount of current then flowing through each resistance is then determined by the ratio of the two resistors. If the resistors are the same, then exactly half the current will flow through each. If \$R_1\$ is twice \$R_2\$, then a third of the current will flow through \$R_1\$, and two-thirds through \$R_2\$ (note the current ratio is the opposite of the resistance ratio).
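A few lines of Python make the arithmetic concrete (component values made up for illustration):

Vs = 9.0                      # supply voltage
R1, R2 = 1000.0, 2000.0       # R2 is twice R1

I1 = Vs / R1                  # Ohm's law per branch
I2 = Vs / R2
Is = I1 + I2                  # Kirchhoff's current law at the junction

Rt = (R1 * R2) / (R1 + R2)    # parallel combination
print(I1, I2, Is)             # 0.009 0.0045 0.0135
print(Vs / Rt)                # 0.0135 again, via the equivalent resistance
# R2 is twice R1, so it carries a third of the total: the ratio flips.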
Actually, the current doesn't know how much to flow at t=0.
Every resistor has some capacitance, since it consists of conducting sides separated by an insulator (even if not a perfect one). Because of this capacitance, at t=0 current rushes in, as much as the power supply can deliver. Then it slows down after a while to its normal value. Every practical resistor can be modeled as a resistor and a capacitor in parallel. So, your first circuit is actually a parallel R-C circuit.
Also, don't forget that an E field (electric field) creates a B field (magnetic field), and vice versa. When you apply a voltage across the resistor, what you do is create an electric field inside the resistor, which changes the state of the electric field (you raise the electric field from zero to a non-zero value). The change in the electric field creates a magnetic field, and this finally creates a flow of current.
Please refer to Maxwell's Equations for more information.
How does the current know? It knows because of statistical mechanics (Boltzmann, later Fermi-Dirac, and Maxwell): fermions (electrons) at a given temperature occupy the volume of the conductor (metal), flying freely like particles of an ideal gas and bouncing against atoms. The speed (energy) of individual particles is about a thousand miles per second (well below the speed of light), while the drift speed is a few millimeters per second (see the Wikipedia article on drift velocity). The average free-flight distance of the electrons defines the "conductivity". To an observer of the electron flow, the behaviour of the electrons will look like a tendency to maintain "electroneutrality": every local part of the conductor contains approximately equal numbers of electrons and protons. Electrons are charged, so they apply a repelling force to each other. The involvement of force, velocity and mass over time means that virtual photons are emitted and absorbed as electrons accelerate and decelerate. These photons propagate much faster than the particles and create "pressure". Overall, depending on the material, the speed of this pressure wave is close to the speed of light. It can be called a "wave". The rest of the story is better explained by endolith above.
The numbers for copper at room temperature can be seen in this article.
TLDR: Ideal electron gas with statistical mechanics -> Boltzmann -> Fermi-Dirac -> Maxwell -> Ohm
Nobody mentioned the fact that all schematics adopt the so-called lumped element model.
In a schematic a wire is not a wire in the common sense, it is a simplifying relationship between nodes. If you wanted to describe step by step what happens to the current (or what it "senses") along a wire, you would have to draw an infinite series of passive elements.
The best analogy that helped me understand this really quickly and easily is one I came across somewhere on the Internet, but I can't point to the source at the moment. If someone knows where it is, let me know, so it can be included. The analogy is very short, so this will be a very short answer. No formulas whatsoever. So it is kind of non-scientific, but it is an elegant analogy and really easy for a human being to imagine and comprehend.
Most people imagine a simple circuit like those in the examples as an empty tube or pipe that fills up with water. This is partly because of the prolific water-flow analogy.
In reality it is much more like a tube already filled with solid balls. The tube is filled with balls in a line from end to end and there are no gaps between them. When you push a ball in at one end, all the balls travel the same distance.
This movement is the current of electrons, and the force needed to move the balls is the applied voltage.
Another source of confusion is the "least resistance path" phrase. One can imagine a person at a crossroads who chooses one of three possible ways, and once the person has chosen, everything goes that one way - and that is exactly how current DOES NOT FLOW. Instead, current will "split" and flow in all possible directions, in inverse proportion to the resistance of each way. Sometimes the resistance is so high that the current is small enough to be safely neglected for simplicity.
Your question is a bit garbled and I don't see how waves have anything to do with this. However, the basic Ohm's law is easy to explain in your example. Both resistors have voltage \$V_S\$ across them. That means the current through them will be \$\frac{V_S}{R}\$. Specifically
\$I_1 = \dfrac{V_S}{R_1}\$
\$I_2 = \dfrac{V_S}{R_2}\$
\$I_S\$ is merely the sum of the two currents through the resistors:
\$I_S = I_1 + I_2\$
You can get \$I_S\$ another way, by considering the equivalent resistance of \$R_1\$ and \$R_2\$ in parallel.
In general: \$R_1 || R_2 || ... R_n = \dfrac{1}{(\dfrac{1}{R_1} + \dfrac{1}{R_2} + ... \dfrac{1}{R_n})}\$
\$R_1 || R_2 = \dfrac{1}{\dfrac{1}{R_1} + \dfrac{1}{R_2}} = \dfrac{R_1 \times R_2}{R_1 + R_2}\$
Using Ohm's law again, it is straightforward to compute \$I_S\$:
\$I_S = \dfrac{V_S}{R_1 || R_2} = V_S \times \dfrac{R_1 + R_2}{R_1 \times R_2}\$
Note that this is the same answer as above where we computed the current through each resistor and added them to get \$I_S\$:
\$I_S = I_1 + I_2\$
\$I_S = \dfrac{V_S}{R_1} + \dfrac{V_S}{R_2} = V_S \times \left(\dfrac{1}{R_1} + \dfrac{1}{R_2}\right) = V_S \times \dfrac{R_1 + R_2}{R_1 \times R_2} = \dfrac{V_S}{R_1 || R_2}\$
Actually, waves have a lot to do with it, until a steady state is achieved. Initially, even the most simple circuit made of a battery, a switch, a wire, and a resistor is a transmission line, surrounded by electromagnetic waves, and requires a transient analysis to understand. This transient analysis will answer the initial question in this thread, if I understand the question. Even the battery is complex, and initially, until steady state is achieved, requires an analysis that is governed by Maxwell's equations, and more. In years past, DC101 was initially taught using the analogy of water in pipes, etc., and analogies were drawn for inductance and capacitance too. It is a great way to help someone understand DC, if you have five minutes to teach it to them and Ohm's law is as far as you will take your student.
It is like a motorway full of cars, where the motorway is the conductor and the cars are the electrons. If there are roadworks ahead limiting the motorway from three lanes to one, all lanes slow down, and the cars 20 miles behind will also not be able to go faster on the three-lane section because the cars in front will not let them. |
The key to the question is "Normal"
Today I went for a normal walk
A "normal" in geometry is not something "common" or "ordinary". A geometrical "normal" is always perpendicular to something else.
Let us say that this something is a line that extends from a fixed point to the walker.
If the walker then walks along the "normal" of this line, they will always walk in a circle. The point is the center of the circle, the line in question is the radius of the circle, and walking along the normal of a radius of a circle means you walk along the circle itself.
If you walk this way, one foot will move further than the other.
Now let us do the Maths:
Let us assume that the distance between the walker's feet is, well, one foot: 30 cm, or 0.3 meters. Let us assume they went $x$ laps around the circle. Then the distances walked are $(r + 0) \cdot x \cdot 2\pi = 3000$ for the inner foot and $(r + 0.3) \cdot x \cdot 2\pi = 3100$ for the outer.
Divide one by the other and we get:
$\frac{(r + 0) \cdot x \cdot 2\pi}{(r + 0.3) \cdot x \cdot 2\pi} = \frac{3000}{3100} \Rightarrow$
$x\cdot 2\pi$ cannot be zero so we can cancel that factor out
$\frac{r}{r + 0.3} = \frac{30}{31} \Rightarrow$
$31\cdot r = 30\cdot r + 9$
$r = 9$
Hence they walked along a circle that had a radius of 9 meters, with the inner foot directly on the circle. I leave it to the reader to figure out how many laps it was. |
2. Series, 24. Year
1. Warm-Up
Jakub's breakfast
Every morning Jakub enjoys his favourite cereal, which he pours into a bowl of milk. Assume the bowl has the shape of a circular frustum with upper and lower radii $R$ and $r$ respectively ($R \geq r$), that the cereal consists of little solid spheres, and that before he puts the cereal into the bowl there is milk of height $h$. What is the maximum amount of cereal he can fit into the bowl? You also know that the fraction of volume the cereal occupies inside a fully filled box is $\kappa$.
Magnetic monopole
Let's have a metal plate magnetized in such a way that the upper and lower sides are the north and south poles respectively. We use these plates to create two hemispheres with the outer side being the north pole. Now, if we glue these two hemispheres together, we effectively get a magnetic monopole, which, as we know, cannot exist in our world. Where did we go wrong?
2. Lennard-Jones potential
The interaction of two atoms of an inert gas can be described using the so-called Lennard-Jones potential $U(R) = 4\epsilon\left((\sigma/R)^{12} - (\sigma/R)^{6}\right)$. Assume the motion is one-dimensional and determine the equilibrium position without using the tools of calculus. The meaning of the constants $\sigma$ and $\epsilon$ will be explained in the published solution.
3. Percolator
While enjoying his daily coffee, Lukáš decided to tune up his percolator a bit. He placed a bent tube, with a short wire wrapped around it, at the bottom of the main vessel (see the picture). The wire sat at height $d$ above the bottom, and the vessel's water level was at height $h$. The parameters of the tube were chosen in such a way that the water vapor created by boiling water close to the wire pushed the water above it. What power do we need to supply to the wire in order to see water coming from the tube at a height $l$?
4. Think or pay
Suppose you are riding a bicycle and want to stop. Under what conditions is the front wheel completely blocked and sliding while you are not flying over your handlebars? How does your result change if you also use the brake on the back wheel?
P. The Smurfs and Darth Vader
If you inhale some helium, your voice changes so that you speak like a Smurf. Hydrogen has the same effect (watch out, smokers!), but it is also possible to obtain the voice of Darth Vader in this way. The most famous substance for this is sulfur hexafluoride. What causes the voice to change? Also make a quantitative estimate.
E. Yin and young
Most of you have probably heard about Young's double-slit experiment. Have you, however, ever tried to reproduce this experiment and see the interference patterns for yourselves? There are also mechanical analogies to this experiment. For example, you can observe the interference of two waves in water or of two sound waves. Choose one or more of these experiments and measure the interference pattern. Then you can calculate the wavelength and the speed of wave propagation. Photos of your apparatus will be welcome!
|
Determine whether the series converges or diverges.
$$ \sum _{n=1}^{\infty }\:\left(\frac{19}{n!}\right) $$
I know that this question is a lot easier if I use the ratio test, but I have not learned the ratio test yet. The only options I have are the divergence, comparison, limit comparison, and integral tests. How can I prove that this series converges using only these tests?
Thanks in advance. |
I don't quite understand this set of conditions from Griffiths, Introduction to Electrodynamics (equation 9.74):
$$\begin{align} \epsilon_1 E_1^{\bot} = \epsilon_2 E_2^{\bot},\quad &\mathbf{E}_1^{\parallel} = \mathbf{E}_2^{\parallel}\\ B_1^{\bot} = B_2^{\bot},\quad &\frac{1}{\mu_1}\mathbf{B}_1^{\parallel} = \frac{1}{\mu_2}\mathbf{B_2}^{\parallel} \end{align}$$
"Parallel" and "perpendicular" components can only be measured according to something else, for example a coordinate system. So if these equations do not specify the coordinate system and are given very generally, then how do I interpret the "parallel" and "perpendicular" parts?
If a wave is traveling in the z-direction towards a boundary and is polarized in the x-direction, how does the E-field have a parallel and perpendicular component? |
I'm sorry if I misunderstood your question, but I'll do my best.
There are many (and I mean MANY) packages to accommodate encoding of different "delicate" symbols like rare letters from little-known languages. Vietnamese, Chinese, Slovenian (my language; you've probably never heard of it) etc. are all represented by a plethora of CTAN packages. So to start, we're going to solve your issue of displaying á and š.
Let's start with the easier one, á. We can get a solid result even without any additional packages (don't get confused by the amsmath package; I only let it in because without it the commands \forall, \in, \iff and \cup can't be compiled). Here's the code:
\documentclass[13,legalpaper]{article}
\usepackage{amsmath}
\begin{document}
$\forall d \in days : d > 0 \iff (\acute{a}Name \cup sName)$
\end{document}
Here you can see that á got substituted by \acute{a}. But now we have a problem: if you only intend on using this command in math mode, that's fine, but if you want to output it in text mode, that won't work. To achieve the latter, we must turn to a different command:
\documentclass[13,legalpaper]{article}
\begin{document}
\'{a}
\end{document}
Here we see that á is once again replaced, this time with \'{a}.
Now onto š. Once again, in math mode, a substitution should suffice:
\documentclass[13,legalpaper]{article}
\usepackage{amsmath}
\begin{document}
$\forall d \in days : d > 0 \iff (\acute{a}Name \cup \check{s}Name)$
\end{document}
And in the normal text:
\documentclass[13,legalpaper]{article}
\usepackage{amsmath}
\begin{document}
\v{s}
\end{document}
Note that if you want italicised á and/or š, you don't have to use math mode! You can simply italicise the whole thing altogether:
\documentclass[13,legalpaper]{article}
\usepackage{amsmath}
\begin{document}
\textit{\'{a} \v{s}}
\end{document}
BUT THIS IS NOT THE BEST WAY TO GO ABOUT IT!!!
Now I'm going to bring out the big guns: encoding. Who would want to write \'{a} and/or \v{s} every time he/she wants to write such a character in his/her native language? No one. That's why LaTeX has a wonderful package called babel. Let me show you (a quick note: I'm using the slovene option, because I'm from Slovenia (google it); you can use any version of babel, but I personally advise you to use the one that corresponds to your native language; hence the "standard" babel formula: \usepackage[INSERT YOUR LANGUAGE HERE]{babel}):
\documentclass[13,legalpaper]{article}
\usepackage[slovene]{babel}
\usepackage[utf8]{inputenc}
\begin{document}
áš
\end{document}
Simple as that! I have to mention that you MUST include \usepackage[utf8]{inputenc}, as it declares the input encoding. All languages with exotic letters must do that, as the plain TeX engine isn't advanced enough to handle these newer symbols by itself. And once you've done that, you're set! Just watch (note: don't get confused by \setlength{\parindent}{0cm}; it's just a bit of code that removes the paragraph indentation, so everything is aligned correctly):
\documentclass[13,legalpaper]{article}
\usepackage[slovene]{babel}
\usepackage[utf8]{inputenc}
\begin{document}
\setlength{\parindent}{0cm}
This is a normal text: Upám, da ti je bil ta odgovor v pomoč!\\
This is an italicised text: \textit{Upám, da ti je bil ta odgovor v pomoč!}\\
This is symbol usage in math mode: $\check{z} + \check{s} + \check{c} = \acute{a}$
\end{document}
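A side note from me, not part of the original answer: if you are free to choose the engine, compiling with XeLaTeX or LuaLaTeX and the fontspec package lets you type such characters directly, with no inputenc at all (assuming the selected font actually contains the glyphs; the default Latin Modern does):

\documentclass[legalpaper]{article}
\usepackage{fontspec}% requires XeLaTeX or LuaLaTeX
\begin{document}
áš, typed directly; no escape sequences needed.
\end{document}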
Important: Do NOT try to write š, č, á etc. directly in math mode. This will only produce a large number of errors. Remember: WE USE ONLY MATH ACCENTS IN MATH MODE!
And a quick tutorial on how to change the font:
\documentclass[13,legalpaper]{article}
\usepackage[slovene]{babel}
\usepackage[utf8]{inputenc}
\usepackage{times}
\begin{document}
\setlength{\parindent}{0cm}
This is a normal text: Upám, da ti je bil ta odgovor v pomoč!\\
This is an italicised text: \textit{Upám, da ti je bil ta odgovor v pomoč!}\\
This is symbol usage in math mode: $\check{z} + \check{s} + \check{c} = \acute{a}$
\end{document}
Note the \usepackage{times} I added to change the text font from Computer Modern to Times New Roman. Easy, isn't it? Well, of course you must first know the font name in order to use it. If you compile the code, you can see that the math font DIDN'T change. If you want to accommodate the latter as well, you can perform some other LaTeX tricks, which are a bit more complicated, but I'm sure you'll get through. Just read this brilliant post: how to select math font in document.
I hope this post was helpful. I wish you countless hours of fun with LaTeX! |
Abbreviation: Gset
A G-set is a structure $\mathbf{A}=\langle A,f_g\ (g\in G)\rangle$, where $\langle G, \cdot, {}^{-1}, 1\rangle$ is a group, such that
$f_1$ is the identity map: $1x=x$, and
the group action associates: $(g\cdot h)x=g(hx)$.
Remark: $f_g(x)=gx$ is a unary operation called the group action by $g$.
It follows from the associativity that $f_{g^{-1}}$ is the inverse function of $f_g$.
Morphisms: Let $\mathbf{A}$ and $\mathbf{B}$ be G-sets over the same group $G$. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is compatible with the action: $h(gx)=g\,h(x)$ for all $g\in G$ and $x\in A$.
|
There's a way to do this in polynomial time. I'll sketch the algorithm (in reverse order ... do step 2 first and step 1 second). If we can find a set of $nk$ agent-task pairs $(i,j)$ such that each task is in exactly $k$ pairs, each agent is in exactly $k$ pairs, and no pair appears more than once, then we can find $k$ assignments that together cover these $nk$...
This can be formulated as an instance of min-cost (or in this case, max-profit) flow.Set up a network as follows. There will be four layers.The first layer is a single node we call the source. The next layer consists of a node for each agent. The next layer has a node for each task. The final layer is one node we call the sink. For each edge, we give a ...
In general, the answer is no. If we put XOR-like restrictions on the out-going edges of a vertex, we can prove that finding a min-cut-max-flow is NP-Hard. The technique is to reduce 3-SAT to it.Let's assume there are $n$ variables $x_1, x_2, ..., x_n$ in the 3-SAT and $m$ clauses $c_1, c_2, ..., c_m$. We create a graph $G(V,E)$ encoding the instance of the ...
The answer to your first question is: yes, there is a simple augmentation. It is described in the standard literature on the stable marriage problem. See the Wikipedia article for references in the literature where this is described. It is also described here: The stable marriage algorithm with asymmetric arrays. See also https://cs.stackexchange.com/a/...
This paper has a painfully detailed table on what you can achieve using (currently known) deterministic, randomized and $\epsilon$-approximation algorithms. To summarize, for the bipartite case (all assuming integer weights bounded by $N$):Deterministic time $O(n^2 \sqrt n \log N)$.Randomized $O(n^{2.373} N)$.$(1 - \epsilon)$-approximation in $O(n^2 \...
The simplest solution (in terms of saving you the time of understanding the literature) is probably going to be to use integer linear programming (ILP / MILP). You can formulate it as an ILP instance, then apply an off-the-shelf ILP solver.Introduce zero-or-one variables $x_{i,j}$, with the goal that $x_{i,j}=1$ means that the $i$th person is assigned to $...
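To make that sketch concrete, here is a minimal version of the assignment ILP in Python with the PuLP modelling library; the cost matrix is made up, and any off-the-shelf ILP solver would do.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

cost = [[4, 1, 3],   # cost[i][j]: cost of assigning person i to task j
        [2, 0, 5],
        [3, 2, 2]]
n = len(cost)

prob = LpProblem("assignment", LpMinimize)
x = [[LpVariable(f"x_{i}_{j}", cat=LpBinary) for j in range(n)]
     for i in range(n)]

prob += lpSum(cost[i][j] * x[i][j] for i in range(n) for j in range(n))
for i in range(n):
    prob += lpSum(x[i][j] for j in range(n)) == 1   # each person gets one task
for j in range(n):
    prob += lpSum(x[i][j] for i in range(n)) == 1   # each task gets one person

prob.solve()
print([(i, j) for i in range(n) for j in range(n) if x[i][j].value() > 0.5])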
The problem you want to solve is (a slight variation of) maximum weighted matching in general (i.e., not necessarily bipartite) graphs. There are several algorithms with various worst-case bounds:"Data structures for weighted matching and nearest common ancestors with linking" (Gabow 1990) is the best "in theory" with $O(nm + n^2\log n)$ time complexity."...
You may be interested in reading about total unimodularity. An ILP is solvable in polynomial time if the associated matrix is totally unimodular (sufficient but not necessary condition). This explains the tractability of assignment and maximum flow problems.I'm not aware of any "reason" why knapsack is pseudopolynomial time solvable.
You are probably looking for a solution to the following optimization problem. Weighted maximum bipartite matching. Given a weighted bipartite graph $G=(U\cup V, E)$ with weights $w\colon U\times V \rightarrow \mathbb{N}$, find a set of edges $A\subseteq U\times V$ such that all edges in $A$ are disjoint (that is, no two edges are adjacent, that is, no group ...
This can be formulated as an instance of minimum-cost flow problem. Have a graph with one vertex per agent, one vertex per task, and one vertex per category. Now add edges:Add an edge from the source to each agent, with capacity 1 and cost 0.Add an edge from each agent to each task, with capacity 1 and cost according to the cost of that assignment.Add ...
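A sketch of that construction with networkx, leaving out the truncated category layer (so just source, agents, tasks, sink, with made-up costs):

import networkx as nx

cost = {("a1", "t1"): 4, ("a1", "t2"): 1,
        ("a2", "t1"): 2, ("a2", "t2"): 3}

G = nx.DiGraph()
for agent in ("a1", "a2"):
    G.add_edge("s", agent, capacity=1, weight=0)   # source -> agent
for (agent, task), c in cost.items():
    G.add_edge(agent, task, capacity=1, weight=c)  # agent -> task
for task in ("t1", "t2"):
    G.add_edge(task, "t", capacity=1, weight=0)    # task -> sink

flow = nx.max_flow_min_cost(G, "s", "t")
print(nx.cost_of_flow(G, flow))   # minimum total assignment cost
print([(u, v) for u, d in flow.items() if u != "s"
       for v, f in d.items() if v != "t" and f > 0])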
The article you linked assumes that the reader knows how to apply the Hungarian algorithm on a similarity matrix because they have note in the introduction to Section 3 that Zager et. al. used the Hungarian algorithm for this purpose in the paper here.Furthermore, there is no requirement in the Hungarian algorithm that necessitates integral entries; ...
Each bit can be either 0 or 1, so you have two choices per bit. That gives you 2^n combinations. E.g. n=1 implies 2^1=2 states, n=2 implies 2^2=4 states. You could arrive at this by 1) making a lexicographic list of all the combinations or 2) using a formula from combinatorics.Your second question seems to address representing integers with binary numbers. ...
For your first question, I do not know any general techniques or rules of thumb that you can use to model arbitrary restrictions in flow networks. Most examples I have seen are generally based on some intuition about the nature of restrictions, and often at first seem arbitrary.For your particular case, I have yet to come up with a good mapping to max-...
As I stated on the CSTheory post, this is solved via maximum matching. The following should give enough intuition to show that each agent $a_i$ has a $q_i$-matching iff a transformed graph $G'$ has a matching. First, construct the graph $G$. Now, for each agent $a_i$ and quota $q_i$, make a new graph $G'$ that has $q_i$ copies of $a_i$. That is to say, if ...
You have an instance of a bipartite matching problem. There are some variations on the problem. I think you're looking for a minimum cost bipartite matching, but maybe you're looking for a stable matching.
Your problem statement is not very clear about whether the constraints are hard or soft. Hard constraints: Suppose the constraints are hard: each triangle must be assigned to one of the closest circles (there might be multiple possibilities, in case of a tie), and vice versa. For example triangle 1 can only be matched to circle 1 or 2; triangle 2 can only ...
This actually has nothing to do with the stable marriage problem; it's an instance of bipartite matching. (It's not related to stable marriage, because you don't have an ordering on the preferences of which box each item is matched to; you just have a list of what's allowed or disallowed.) There are efficient algorithms: use the Hungarian algorithm, or any ...
Riley's answer is excellent. It is possible to improve the running time further to $O(mn)$ time, using dynamic programming. This saves a factor of $n$ in the running time.Define $T[i,j]$ to be the total profit of the best assignment for times $i,i+1,\dots,m$, assuming that at time $i$ the agent is assigned to task $j$. We'll compute all of the $T[i,j]$ ...
To build on Paresh's answer, if all the max capacities are one (and everything else is integer), you could also split each node into two so that node (n-) has all the in edges, node (n+) has all the out edges, and (n-) and (n+) are connected with an edge of max capacity 1. Solve this new min-cost network and you are done.If the max capacities are not all ...
I searched for "job shop scheduling uncertain processing times" and came up with this: http://www.waset.org/journals/waset/v64/v64-190.pdf. I hope it helps.I think you will find that the best approximation algorithm is going to depend on the probability distribution of completion times. An algorithm for exponentially distributed times might be very ...
This is a form of assignment problem; in particular, it is an instance of the quadratic assignment problem. There are some known techniques available for solving this sort of problem. Using integer linear programming: One approach is to use integer linear programming. Let $d_{p,q}$ denote the amount of data communicated between processes $p$ and $q$, for each ...
One place to look at is the classic book The stable marriage problem. The link provides a relevant excerpt, showing that the matching produced by the standard Gale–Shapley algorithm is male optimal and female pessimal: any man gets the best possible partner (in his view) he can get in any perfect matching, and any woman gets the worst possible partner (in ...
These are two separate questions.How many possible combinations of "n" bits are there? Well, bit 1 can take any of the two values (so there's 2 possibilities for bit 1); for any of them, bit 2 can take any of the two values (so there's 2*2 possibilities for bits 1,2); for any of the combinations of bits 1 and 2, bit 3 can take any of the two values, etc....
Assuming that you are trying to maximize the seating preferences, this problem is NP-hard =(. NP-hardness: Specifically, consider the decision version of this problem: Given a matrix of preferences, is there some way to assign people to seats such that the total score (sum of resulting preferences of professors to their nearby neighbors) obtained is at or ...
Here's one technique to enumerate the best $n$ assignments, for any instance of the assignment problem. I suspect my approach isn't optimal, but it does run in polynomial time: it uses $O(nm)$ invocations of the Hungarian algorithm, where $m$ denotes the number of agents in the problem instance. In your example, $m=26$, so my approach requires $O(n)$ ...
Read the following paper on the generalization of your problem with "makespan" as the objective. The proposed algorithm should work even if $m\neq n$.H. Ma and S. Koenig. "Optimal Target Assignment and Path Finding for Teams of Agents." http://idm-lab.org/bib/abstracts/papers/aamas16a.pdf
First, you can model the task management as a directed graph. Suppose you have a source node $a$, a sink node $b$, and $mn$ nodes, one for each task. We say that $v_{ij}$ represents the $j$th task on the $i$th time period. The edges are as follows. $$E=\{(a,v_{11}),(a,v_{12}),\dots,(a,v_{1n})\}\cup\{(v_{ij},v_{(i+1)j'})\mid 1\leq i<m,1\leq j\leq n, 1\...
Your problem can be solved in polynomial time.You mention two possible goals and say you'd be happy with a solution to either. The first goal isn't well-defined, so I'll describe a solution to the second goal. I can see two possible approaches: (a) use integer linear programming (ILP), (b) use network flow. The former will be simple to implement, and ... |
There is a standard approach to these results, which dates back to Lévy and Solovay. The point is that, for any $\lambda$, given an embedding witnessing $\lambda$-supercompactness of $\kappa$, there is a canonical way of lifting the embedding to a $\lambda$-supercompact embedding in $V^Q$ (using simply that $Q$ is "small"). Lévy and Solovay present this in terms of extending ultrafilters in the ground model to ultrafilters in $V^Q$ in canonical ways. Their paper is
Azriel Lévy, Robert M. Solovay.
Measurable cardinals and the continuum hypothesis, Israel J. Math. 5, (1967), 234–248. MR0224458 (37 #57).
Besides the Lévy-Solovay paper or the argument in Jech's book, a modern presentation can be seen, for example, in Cummings's paper in the Handbook. There are also several results by Hamkins and others extending these techniques to other contexts, see for example
Joel David Hamkins, W. Hugh Woodin.
Small forcing creates neither strong nor Woodin cardinals. Proc. Amer. Math. Soc. 128 (10), (2000), 3025–3029. MR1664390 (2000m:03121).
Very briefly, given $j:V\to M$, define $\hat j:V[G]\to M[G]$ by $\hat j(\dot x_G)=j(\dot x)_G$. Standard arguments verify that this is well-defined, elementary, etc. Note that here we are using that $Q\in M$. If $Q$ has size $\kappa$ or larger, there are cases where the extension is still possible, but we need to be more careful. For example, $j(Q)$ is in general strictly larger than $Q$, so the existence of a $j(Q)$-generic over $M$ is no longer for free. The study of this situation leads to what we now call Silver's master conditions. |
Given two free semicircular elements $X_1$ and $X_2$ and a projection $h$ in the von Neumann algebra generated by $X_1$, how does one show that the von Neumann algebra generated by $\{X_1, hX_2(1-h)\}$ is a factor? It is easy to show that the two elements in the generating set are free. But I am unable to see what kind of an object $hX_2(1-h)$ is. It appears in the definition of the interpolated free group factors in Radulescu's paper (preprint, 1991) on random matrices, amalgamated free products and subfactors of free group factors of non-integer index.
Let me first point out that $X_1$ and $Y=h X_2 (1-h)$ are not freely independent. This is most easily seen if $h$ has trace $1/2$, in which case $Y$ has range and support projections $h$ and $1-h$, respectively. But since the support and range projections of $Y$ belong to $W^*(Y)$, it would follow from the assumption that $Y$ and $X_1$ are free that actually $h$ and $X_1$ are free. But this is not possible, since they commute.
Now to your question of factoriality. Here is a sketch of the proof. Let us assume for definiteness that $\tau(h) \geq \tau (1-h)$ (otherwise, switch $Y$ and $Y^\ast$). You can then verify that $Y^\ast Y$ and $YY^\ast$ have free Poisson distributions (with different parameters) and that the spectrum of $YY^\ast$ has no atoms. It follows that if you consider the polar decomposition $Y = V |Y|$, then $V$ is a partial isometry with domain projection $1-h$ and range projection $\leq h$. Using this, you can see that $W^*(X_1,Y)$ is a factor iff $N=h W^*(X_1,Y)h$ is a factor. But $N$ is generated by $hX_1h$ and $Y^\ast Y$; you can prove that these elements are freely independent (in $N$). This either uses a random matrix model (see e.g. Voiculescu's book on free random variables for the proof of the compression formula for free group factors), or can be done directly using operator-valued semicircular systems. Thus $N = W^*(hX_1h) * W^*(Y^\ast Y)$, which is a free product of two abelian von Neumann algebras, one of which is diffuse and the other of which is not $\mathbb{C}$. You can then get factoriality (see references in Ueda's paper http://arxiv.org/abs/1011.5017)
One way of thinking about the operator $$Y=hX_{2}(1-h)$$ is to work with random matrix models. More specifically, the operators $X_{1}$ and $X_{2}$ can be thought of as the limit as $n\to\infty$ of two independent $n\times n$ Hermitian random matrices whose upper triangular parts are formed by i.i.d. Gaussian random variables of zero mean and variance $1/\sqrt{n}$.
Then $Y$ is the limit of the upper right corner of $X_{2}$. For example, let us assume (for notational simplicity only) that $\tau(h)=1/2$; then by the previous argument you can think of $X_{1}$ and $Y$ as:
$$X_{1} = \begin{pmatrix} x & z \\ z^{*} & y \end{pmatrix}, \qquad Y = \begin{pmatrix} 0 & w \\ 0 & 0 \end{pmatrix}$$
where $x$ and $y$ are semicircular operators, $z$ and $w$ are circular operators, and all of them are free. This will help you to understand the operator $Y$ and deduce all you need (joint moments, factoriality, etc.) about the von Neumann algebra generated by $\{X_{1},Y\}$. Note that, as Dima is showing you, the elements $X_{1}$ and $Y$ are not free over the algebra of complex numbers. However, represented as two-by-two matrices of operators as above, they are free over the algebra $M_{2}(\mathbb{C})$. |
If you would like to submit an interactive comment (short comment, referee comment, editor comment, or author comment) for immediate non-peer-reviewed publication in the interactive discussion of a paper recently published in CPD, please locate this paper on the CPD papers in open discussion web page and follow the appropriate links there.
Short comments can be submitted by every registered member of the scientific community (free registration is accessible via the log-in link). They are restricted to a maximum length of 10 pages and have to be submitted within 8 weeks after publication of the discussion paper in CPD. For details see interactive public discussion.
If you want to use LaTeX commands in your comment, you need to "activate LaTeX commands" by clicking on the appropriate button just above the text input window.
The following template is a simple ASCII text file with a typical layout for interactive comments and some frequently used LaTeX commands. It can be viewed, edited, copied, and pasted into the text field of the comment submission form using any standard text editor: comment_example.txt.
LaTeX ignores extra spacing between words. If you want to force a line break, please use a double backslash \\ in the appropriate place. For separating paragraphs, use two hard returns.
Italic text may be created by putting the text into curly braces with a \it after the opening brace. Typing "{\it This is italic} and this is not." will produce "This is italic and this is not.", with the first three words in italics. Remember to include the empty space between the \it and the rest of the text. Bold face text is produced in a similar way using \bf: typing "{\bf This is bold} and this is not." will produce "This is bold and this is not.", with the first three words in bold. Again, remember to include the empty space between the \bf and the rest of the text.
To create a subscript, type a dollar sign, an underscore, an open curly brace, the character(s) you want to be subscripted, a close curly brace, and another dollar sign. Typing "H$_{2}$SO$_{4}$" will produce "H₂SO₄", and typing "T$_{ice}$" will produce "T" with "ice" as a subscript.
Creating a superscript follows the same procedure, with the difference that you need to put a caret sign instead of the underscore. Typing "cm$^{-3}$" will produce "cm⁻³", and typing "T$^{ice}$" will produce "T" with "ice" as a superscript.
Some characters have a special function in LaTeX; if you want to use them as a normal character you need to put a backslash in front of them.
\% \$ \& \# \_ \{ \}
In particular the percent sign is used to introduce commented text in LaTeX, so ALWAYS put the backslash in front of it or else some of your text will disappear.
Greek symbols can be used by putting the special commands listed below between two dollar signs:
\alpha, \beta, \gamma, \delta, \epsilon, \nu, \kappa, \lambda, \mu, \pi, \omega, \sigma, etc.
Typing "$\mu$m" will produce "µm".
Similarly, upper-case Greek letters can be produced:
\Gamma, \Delta, \Lambda, \Sigma, \Omega, \Theta, \Phi, etc.
Some frequently used mathematical symbols are produced in the same way as Greek symbols:
Typing $<$ will produce <. Typing $>$ will produce >. Typing $=$ will produce =. Typing $\times$ will produce ×. Typing $\pm$ will produce ±. Typing $\sim$ will produce ~. Typing $^\circ$ will produce °. Typing $\rightarrow$ will produce an arrow pointing to the right, as frequently used in chemical reactions.
Simple equations are produced by putting all numbers, symbols, and signs between two dollar signs. Typing "$E = m c^{2}$" will produce "E = mc²", and typing "$P_{t} = P_{0} A^{kT}$" will produce the same formula with "t" and "0" as subscripts and "kT" as a superscript.
When we say "pick a random integer", the integers are in the range $[1, N]$.
Problem:
Consider the following instructions:
Pick a random integer.
Find the greatest common divisor of all the integers which have been picked so far.
If the greatest common divisor is not 1, go back to the first step to pick a random integer.
When this process ends, let $X$ be the total number of integers you have picked. I would like to find $E(X)$ in terms of $N$.
Example of this process: (where $N=4$)
I picked a random integer, 2. $\gcd(2)=2$.
I picked another random integer, 4. $\gcd(2, 4) = 2$.
I picked another random integer, 2. $\gcd(2, 4, 2) = 2$.
I picked another random integer, 3. $\gcd(2, 4, 2, 3) = 1$.
In total, I picked $X=4$ integers, $2, 4, 2, 3$.
My ideas:
An easy upper bound can be shown to be $N$, which is the expected number of picks needed before getting a 1.
Let $M(g, N)$ be the expected number of integers I will need after getting $g$ as the greatest common divisor of the integers so far.
We have $E(X)=1+\frac{\sum_{g=1}^N M(g, N)}{N}$.
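To sanity-check, here is a quick Monte Carlo sketch (my own) that estimates $E(X)$ directly; it reproduces the small cases listed just below:

import math
import random

def sample_X(N):
    g, count = 0, 0              # math.gcd(0, k) == k, so g = 0 starts things off
    while True:
        count += 1
        g = math.gcd(g, random.randint(1, N))
        if g == 1:
            return count

def estimate_E(N, trials=100_000):
    return sum(sample_X(N) for _ in range(trials)) / trials

for N in (1, 2, 3, 4, 10):
    print(N, round(estimate_E(N), 3))   # gives 1.0 for N=1 and ~2.0 for N=2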
Small cases:
$N=1\rightarrow E(X)=1$
$N=2\rightarrow E(X)=2$ |
I have a problem at the intersection of a range of topics: exponential programming, semi-definite programming and computer science, that I am having trouble finding a decent method for solving.
Take $A_i\in\mathbb{R}^{d\times d}$ with $A_i = A^T_i$, $i\in\mathcal{I}$ and $\mathcal{I}$ is a finite set of indices. $-\infty < \text{tr}(A_i)< 0,\ \forall i\in\mathcal{I}$. We also have $b_i\in\mathbb{R}^+$.
We seek $X_i\in\mathbb{R}^{d \times d}$ that solves the optimization problem
\begin{align} &\min \sum_i b_i e^{\text{tr}(A_i X_i)} \\ \text{s.t.} & \sum_i e^{\text{tr}(X_i)} \leq \mathcal{C} \\ & \| X_i \|^2_{Fr} \leq \alpha\\ & X_i = X^T_i \end{align}
This can be solved using CVX, since it satisfies the Disciplined Convex Programming rules. The problem is that, because it is semidefinite programming over the exponential cone, it requires an approximation of the cone and is thus quite slow. I have also used the large blunt object that is NLOPT, which performs quickly, but this seems unsatisfactory given that it doesn't really exploit the convex structure.
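For reference, here is how the model looks in CVXPY with made-up data (a sketch only; CVXPY hands the exp terms to an exponential-cone-capable solver such as SCS or MOSEK, which avoids the successive approximation CVX falls back on):

import numpy as np
import cvxpy as cp

d, m = 4, 3                                   # illustrative sizes only
rng = np.random.default_rng(0)
A = []
for _ in range(m):
    M = rng.standard_normal((d, d))
    M = (M + M.T) / 2                         # symmetrize
    M -= (np.trace(M) / d + 1.0) * np.eye(d)  # force tr(A_i) < 0
    A.append(M)
b = rng.uniform(1.0, 2.0, m)
C, alpha = 10.0, 5.0

X = [cp.Variable((d, d), symmetric=True) for _ in range(m)]
objective = cp.Minimize(sum(b[i] * cp.exp(cp.trace(A[i] @ X[i]))
                            for i in range(m)))
constraints = [sum(cp.exp(cp.trace(Xi)) for Xi in X) <= C]
constraints += [cp.sum_squares(Xi) <= alpha for Xi in X]  # ||X_i||_F^2 <= alpha

prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.SCS)
print(prob.status, prob.value)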
My question is: What other methods could I reasonably attack this problem with? In particular ones that might work in parallel (the real set $\mathcal{I}$ is sufficiently large that the problem is distributed across many computers). |
Faddeeva Package
Faddeeva / complex error function
Steven G. Johnson has written free/open-source C++ code (with wrappers for other languages) to compute the scaled complex error function $w(z) = e^{-z^2}\mathrm{erfc}(-iz)$, also called the Faddeeva function (and also the plasma dispersion function), for arbitrary complex arguments $z$ to a given accuracy. Download the source code from: http://ab-initio.mit.edu/Faddeeva_w.cc (updated 30 October 2012)

Given the Faddeeva function, one can easily compute Voigt functions, the Dawson function, and similar related functions. Our implementation includes special-case optimizations for purely real or imaginary $z$, making its performance competitive with specialized implementations of (e.g.) the Dawson function, erfcx, and erfi.

Usage
To use the code, add the following declaration to your C++ source (or header file):
#include <complex>
extern std::complex<double> Faddeeva_w(std::complex<double> z, double relerr=0);
The function Faddeeva_w(z, relerr) computes $w(z)$ to a desired relative error relerr. Omitting the relerr argument, or passing relerr=0 (or any relerr less than machine precision $\varepsilon \approx 10^{-16}$), corresponds to requesting machine precision, and in practice a relative error $< 10^{-13}$ is usually achieved. Specifying a larger value of relerr may improve performance (at the expense of accuracy).

You should also compile Faddeeva_w.cc and link it with your program, of course.
In terms of $w(z)$, some other important functions are:

$\mathrm{erfcx}(z) = e^{z^2} \mathrm{erfc}(z) = w(iz)$ (scaled complementary error function)
$\mathrm{erfc}(z) = e^{-z^2} w(iz)$ (complementary error function)
$\mathrm{erf}(z) = 1 - e^{-z^2} w(iz)$ (error function)
$\mathrm{erfi}(z) = -i\,\mathrm{erf}(iz) = -i[e^{z^2} w(z) - 1]$; for real $x$, $\mathrm{erfi}(x) = e^{x^2} \mathrm{Im}[w(x)] = \frac{\mathrm{Im}[w(x)]}{\mathrm{Re}[w(x)]}$ (imaginary error function)
$F(z) = \frac{i\sqrt{\pi}}{2} \left[ e^{-z^2} - w(z) \right]$; for real $x$, $F(x) = \frac{\sqrt{\pi}}{2}\mathrm{Im}[w(x)]$ (Dawson function)
Note that in the case of erf and erfc, we provide different equations for positive and negative Re($z$), in order to avoid numerical problems arising from multiplying exponentially large and small quantities. For erfi and $F$, there are simplifications that occur for real $x$ as noted. Furthermore, if you want to compute e.g. erfi or the Dawson function $F$ for real $z=x$, you can obtain the imaginary part of $w(x)$ directly without computing the real part, by calling:

extern double ImFaddeeva_w(double x);

which computes Im[$w(x)$] efficiently.

Wrappers: Matlab, GNU Octave, and Python
Wrappers are available for this function in other languages.
Matlab (also available here): A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_mex.cc (along with the help file Faddeeva_w.m). Compile it into a MEX file with:

mex -output Faddeeva_w -O Faddeeva_w_mex.cc Faddeeva_w.cc

GNU Octave: A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_oct.cc. Compile it into an Octave plugin with:

mkoctfile -DMPICH_SKIP_MPICXX=1 -DOMPI_SKIP_MPICXX=1 -s -o Faddeeva_w.oct Faddeeva_w_oct.cc Faddeeva_w.cc

Python: Our code is used to provide scipy.special.wofz in SciPy starting in version 0.12.0 (see here).
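Given the SciPy wrapper just mentioned, the special-case relations listed above are easy to exercise from Python; a minimal sketch (assuming SciPy >= 0.12.0):

import numpy as np
from scipy.special import wofz          # wofz(z) = w(z), per the note above

x = 1.5
w = wofz(x)                             # Faddeeva function at a real argument
print(np.sqrt(np.pi) / 2 * w.imag)      # Dawson F(x) = sqrt(pi)/2 * Im[w(x)]
print(np.exp(x**2) * w.imag)            # erfi(x)     = e^{x^2} * Im[w(x)]
print(wofz(1j * x).real)                # erfcx(x)    = w(ix) for real x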
Algorithm
This implementation uses a combination of different algorithms. For sufficiently large $|z|$, we use a continued-fraction expansion for $w(z)$ similar to those described in:

Walter Gautschi, "Efficient computation of the complex error function," SIAM J. Numer. Anal. 7(1), pp. 187–198 (1970).
G. P. M. Poppe and C. M. J. Wijers, "More efficient computation of the complex error function," ACM Trans. Math. Soft. 16(1), pp. 38–46 (1990); this is TOMS Algorithm 680.
Unlike those papers, however, we switch to a completely different algorithm for smaller $|z|$:

Mofreh R. Zaghloul and Ahmed N. Ali, "Algorithm 916: Computing the Faddeyeva and Voigt Functions," ACM Trans. Math. Soft. 38(2), 15 (2011). Preprint available at arXiv:1106.0151.

(I initially used this algorithm for all $z$, but the continued-fraction expansion turned out to be faster for larger $|z|$. On the other hand, Algorithm 916 is competitive or faster for smaller $|z|$, and appears to be significantly more accurate than the Poppe & Wijers code in some regions, e.g. in the vicinity of $|z|=1$ [although comparison with other compilers suggests that this may be a problem specific to gfortran]. Algorithm 916 also has better relative accuracy in Re[$z$] for some regions near the real-$z$ axis. You can switch back to using Algorithm 916 for all $z$ by changing USE_CONTINUED_FRACTION to 0 in the code.)
Note that this is SGJ's independent re-implementation of these algorithms, based on the descriptions in the papers only. In particular, we did not refer to the authors' Fortran or Matlab implementations (respectively), which are under restrictive "semifree" ACM copyright terms and are therefore unusable in free/open-source software.
Algorithm 916 requires an external complementary error function erfc($x$) for real arguments $x$ to be supplied as a subroutine. More precisely, it requires the scaled function $\mathrm{erfcx}(x) = e^{x^2} \mathrm{erfc}(x)$. Here, we use an erfcx routine written by SGJ that uses a combination of two algorithms: a continued-fraction expansion for large $x$ and a lookup table of Chebyshev polynomials for small $x$. (I initially used an erfcx function derived from the DERFC routine in SLATEC, modified by SGJ to compute erfcx instead of erfc, but the new erfcx routine is much faster.)
Similarly, we also implement special-case code for real $z$, where the imaginary part of $w$ is Dawson's integral. Like erfcx, this is also computed by a continued-fraction expansion for large $|x|$, a lookup table of Chebyshev polynomials for small $|x|$, and finally a Taylor expansion for very small $|x|$.

Test program
To test the code, a small test program is included at the end of Faddeeva_w.cc which tests $w(z)$ against several known results (from Wolfram Alpha) and prints the relative errors obtained. To compile the test program, #define FADDEEVA_W_TEST in the file (or compile with -DFADDEEVA_W_TEST on Unix) and compile Faddeeva_w.cc. The resulting program prints SUCCESS at the end of its output if the errors were acceptable.
License
The software is distributed under the "MIT License", a simple permissive free/open-source license:
Copyright © 2012 Massachusetts Institute of Technology

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
There is an answer to a similar question here discussing the relationship between stationarity and the recursive equation for a model. Some additional remarks are pertinent to your case.
Firstly, it is important to understand that the recursive equation defining the ARMA($p,q$) process is not sufficient to fully specify the process, even with specification of the distribution of the underlying noise series $\boldsymbol{Z} = \{ Z_t| t \in \mathbb{Z} \}$. The recursive equations lock in the auto-correlation of the process, and specification of the distribution of the underlying noise series $\boldsymbol{Z}$ is then sufficient to give an asymptotic marginal distribution for the observable values. However, the full joint distribution for the observable series $\boldsymbol{X} = \{ X_t| t \in \mathbb{Z} \}$ also depends on the distribution of the "starting values". In some cases this is specifically defined, and in cases where it is not, it is usual to take this to mean that you are using the unique stationary distribution.
ARMA($p,q$) model with white noise: You have specified an underlying white noise series so you have distribution $Z_t \sim \text{IID N}(0, \sigma^2)$ for the underlying series. Assuming that $\max| \phi_i | <1$ you have asymptotic stationarity with limiting distribution:
$$X_\infty \sim \text{N} \Bigg( 0, \sigma^2 \sum_{i=0}^\infty \psi_i^2 \Bigg) \qquad \text{where} \qquad \psi(B) \equiv \frac{\theta(B)}{\phi(B)}.$$
To get an ARMA process with strict stationarity, you would set $X_1, \cdots , X_p$ to have a joint normal distribution with zero mean, the above variance, and auto-correlation that is consistent with your specified ARMA model. This would give you strict stationarity, in the sense that the joint distribution of observable values does not change when shifted in time.
If you were to use a different joint distribution for your "starting values", this would lead to a non-stationary model. It would still be asymptotically stationary with the above limiting distribution, but the condition of strict stationarity would not hold.
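The effect of the starting values is easy to see numerically in the simplest case; a minimal AR(1) sketch (the parameter values are illustrative):

import numpy as np

rng = np.random.default_rng(1)
phi, sigma, n, reps = 0.9, 1.0, 200, 20000

for start in ("stationary", "fixed at 0"):
    x = np.zeros((reps, n))
    if start == "stationary":
        # draw X_0 from the limiting N(0, sigma^2/(1-phi^2)) distribution
        x[:, 0] = rng.normal(0, sigma / np.sqrt(1 - phi**2), reps)
    for t in range(1, n):
        x[:, t] = phi * x[:, t - 1] + rng.normal(0, sigma, reps)
    # stationary start: Var(X_1) and Var(X_n) both ~5.26;
    # fixed start: Var(X_1) ~ 1, converging to ~5.26 only as t grows
    print(start, x[:, 1].var(), x[:, -1].var())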
Intuitive explanation: The recursive formula for an ARMA model specifies the relationship that each observable value has with previous values. There is an infinite class of possible models that are consistent with this recursive specification (each represented by a full joint distribution for which that recursive equation holds). For a model with auto-regressive coefficients inside the unit circle, there is a unique strictly stationary model in this class, but there are also an infinite number of other models that are non-stationary, yet still have an asymptotic limiting distribution. The basic thing to remember for the intuitive explanation is that the ARMA equation, being merely a recursive equation, does not fully specify the model. |
Abbreviation:
EqRel
An equivalence relation is a structure $\mathbf{X}=\langle X,\equiv\rangle$ such that $\equiv$ is a binary relation on $X$ (i.e. $\equiv\ \subseteq X\times X$) that is
reflexive: $x\equiv x$
symmetric: $x\equiv y\Longrightarrow y\equiv x$
transitive: $x\equiv y\text{ and }y\equiv z\Longrightarrow x\equiv z$
Let $\mathbf{X}$ and $\mathbf{Y}$ be equivalence relations. A morphism from $\mathbf{X}$ to $\mathbf{Y}$ is a function $h:X\rightarrow Y$ that is a homomorphism: $x\equiv^{\mathbf X} y\Longrightarrow h(x)\equiv^{\mathbf Y}h(y)$
An equivalence relation is a qoset that is symmetric: $x\equiv y\Longrightarrow y\equiv x$
Example 1:
Equivalence relations are in 1-1 correspondence with partitions.
$\begin{array}{lr} f(1)= &1\\ f(2)= &2\\ f(3)= &3\\ f(4)= &5\\ f(5)= &7\\ \end{array}$ $\begin{array}{lr} f(6)= &11\\ f(7)= &15\\ f(8)= &22\\ f(9)= &30\\ f(10)= &42\\ \end{array}$
The number of (labelled) equivalence relations on an $n$-element set is given by a sum of Stirling numbers of the second kind (the Bell number $B_n = \sum_{k=0}^n S(n,k)$).
The number of (nonisomorphic) equivalence relations is the number of partition patterns (= number of integer partitions).
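Both counts are easy to tabulate, which also verifies the $f(n)$ table above; a minimal sketch:

from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # S(n, k): partitions of an n-set into k nonempty blocks
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    # labelled equivalence relations on an n-element set
    return sum(stirling2(n, k) for k in range(n + 1))

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    # nonisomorphic equivalence relations = integer partitions p(n)
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - k, min(k, n - k)) for k in range(1, max_part + 1))

print([partitions(n) for n in range(1, 11)])  # 1, 2, 3, 5, 7, 11, 15, 22, 30, 42
print([bell(n) for n in range(1, 6)])         # 1, 2, 5, 15, 52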
Supervariety: [[Preordered sets]] |
This question relates to a specific paper by Eric Budish, published in 2011 in JPE, but I've tried to put all relevant information in this question. On page 1072, he defines
budget constraint hyperplanes as follows:
Let $H(i,x) = \{\mathbf{p}:\mathbf{p} \cdot x = b_i \}$ denote the hyperplane in $M$-dimensional price space along which agent $i$ can exactly afford bundle $x$. As prices cross $H(i,x)$ from below, bundle $x$ goes from being affordable for $i$ to being unaffordable for $i$.
$\textbf{p}$ represents a price vector for the $M$ goods, which, importantly, are indivisible. $b_i$ is agent $i$'s budget. I think the rest is sufficiently self-explanatory. My confusion is with this subsequent statement:
Importantly, the number of such hyperplanes is finite because the number of agents and the number of bundles are finite. This is an advantage of having only indivisible goods.
This I don't see. For example, suppose $M=2$. Then aren't all $\textbf{p}=(\alpha b_i, (1-\alpha)b_i)$ such that $\alpha \in [0,1]$ hyperplanes meeting that definition, and thus I have infinitely many?
Having said that, a hyperplane is the set of those price vectors, not each of the vectors, so maybe the full set of price vectors I've just described together defines just one hyperplane? I suppose my issue is understanding what a hyperplane is in this setting, and how the indivisibility gives us just a finite set of hyperplanes to work with. Any guidance would be very much appreciated. |
Given the Schwarzschild metric with $(-,+,+,+)$ signature,
$$\text ds^2=-\left(1-\frac{2M}{r}\right)dt^2+\left(1-\frac{2M}{r}\right)^{-1}dr^2+r^2(d\theta^2+\sin^2\theta\,d\phi^2)$$
the lack of dependence of the metric on $t$ and $\phi$ allow us to read off the Killing vectors $K_1=\partial_t$ and $K_2=\partial_{\phi}$. These vectors, in their coordinate representations, are given by
$$K_1=\left(-\left(1-\frac{2M}{r}\right),0,0,0\right)$$
$$K_2=\left(0,0,0,r^2\sin^2\theta\right)$$
How does one immediately read off those vector components for $K_1$ and $K_2$? What is the logic behind reading them off? How would I "read off the Killing vectors" if I, while maintaining no explicit dependence on $t$ or $\phi$, added some off-diagonal terms to the metric? Please help me intuitively understand what's going on here. |
The textbook Elements of Information Theory gives us an example:
For example, if we knew the true distribution p of the random
variable, we could construct a code with average description length
H(p). If, instead, we used the code for a distribution q, we would
need H(p) + D(p||q) bits on the average to describe the random
variable.
To paraphrase the above statement: if we keep using a code optimized for the distribution q after the true distribution has changed to p, we need D(p||q) extra bits on average to code the new distribution.
An illustration
Let me illustrate this using one application of it in natural language processing.
Consider that a large group of people, labelled B, are mediators and each of them is assigned a task to choose a noun from
turkey,
animal and
book and transmit it to C. There is a guy named A who may send each of them an email to give them some hints. If no one in the group received the email, they may raise their eyebrows and hesitate for a while, considering what C needs; the probability of each option being chosen is then 1/3, a totally uniform distribution (if not, it may relate to their own preferences, and we just ignore such cases).
But if they are given a verb, like
baste, 3/4 of them may choose
turkey and 3/16 choose
animal and 1/16 choose
book. Then how much information in bits has each of the mediators on average obtained once they know the verb? It is:
\begin{align*}D(p(nouns|baste)||p(nouns)) &= \sum_{x\in\{turkey, animal, book\}} p(x|baste) \log_2 \frac{p(x|baste)}{p(x)} \\&= \frac{3}{4} * \log_2 \frac{\frac{3}{4}}{\frac{1}{3}} + \frac{3}{16} * \log_2\frac{\frac{3}{16}}{\frac{1}{3}} + \frac{1}{16} * \log_2\frac{\frac{1}{16}}{\frac{1}{3}}\\&= 0.5709 \space \space bits\\\end{align*}
But what if the verb given is
read? We may imagine that all of them would choose
book with no hesitation, then the average information gain for each mediator from the verb
read is:
\begin{align*}D(p(nouns|read)||p(nouns)) &= \sum_{x\in\{book\}} p(x|read) \log_2 \frac{p(x|read)}{p(x)} \\&= 1 * \log_2 \frac{1}{\frac{1}{3}} \\& =1.5849 \space \space bits \\\end{align*}We can see that the verb
read can give the mediators more information. And that's what relative entropy can measure.
Let's continue our story. If C suspects that the noun may be wrong because A told him that he might have made a mistake by sending the wrong verb to the mediators. Then how much information in bits can such a piece of bad news give C?
1) if the verb given by A was
baste:
\begin{align*}D(p(nouns)||p(nouns|baste)) &= \sum_{x\in\{turkey, animal, book\}} p(x) \log_2 \frac{p(x)}{p(x|baste)} \\&= \frac{1}{3} * \log_2 \frac{\frac{1}{3}}{\frac{3}{4}} + \frac{1}{3} * \log_2\frac{\frac{1}{3}}{\frac{3}{16}} + \frac{1}{3} * \log_2\frac{\frac{1}{3}}{\frac{1}{16}}\\&= 0.69172 \space \space bits\\\end{align*}
2) but what if the verb was
read?\begin{align*}D(p(nouns)||p(nouns|read)) &= \sum_{x\in\{book, *, *\}} p(x) \log_2 \frac{p(x)}{p(x|read)} \\&= \frac{1}{3} * \log_2 \frac{\frac{1}{3}}{1} + \frac{1}{3} * \log_2\frac{\frac{1}{3}}{0} + \frac{1}{3} * \log_2\frac{\frac{1}{3}}{0}\\&= \infty \space \space bits\\\end{align*}
Since C never knows what the other two nouns would be, any word in the vocabulary would be possible.
We can see that the KL divergence is asymmetric.
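The numbers in the story are easy to reproduce, and the asymmetry shows up immediately; a minimal sketch:

import numpy as np

def kl(p, q):
    # D(p||q) in bits; terms with p(x) = 0 contribute nothing, q(x) = 0 gives inf
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    with np.errstate(divide="ignore"):
        return float(np.sum(p[m] * np.log2(p[m] / q[m])))

prior = [1/3, 1/3, 1/3]       # p(nouns): turkey, animal, book
baste = [3/4, 3/16, 1/16]     # p(nouns | baste)

print(kl(baste, prior))       # ~0.5709 bits
print(kl(prior, baste))       # ~0.6917 bits, not equal to the value above
print(kl([0, 0, 1], prior))   # ~1.5849 bits, the verb "read" case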
I hope I am right, and if not please comment and help correct me. Thanks in advance. |
This is best understood in the framework of commutative rings.
There's a functor $$F:\mathbf{CRing} \leftarrow \mathbf{CRing}$$ given as follows: $$F(R) = R[i]/(i^2 +1).$$ We can think of $F$ as adjoining an element $i$ with the property that $i^2+1=0$. Informally, $i = \sqrt{-1}$.
There's also a functor $$G:\mathbf{CRing} \leftarrow \mathbf{CRing}$$ given as follows: $$G(R) = R[j]/(j \cdot 0 - 1).$$ We can think of $G$ as adjoining an element $j$ with the property that $j \cdot 0-1=0$. Informally, $j = \frac{1}{0}$.
However, it can be seen that $G(R)$ is always the trivial ring:
$$G(R) = R[j]/(j \cdot 0 - 1) = R[j]/(0-1) = R[j]/(-1) = R[j]/R[j] \cong 1$$
So we cannot get anything useful out of $G$.
From a slightly different vantage point, the main difference between $F$ and $G$ is this: there's an obvious morphism $f_R:F(R) \leftarrow R$, and an obvious morphism $g_R:G(R) \leftarrow R$. But, whereas the morphism $f_R$ is injective for all rings $R$, on the other hand, the morphism $g_R$ is
never injective, unless $R$ is the trivial ring, because $G(R)$ is always the trivial ring. So in particular, whereas $\mathbb{C} = F(\mathbb{R})$ can be viewed as an extension of $\mathbb{R}$, on the other hand, we cannot view $G(\mathbb{R})$ as an extension of $\mathbb{R}$. In fact, we cannot get anything useful like this at all. |
Historical Note on Prime Number Theorem
He also suggested that it could be approximated by the Eulerian logarithmic integral $\displaystyle \map \Li x = \int_2^x \frac {\d t} {\map \ln t}$.
It took another century before a proof was found. In the meantime, Chebyshev proved that $\dfrac 7 8 < \dfrac {\map \pi x} {x / \ln x} < \dfrac 9 8$
for all sufficiently large $x$.
He also proved that if the limit of the expression in question
does exist, then its value must be $1$. Since then, several complete proofs have been discovered.
The original theorem of Hadamard used in that proof is given on $\mathsf{Pr} \infty \mathsf{fWiki}$ as
Ingham's Theorem on Convergent Dirichlet Series, which is used in Order of Möbius Function, an essential part of the above proof.
Their proof (the elementary proof of Selberg and Erdős) did not make use of any analytic function theory, and relied entirely on basic properties of logarithms.
Dispute over whether to publish their results jointly or separately created a life-long feud between the two mathematicians.
Sources: David Wells, Curious and Interesting Numbers (1986); George F. Simmons, Calculus Gems (1992); David Wells, Curious and Interesting Numbers, 2nd ed. (1997); Christopher Clapham and James Nicholson, The Concise Oxford Dictionary of Mathematics, 5th ed. (2014). |
How can I evaluate the marginal cumulative distribution function of a set of random variables for which I do not have the CDF in closed form? I can, however, simulate from a joint distribution involving this set of variables.
To be more specific, assume I want to evaluate the CDF of $(X_1,X_2)$ but I only have a way to simulate from $(X_1,X_2,X_3)$.
Obviously I can approximate the CDF of $(X_1,X_2,X_3)$ by obtaining a large number of simulations and checking how many observations fall below the desired threshold. But how do I get the CDF of $(X_1,X_2)$? Can I simply throw $X_3$ away and use the same procedure as for the joint CDF?
Obviously I can't get $F_X(x) = P(X \leq x) = \lim_{y \to \infty} P(X \leq x, Y \leq y) = \lim_{y \to \infty} F_{XY} (x, y)$ because no closed form CDF is available. I also do not want to involve the pdf. |
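For what it is worth, the marginalization step can be checked empirically: dropping $X_3$ from the simulations and applying the same counting procedure does estimate the marginal CDF, since $P(X_1\le x_1, X_2\le x_2) = P(X_1\le x_1, X_2\le x_2, X_3<\infty)$. A minimal sketch with a made-up trivariate normal as the simulator:

import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
sims = rng.multivariate_normal(np.zeros(3), cov, size=100_000)

def marginal_cdf(x1, x2):
    # empirical P(X1 <= x1, X2 <= x2): ignore the X3 column entirely
    return np.mean((sims[:, 0] <= x1) & (sims[:, 1] <= x2))

print(marginal_cdf(0.0, 0.0))   # ~0.333 here (vs 0.25 if X1, X2 were independent)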
$$\int_0^\infty \frac{\ln x}{1-x^2}\,dx$$
The above integral is equal to $\dfrac{-a\pi^b}{c}$ for positive integers $a$, $b$ and $c$ with $a$ and $c$ being coprime. Evaluate $a+b+c$.
Note: You are given that $\zeta(2) = \dfrac{\pi^2}{6}$.
inspired by Vishwak Srinivasan
|
Sunrise and Sunset calculator calculates the exact time of sunset and sunrise of a given day of the Gregorian calendar. It is an online calendar tool to calculate sunset and sunrise at your location or any part of the world for any present, future or past date. It is necessary to follow the next steps:
Enter latitude, longitude, time zone and date in the corresponding formats: $m/d/y$. The latitude values must be in the range $[-90^o,90^o]$ while the longitude values must be in the range $[-180^o,180^o]$. The months must be in the interval $[1,12]$, the days must be in the interval $[1,31]$ and the years must be in the interval $[1900, 2019]$;
Press the "CALCULATE" button to make the computation;
Sunrise and sunset calculator will find the times of sunrise and sunset for the selected date.
Input: Two real numbers, the first in the range $[-90^o,90^o]$, the second value in the range $[-180^o,180^o]$, timezone and date $m/d/y$. These boxes contain the calendar interface; Output: The result is in time format $HH:MM\quad XM$, i.e. a $12$ hour system where seconds are not given, $XM$ represents $AM$ or $PM$.
Sunrise is the moment when the Sun appears on the horizon in the morning. Sunset is the disappearance of the Sun below the horizon in the evening.
Latitude is a coordinate which represents the north-south position of a point on the Earth. Latitude is an angle which ranges from the Equator to North or South at the poles. It is usually denoted by $\phi$ or $\psi$ and it is measured in degrees. Latitude takes values from the interval $\phi\in[-90^o,90^o]$, where $0^o$ represents the Equator.
Longitude is a coordinate which represents the east-west position of a point on the Earth. It is an angular measurement, expressed in degrees and denoted by $\lambda$. Meridians connect points with the same longitude. The prime meridian passes through the Royal Observatory, Greenwich, England. It has the longitude of $0^o$. The longitude takes values from the interval $\lambda\in[-180^o,180^o]$.
The grade school students may use this sunrise and sunset calculator to find what time is sunset and sunrise today or any other specific date.
Use of Sunrise & Sunset Time Calculation
For various practical needs, we often want to find the sunset and sunrise times for some date. Many people watch the sunset and sunrise because they have a good influence on their mental and physical abilities. The sunrise and sunset calculator will be very useful for fishermen, photographers, pilots, etc. The calculator would also be very useful for grade school students (K-12 education) to understand the concept of geographical coordinates. |
For a shorter proof, here are a few things we need to know before we start:$X_1, X_2 , ..., X_n$ are independent observations from a population with mean $\mu$ and variance $\sigma^{2}$$\mathbb E(X_i) = \mu$ , $\mathbb{Var}(X_i)= \sigma^{2}$$\mathbb E(X^2) = \sigma^{2} + \mu^{2}$$\mathbb{Var}(X)=\mathbb E(X^2)-\mathbb [E(X)]^2$$\mathbb E(\bar{X}^2) ...
I think what you need is that if $U(x,y)$ is homothetic then$$\forall \alpha \in \mathbb{R}_{++}, \forall (x,y) : \hskip 6pt\frac{\frac{\partial U(x,y)}{\partial x}}{\frac{\partial U(x,y)}{\partial y}} =\frac{\frac{\partial U(\alpha \cdot x,\alpha \cdot y)}{\partial x}}{\frac{\partial U(\alpha \cdot x,\alpha \cdot y)}{\partial y}}$$and love.
I know that during my university time I had similar problems to find a complete proof, which shows exactly step by step why the estimator of the sample variance is unbiased.The proof I used can be found under http://economictheoryblog.wordpress.com/2012/06/28/latexlatexs2/The proof itself is not very complicated but rather long. That also the reason why ...
From Varian (7th edition):Consumer behavior: at least chapters 1–6; preferably also chapters 7, 8, and 12.Perfect competition and Theory of production: chapters 15, 16, and 18-23.Monopoly price discrimination: chapter 25 (you might want to look at chapter 24; it seems a bit odd to study price discrimination without first looking at a non-discriminating ...
My prof is always telling us, if we want to pursue PhD level Econ in the future, we should master the full content of the following book:Microeconomic Theory. Andreu Mas—Colell Michael D. Whinston and. Jerry R. Green. New York Oxford OXFORD UNIVERSITY PRESS 1995.He also mentioned that there's main difference in viewpoint between graduate-level and top-...
Don't commit the cardinal mistake of equating preferences with choices.In the context of Expected Utility Theory, the fact that a risk-averse agent ($RA$) would choose $N$ over $M$ implies that$$E[u_{RA}(N)] > E[u_{RA}(M)]$$The fact that a risk-neutral agent ($RN$) could choose $M$ over $N$ implies that$$E[u_{RN}(N)] < E[u_{RN}(M)] \implies ...
Hidden information concerns characteristics that are unobservable by one side of the market. For example, a consumer's willingness to pay, a worker's productivity, the quality of a used car all fall under this category. The characteristics in question are typically assumed to be fixed or very costly to modify.Moral hazard concerns actions that are ...
It's great that you're developing an interest in economics. I would suggest Mankiw's Principles of Economics to start with. I believe it meets both your requirements and covers the two major areas of economics, microeconomics and macroeconomics, so you would get a decent overview of this field of study. Good luck!
In the preface, Takayama writes that the book was written with the intention to keep the prerequisites to a minimum: elementary calculus and matrix algebra.Perhaps he was exaggerating a little, but I suspect, after skimming the table of contents, that knowledge of the aforementioned subjects and experience working with (i.e. reading/understanding and ...
(i) $A$ & $B$: If player 1 plays $A$ with probability $p$ and $B$ with probability $(1-p)$, where $0<p<1$, then player 2's expected payoff from playing $D$ is $4p+4(1-p) = 4$; $E$ is $6p + 2(1-p) = 4p + 2$; $F$ is $6p + 4(1-p) = 2p + 4$. Since the payoff from playing $F$ is more than the payoff from playing any other strategy for player 2, he will ...
The following argument assumes that we are dealing only with interior solutions. Let $(x_1,x_2,x_3)$ be an optimal demand bundle at prices $(p_1,p_2,p_3)$ and income $m$, and assume that $(x_1',x_2',x_3')$ is an optimal demand bundle at prices $(p_1',p_2,p_3)$ and income $m$. We want to show that one of the following three cases holds: $x_2=x_2'$ and $x_3=x_3'$...
In an intertemporal maximization problem, we seek to find the optimal sequence of the control and the state variables. It is the recursive nature of the problem that permits us to consider a "typical" point in time and just one condition per variable.For each such problem, we need to find out (carefully) in how many distinct periods a specific ...
Let's improve the "answers per question" metric of the site, by providing a variant of @FiveSigma 's answer that uses visibly the i.i.d. assumption (showing also its necessity).We want to prove the unbiasedness of the sample-variance estimator,$$s^2 \equiv \frac{1}{n-1}\sum\limits_{i=1}^n(x_i-\bar x)^2$$using an i.i.d. sample of size $n$, from a ...
As far as I remember, Varian's book is aimed at second year undergraduates who are normally studying a micro 2 module (or something similar).As a result, he does tend to assume some prior knowledge so it would be beneficial for you to plug the gaps in your knowledge (assuming you haven't) before you proceed with this book.Even though you are studying ...
Note that I am not 100% sure. But in my understanding, we have: Year 1: price for a product in the US: $p_{US}=v$ \$; exchange rate: $x$ pesos for $1$ \$; price of the product in the Philippines: $p_{Ph}=v \cdot x$ pesos. Year 2: price for the same product in the US: $p^\prime_{US} = (1+\alpha_v)v$ \$. The price increased due to the inflation $\alpha_v$. Nominal ...
I've just solved this problem. First of all, your solution does not make too much sense, as in a simple interest rate rule it must hold that the sum of all coefficients must be greater than one. In your case this means that $\phi>1$. Therefore, the series would converge not to zero but to $\frac{1}{\phi-1}$. Second, an interest rate rule should try to ...
While textbooks are the best way to learn the material (MC=MR etc), here are some suggestions for improving your intuitive understanding of economics.BooksThe Undercover Economist by Tim HarfordThe Armchair Economist by Steven E. LandesburgBlogsMarginal RevolutionNoahpinionConversable EconomistThe Enlightened Economist (Great for book ...
In part a, if $B = D_1 + D_2$, then the SGPE should be$\left\lbrace D_1 = \frac{W}{3},\ D_2 = \frac{W}{3}, \left\lbrace P = \alpha (D_1 + D_2), \ S = (1 - \alpha) (D_1 + D_2) \right\rbrace \right\rbrace$Don't say $P = \alpha \frac{2W}{3}$. That's an action, and the second stage best respond should a strategy (function) to make the equilibrium subgame ...
We'll first find manager's strategy. Manager of the charity chooses $S$ and $P$ by solving the following problem :\begin{eqnarray*} \max_{S, P} & \ \frac{P^a S^{1-a}}{B^a} \\ \text{s.t.} & \ P+S = B \end{eqnarray*}where $B = D_1+ D_2$.Solving it we get the manager's strategy as a function of donations $D_1, D_2$ as :\begin{eqnarray*} P &=&...
"Endogenous Growth" is actually the short version of saying "Endogenous Technology Growth"Exogenous (Technology) Growth ModelsThe rate technological progress $g$ is Exogenously given.In both Solow and RCK, we can find $A_t = (1 + g)^t A_0 \ \ $(or $A(t) = A(0) e^{gt}$ if in continuous time). $Y$ increases over time because $A$ increases over time. This ...
Prove that $\Pi_j(x_j)$ is strictly concave in $x_j$.$\Pi_j(x_j) = G(x_j) + F\left(\frac{x_j}{y}\right) =G(x_j) + F\left(R_j(x_j)\right) $Differentiating it we get$\Pi_j'(x_j) =G'(x_j) + F'\left(R_j(x_j)\right)R_j'(x_j) $Differentating $\Pi_j'(x_j)$, we get$\Pi_j''(x_j) =G''(x_j) + F''\left(R_j(x_j)\right)(R_j'(x_j))^2 + F'\left(R_j(x_j)\right)R_j''...
Firm $i$'s profits $(\pi_i)$ as a function of its own price $(p_i)$ and the other firm's price $(p_j)$ are as follows :\begin{eqnarray*} \pi_i(p_i, p_j) = \begin{cases} (p_i-200)\min(1000- p_i, 300) & \text{if } p_i < p_j \\ (p_i-200)\min\left(\frac{1000- p_i}{2}, 300\right) & \text{if } p_i = p_j \\ 0 & \text{if } p_i > p_j\end{cases} \...
Roughly speaking, $X_1$ and $X_2$ are affiliated because they have the common component $T$. That is, if $X_1$ is large, $X_2$ tends to be large as well, because a large $X_1$ makes a large $T$ likelier than without this information.The variables are affiliated if for the joint density $f_{X_1,X_2}$$$f_{X_1,X_2} (x',y) f_{X_1,X_2} (x,y') \geq f_{X_1,X_2} ...
Seminars and conferences are the best ways to get exposure to working papers, in addition to following specific authors. (Some of my colleagues also follow economists on Twitter, where they sometimes post new working papers.)In addition, subscribe to the email alerts from the journals in your field/interest. This will get you up to date with what comes out ...
I assume what you're asking based on your comments is: "How can I visualize indifference curves for 3 goods?" I can think of three options:1) Use a tool like Matlab, or its open-source equivalent, Octave, to plot 3 dimensional indifference curves. Here is a tutorial on how to do that.2) Make a series of 2-dimensional indifference curves for two of the ...
I would recommend Tim Harford's "The Undercover Economist" for an easy way to get some exposure to how economists think, without having to go through all of the dry academic mathematics etc.To get a sense of the shape of the economics discipline, you might also like to look at "Economics: A Very Short Introduction" by Partha Dasgupta.Lastly, if you ...
INITIAL ANSWER March 24Ok. Let's answer this without answering. Your moral obligation to this community, in case it matters to you, is to report back with your work and your answer.1) In Economics we use the difference of natural logarithms to express (approximately) something specific. It is essentially stated in the body of the exercise.2) An ...
If the Philippine peso falls in value against the USD by 5% in a year, but the domestic inflation rate in the Philippines is 10%, compared to 2% in the USA, the nominal exchange rate has fallen (by 5%), but the real exchange rate has risen by 3%. Could anyone help me explain why "the real exchange rate has risen by 3%"? A word of caution: it is ...
First things first, we must dispel the misconception that you will find in economics some magic formula to "beating the stock market" or some other way to make easy money. I explain why why here.Learning economics can help you to become more wealthy in several prosaic ways:In the same way that learning to be a doctor or lawyer can make you rich. If you ... |
Among the many equivalent formulations of Leopoldt's conjecture, this one is probably the shortest: For any number field $K$, prime number $p$, finite set $S$ of primes of $K$ containing the primes above $p$, one has
Leopoldt's conjecture: $H^2(G_{K,S},\mathbb{Q}_p)=0$.
Here $G_{K,S}$ is as usual the Galois group of the maximal algebraic extension of $K$ unramified outside $S$ and places at infinity, and the $H^2$ is continuous cohomology.
Now, one of the most natural way to get a class in an $H^2$ is as a cup-product of two classes in an $H^1$. For example, if $\chi : G_{K,S} \rightarrow Q_p^\ast $ is a continuous character, then there is a cup-product map $$H^1(G_{K,S},\chi) \times H^1(G_{K,S},\chi^{-1}) \rightarrow H^2(G_{K,S},\mathbb{Q}_p),$$ which, according to Leopoldt's conjecture, should be zero.
Is it any easier to prove that the above morphism is zero than to prove Leopoldt's conjecture itself?
I would also be interested to know the answer in special cases (of $K$, $\chi$, $p$) where Leopoldt's conjecture is not known. |
A "perfectly efficient" computer can mean many things, but, for the purposes of this answer, let's take it to mean a
reversible computer (explained further as we go).
The theoretical lower limit to energy needs in computing is the Landauer Limit, which states that the
forgetting of one bit of information requires the input of work amounting to $k\,T\,\log 2$ so as to comply with the second law of thermodynamics. If the computer is reversible, i.e. its state at all times can be inferred from its state at any other time, then there is no theoretical lower limit to its energy needs. By state here we mean the computer-theoretical state of the CPU, not the physical quantum state (the former being a very small part of the latter; microscopic laws are reversible, so that the full quantum state at any time can always in theory be inferred from the full quantum state at any other time). An example of a nonreversible computation is one where you add two numbers and write the result over the memory formerly occupied by the addends. The two addends cannot be inferred from the computer's state (i.e. the sum) after the addition has taken place. Briefly, the reason for this situation is that if your calculation forgets, Nature does not, so if you erase memory, then that "erased" information must somehow wind up encoded in the full quantum state of the computer since microscopic laws are indeed reversible. The only way a system can "absorb more information", i.e. fully encode its past in its quantum state, is by accessing more and more quantum states, and that almost always means by getting hotter [see 1]. So, somewhere along the line you have to add energy to make this happen, and eventually you'll need to cool the computer to keep it working. The second law of thermodynamics then shows that if we want to keep the computer at a constant macrostate, we need to input the amount of work prescribed by Landauer's principle to do so [see ref. 2].
Now let's look at your problem. Counting can clearly be made into a reversible computation: each step is invertible and you can imagine simply clocking a simple digital counter backwards to achieve this. So in theory we could build a quantum (or other reversible) computer to count with no energy input
whilst it is counting. However, when tallying up the forgetting of information, one needs to take into account initialization. That is, you need to begin with initialized registers to count with. You start your machine up by initializing them all to nought ..... but that means that there is a quantum state of each register that is "forgotten" as the machine is initialized. So, if you need memory of $N$ bits for your counting, you need to come up with $N\,k\,T\,\log 2$ joules to get your reversible computer initialized. Wikipedia tells me the Milky Way's mass is estimated to be $10^{12}$ solar masses, or about $2\times 10^{30}\times 10^{12}\times 10^{17} =2\times 10^{59}$ joules. If you can cool your computer to the temperature of the Cosmic Background Microwave Radiation, or $2.7{\rm K}$, then the Landauer limit implies you can buy the initialization of $2\times 10^{59} / (2.7\times 1.38\times 10^{-23}\times \log 2) \approx 8\times 10^{81}$ bits. You can't run your computer below $2.7{\rm K}$ since it would then need energy for artificial cooling below its environment.
So that's then your rough answer: in theory you could count to the number :
$$2^{8\times 10^{81}}$$
with a reversible implementation of a counter given the stated energy budget.
Another limit that may be of interest in from the cryptographic viewpoint is the Bremmermann Limit, which limits how fast computations can evolve into their successive steps.
It should be noted how difficult it is to achieve the Landauer limit. If our counter forgets even one bit per counting cycle, the limit reduces to the still colossal $2\times 10^{81}$. Yockey [see reference 3] claims in the early chapters of his book that the phenomenon of DNA replication during cell division, thought of as a computer algorithm, is the most efficient computation known, and consumes roughly one order of magnitude more energy than the Landauer limit, that is, roughly $10\, k\,T$ per forgotten bit. In the light of the Landauer limit, modern computers are staggeringly inefficient. 32 GByte of RAM being overwritten at 1 GByte per second and consuming 5 watts at 300 K in being so (these are the figures for the computer these words are being written on) represents a forgetting that is eleven orders of magnitude more wasteful ($5 / (8\times 10^9 \times k \times 300\,\log 2)\approx 2\times 10^{11}$) than the Landauer limit.
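The two headline numbers above are easy to re-derive; a minimal sketch of the arithmetic:

import math

k = 1.380649e-23           # Boltzmann constant, J/K
T = 2.7                    # CMB temperature, K
E = 2e59                   # rough mass-energy of the Milky Way, J (as above)

bits = E / (k * T * math.log(2))
print(f"initializable bits: {bits:.1e}")        # ~8e81, so count to 2**(8e81)

# Inefficiency of the author's machine relative to the Landauer limit:
watts, rate, Troom = 5.0, 8e9, 300.0            # 1 GByte/s = 8e9 bits/s overwritten
print(watts / (rate * k * Troom * math.log(2))) # ~2e11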
References and Footnotes:
[1]: To deepen your understanding of this statement, try working out and plotting the Shannon entropy of specification of the state of an ensemble of $N$ quantum harmonic oscillators at thermodynamic equilibrium as a function of temperature (answer: $\left(\frac{e^{\beta_\omega } \beta_\omega }{1-e^{\beta_\omega }}+\log \left(e^{\beta_\omega }-1\right)\right)/\log (2)$ bits per oscillator, where $\beta_\omega = \hbar\omega/(k\,T)$). You can immediately see what's going on: the Boltzmann probability distribution is here proportional to $p(n)\propto\exp\left(-(n+\frac{1}{2}) \frac{\hbar\,\omega}{k\,T}\right)$ and the tail gets longer, "accessing more states" as $T$ rises).
[2] An excellent review paper for these concepts is Charles Bennett, "The Thermodynamics of Computation: A Review", Int. J. Theo. Phys., 21, No. 12 (1982).
[3] "Information Theory, Evolution, and the Origin of Life", Hubert P. Yockey As a non biologist I don't feel qualified to judge this text. I did feel, however, that I understood the early chapters whence I gleaned the assertion about the efficiency of DNA replication well enough to be reasonably confident in the assertion's soundness, but I found most of the text beyond Chapter 2 utterly incomprehensible. |
I want to give you a general argument for the negative result: a general $n$-qubit density matrix cannot be constructed from this type of measurements.
The following observation will be needed:
Let $G$ be a compact Lie group acting on a vector space $V$ via a representation $\pi$. Let $\rho$ be a density matrix on $V$.
Consider the quantum characteristic function:
$\phi(g) = \mathrm{Tr}(\rho \pi(g)), \ \ g\in G$,
When $\pi$ is irreducible, then the density matrix can be reconstructed by virtue of the Peter-Weyl theorem:
$\rho_{ij} = \int \phi(g)\pi_{ij}(g)\, d\mu(g), \qquad i,j = 1,\ldots,\mathrm{dim}(V)$
However, in the reducible case, there exists a basis in which all matrix representatives are block diagonal. Thus, in this basis, the matrix elements of the density matrix between basis vectors of different irreducible factors cannot be reconstructed, even given the quantum characteristic function at all points of the group manifold.
To apply that to our case, consider the group $G = \bigotimes_{i=1}^{n} U(1) \bigotimes SU(2)$, where every Abelian factor acts on a different qubit by phase multiplication and the $SU(2)$ factor acts diagonally on the qubits.
Observing that an SU(2) group element acting on a qubit can be parameterized in terms of the Euler angles:
$g = e^{i\frac{\psi}{2}}P_1(\theta, \phi) +e^{-i\frac{\psi}{2}}P_0(\theta, \phi)$.
Then the quantum characteristic function can be obtained from the series of measurement expectations by a Fourier transform with respect to the Abelian coordinates and $\psi$ and vice versa.
But the group $G$ acts reducibly on the $n$-qubit, since every Abelian factor acts reducibly and $SU(2)$ acts irreducibly only on symmetric powers of qubits (thus reducibly on the full tensor power), which completes the argument. |
At first I thought maybe sodium sulphate in contact with water produces sulphuric acid which absorbs water but I do not think it is actually a valid reason.
Sodium sulphate reacts readily with water at room temperature to form hydrates up to sodium sulphate decahydrate, $\ce{Na2SO4\cdot10H2O}$. $$\ce{Na2SO4 + 10H2O -> Na2SO4\cdot10H2O}$$ This means that $\ce{Na2SO4}$ can absorb up to 10 mol of water for every 1 mol of salt that is used, making it one of the most effective drying agents in terms of sheer capacity.
Using data from this source we can calculate the Gibbs energy change for the above reaction at different temperatures. $$\mathrm{\Delta G_{20^\circ C} = -1.33~kJ~mol^{-1}}$$ $$\mathrm{\Delta G_{30^\circ C} = 1.28~kJ~mol^{-1}}$$
Since the entropy change for the reaction is negative, we can see that the cooler the solution you are trying to dry is, the more effective $\ce{NaSO4}$ will be as a drying agent. |
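For what it's worth, a linear interpolation of the two quoted values locates the temperature where hydrate formation stops being spontaneous (a rough sketch; it assumes $\Delta G$ is approximately linear in $T$ over this range):

g20, g30 = -1.33, 1.28                      # kJ/mol at 20 and 30 degrees C
t_cross = 20 + 10 * (0 - g20) / (g30 - g20)
print(round(t_cross, 1))                    # ~25.1 C: drying is favourable below this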
The motivation for this question is that I want "toy examples" of how to prove/disprove the existence of multiplicative structures on examples of spectra. The class of examples I am thinking of is the Moore spectrum. For concreteness this is defined as a spectrum $X$ such that $\pi_n(X) = 0$ for $n < 0$, $H_n(X) = 0$ for $n > 0$, and $H_0(X) = R$ for some ring $R$.
There are some curious phenomena that happen:
On one extreme, the mod 2 Moore spectrum has no unital multiplication at all (by simple arguments in, say, Difficulties with the mod 2 Moore Spectrum). The mod 3 Moore spectrum is not $A_{\infty}$ by Massey product arguments. The comment on top of page 838 of http://www.math.uni-bonn.de/people/schwede/rigid.pdf says that the mod $p$ Moore spectrum for $p \geq 5$ is homotopy associative by folklore (I would like to see an argument for this too!). On the other extreme, since we can model the $\mathbb{Z}[q^{-1}]$ Moore spectra by localizing the sphere spectrum, they are $E_{\infty}$.
In this light, my questions are:
First and foremost, I would love to see a proof of the folklore result above about $p \geq 5$. Is there a "general pattern" to the multiplicative structures of Moore spectra as the ring/abelian group varies? |
The Sharpe Ratio
The Sharpe Ratio is perhaps the most widely used statistic for summarizing the achieved (or backtested!) past performance of some asset: mutual fund, hedge fund, trading strategy, etc. Defined as the mean return divided by the standard deviation (or \(\frac{\mu}{\sigma}\)), the Sharpe Ratio roughly measures the expected return per unit risk, with the idea that an investor will tailor their investment to a maximum level of risk. Expressed in 'annual' units (which is, 'per square root year'), a value of 1 or so should be considered very good for a mutual fund, while a value of 2 would be astounding. These interpretations are my own, but the Sharpe Ratio can be used to bound the probability of a loss; indeed for a mutual fund with an annual Sharpe of 1, a year-over-year loss is a "1 sigma event", destined to happen approximately 16 percent of the time. For a Sharpe of 2, however, that probability is around 2 percent.
The Sharpe Ratio is connected to the statistician's \(t\) statistic, defined roughly as \(\sqrt{n}\frac{\mu}{\sigma}\). Statisticians have spent decades thinking about the \(t\) test, how it responds to violations of model assumptions (independence, heteroskedasticity, autocorrelation, non-normality, etc.), and how to make it robust to those assumptions. Much of those findings can be translated into facts about the Sharpe Ratio. We find more connections when we generalize to higher dimensions or take into account conditioning information: connections between the Markowitz Portfolio and the multivariate analogue of the \(t\) statistic, Hotelling's \(T^2\), and so on.
If you want to learn more about the Sharpe Ratio, about its use, its distribution as a sample statistic, I recommend you:
Read A Short Sharpe Course, a free self-contained set of notes on the Sharpe Ratio produced by the author of this site. This work is only one third complete, still lacking the sections on market timing and the portfolio problem. Read up on some of the historical research on the Sharpe Ratio. Check out our blog. |
The Akra-Bazzi method
The Akra-Bazzi method gives asymptotics for recurrences of the form: $$ T(x) = \sum_{1 \le i \le k} a_i T(b_i x + h_i(x)) + g(x) \quad \text{for $x \ge x_0$} $$ This covers the usual divide-and-conquer recurrences, but also cases in which the division is unequal. The "fudge terms" $h_i(x)$ can cater for divisions that don't come out exact, for example. The conditions for applicability are:
There are enough base cases to get the recurrence going
The $a_i$ and $b_i$ are all constants
For all $i$, $a_i > 0$
For all $i$, $0 < b_i < 1$
$\lvert g(x) \rvert = O(x^c)$ for some constant $c$ as $x \rightarrow \infty$
For all $i$, $\lvert h_i(x) \rvert = O(x / (\log x)^2)$
$x_0$ is a constant
Note that $\lfloor b_i x \rfloor = b_i x - \{b_i x\}$, and as the sawtooth function $\{ u \} = u - \lfloor u \rfloor$ is always between 0 and 1, replacing $\lfloor b_i x \rfloor$ (or $\lceil b_i x \rceil$ as appropriate) satisfies the conditions on the $h_i$.
Find $p$ such that: $$ \sum_{1 \le i \le k} a_i b_i^p = 1 $$ Then the asymptotic behaviour of $T(x)$ as $x \rightarrow \infty$ is given by: $$ T(x) = \Theta \left( x^p \left( 1 + \int _1^x \frac{g(u)}{u^{p + 1}} du \right) \right) $$
Examples
As an example, take the recursion for $n \ge 5$, where $T(0) = T(1) = T(2) = T(3) = T(4) = 17$: $$ T(n) = 9 T(\lfloor n / 5 \rfloor) + T(\lceil 4 n / 5 \rceil) + 3 n \log n $$ The conditions are satisfied, we need $p$: $$ 9 \left( \frac{1}{5} \right)^p + \left( \frac{4}{5} \right)^p = 1 $$ As luck would have it, $p = 2$. Thus we have: $$ T(n) = \Theta \left( n^2 \left(1 + \int_1^n \frac{3 u \log u}{u^3} du \right) \right) = \Theta(n^2)$$
Another example is the following for $n \ge 2$: $$ T(n) = 4 T(n / 2) + n^2 / \lg n $$ We have $g(n) = n^2 / \ln n = O(n^2)$, check. We have that there is a single $a_1 = 4$, $b_1 = 1 / 2$, which checks out. Assuming that the $n / 2$ is really $\lfloor n / 2 \rfloor$ and/or $\lceil n / 2 \rceil$, the implied $h_i(n)$ also check out. So we need: $$ a_1 b_1^p = 4 \cdot (1 / 2)^p = 1 $$ Thus $p = 2$, and: $$ T(n) = \Theta\left(n^2 \left( 1 + \int_2^n \frac{u^2 du}{u^3 \ln u} \right) \right) = \Theta\left(n^2 \left( 1 + \int_2^n \frac{du}{u \ln u} \right) \right) = \Theta(n^2 \ln \ln n) $$ (The integral as given with lower limit 1 diverges, but the lower limit should really be the $n$ for which the recurrence starts being valid; check the original paper.)
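When $p$ has no closed form, it can always be found numerically: $\sum_i a_i b_i^p$ is strictly decreasing in $p$ (since every $a_i > 0$ and $0 < b_i < 1$), so bisection works. A minimal sketch (the function name is mine):

def akra_bazzi_p(a, b, lo=-10.0, hi=10.0, tol=1e-12):
    # Solve sum(a_i * b_i**p) = 1 by bisection; the sum is decreasing in p.
    f = lambda p: sum(ai * bi**p for ai, bi in zip(a, b)) - 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(akra_bazzi_p([9, 1], [1/5, 4/5]))  # ~2.0, the first example
print(akra_bazzi_p([4], [1/2]))          # ~2.0, the second example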
(The help of maxima with the algebra is gratefully acknowledged) |
On the DNA Computer Binary Code
In any finite set we can define a binary operation, a partial order, in different ways. But here, a partial order is defined on the set of four DNA bases in such a manner that a Boolean lattice structure is obtained. A Boolean lattice is an algebraic structure that captures essential properties of both set operations and logic operations. This partial order is defined based on the physico-chemical properties of the DNA bases: hydrogen bond number and chemical type, purine {A, G} or pyrimidine {U, C}. This physico-mathematical description permits the study of the genetic information carried by DNA molecules as a computer binary code of zeros (0) and ones (1).

1. Boolean lattice of the four DNA bases
In any four-element Boolean lattice every element is comparable to every other, except two of them that are, nevertheless, complementary. Consequently, to build a four-base Boolean lattice it is necessary for the bases with the same number of hydrogen bonds in the DNA molecule and in different chemical types to be complementary elements in the lattice. In other words, the complementary bases in the DNA molecule (G≡C and A=T, or A=U during the translation of mRNA) should be complementary elements in the Boolean lattice. Thus, there are four possible lattices, each one with a different base as the maximum element.

2. Boolean (logic) operations in the set of DNA bases
The Boolean algebra on the set of elements $X$ will be denoted by $(B(X), \vee, \wedge)$. Here the operators $\vee$ and $\wedge$ represent the classical "OR" and "AND", term-by-term. From the Boolean algebra definition it follows that this structure is (among other things) a partially ordered set in which any two elements $\alpha$ and $\beta$ have upper and lower bounds. Particularly, the greatest lower bound of the elements $\alpha$ and $\beta$ is the element $\alpha\wedge\beta$ and the least upper bound is the element $\alpha\vee\beta$. This equivalent partially ordered set is called a Boolean lattice. In every Boolean algebra $(B(X), \vee, \wedge)$, for any two elements $\alpha,\beta \in X$ we have $\alpha \le \beta$ if and only if $\neg\alpha\vee\beta=1$, where the symbol "$\neg$" stands for logic negation. If the last equality holds, then it is said that $\beta$ is deduced from $\alpha$. Furthermore, if $\alpha \le \beta$ or $\alpha \ge \beta$ the elements are said to be comparable. Otherwise, they are said not to be comparable.
In the set of four DNA bases, we can build twenty-four isomorphic Boolean lattices [1]. Herein, we focus our attention on the one described in reference [2], where the DNA bases G and C are taken as the maximum and minimum elements, respectively, in the Boolean lattice. The logic operations in this DNA computer code are given in the following tables:
$\begin{array}{c|cccc} \vee & G & A & U & C \\ \hline G & G & A & U & C \\ A & A & A & C & C \\ U & U & C & U & C \\ C & C & C & C & C \end{array} \qquad \begin{array}{c|cccc} \wedge & G & A & U & C \\ \hline G & G & G & G & G \\ A & G & A & G & A \\ U & G & G & U & U \\ C & G & A & U & C \end{array}$ (OR on the left, AND on the right)
It is well known that all Boolean algebras with the same number of elements are isomorphic. Therefore, our algebra $(B(X), \vee, \wedge)$ is isomorphic to the Boolean algebra $(\mathbb{Z}_2^2(X), \vee, \wedge)$, where $\mathbb{Z}_2 = \{0,1\}$. Then, we can represent this DNA Boolean algebra by means of the correspondence: $G \leftrightarrow 00$; $A \leftrightarrow 01$; $U \leftrightarrow 10$; $C \leftrightarrow 11$. So, in accordance with the operation table:
$A \vee U = C \leftrightarrow 01 \vee 10 = 11$
$U \wedge G = G \leftrightarrow 10 \wedge 00 = 00$
$G \vee C = C \leftrightarrow 00 \vee 11 = 11$
A Boolean lattice has in correspondence a directed graph called a Hasse diagram, where two nodes (elements) $\alpha$ and $\beta$ are connected with a directed edge from $\alpha$ to $\beta$ (or with a directed edge from $\beta$ to $\alpha$) if, and only if, $\alpha \le \beta$ ($\alpha \ge \beta$) and there is no other element between $\alpha$ and $\beta$.
3. The Genetic Code Boolean Algebras
Boolean algebras of codons are, explicitly, derived as the direct product $C(X) = B(X) \times B(X) \times B(X)$. These algebras are isomorphic to the dual Boolean algebras $(\mathbb{Z}_2^6, \vee, \wedge)$ and $(\mathbb{Z}_2^6, \wedge, \vee)$ induced by the isomorphism $B(X) \cong \mathbb{Z}_2^2$, where $X$ runs over the twenty four possibles ordered sets of four DNA bases [1]. For example:
CAG $\vee$ AUC = CCC $\leftrightarrow$ 110100 $\vee$ 011011 = 111111
ACG $\wedge$ UGA = GGG $\leftrightarrow$ 011100 $\wedge$ 100001 = 000000
$\neg$ (CAU) = GUA $\leftrightarrow$ $\neg$ (110110) = 001001
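The $\mathbb{Z}_2^2$ encoding makes these codon operations one-liners on bit strings; a minimal sketch that reproduces the three examples above:

# Encode bases per the isomorphism G<->00, A<->01, U<->10, C<->11,
# so base-wise OR/AND/NOT become integer bit operations.
ENC = {"G": 0b00, "A": 0b01, "U": 0b10, "C": 0b11}
DEC = {v: k for k, v in ENC.items()}

def codon_op(x, y, op):
    return "".join(DEC[op(ENC[a], ENC[b])] for a, b in zip(x, y))

def codon_not(x):
    return "".join(DEC[ENC[a] ^ 0b11] for a in x)   # complement within Z_2^2

print(codon_op("CAG", "AUC", lambda a, b: a | b))   # CCC
print(codon_op("ACG", "UGA", lambda a, b: a & b))   # GGG
print(codon_not("CAU"))                             # GUA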
The Hasse diagram for the corresponding Boolean algebra derived from the direct product of the Boolean algebra of four DNA bases given in the above operation table is:
In the Hasse diagram, chains and anti-chains are located. A Boolean lattice subset is called a chain if any two of its elements are comparable but, on the contrary, if any two of its elements are not comparable, the subset is called an anti-chain. In the Hasse diagram of codons shown in the figure, all chains with maximal length have the same minimum element GGG and the maximum element CCC. It is evident that two codons are in the same chain with maximal length if and only if they are comparable, for example the chain: GGG $\leftrightarrow$ GAG $\leftrightarrow$ AAG $\leftrightarrow$ AAA $\leftrightarrow$ AAC $\leftrightarrow$ CAC $\leftrightarrow$ CCC
The Hasse diagram symmetry reflects the role of hydrophobicity in the distribution of codons assigned to each amino acid. In general, codons that code for amino acids with extreme hydrophobic differences are in different chains with maximal length. In particular, codons with
U as a second base will appear in chains of maximal length whereas codons with A as a second base will not. For that reason, it will be impossible to obtain hydrophobic amino acid with codons having U in the second position through deductions from hydrophilic amino acids with codons having A in the second position.
There are twenty four Hasse diagrams of codons, corresponding to the twenty four genetic-code Boolean algebras. These algebras integrate a symmetric group isomorphic to the symmetric group of degree four $S_4$ [1]. In summary, the DNA binary code is not arbitrary, but subject to logic operations with subjacent biophysical meaning.
References Sanchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527–60. Sánchez R, Morgado E, Grau R. A genetic code Boolean structure. I. The meaning of Boolean deductions. Bull Math Biol, 2005, 67:1–14. |
Modeling a system
Visual Cue Narration
Show first slide
In this tutorial, we will see how to model a physical system using the ODE representation of the system and then convert it to a more familiar transfer function representation in Scilab.
Show slide with Mass-Spring-Damper
The system that we will attempt to model is a mass-spring-damper system. We assume that the values of the mass, the spring constant and the damping coefficient are known to us.
We know from basic systems theory that the system can be modeled using the ODE:
Change slide to show this equation:
<math> m\ddot{x}(t) + b\dot{x}(t) + k x(t) = F(t) </math>
The first term gives the inertial force on the mass, from Newton's second law. The second term gives the damping force exerted by the damper. The third term gives the restoring force exerted by the spring when it is displaced from its equilibrium position.
By taking the Laplace transform of this equation, assuming zero initial conditions, we get the following equation:
<math> m s^2 X(s) + b s X(s) + k X(s) = F(s) </math>
which, on rearranging the terms, gives the transfer function from the applied force to the displacement:
<math> \frac{X(s)}{F(s)} = \frac{1}{m s^2 + b s + k} </math>
We scale the numerator by k so that the transfer function has unit DC gain, that is, G(0) = 1:
<math> G(s) = \frac{k}{m s^2 + b s + k} </math>
Switch to Scilab and enter the following values:
m = 1
b = 10
k = 20
s = %s
Our system is represented in the transfer function form as:
System = k/(m*s^2 + b*s + k)
Switch to presentation
We compare the system with the standard representation of a second order model. Using this comparison, we can obtain the values of the natural frequency omega_n and damping factor zeta. We compute these in Scilab.
Switch to Scilab:
wn = sqrt(k/m)
zeta = (b/m)/(2*wn)
We note that this system is overdamped, since zeta is approximately 1.12, which is greater than one.
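The same numbers can be verified outside Scilab. Here is a quick Python cross-check (added for reference; it is not part of the tutorial's Scilab session):

import math

m, b, k = 1.0, 10.0, 20.0
wn = math.sqrt(k / m)          # natural frequency: sqrt(20) ~ 4.47 rad/s
zeta = (b / m) / (2 * wn)      # damping ratio: ~ 1.118
print(wn, zeta, zeta > 1)      # True, hence overdamped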
We have successfully modeled this system in Scilab.
Now, we must analyze this system to understand its characteristics and to decide how we must proceed in order to control the system and obtain the response we desire.
I know the series $4-\frac{4}{3}+\frac{4}{5}-\frac{4}{7}+\cdots$ converges to $\pi$, but I have heard many people say that while this is a classic example, there are series that converge much faster. Does anyone know of any?
The BBP formula is another nice one: $$ \pi = \sum_{k=0}^\infty \left[ \frac{1}{16^k} \! \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right) \right] $$ It can be used to compute the $n$th hexadecimal digit of $\pi$ without computing the preceding $n{-}1$ digits.
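Even evaluated naively, the series converges fast. Here is a term-by-term Python sketch (illustrative; it does not exploit the digit-extraction property):

from fractions import Fraction

def bbp_partial_sum(n_terms):
    # Partial sum of the BBP series, using exact rationals to avoid rounding
    s = Fraction(0)
    for k in range(n_terms):
        s += Fraction(1, 16**k) * (Fraction(4, 8*k + 1) - Fraction(2, 8*k + 4)
                                   - Fraction(1, 8*k + 5) - Fraction(1, 8*k + 6))
    return s

print(float(bbp_partial_sum(10)))   # 3.141592653589793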
The series $$ \sum_{n=0}^{\infty} \frac{(2n)!!}{(2n+1)!!} \left(\frac{1}{2}\right)^n = \frac{\pi}{2}$$ converges quickly. Here $!!$ is the double factorial defined by $0!! = 1!! = 1$ and $n!! = n (n-2)!!$
This series is not too hard to derive. Start by defining $$f(t) = \sum _{n=0}^{\infty } \frac{(-1)^n}{(2n+1)}t^n.$$ Note that $f(1) = \pi/4$ is the series you referenced. Now we take what is called the Euler Transform of the series, which gives us $$ \left(\frac{1}{1-t}\right)f\left(\frac{t}{1-t}\right) = \sum _{n=0}^{\infty } \left(\sum _{k=0}^n {n \choose k}\frac{(-1)^k}{(2k+1)}\right)t^n.$$
Now $$\sum _{k=0}^n {n \choose k}\frac{(-1)^k}{(2k+1)} = \frac{(2n)!!}{(2n+1)!!}$$ for hints on how to prove this identity see Proving a binomial sum identity $\sum _{k=0}^n \binom nk \frac{(-1)^k}{2k+1} = \frac{(2n)!!}{(2n+1)!!}$. Now put $t = 1/2$ and the identity follows. Showing the error term for the nth partial sum is less than $(1/2)^n$ is not too difficult.
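A short numerical check (an illustrative Python sketch) that maintains the ratio $(2n)!!/(2n+1)!!$ by a running recurrence:

def double_factorial_series(n_terms):
    # Partial sum of sum_{n>=0} (2n)!!/(2n+1)!! * (1/2)^n, which tends to pi/2
    total, ratio, power = 0.0, 1.0, 1.0   # ratio = (2n)!!/(2n+1)!!, power = (1/2)^n
    for n in range(n_terms):
        total += ratio * power
        ratio *= (2*n + 2) / (2*n + 3)    # advance the double-factorial ratio
        power *= 0.5
    return total

print(2 * double_factorial_series(50))    # ~ 3.141592653589793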
I think you may find it interesting to browse the webpage of Jon Borwein, which I would call the standard reference for your question. In particular, take a look at the latest version of his talk on "The life of pi" (and its references!), which includes many of the fast-converging algorithms and series used in practice for high-precision computations of $\pi$, such as the one from this summer.
Just to give people an idea on convergence rates, here is a plot of $-\log_{10}\left|\frac{S_n-\pi}{\pi}\right|$ versus $n$, where $S_n$ is the nth partial sum of the series in question, for three of the series featured in the answers to this question (note the vertical scale):
The three series are, from top to bottom, $\arctan(1)$ (the series mentioned by the OP), $2\arcsin\left(\sqrt{\frac12}\right)$ (the series mentioned by yjj in his answer), and the series by Ramanujan I mentioned in the comments (I didn't include the series by the Chudnovsky brothers, since that converges even faster than the Ramanujan series, and that makes for boring plots).
Here is a really nice one due to Simon Plouffe. There are many similar examples in his linked paper.
$$\pi = 72\sum_{n=1}^\infty \frac{1}{n(e^{n\pi} - 1)} - 96\sum_{n=1}^\infty \frac{1}{n(e^{2n\pi} - 1)} + 24\sum_{n=1}^\infty \frac{1}{n(e^{4n\pi} - 1)} .$$
What I like about it is that I can see at a glance that the series converge rapidly without having to make some mental estimate of the size of factorials.
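Indeed, the terms decay like $e^{-n\pi}$, $e^{-2n\pi}$ and $e^{-4n\pi}$. As a consistency check, here is an illustrative Python sketch; note that it uses the value of $\pi$ inside the exponentials, so it verifies the identity rather than computing $\pi$ from scratch:

import math

def s(a, n_terms=50):
    # sum_{n>=1} 1/(n*(exp(a*n*pi) - 1)); terms decay like exp(-a*n*pi)
    return sum(1.0 / (n * (math.exp(a * n * math.pi) - 1.0))
               for n in range(1, n_terms + 1))

print(72*s(1) - 96*s(2) + 24*s(4))   # ~ 3.141592653589793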
You should take a look at the paper: Some New Formulas for π by Gert Almkvist, Christian Krattenthaler, and Joakim Petersson,
Experiment. Math. Volume 12, Number 4 (2003), 441-456.
Your series may be written as $$\frac{\pi}{4}=\sum_{k=0}^{\infty}\left(\frac{1}{4k+1}-\frac{1}{4k+3}\right)$$
Its truncation approximations improve if the zero relation (http://oeis.org/A176563) $$0=\sum_{k=0}^{\infty}\left(\frac{1}{4k+1}-\frac{3}{4k+2}+\frac{1}{4k+3}+\frac{1}{4k+4}\right)$$
is added to obtain $$\frac{\pi}{4}=\sum_{k=0}^{\infty}\left(\frac{2}{4k+1}-\frac{3}{4k+2}+\frac{1}{4k+4}\right)$$ $$=\frac{3}{4}\sum_{k=0}^{\infty}\frac{1}{(4k+1)(2k+1)(k+1)}$$
Although this is the slowest series among all the answers, it illustrates how an absolutely convergent series of unit fractions for $\frac{\pi}{3}$ may be obtained by regrouping and summing two conditionally convergent series.
This simple series also explains Why is $\pi$ so close to $3$?: taking the first ($k=0$) term out of the summation gives $\frac{\pi}{3} = 1 + \sum_{k=1}^{\infty}\frac{1}{(4k+1)(2k+1)(k+1)}$, and the remaining sum is small (about $0.047$).
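For completeness, a direct evaluation (an illustrative Python sketch; the tail after $N$ terms is of order $1/N^2$):

def unit_fraction_series(n_terms):
    # Partial sum of sum_{k>=0} 1/((4k+1)(2k+1)(k+1)), which tends to pi/3
    return sum(1.0 / ((4*k + 1) * (2*k + 1) * (k + 1)) for k in range(n_terms))

print(3 * unit_fraction_series(10**6))   # ~ pi, with error of order 1e-13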
Here's a formula which I found embedded in an old C program. I don't know where this comes from, but it converges to Pi very quickly, about 16 correct digits in just 22 iterations:
$$\pi = \sum_{i=0}^{\infty} \frac{6\prod_{j=1}^{i}(2j-1)}{\left(\prod_{j=1}^{i}2j\right)\,(2i+1)\,2^{2i+1}}$$
(The products run over $j$ from 1 to $i$, so for $i=0$ both products are empty and equal to 1. For $i=1$ the ratio of the products is $\frac{1}{2}$; for $i=2$ it is $\frac{1\cdot3}{2\cdot4}$; for $i=3$ it is $\frac{1\cdot3\cdot5}{2\cdot4\cdot6}$; and so on, ad infinitum.)
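This is recognizable as the Maclaurin series of $\arcsin$ evaluated at $x=\tfrac{1}{2}$, scaled by 6, since $6\arcsin(\tfrac{1}{2})=\pi$. A Python re-implementation (a sketch; the original C program is not shown) reproduces the table below:

def pi_from_arcsin_half(n_terms):
    # Partial sums of 6*arcsin(1/2) = pi; term i = 6*(2i-1)!!/((2i)!!*(2i+1)*2^(2i+1))
    total = 0.0
    odd_over_even = 1.0   # (2i-1)!!/(2i)!!; the empty products at i=0 give 1
    for i in range(n_terms):
        total += 6.0 * odd_over_even / ((2*i + 1) * 2**(2*i + 1))
        odd_over_even *= (2*i + 1) / (2*i + 2)
        print(f"Index = {i} Sum = {total:.15f}")

pi_from_arcsin_half(23)   # the last line agrees with pi to ~16 significant figures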
I have no idea of the provenance of that formula, but on running the C program, it produces:
Index = 0 Sum = 3.000000000000000
Index = 1 Sum = 3.125000000000000
Index = 2 Sum = 3.139062500000000
Index = 3 Sum = 3.141155133928572
Index = 4 Sum = 3.141511172340030
Index = 5 Sum = 3.141576715774867
Index = 6 Sum = 3.141589425319122
Index = 7 Sum = 3.141591982358383
Index = 8 Sum = 3.141592511157862
Index = 9 Sum = 3.141592622870617
Index = 10 Sum = 3.141592646875561
Index = 11 Sum = 3.141592652105887
Index = 12 Sum = 3.141592653258738
Index = 13 Sum = 3.141592653515338
Index = 14 Sum = 3.141592653572930
Index = 15 Sum = 3.141592653585950
Index = 16 Sum = 3.141592653588912
Index = 17 Sum = 3.141592653589590
Index = 18 Sum = 3.141592653589746
Index = 19 Sum = 3.141592653589782
Index = 20 Sum = 3.141592653589790
Index = 21 Sum = 3.141592653589792
Index = 22 Sum = 3.141592653589793
That is 16 correct significant figures in just 22 iterations, which is actually pretty fast. Many series that converge to $\pi$ do so with infuriating slowness, requiring 1000 iterations to get 3.1429384, which is wrong after the first three digits. But not THIS formula! It generates almost as many good digits as iterations.
The convergence can be arbitrarily fast if you don't specify what kind of series you are looking for. Let $k$ be a positive integer, and let $a_n=\pi/k$ if $n\leq k$ and zero otherwise. Then $\sum_{n=1}^\infty a_n$ converges to $\pi$ after $k$ summands.
We have:
$\pi=\displaystyle\sum^{\infty}_{n=0}{\frac{n!\left(2n\right)!\left(25n-3\right)}{2^{n-1}\left(3n\right)!}}$
It produces a digit or more of $\pi$ per term.
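A quick check of the convergence rate (an illustrative Python sketch; note the $n=0$ term is $-6$, so the early partial sums are far off before the roughly geometric decay, with ratio about $2/27$, kicks in):

from math import factorial

def digit_per_term_series(n_terms):
    # Partial sum of sum_{n>=0} n!(2n)!(25n-3) / (2^(n-1) (3n)!)
    return sum(factorial(n) * factorial(2*n) * (25*n - 3)
               / (2**(n - 1) * factorial(3*n))
               for n in range(n_terms))

print(digit_per_term_series(18))   # ~ 3.141592653589793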